Z1C and Tasker automation - Xperia Z1 Compact General

Thought I would create a thread to discuss Tasker automation on the Z1C (help requests, troubleshooting, automation ideas, etc.).
First, a help request:
Has anyone been successful in getting AutoInput or sendevent shell commands (double taps, swipes) to work on the Z1C?
I've used the script below in the past to record events and replay them to create macros via Tasker's shell, but the Z1C input devices (touchscreen, buttons) don't seem to allow capture using getevent:
http://forum.xda-developers.com/showthread.php?t=2233865

Edit: always execute as root!!!!
Task Swipe:
Script
Execute Shell
Code:
input swipe x1 y1 x2 y2 duration
The last number is the swipe duration in milliseconds, like:
Code:
input swipe 310 870 310 320 200
Keyevents (full list here):
http://developer.android.com/reference/android/view/KeyEvent.html
e.g. power on/off:
Code:
input keyevent 26
or:
Code:
input keyevent KEYCODE_POWER
Pause between actions:
Code:
sleep 1
sleep 2
etc.
Tap:
Code:
input tap x y
Start an app:
Code:
am start -n app.name.name/ManifestActivity
Close an app:
Code:
am force-stop app.name.name
Complete action - screen on, unlock, start Spotify, autoplay, screen off:
Code:
am start -n com.spotify.music/.MainActivity;
sleep 2;
input keyevent 26;
sleep 1;
input swipe 310 870 310 320 200;
sleep 1;
input keyevent 126;
sleep 1;
input keyevent 26;
I hope it helps.

Thanks! Very helpful.
What about multi-touch or pinch-to-zoom? I have not been able to find solutions online that bridge that gap.
Double tap also seems to be a challenge for the Z1C; somehow, even without a sleep between two "input tap" commands, it is still too slow to be recognized as a double tap.
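(A hedged idea for the double tap, untested on the Z1C: every `input` call spawns its own process and can take a few hundred milliseconds by itself, so putting both taps in a single Run Shell action at least removes Tasker's per-action delay. The coordinates here are placeholders:)
Code:
input tap 540 960; input tap 540 960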

I don't know, but I think you need something like Nova Launcher; I think you can combine these controls in Tasker.
For zoom you can use:
KEYCODE_ZOOM_IN (input keyevent 168)
KEYCODE_ZOOM_OUT (input keyevent 169)

Thanks ... I tried the ZOOM_IN/OUT keyevents, but I think those are for the camera. I'm trying to implement this on Google Maps and Maps Engine; since they removed the zoom and scaling buttons from the official Google apps, it's been a pain (and a driving hazard) to manually zoom in and out with touchscreen input. (I'm trying to map the zoom actions to the media controls on my steering wheel.)

Hi all,
Let me start with an apology, as I am not sure that I am in the right thread here; however, Tasker automation does seem to have a lot of threads, and I am confident that someone who knows more than me might be able to answer my questions/problems.
I am currently trying to automate a search in YouTube using AutoVoice and AutoInput. Via voice command, so far I can get AutoInput to launch YouTube and click the search bar, but I can't get it to input any text into the search bar! So I really need to know how to get AutoInput to input variable text and search for it within YouTube; I'm not even sure which function of AutoInput it would be, to be honest.
Still learning.
Any help you can give would be fantastic,
Cheers
Dan

If what you want to achieve is to search YouTube, there are easier ways:
1. Simply ask Google Now to search YouTube using the "watch" or "listen" command; or
2. Use an intent in Tasker to trigger a YouTube search with the search term %avcommnofilter (rather than trying to use AutoInput).
If you really must use AutoInput, you can use the UI Query action to find the right variable name for the search box, then use the AutoInput Action action to paste your search term (%avcommnofilter) into the search box.
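(For option 2, a sketch of the equivalent shell command - this assumes the YouTube app on your device handles Android's standard media-play-from-search action; the same Action and Extra pair can be typed into Tasker's Send Intent fields:)
Code:
am start -a android.media.action.MEDIA_PLAY_FROM_SEARCH -e query "%avcommnofilter"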

I have a slightly different question: how would I make a script that turns Stamina mode on and off? It would be useful, because Stamina sometimes prevents alarm apps from ringing on time.

Related

[proof of concept app] Gesture recognition

I recently saw this thread:
http://forum.xda-developers.com/showthread.php?t=370632
I liked the idea, and when I thought about it, gesture recognition didn't seem too hard. And guess what - it really wasn't hard.
I made a simple application recognizing gestures defined in an external configuration file. It was supposed to be a gesture launcher, but I haven't found out how to launch an app from a WinCE program yet. Also, it turned out to be a bit too slow for that because of the way I draw gesture trails - I'd have to rewrite it almost from scratch to make it really useful, and I don't have time for that now.
So I decided to share the idea and the source code, just to demonstrate how easy it is to include gesture recognition in your software.
My demo app is written in C, using XFlib for graphics, and compiled using CeGCC, so you'll need both of them to compile it (download and install instructions are on the XFlib homepage: www.xflib.net).
The demo program is just an exe file - extract it anywhere on your device, no installation needed. You'll also need to extract gestureConfig.ini to the root directory of your device, or the program won't run.
Try some of the gestures defined in the ini (like the 'M' letter - 8392, a rectangle - 6248, a triangle - 934), or define some of your own to see how the recognition works. Make sure each line is a string of numbers, then a space or tabulator (or more of them), and then some text - anything will do, just make sure there's more than just the numbers in each line. Below that you can set the side sensitivity to tweak recognition (see the rest of the post for a description of how it works). Better leave the other parameter as it is - it seems to work best with this value.
Now, what the demo app does:
It recognizes the direction of drawn strokes and prints them at the bottom of the screen as a string of numbers representing them (described below). If a drawn gesture matches one of the patterns in the config file, the entire drawn gesture gets highlighted. It works best with a stylus, but is usable with a finger as well.
Clicking the large rectangle closes the app.
And how it does it:
The algorithm I used is capable of recognizing strokes drawn in eight directions - horizontally, vertically and diagonally. Directions are described with numbers from 1 to 9, arranged like on a PC numerical keypad:
Code:
7 8 9
4   6
1 2 3
So a gesture defined in the config as 6248 is right-down-left-up - a rectangle.
All that is needed to do the gesture recognition is the last few positions of the stylus. In my program I recorded the entire path for drawing it, but used only the last 5 positions. The entire trick is to determine which way the stylus is moving, and if it moves one way long enough, store this direction as a stroke.
The easiest way would be to subtract the previous stylus position from the current one, like:
Code:
vectorX=stylusX[i]-stylusX[i-1];
vectorY=stylusY[i]-stylusY[i-1];
But this method would be highly inaccurate due to noise generated by some digitizers, especially with screen protectors, or when using a finger (try drawing a straight line with your finger in some drawing program).
That's why I decided to calculate an average vector instead:
Code:
averageVectorX=((stylusHistoryX[n]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-1]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-2]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-3]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-4]-stylusHistoryX[n-5]))/5;
//Y coordinate is calculated the same way
where stylusHistoryX[n] is the current X position of the stylus, stylusHistoryX[n-1] is the previous position, etc.
Such averaging filters out the noise without sacrificing too much responsiveness, and uses only a small number of samples. It also has another useful effect - when the stylus changes movement direction, the vector gets shorter.
Now that we have the direction of motion, we'll have to check how fast the stylus is moving (how long the vector is):
Code:
if(sqrt(averageVectorX*averageVectorX+averageVectorY*averageVectorY)>25)
(...)
If the vector is long enough, we'll have to determine which direction it's facing. Since horizontal and vertical lines are usually easier to draw than diagonal ones, it's nice to be able to adjust the angle at which a line is considered diagonal rather than horizontal/vertical. I used the sideSensitivity parameter for that (it can be set in the ini file - its range is from 0 to 100). See the attached image to see how it works.
The green area in the images is the range of angles where the vector is considered horizontal or vertical. Blue marks the angles where the vector is considered diagonal. sideSensitivity for those pictures is: left - 10, middle - 42 (the default value, works fine for me), right - 90. Using 0 would make horizontal and vertical strokes almost impossible to draw, and 100 would do the same to diagonal ones.
To make this parameter useful, some calculation is needed:
Code:
sideSensitivity=tan((sideSensitivity*45/100)*M_PI/180);
First, the range of the parameter is changed from (0-100) to (0-45), giving the angle in degrees of the line dividing the side sector (green) from the diagonal one (blue). That angle is then converted to radians, and the tangent of this angle (in radians) is calculated, giving the slope of this line.
Having the slope, it's easy to check whether the vector is turned sideways or diagonally. Here's the part of the source code that does the check; it is executed only if the vector is long enough (the condition written above):
Code:
if( abs(averageVectorY)<sideSensitivity*abs(averageVectorX) ||
abs(averageVectorX)<sideSensitivity*abs(averageVectorY)) //Vector is turned sideways (horizontal or vertical)
{
/*Now that we know that it's facing sideways, we'll check which side it's actually facing*/
if( abs(averageVectorY)<sideSensitivity*averageVectorX) //Right Gesture
gestureStroke='6'; //storing the direction of vector for later processing
if( abs(averageVectorY)<sideSensitivity*(-averageVectorX)) //Left Gesture
gestureStroke='4';
if( abs(averageVectorX)<sideSensitivity*(averageVectorY)) //Down gesture
gestureStroke='2';
if( abs(averageVectorX)<sideSensitivity*(-averageVectorY)) //Up gesture
gestureStroke='8';
}
else
{ //Vector is diagonal
/*If the vector is not facing sideways, then it's diagonal. Checking which way it's actually facing
and storing it for later use*/
if(averageVectorX>0 && averageVectorY>0) //Down-Right gesture
gestureStroke='3';
if(averageVectorX>0 && averageVectorY<0) //Up-Right gesture
gestureStroke='9';
if(averageVectorX<0 && averageVectorY>0) //Down-Left gesture
gestureStroke='1';
if(averageVectorX<0 && averageVectorY<0) //Up-Left gesture
gestureStroke='7';
}
Now we have a character (I used the char type so I can use character strings for storing gestures - they can be easily loaded from a file and compared with strcmp()) telling which way the stylus is moving. To avoid errors, we'll have to make sure that the stylus moves in the same direction for a few cycles before storing it as a gesture stroke, by increasing a counter as long as it keeps moving in one direction and resetting the counter if it changes direction. If the counter value is bigger than some threshold (the pathSensitivity variable is used as this threshold in my program), we can store the gestureStroke value into a string, but only if it's different from the previous one - who needs a gesture like "44444" when dragging the stylus left?
After the stylus is released, you'll have to compare the generated gesture string to some patterns (e.g. loaded from a configuration file), and if it matches, perform an appropriate action.
See the source if you want to see how it can be done; this post is already quite long.
If you have any questions, post them and I'll do my best to answer.
Feel free to use this method, parts of it, or the entire source in your apps. I'm really looking forward to seeing some gesture-enabled programs.
Very nice work. Reading your post was very insightful, and I hope this can provide the basis for some new and exciting apps!
Great app... and well done for not just thinking "that seems easy..." but actually doing it...
I've been a victim of that myself.
Very nice work, man. One question: which tool did you write the code in? It looks like C, but how did you test it and all?
Great app. I see that it is just a proof of concept at this stage, but I can see it being used for application control in the future.
Continue your great work.
nik_for_you said:
One question: which tool did you write the code in?
It is C (no "++", no "#", no ".NET", just good old C), compiled with the open-source compiler CeGCC (works under Linux, or under Windows using Cygwin - a UNIX emulator), developed in the open-source IDE Vham (but even Notepad, or better Notepad++, would do), and tested directly on my Wizard (without an emulator). I used XFlib, which simplifies graphics and input handling to a level where anyone who has ever programmed anything at all should be able to handle it - it provides an additional layer between the programmer and the OS. You talk to XFlib, and XFlib talks to the OS. I decided to use this library because I wanted to try it out anyway.
If I decide to rewrite it and make an actual launcher or anything else out of it, I'll have to use something with a bit faster and more direct screen access (probably SDL, since I've already done some programming for desktop PCs with it) - XFlib concentrates on the usage of sprites, like 2D console games. Every single "blob" of the gesture trail is a separate sprite, which has to be drawn each time the screen is refreshed - that is what slows the app down so much. The gesture recognition itself is really fast.
Very good program. I just tested it and it works very well. Some combinations are pretty hard to perform, but I like the blue points turning red with command 2 and 934. Good luck, I'll keep following your work - maybe you'll code a very interesting piece of software.
Interesting work.... would like to see this implemented in an app, could be very useful.
If you want, I have some code I did for NDS coding, which I ported to PocketPC for XFlib.
It works perfectly well and I use it in Skinz Sudoku to recognize the drawn numbers.
The method is pretty simple: when the stylus is pressed, store the stylus coordinates in a big array. When it's released, it takes 16 points (could be changed depending on what you need) at the same distance from each other, checks the angles, and gives you the corresponding 'char'.
To add new shapes, it's just a 15-character string which you link to any char (like, link the right movement to 'r', or 'a', or a number, or whatever ^^). It works for pretty much any simple shape, and I even used it to do a Graffiti-like thing on the NDS which worked really well.
Hey!
How do you get the last stylus positions, and how often do you read them out?
I want to realize such code under VB.NET, but I don't know how I should read out the last stylus positions to get them ready for such calculations.
Code:
Private Sub frmGesture_MouseMove(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles MyBase.MouseMove
    If StylusJump = 1 Then
        StylusJump += 1
        If (CurrentStylusPosition.X <> frmGesture.MousePosition.X) Or (CurrentStylusPosition.Y <> frmGesture.MousePosition.Y) Then
            ' Shift the position history back one slot (index 9 = oldest, 1 = newest)
            For iCount As Integer = 9 To 2 Step -1
                LastStylusPosition(iCount).X = LastStylusPosition(iCount - 1).X
                LastStylusPosition(iCount).Y = LastStylusPosition(iCount - 1).Y
            Next
            LastStylusPosition(1).X = CurrentStylusPosition.X
            LastStylusPosition(1).Y = CurrentStylusPosition.Y
            CurrentStylusPosition.X = frmGesture.MousePosition.X
            CurrentStylusPosition.Y = frmGesture.MousePosition.Y
        End If
        ' Show the current position and the stored history in a label
        Dim LabelString As String
        LabelString = "C(" & CurrentStylusPosition.X & "\" & CurrentStylusPosition.Y & ")"
        For iCount As Integer = 1 To 9
            LabelString = LabelString & " " & iCount & "(" & LastStylusPosition(iCount).X & "\" & LastStylusPosition(iCount).Y & ")"
        Next
        lblGesture.Text = LabelString
    ElseIf StylusJump <= 3 Then
        StylusJump += 1
    Else
        StylusJump = 1
    End If
End Sub
Sorry, I didn't notice your post before. I guess you have the problem solved now that you've released a beta of Gesture Launcher?
Anyway, you don't really need the 10 last positions; in my code I used only 5 for the calculations and it still worked fine.
Nice thread, thanks for sharing.
Human-machine interface has always been an interesting subject to me, and the release of UltimateLaunch has sparked an idea. I am trying to achieve a certain look-and-feel interface - entirely using components that are Today-screen and UltimateLaunch compatible. Basically a clock strip with a few status buttons at the top, and an UltimateLaunch cube for the main lower portion of the screen - gesture left/right to spin the cube, and each face should have lists of info/icons scrolled by vertical gesture. I'm talking big chunky buttons here - tasks, calendar appts, (quick) contacts, music/video playlists - all vertical lists, one item per row, scrolling in various faces of the cube.
Done the top bit using rlToday for now. Set it to type 5 so scrollbars never show for the top section. All good. Cobbling together bits for the faces, but few of the apps are exactly what I want; some (like that new face contacts one) are pretty close, and being a bit of an armchair coder, I thought now would be a good opportunity to check out WM programming and see if I can't at least come up with a mockup of what I want, if not a working app.
I was wondering if anyone could advise me on whether I should bother recognizing gestures in such a way as this. Does WM not provide gesture detection for the basic up, down, left, right? Actually, all the stuff I have in mind would just require up/down scrolling of a pane - I was thinking that I may well not need to code gesture support at all, just draw a vertical stack of items, let it overflow to create a scrollbar, and then just use the normal WM drag-to-scroll feature (if it exists) to handle the vertical scrolling by gesture in the face of the cube. I would rather keep the requirements to a minimum (e.g. TouchFLO), both for dependency and compatibility reasons, so maybe doing the detection manually would be the way to go, I dunno.
Did you release source with the app launcher? A library maybe? I intend to go open source with anything I do, so if you are doing the same then I would love to have access to your working code.
Nice work man.
Impressive.

[Q] HW Button, Finger Gesture, Bluetooth & input panel for HTC HD2

Help...!! I'm new to WM development.
There are 4 questions I'm about to ask about the HTC HD2 running WM6.5:
1. How do I disable the hardware function of every HW button (such as the Home, Window, Talk, End and Back buttons) and handle their 'on press' events myself?
So far, I am only able to disable them using:
Code:
[DllImport("coredll.dll")]
private static extern bool UnregisterFunc1(KeyModifiers modifiers, int keyID);
[DllImport("coredll.dll", SetLastError = true)]
public static extern bool RegisterHotKey(IntPtr hWnd, // handle to window
    int id, // hot key identifier
    KeyModifiers Modifiers, // key-modifier options
    int key // virtual-key code
);
The buttons I can successfully disable are Volume Up, Volume Down, Back, and Talk.
There is no way I can handle or disable the Windows, Home and End buttons.
I have a trick to disable the Window button:
Code:
IntPtr hTaskBar = FindWindow("HHTaskBar", null);
EnableWindow(hTaskBar, false);
But after I lock and unlock the screen using the End button, the Window button is no longer disabled.
So far, some forums mention SHCMBM_OVERRIDEKEY and handling the message using Microsoft.WindowsCE.Forms.MessageWindow,
but I still don't know how to attach a MessageWindow to my foreground application.
2. I'd like my panel's finger-gesture behavior to work as well as a ListView or DataGrid, so I found a library that can help me:
code.msdn.microsoft.com/gestureswm/Release/ProjectReleases.aspx?ReleaseId=3146
Unfortunately, it makes the controls in my panel flicker badly. Do you have any other alternative?
3. To handle Bluetooth, I'm using:
Code:
[DllImport("BthUtil.dll")]
public static extern int BthGetMode(out BluetoothMode dwMode);
[DllImport("BthUtil.dll")]
public static extern int BthSetMode(BluetoothMode dwMode);
and the InTheHand.Net library to send data.
But since I'm starting with this device, I can't even deactivate the Bluetooth.
Can you help me with Bluetooth control and data communication?
4. Since I'm using the virtual keyboard as my input, how do I define what kind of input panel is shown?
For example, if my Textbox1 is focused, I want to show the numeric keypad.
But if Textbox2 is focused, I want to show the QWERTY keyboard.
Thank you very much.
PS: I'm using C# to develop the application. It would be very good if you could give C# examples for these questions.
I know this is a few months old, and I don't want to resurrect it per se - I'm not familiar with the rules as they relate to posting on these forums, but I've been viewing them forever, and as a member for a few months at least, I've got to say that disabling the END key is a must for the HTC HD2, especially for Android NAND. At present with my phone, and thousands of others, every time we press the END button, the screen stops being responsive. Funny thing is, if I NEVER press the END button, my screen works great. I end calls using the screen, I've installed 'Button Saviour' from the market to use the 'END' button virtually on-screen, and I wake the phone using any of the other 4 buttons, save for 'END', and I get no problems. It's the damned signal sent to the screen from the END button that causes the screen to disable and never re-enable the digitizer - I am just supposing these things, I'm in no way or shape a developer... Just something to think about?
This could be the end to thousands of HTC HD2 end-user issues relating to lockscreen lock-up in Android or otherwise...

[Q] GLSurfaceView and Soft Input

Hi,
I'm writing an OpenGL game and it requires the user to be able to type input for things such as high scores, saved games and whatnot.
The problem I'm having is that I can't get the soft input keyboard to display with the GLSurfaceView focused for input events.
Is there a trick to it? Is it even possible? Or do I have to draw my own keyboard with OpenGL?
I definitely do not want to show a separate activity with Android controls, because that would look really cheap and subtract from the game experience.
Any help would be appreciated.
Thank you.
Figured it out.
In the GLSurfaceView constructor I needed to set:
Code:
this.setFocusable(true);
this.setFocusableInTouchMode(true);
And then to show the keyboard:
Code:
((InputMethodManager) context.getSystemService(Context.INPUT_METHOD_SERVICE)).showSoftInput(view, 0);
And to hide:
Code:
((InputMethodManager) context.getSystemService(Context.INPUT_METHOD_SERVICE)).hideSoftInputFromWindow(view.getWindowToken(), 0)
Not difficult, but it just isn't documented anywhere, at least not the setFocusable() stuff.

[Guide][Tasker] Check cell network type

***Requires Root*** (unless someone can show me a way that doesn't)
Checks the cell network type, NOT DATA TYPE.
Background: I am a truck driver for three months a year during my summer break from university. I love streaming music, but I love Nexus phones (and my $30 a month truly unlimited everything from T-Mobile!). Because I am on T-Mobile, my data connection on the interstates SUCKS. So I went ahead and got a prepaid Verizon mobile hotspot with the 10GB a month plan to use while on the road. However, I go through large cities frequently, so using the hotspot all the time is not ideal. I came up with the idea to use Tasker to track which mobile network type I am connected to, and turn off wifi when I gain HSPA or LTE, or turn wifi on when I lose HSPA or LTE. Sounds simple, right? WRONG! The problem is that the only state context Tasker has is Mobile Data, which can be set to 2G, 3G, 3G-HSPA, or 4G. I thought this would be enough; however, once wifi is connected, the mobile data state becomes None. So instead, I had to figure out which string I needed from a dumpsys, output that string's value to a Tasker variable, and from there I could set If/Else statements. So let's begin:
First, I had to open up a terminal emulator and type in:
Code:
su
This will ask for Superuser access; simply choose allow always, and accept.
Next, type in:
Code:
dumpsys | grep DUMP
This will show you the different services you can view with dumpsys! Quite a bit of info there.
Now, from the helpful people over in the Tasker groups forum, I learned that network status is in
Code:
telephony.registry
So to only show that (instead of a huge long list of all the info), you will type in:
Code:
dumpsys telephony.registry
From here you can scroll through all the states. There will be a lot of values, but the one we are looking for is
Code:
mServiceState
In this case, I see it as follows (this will vary between phone and provider):
Code:
mServiceState=0 home T-Mobile T-Mobile 310260 HSPA CSS not supported -1 -1 RoamInd=-1 DefRoamInd=-1 EmergOnly=False
"That's cool. Now what do I do with it?"
I'll tell you! With this info, head over to Tasker and make a profile (I used a time profile that activates every two minutes). After you make your profile, name a new task (I named it Celltype). Now add an action: select Shell, then Run Shell. We've got some cool stuff here, right? Yes, we do. It is a good idea to simply type in the command:
Code:
dumpsys telephony.registry | grep mServiceState | awk '{print}'
and set your timeout (mine is 4 seconds). Check the Use Root box, then in the Store Output In field, type a new variable name (I used %STATE); don't use a variable that is already in use! Dandy! Now just hit the back button and add another action. This time, select Alert and pick one (I picked Popup, so that is what I am using in this guide).
Find the Text field and enter your variable (%STATE for me). Now just hit the back button, then hit the Play button in the lower left-hand corner. You should get a Superuser request; allow it. Then you should get a popup with the output of mServiceState, which for me is:
Code:
0 home T-Mobile T-Mobile 310260 HSPA CSS not supported -1 -1 RoamInd=-1 DefRoamInd=-1 EmergOnly=False
If you do not get a string like that, try changing '{print}' to '{print $0}' in the Command field in your Run Shell action.
Now find your network type. For me it is HSPA. Then count the position that the network-type text occupies in the mServiceState string. The fields are separated by spaces, so if I count mine, HSPA is in the sixth position (yours may vary). Did you notice what I had you put at the end of the dumpsys command in the Run Shell action:
Code:
awk '{print}'
Well, that prints the result out to wherever you tell Tasker to. You can do a lot of cool things with the print function, but I'm only going to cover what is relevant to this guide, which is the position identifier, $. In this case, since the text I want is in the sixth position, I will change the Command field in the Run Shell action to:
Code:
dumpsys telephony.registry | grep mServiceState | awk '{print $6}'
Yours may be in a different position; say it is in position three, then you would put '{print $3}' at the end of your Command field. Again, make sure Use Root is checked and Store Output In has your created variable typed in.
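(If the field position shifts between ROMs or providers, a hedged alternative is to scan for the network-type token by name instead of counting fields - this assumes you can list the types you care about up front:)
Code:
dumpsys telephony.registry | grep mServiceState | awk '{for(i=1;i<=NF;i++) if($i~/HSDPA|HSUPA|HSPA|LTE/){print $i; exit}}'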
Hit the back button to get back to the main task screen, and hit the Play button in the lower left-hand corner again. This time, in your popup, you should get the network type only. In my case, I got a popup with the text HSPA. If all goes well and you do get ONLY the network type, then congrats! If you didn't, go back and make sure everything is correct (the variable matches across actions).
If everything is dandy at this point, you can go ahead and delete the popup action (or keep it until you are all done with everything, like I did). Next, add a new action: select Task, then If. In the If action, type your created variable into the field on the left, then choose your matching method (mine is Matches) and type what you want it to match (or not match, etc.) in the field on the right (for me, I typed HSDPA/HSPA/HSPA+/HSUPA/LTE, the / meaning OR).
After that, you can add whatever sub-actions you want executed if your If statement is met. Once those are done, you can add an Else action and any sub-actions you want executed if your If statement is NOT met!
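(For reference, the whole check-and-toggle can also live in one rooted Run Shell action instead of If/Else sub-actions - a sketch, assuming the network type still sits in field 6 on your ROM; `svc wifi` is the stock command-line wifi switch:)
Code:
TYPE=$(dumpsys telephony.registry | grep mServiceState | awk '{print $6}')
case "$TYPE" in
  HSDPA|HSPA|HSPA+|HSUPA|LTE) svc wifi disable ;; # fast cell network: drop wifi
  *) svc wifi enable ;;                           # slow or unknown: back to wifi
esac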
I have attached my XML from Tasker (just unzip, then import the XML), so you can go ahead and check out my whole project, break it apart and turn it into your own, whatever. If you share it, just please mention me. Or don't :crying:
P.S. If I missed anything, let me know! However, I AM a truck driver right now, so I don't have very much time, but I will try to help if anyone needs it.
A truck-driving, college-studying, android-coder... Art Bell would be so proud.
:good:
Thanks, appreciate the tutorial. Interesting...
Thank you so much for these tips.
It's been very helpful for improving my 2G/3G switching profile.
Here's what I'm using now:
Profile Name: Display On + 3g
Events: Display Unlocked + Not Wifi Connected ***
My task: 3g Enable
1 - Send Intent
Action
action.intellig.CHANGE_NETWORK_TYPE
Extra
extra.intellig.NETWORK_TYPE:0
2 - Flash
Text 3g
3 - Wait
1 minute
4 - Run Shell
Command
dumpsys telephony.registry | grep mServiceState | awk '{print $7}'
Use Root checked
Store Output In %STATE
5 - If
Condition
%STATE ~ GSM
6 - Send Intent
Action
action.intellig.CHANGE_NETWORK_TYPE
Extra
extra.intellig.NETWORK_TYPE:2
7 - Flash
Text 3g Forced
Hey.
Amazing share, but I'm trying it and it doesn't work right now - on Lollipop, I think.
Or maybe I just can't get it working, I don't know.
Do you still use this task?

sendevent for input tap x y?

I'm struggling to build a Tasker routine to automate the USB audio setup with one screen tap.
To do that, I need a method for emulating screen taps (like for "off" and "host" and "close" in the USB Mode Utility, for example). I've already discovered that Eclair lacks the shell command "input tap x y". That would have been the easy route.
I'm not sure about "sendevent", and I'm even less sure about the syntax. Every resource I look at seems to be written for people who already know the answer. The typical syntax seems to be something like:
Code:
sendevent /dev/input/event2 x x x
I can see in the root directory that there is a dev folder, and in that an input folder. In there are listed 5 things:
21:14 event0
21:14 event1
21:14 event2
21:14 mice
21:14 mouse0
Does anyone know if "sendevent" is a valid shell command in Eclair and, if so, the functions of the events listed above?
Edit: well, I have a partial answer. Running an adb shell getevent I found the following:
event2 = zForce Touchscreen
event1 = gpio-keys [hardware, I assume?)
event0 = TWL4030 Keypad
It said it could not find a driver version for mice or mouse0 because it was not a typewriter (duh).
So it looks like event2 is what I need to deal with. Now if I only understood how. I know I need the screen coordinates where the touch is to be emulated
and I have an app for that.
As much as I love UsbMode, you don't need it.
For a script, you are better off just doing what it does yourself.
Code:
echo host > /sys/devices/platform/musb_hdrc/mode
echo peripheral > /sys/devices/platform/musb_hdrc/mode
echo 0 > /sys/devices/platform/bq24073/force_current
echo 500000 > /sys/devices/platform/bq24073/force_current
Wow, and I was so excited because I figured out how to use sendevent with Tasker to "press" the OFF button in the USB Mode Utility app today!
I really appreciate your response, Renate, so please bear with my lack of Android understanding. I can see that the first line is equivalent to tapping "host" (at least I hope that's what it is...). The second is how to get back to normal mode.
By extension, I am guessing that the third line is equivalent to "off" while the last line is equivalent to "auto". Right so far?
Now the most important question: is "echo" a shell command I can use? I looked it up and it appears to be; I just want to check before I try typing that into Tasker (not that it's half as bad as the 8 sendevents needed to "touch" the screen one time!).
And one last question: is there a similar command equivalent to the "beep" of AudioCTRL (i.e., to kickstart the audio)?
Edit: oh wait, this is it, isn't it: kill -9 19409 [that being the PID of mediaserver on my NST]
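(A hedged aside on that: a hard-coded PID like 19409 won't survive a reboot, since mediaserver gets a fresh PID each boot. If your shell has pidof - e.g. via busybox - you can look it up at runtime:)
Code:
kill -9 $(pidof mediaserver)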
Thanks for your help!
Woo-Hoo!!
The shell commands from Renate work great in a simple toggle Task. I just need to work on a few wait times and it's a done deal. One-touch USB Audio!
One question: the command "echo 500000 > /sys/devices/platform/bq24073/force_current" leaves the Max. current setting at 500 mA rather than "Auto". I'm guessing since there is nothing attached to the USB port anyway when you're all done that this is OK?
@nmyshkin
Values are 0, 100000, 500000, 1500000, auto for off, 100mA, 500mA, 1.5A, auto
Perfect! Thanks so much. I've got a little widget on my homescreen now that does the work behind the scenes! Still struggling with a shortcut that I could customize a little.
I looked at the App Creator for Tasker but see that it requires Android 2.3. I wonder if created apps would therefore be for 2.3 or up? If not, I'd install it on my Nook Tablet running CM 10.2, make an app and export it. That would be cool.
Edit: both completed. Tasker widget here, stand-alone apps here.
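(For anyone rebuilding that one-touch toggle, a minimal sketch as a single rooted script - the marker-file path is arbitrary, the write ordering follows the posts above, and the force_current values are from Renate's list:)
Code:
# Toggle USB audio; a marker file tracks our own state
if [ -f /data/local/usbaudio.on ]; then
  echo peripheral > /sys/devices/platform/musb_hdrc/mode
  echo auto > /sys/devices/platform/bq24073/force_current
  rm /data/local/usbaudio.on
else
  echo 0 > /sys/devices/platform/bq24073/force_current
  echo host > /sys/devices/platform/musb_hdrc/mode
  touch /data/local/usbaudio.on
fi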
Digging up an old thread here, but I'm trying to figure out a way to use an 'input tap' type event for my Nook Touch. I've got everything set up for a digital picture frame that can dynamically load images, but the only slideshow viewer I found to work doesn't start automatically; it loads a file-location menu first, and I need to manually start the slideshow with a button press. Is there an 'input tap' equivalent that will work with the Nook?
Figured it out. The adb shell command getevent will return a series of events when you touch the screen (make sure it is a simple touch and not multiple points). Use these results (converting the numbers from hex to dec) as the commands; in my case the correct sequence was:
Code:
sendevent /dev/input/event2 3 0 509
sendevent /dev/input/event2 3 1 58
sendevent /dev/input/event2 1 330 1
sendevent /dev/input/event2 0 0 0
sendevent /dev/input/event2 1 330 0
sendevent /dev/input/event2 0 0 0
Obviously it will be different for you, but the general sequence is: x coordinate, y coordinate, touch screen, blank, release touch, blank.
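(For reference, those numbers are standard Linux input-event codes: type 3 is EV_ABS, with code 0 = ABS_X and code 1 = ABS_Y; type 1 with code 330 is EV_KEY/BTN_TOUCH, value 1 for press and 0 for release; and 0 0 0 is the EV_SYN report that commits each packet - the "blanks" above.)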
And it works (I'm using a series of Tasker adb shell commands)!
I don't know which viewer you are using (or even anything about them), but I'll bet that it can take a path as data in the actuating Intent.
Then you'd only need something like:
Code:
am start -n com.neatoh.viewer/.Viewer -e Path /MyPhotos
No, these are all hypothetical values.
