I recently saw this thread:
http://forum.xda-developers.com/showthread.php?t=370632
I liked the idea, and when I thought about it, gesture recognition didn't seem too hard. And guess what - it really wasn't hard.
I made a simple application recognizing gestures defined in an external configuration file. It was supposed to be a gesture launcher, but I haven't figured out how to launch an app from a WinCE program yet. Also, it turned out to be a bit too slow for that because of the way I draw gesture trails - I'd have to rewrite it almost from scratch to make it really useful, and I don't have time for that now.
So I decided to share the idea and the source code, just to demonstrate how easy it is to include gesture recognition in your software.
My demo app is written in C, using XFlib for graphics, and compiled with CeGCC, so you'll need both of them to compile it (download and installation instructions are on the XFlib homepage: www.xflib.net).
The demo program is just an exe file - extract it anywhere on your device, no installation needed. You'll also need to extract gestureConfig.ini to the root directory of your device, or the program won't run.
Try some of the gestures defined in the ini (like the 'M' letter - 8392, a rectangle - 6248, a triangle - 934), or define some of your own to see how the recognition works. Make sure each gesture line is a string consisting of numbers, then a space or a tab (or more of them), and then some text - anything will do, just make sure there's more than just the numbers in each line. Below the gestures you can set the side sensitivity to tweak recognition (see the rest of the post for a description of how it works). Better leave the other parameter as it is - it seems to work best at this value.
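For illustration, the gesture lines could look like this (just a sketch using the gestures mentioned above - check the gestureConfig.ini shipped with the demo for the exact layout, especially of the parameter lines below the gestures):
Code:
8392    letter M
6248    rectangle
934     triangle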
Now, what the demo app does:
It recognizes the direction of drawn strokes and prints them at the bottom of the screen as a string of numbers representing them (described below). If a drawn gesture matches one of the patterns in the config file, the entire drawn gesture gets highlighted. It works best with a stylus, but is usable with a finger as well.
Clicking the large rectangle closes the app.
And how it does it:
The algorithm I used is capable of recognizing strokes drawn in eight directions - horizontally, vertically, and diagonally. Directions are described with numbers from 1 to 9, arranged like on a PC numeric keypad:
Code:
7 8 9
4 6
1 2 3
So a gesture defined in the config as 6248 is right-down-left-up - a rectangle.
All that's needed to do the gesture recognition is the last few positions of the stylus. In my program I recorded the entire path for drawing it, but used only the last 5 positions. The entire trick is to determine which way the stylus is moving, and if it moves one way long enough, store this direction as a stroke.
The easiest way would be to subtract the previous stylus position from the current one, like this:
Code:
vectorX=stylusX[i]-stylusX[i-1];
vectorY=stylusY[i]-stylusY[i-1];
But this method would be highly inaccurate due to noise generated by some digitizers, especially with screen protectors, or when using a finger (try drawing a straight line with your finger in some drawing program).
That's why I decided to calculate an average vector instead:
Code:
averageVectorX=((stylusHistoryX[n]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-1]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-2]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-3]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-4]-stylusHistoryX[n-5]))/5;
//Y coordinate is calculated the same way
where stylusHistoryX[n] is the current X position of the stylus, stylusHistoryX[n-1] is the previous position, and so on.
Such averaging filters out the noise without sacrificing too much responsiveness, and uses only a small number of samples. It also has another useful effect - when the stylus changes movement direction, the vector gets shorter.
Now that we have the direction of motion, we have to check how fast the stylus is moving (how long the vector is):
Code:
if(sqrt(averageVectorX*averageVectorX+averageVectorY*averageVectorY)>25)
(...)
If the vector is long enough, we have to determine which direction it's pointing. Since horizontal and vertical lines are usually easier to draw than diagonal ones, it's nice to be able to adjust the angle at which a line stops being considered horizontal or vertical and becomes diagonal. I used the sideSensitivity parameter for that (it can be set in the ini file; its range is from 0 to 100). See the attached image to see how it works.
The green areas in the images are the angles where the vector is considered horizontal or vertical; blue means angles where the vector is considered diagonal. The sideSensitivity values for those pictures are: left - 10, middle - 42 (the default value, works fine for me), right - 90. Using 0 would make horizontal and vertical strokes almost impossible to draw, and 100 would do the same to diagonal ones.
To make this parameter useful, some calculations are needed:
Code:
sideSensitivity=tan((sideSensitivity*45/100)*M_PI/180);
First, the range of the parameter is mapped from 0-100 to 0-45, giving the angle in degrees of the line dividing the right section (green) from the top-right one (blue); the default of 42, for example, gives 42*45/100 = 18.9 degrees. That angle is then converted to radians, and its tangent is calculated, giving the slope of that line (about 0.34 for the default).
Having the slope, it's easy to check whether the vector points sideways or diagonally. Here's the part of the source code that does the check; it is executed only if the vector is long enough (the condition shown above):
Code:
if( abs(averageVectorY)<sideSensitivity*abs(averageVectorX) ||
    abs(averageVectorX)<sideSensitivity*abs(averageVectorY)) //Vector is turned sideways (horizontal or vertical)
{
    /*Now that we know that it's facing sideways, we'll check which side it's actually facing*/
    if( abs(averageVectorY)<sideSensitivity*averageVectorX)    //Right gesture
        gestureStroke='6'; //storing the direction of the vector for later processing
    if( abs(averageVectorY)<sideSensitivity*(-averageVectorX)) //Left gesture
        gestureStroke='4';
    if( abs(averageVectorX)<sideSensitivity*(averageVectorY))  //Down gesture
        gestureStroke='2';
    if( abs(averageVectorX)<sideSensitivity*(-averageVectorY)) //Up gesture
        gestureStroke='8';
}
else
{   //Vector is diagonal
    /*If the vector is not facing sideways, then it's diagonal. Checking which way it's actually facing
      and storing it for later use*/
    if(averageVectorX>0 && averageVectorY>0) //Down-Right gesture
        gestureStroke='3';
    if(averageVectorX>0 && averageVectorY<0) //Up-Right gesture
        gestureStroke='9';
    if(averageVectorX<0 && averageVectorY>0) //Down-Left gesture
        gestureStroke='1';
    if(averageVectorX<0 && averageVectorY<0) //Up-Left gesture
        gestureStroke='7';
}
Now we have a character (I used the char type so I can store gestures as character strings - they can easily be loaded from a file and compared with strcmp()) telling which way the stylus is moving. To avoid errors, we have to make sure that the stylus moves in the same direction for a few cycles before storing that direction as a gesture stroke: increase a counter as long as the stylus keeps moving in one direction, and reset it when the direction changes. If the counter value gets bigger than some threshold (the pathSensitivity variable serves as this threshold in my program), we can append the gestureStroke value to a string - but only if it differs from the previous one, because who needs a gesture like "44444" when dragging the stylus left?
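Here's a minimal sketch of that filtering step (only gestureStroke and pathSensitivity come from my program - the other names, and the threshold value, are made up for illustration):
Code:
int pathSensitivity = 3;        /* threshold - the real value comes from the ini */

static char lastStroke = 0;     /* direction seen on the previous cycle */
static int  sameDirCount = 0;   /* how many cycles we've moved that way */
static char gestureString[64];  /* strokes collected so far */
static int  gestureLength = 0;

/* called once per cycle, after gestureStroke has been determined as above */
void storeStroke(char gestureStroke)
{
    if (gestureStroke == lastStroke)
        sameDirCount++;         /* still moving the same way */
    else
        sameDirCount = 0;       /* direction changed - start counting again */
    lastStroke = gestureStroke;

    /* moved the same way long enough, and it's not a repeat of the
       stroke we already stored - append it to the gesture string */
    if (sameDirCount > pathSensitivity &&
        (gestureLength == 0 || gestureString[gestureLength - 1] != gestureStroke) &&
        gestureLength < (int)sizeof(gestureString) - 1)
    {
        gestureString[gestureLength++] = gestureStroke;
        gestureString[gestureLength] = '\0';
    }
}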
After the stylus is released, you have to compare the generated gesture string to some patterns (e.g. loaded from a configuration file), and if it matches one, perform the appropriate action.
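A sketch of that final matching step, assuming the patterns and the text after them were loaded into two parallel arrays (illustrative names again - see the attached source for the real thing):
Code:
#include <string.h>

#define MAX_GESTURES 32

char patterns[MAX_GESTURES][16]; /* the digit strings, e.g. "6248" */
char actions[MAX_GESTURES][64];  /* the text after the digits on each ini line */
int  patternCount = 0;           /* how many lines were loaded */

/* called when the stylus is released; returns the matched index or -1 */
int matchGesture(const char *gestureString)
{
    int i;
    for (i = 0; i < patternCount; i++)
        if (strcmp(gestureString, patterns[i]) == 0)
            return i;  /* match - highlight the trail, run actions[i], ... */
    return -1;         /* no pattern matched */
}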
See the source if you want to see how it can be done - this post is already quite long.
If you have any questions, post them and I'll do my best to answer.
Feel free to use this method, parts of it, or the entire source in your apps. I'm really looking forward to seeing some gesture-enabled programs.
Very nice work. Reading your post was very insightful, and I hope this can provide the basis for some new and exciting apps!
Great app... and well done for not just thinking "that seems easy..." but actually doing it...
I've been a victim of that myself.
Very nice work, man. One question: in which tool did you write the code? I mean, it looks like C, but how did you test it and all?
Great app. I see that it is just a proof of concept at this stage, but I can see it being used for application control in the future.
Continue your great work.
nik_for_you said:
Very nice work, man. One question: in which tool did you write the code? I mean, it looks like C, but how did you test it and all?
It is C (no "++", no "#", no ".NET", just good old C), compiled with the open-source compiler CeGCC (works under Linux, or under Windows using Cygwin - a Unix emulator), developed in the open-source IDE Vham (but even Notepad, or better Notepad++, would do), and tested directly on my Wizard (without an emulator). I used XFlib, which simplifies graphics and input handling to a level where anyone who has ever programmed anything at all should be able to handle it - it provides an additional layer between the programmer and the OS. You talk to XFlib, and XFlib talks to the OS. I decided to use this library because I wanted to try it out anyway.
If I decide to rewrite it and make an actual launcher or anything else out of it, I'll have to use something with a bit faster and more direct screen access (probably SDL, since I've already done some programming for desktop PCs with it) - XFlib concentrates on the usage of sprites, like 2D console games. Every single "blob" of the gesture trail is a separate sprite, which has to be drawn each time the screen is refreshed - that is what slows down the app so much. The gesture recognition itself is really fast.
Very good program. I just tested it and it works very well. Some combinations are pretty hard to pull off, but I like the blue points turning red with command2 and 934. Good luck - I'll keep an eye on your work; maybe you'll code a very interesting piece of software.
Interesting work.... would like to see this implemented in an app, could be very useful.
If you want, I have some code I wrote for the NDS, which I ported to Pocket PC for XFlib.
It works perfectly well and I use it in Skinz Sudoku to recognize the drawn numbers.
The method is pretty simple: while the stylus is pressed, it stores the stylus coordinates in a big array. When it's released, it takes 16 points (could be changed depending on what you need) at the same distance from each other, checks the angles, and gives you the corresponding 'char'.
To add new shapes, it's just a 15-character string which you link to any char (like, link the right movement to 'r', or 'a', or a number, or whatever ^^). It works for pretty much any simple shape, and I even used it to do a Graffiti-like thing on the NDS which worked really well.
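Roughly, the idea in C (a simplified sketch with made-up names, not the actual Skinz Sudoku code):
Code:
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLES 16            /* 16 points -> 15 segments, one char each */

int pathX[1024], pathY[1024]; /* filled in while the stylus is pressed */
int pathLen;

/* classify one segment into the 1-9 keypad codes used earlier in this
   thread (screen Y grows downward, so a positive dy means "down") */
char direction(int dx, int dy)
{
    static const char codes[8] = {'6','3','2','1','4','7','8','9'};
    double a = atan2((double)dy, (double)dx);
    int oct = ((int)floor((a + M_PI / 8) / (M_PI / 4)) + 8) % 8;
    return codes[oct];
}

/* resample the path into SAMPLES points spaced evenly along its length,
   then turn each of the SAMPLES-1 segments into a direction char */
void classifyStroke(char out[SAMPLES]) /* SAMPLES-1 chars + terminator */
{
    int rx[SAMPLES], ry[SAMPLES];
    double total = 0, acc = 0;
    int i, o = 1;

    for (i = 1; i < pathLen; i++)
        total += hypot(pathX[i] - pathX[i-1], pathY[i] - pathY[i-1]);

    rx[0] = pathX[0]; ry[0] = pathY[0];
    for (i = 1; i < pathLen && o < SAMPLES; i++) {
        acc += hypot(pathX[i] - pathX[i-1], pathY[i] - pathY[i-1]);
        /* snapping samples to the nearest recorded point is fine here */
        while (o < SAMPLES && acc >= total * o / (SAMPLES - 1)) {
            rx[o] = pathX[i]; ry[o] = pathY[i]; o++;
        }
    }
    for (; o < SAMPLES; o++) { rx[o] = pathX[pathLen-1]; ry[o] = pathY[pathLen-1]; }

    for (i = 1; i < SAMPLES; i++)
        out[i-1] = direction(rx[i] - rx[i-1], ry[i] - ry[i-1]);
    out[SAMPLES-1] = '\0';
}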
Hey!
How do you get the last stylus positions?
And how often do you read them out?
I want to implement such code in VB.NET, but I don't know how to read out the last stylus positions so that they're suitable for such calculations.
Code:
Private Sub frmGesture_MouseMove(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles MyBase.MouseMove
    Dim iCount As Integer
    If StylusJump = 1 Then
        StylusJump += 1
        If (CurrentStylusPosition.X <> frmGesture.MousePosition.X) Or (CurrentStylusPosition.Y <> frmGesture.MousePosition.Y) Then
            'Shift the history one step: 9 gets 8's value, 8 gets 7's, and so on
            For iCount = 9 To 2 Step -1
                LastStylusPosition(iCount).X = LastStylusPosition(iCount - 1).X
                LastStylusPosition(iCount).Y = LastStylusPosition(iCount - 1).Y
            Next
            LastStylusPosition(1).X = CurrentStylusPosition.X
            LastStylusPosition(1).Y = CurrentStylusPosition.Y
            CurrentStylusPosition.X = frmGesture.MousePosition.X
            CurrentStylusPosition.Y = frmGesture.MousePosition.Y
        End If
        Dim LabelString As String
        LabelString = "C(" & CurrentStylusPosition.X & "\" & CurrentStylusPosition.Y & ")"
        For iCount = 1 To 9
            LabelString = LabelString & " " & iCount & "(" & LastStylusPosition(iCount).X & "\" & LastStylusPosition(iCount).Y & ")"
        Next
        lblGesture.Text = LabelString
    ElseIf StylusJump <= 3 Then
        StylusJump += 1
    Else
        StylusJump = 1
    End If
End Sub
Sorry, I didn't notice your post before. I guess you have the problem solved by now, since you released a beta of the gesture launcher?
Anyway, you don't really need the last 10 positions; in my code I used only 5 for the calculations and it still worked fine.
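By the way, you don't have to shift every entry by hand on each move - a small circular buffer gives you the same "last N positions" much more cheaply. A quick sketch in C (illustrative names; the same idea works in VB.NET using Mod):
Code:
#define HISTORY 5   /* 5 positions were enough in my app */

int histX[HISTORY], histY[HISTORY];
int newest = 0;     /* index of the most recent sample */

/* call this from the mouse/stylus move handler */
void pushPosition(int x, int y)
{
    newest = (newest + 1) % HISTORY;
    histX[newest] = x;
    histY[newest] = y;
}

/* the position from 'ago' samples back; ago = 0 is the current one */
int getX(int ago) { return histX[(newest - ago + HISTORY) % HISTORY]; }
int getY(int ago) { return histY[(newest - ago + HISTORY) % HISTORY]; }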
Nice thread, thanks for sharing.
Human-machine interfaces have always been an interesting subject to me, and the release of UltimateLaunch has sparked an idea. I am trying to achieve a certain look-and-feel interface - entirely using components that are Today screen and UltimateLaunch compatible. Basically a clock strip with a few status buttons at the top, and an UltimateLaunch cube for the main lower portion of the screen - gesture left/right to spin the cube, and each face should have lists of info / icons scrolled by vertical gesture. I'm talking big chunky buttons here - tasks, calendar appointments, (quick) contacts, music/video playlists - all vertical lists, one item per row, scrolling in various faces of the cube.
Done the top bit using rlToday for now. Set it to type 5 so scrollbars never show for the top section. All good. I'm cobbling together bits for the faces, but few of the apps are exactly what I want - some (like that new face contacts one) are pretty close - and, being a bit of an armchair coder, I thought now would be a good opportunity to check out WM programming and see if I can't at least come up with a mockup of what I want, if not a working app.
I was wondering if anyone could advise me on whether I should bother recognizing gestures in a way like this. Does WM not provide gesture detection for the basic up, down, left, right? Actually, all the stuff I have in mind would just require up/down scrolling of a pane - I may well not need to code gesture support at all: just draw a vertical stack of items, let it overflow to create a scrollbar, and then use the normal WM drag-to-scroll feature (if it exists) to handle vertical scrolling by gesture in the face of the cube. I would rather keep the requirements to a minimum (e.g. TouchFLO), both for dependency and compatibility reasons, so maybe doing the detection manually would be the "way to go", I dunno.
Did you release the source with the app launcher? A library maybe? I intend to go open source with anything I do, so if you are doing the same, I would love to have access to your working code.
Nice work man.
Impressive.
Hi there. I'm making an AR app and I couldn't figure out a way to draw on top of a camera view. I've created a custom SurfaceView class which uses Camera.startPreview and Camera.setPreviewDisplay to draw the real-time feed from the camera on it. Now I'm trying to render something on top of that. I have another SurfaceView with a painter thread running at 30 fps, drawing every frame on a Canvas obtained via SurfaceHolder.lockCanvas(). I put the two views in a FrameLayout like this:
Code:
Preview cv = new Preview(this.getApplicationContext());
Pano mGameView = new Pano(this.getApplicationContext());
FrameLayout rl = new FrameLayout(this.getApplicationContext());
setContentView(rl);
rl.addView(cv);
rl.addView(mGameView);
And sadly it doesn't work - it shows only the camera feed. If I switch the order like this:
Code:
rl.addView(mGameView);
rl.addView(cv);
The camera feed disappears and only the painter is visible...
How should I make it work?
Phew. Just to tell you I found a solution: add this line
Code:
getHolder().setFormat(PixelFormat.TRANSLUCENT);
in the initialization of your overlay view.
I've asked this elsewhere, but no one has been able to help me. I'm hoping the good people here at XDA will be able to provide some food for thought, if not a full resolution to my issue.
I have a layout.xml set for an activity, which is essentially a RelativeLayout containing several Views, including an EditText. Every View is set to layout_width="match_parent" except for 3 Buttons across the bottom and a single-line EditText with a Button on its right. Every View is set to layout_height="wrap_content", but the multiline EditText that I am concerned with uses relative anchors to align its top with the single-line EditText View above it, and its bottom with the Buttons below it.
Everything looks perfectly fine when the Activity launches, but when I touch the multiline EditText, because it is low on the screen, the IME keyboard completely covers it. For obvious reasons, it is unacceptable to use the soft keyboard to type into a View that is completely covered and not visible.
It's been suggested that I modify the Activity's windowSoftInputMode parameter in the Manifest, but neither adjustPan nor adjustResize creates the desired result. With adjustPan, the keyboard simply covers the Buttons and the multiline EditText. With adjustResize, the keyboard still completely covers the multiline EditText, but the Buttons actually shift up as if they were attached to the top of the keyboard.
The issue seems to be that the Views above the multiline EditText cannot be resized - a DatePicker, a TimePicker, a single-line EditText (adjacent to a Button), and then the multiline EditText in question.
I can't for the life of me figure out why the EditText completely gets covered despite the fact that it has focus.
Any suggestions/thoughts?
To close this thread up, I found out that minSdkVersion being set below "4" (it was "3" for me) causes some issues with the soft keyboard. I upped it to "4", and voila, it worked perfectly.
[Q] Beginner: Issues with canvas bitmaps, buttons, and layouts
Hi, I'm hoping people might help me with this more open-ended question. I'm fairly new to programming with little scripting experience.
My problem is that I'm cobbling together code from various separate examples, but when I try to combine them in my tests, they don't seem to work together.
Let me show you what my goals are for this experiment:
1) Create Button
2) Generate Images with Button click
3) Drag Images around
So far, I've been able to create images by drawing bitmaps on a canvas, and to create a working button with the easy drag and drop.
My specific problem is that I can't use the Design tool to create buttons if I use the canvas method of drawing bitmaps, because setContentView() needs to be set to the canvas view.
Here's my onCreate():
Code:
GenerateItem ourView;
Button genButton;
LinearLayout myLinearLayout;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    ourView = new GenerateItem(this);
    setContentView(ourView);
    myLinearLayout = (LinearLayout) findViewById(R.id.linearLayout1);
    genButton = new Button(this);
    genButton.setText("Generate");
    myLinearLayout.addView(genButton);
}
And the related XML from my layout:
Code:
<LinearLayout xmlns:android="somelink"
android:id="@+id/linearLayout1"
android:layout_width="wrap_content"
android:layout_height="wrap_content" >
</LinearLayout>
Now I have two questions:
1) Am I going about this problem the correct way, using a mix of Design layout tools and code-created images (canvas bitmaps)?
2) Can I use canvas bitmaps with an onTouchListener as a button?
Whoaness said:
2) Can I use canvas bitmaps with an onTouchListener as a button?
You should use an ImageView instead. A Canvas isn't a View, so it isn't managed by the Android system when an event occurs. A Canvas is used to draw things in a low-level way (custom views, for example).
I didn't understand the first question..
Sent from my Vodafone 875
Andre1299 said:
You should use an ImageView instead. A Canvas isn't a View, so it isn't managed by the Android system when an event occurs. A Canvas is used to draw things in a low-level way (custom views, for example).
I didn't understand the first question..
Sent from my Vodafone 875
You might have answered my first question. I'm not sure.
I was asking whether I should stick with the canvas methods and, with my limited scripting knowledge, draw a hitbox for the onTouch method to detect, so I can redraw the object at the finger's position.
I can easily drag ImageViews/ImageButtons around. I guess I have to figure out how to create ImageViews - I think I've seen a way to cast bitmaps into an ImageView, but it never worked out for me.
Whoaness said:
You might have answered my first question. I'm not sure.
I was asking whether I should stick with the canvas methods and, with my limited scripting knowledge, draw a hitbox for the onTouch method to detect, so I can redraw the object at the finger's position.
I can easily drag ImageViews/ImageButtons around. I guess I have to figure out how to create ImageViews - I think I've seen a way to cast bitmaps into an ImageView, but it never worked out for me.
Uhm... so, let me know whether I understood. In your project you have a button that renders an image (a bitmap) which is drawn with a canvas, but you want this image to be draggable.
Am I right?
Andre1299 said:
Uhm... so, let me know whether I understood. In your project you have a button that renders an image (a bitmap) which is drawn with a canvas, but you want this image to be draggable.
Am I right?
Yes, a draggable image button. Although I'm not sure it needs to be a button - I just thought button properties would be appropriate for touch and dragging.
Sorry for the late reply. I was on a hiatus.
Hi all, again.
Still in "beta", but it seems to work well enough to make it public - and it lets you use this format even more effectively than the vendors' own watch faces do.
What does "TBUI" mean? It's just a tag in the watch face (hereafter "wf") file that lets you identify it. You can find it at the end of all watch faces of this type. Just open one in any editor/viewer, go to the end of the file, and you'll see it:
(screenshot: the end of a wf file opened in a viewer, with the "TBUI" tag visible)
It's a very flexible format, perhaps the next version of "PUSH" (yep, a similar tag again). At least it looks similar, but it has more powerful abilities.
This format allows you to use, in any combination, a lot of elements such as:
- clock hands (both discrete and smooth)
- digital data: time markers (49 variants) and sensor data (48 variants)
- sequential frame-based elements (time, month, progress bars, rolled-up elements... - 30 items)
- localizable tags (Chinese/non-Chinese)
- animations (sequential frames)
- buttons that call device menu items
- internal wf configuration (the ability to enable/disable some groups of elements "on the fly")
And you are not limited to one item per type (if you want more).
But before we continue, to be honest, I must warn you:
WARNING - theoretically you may brick your device and may not be able to recover it without special knowledge. While I was investigating this format, that happened a few times due to wrong frame compression. That is solved now and must not happen again, but the chance is still not zero. For example, a few days ago I learned that a zero mirror point for clock hands may make your device freeze too. I added foolproof protection for exactly this issue, but I can't check all the possibilities, because there are over 150 available elements.
OK. If you haven't run away yet, let's continue.
Spoiler: a few screenshots of the interface
The editor was made to match the file format as closely as possible, so you must understand a few basic definitions:
- frame - just an image
- char set - required for elements that display strings. Technically, it's a group of assignments of frames to characters/symbols. If you use some char set for an element, then each char in the string to be displayed is replaced by its assigned frame. If a char has no assignment, it is skipped.
- ui element - a part of a wf that does something: displays some sort of data, or marks some region for some action.
- region - rarely used. If you are not planning to make a complex wf, you may just ignore it. Treat it as a "layer". It assigns a new coordinate origin for its child elements, but has no effect on the order of the global drawing queue, and does not clip elements drawn outside the region's area. Each region has its own "sets" - groups of elements that must be shown at once. Which set is shown can be configured by a special element class.
So the editor has 4 "subeditors": "UI" ("UI Items" and "Regions") and "Resources" ("Char Sets" and "Frames").
Main interface controls:
Horizontal control panel:
"X" - remove the element
"A" - add a new element after it
"U" - move up one position
"D" - move down one position
"+"/"-" - expand/collapse the element
Vertical control panel (for the UI items editor only):
"+"/"-" - expand/collapse the element
"H" - hide the element's branch
"L" - ignore the element's branch (it will be displayed, but ignored by the mouse)
Available classes:
Spoiler: Overlay
just a frame as is. used parameters:
Code:
Target ID - id of a frame
Parent Region ID - as named
Set ID - as named for region assigned in "Parent Region ID"
Spoiler: ClockHands
used parameters:
Code:
Target ID - id of the first frame in the chain, or the id of a single frame
X, Y - top left corner coordinates
X2, Y2 - coordinates of a mirror/rotate point
Parent Region ID - as named
Set ID - as named for region assigned in "Parent Region ID"
elements:
CHHoursHand16, CHMinutesHand16, CHSecondsHand16 - use 16 sequential frames, from "N" to "E", and work as discrete clock hands. All the other positions are produced by mirroring those frames across lines going through the mirror point (x2, y2).
CHHoursHand1, CHMinutesHand1, CHSecondsHand1 - use only one frame ("N"), rotated around (x2, y2). Work as discrete clock hands. Heavy CPU usage - reduce the frame size as much as possible.
CHHoursHandSmooth, CHMinutesHandSmooth, CHSecondsHandSmooth - use only one frame ("N"), rotated around (x2, y2). Work as smooth clock hands. Heavy CPU usage - reduce the frame size as much as possible.
Spoiler: Char2FrameTime
various elements that display parts of the time (hours, minutes, seconds, date, month, year, day of the week, in different variants) as numeric values
used parameters:
Code:
Target ID - id of a char set
X, Y - top left corner coordinates
X2, Y2 - used only for calendar table parts, as the size of a cell
Align - align of an element to x, y point
Parent Region ID - as named
Set ID - as named for region assigned in "Parent Region ID"
Params 1 - second field only - kerning (interval between chars)
Spoiler: Char2FrameStatistic
various elements that display sensor data (steps, heart rate, O2, battery level, etc., in different variants) as numeric values
used parameters:
Code:
Target ID - id of a char set
X, Y - top left corner coordinates
Align - align of an element to x, y point
Parent Region ID - as named
Set ID - as named for region assigned in "Parent Region ID"
Params 1 - second field only - kerning (interval between chars)
Spoiler: Localizable
elements that display various tags for the Chinese and non-Chinese localizations ("am/pm", "bpm", "o2", "km"/"ml", etc.)
each item must have two sequential frames: one for the "Chinese" locale, then one for the "non-Chinese" locale.
useless if you have no plans to make a Chinese variant.
used parameters:
Code:
Target ID - id of the first frame (the Chinese one)
X, Y - top left corner coordinates
Align - align of an element to x, y point
Parent Region ID - as named
Set ID - as named for region assigned in "Parent Region ID"
Spoiler: SequentialFrames
various elements that display some data as sequential frames - progress bars of some data, the month, etc.
used parameters:
Code:
Target ID - id of the first frame in the chain
X, Y - top left corner coordinates
Parent Region ID - as named
Set ID - as named for region assigned in "Parent Region ID"
Spoiler: Specialized
miscellaneous elements that are not assigned to any other class
used parameters:
Code:
Target ID - id of the first frame in the chain
X, Y - top left corner coordinates
Parent Region ID - as named
Set ID - as named for region assigned in "Parent Region ID"
Spoiler: OverlayGroup
not analyzed yet
Spoiler: Button
used to call internal watch menus, if they exist
used parameters:
Code:
X, Y - top left corner coordinates
X2, Y2 - width and height of a button
Parent Region ID - as named
Set ID - as named for region assigned in "Parent Region ID"
Spoiler: RegionSettings
selects the id of the set that must be shown in the corresponding region. Works like a button, but with some specifics.
used parameters:
Code:
X, Y - top left corner coordinates
X2, Y2 - width and height of a button
Region Type - how it is to be used - via the internal config or by tap
Params 1 - for "RSConfig" only. The first field is the count of selectable items
Params 2 - for "RSConfig" only. A list of frame ids to be used as thumbnails for the selection menu
Sorry for such a short manual - it's a bit hard to write all of this in a language that's not my first. You may study the original watch faces to learn how it works in detail, or ask me here or in PM.
~200 original watch faces for the DM50 (466x466) are available on my gdrive. You may use them as samples or just as templates/resources.
download link TBUIWFTool.zip
Spoiler: list of tested compatible devices
- DM50, Lemfo (466x466)
- HD11, Huadai (240x280)
- HK28 (368x448)
- HK46 (360x360)
- i20, Colmi (360x360)
- i30, Colmi (390?x390?)
- i31, Colmi (466x466)
- C60, Colmi (240x280)
- C80, Colmi (368x448)
- LA24, Linwear, Tiroki (360x360)
- LF26 Max, Lemfo (360x360)
- Vibe 7 Pro, Zeblaze (466x466)
- x7, Gejian (360x360)
Spoiler: not tested devices with TBUI wf
- Dizo Watch R, Realme (360x360)
- HK3Pro (360x360)
- L20, microwear (240x280)
- W3Pro+, XO (360x360)
All devices except the DM50 were tested by other users, not by me.
A watch face as a sample:
It was made by request (converted from PUSH 240x240 to TBUI 360x360), and because my watch has a 466x466 resolution it looks shifted; on 360x360 it will look fine.
If you tap and hold the display to get the watch face selector, you'll see a little gear icon at the bottom of the preview:
Tap it and you'll enter the internal config menu:
You can set up:
- showing/hiding the weather applet
- static or roll-up month/day elements
- battery level in 10% or 1% steps
- a discrete or smooth seconds hand
It also has a few buttons in regular mode:
- weather (tap the weather applet)
- data (tap the steps)
- heart rate (tap the heart or its numbers)
- timer (tap the center of the clock)
Very good
I tried running the tool in a clean Windows 11 VM, and it won't start. Does it require a runtime or framework (e.g. .NET, Python) to be installed?
EDIT: Never mind - Windows had blocked the file. If anyone else has this problem: right-click -> Properties -> Unblock. Windows does not give a helpful error message when it fails to run. FYI, I also got a hit in Windows Defender, which immediately deleted the file when I first downloaded it. I scanned it with Kaspersky online and it appears to be clean, so it looks like a false positive, but that might have something to do with why Windows is making it so hard to run.
danjayh said:
...Does it require a runtime or framework (e.g. .NET, Python) to be installed?
...
.NET 4.7.2 only.
danjayh said:
...FYI, I also got a hit in Windows Defender, which immediately deleted the file when I first downloaded it...
I use it on Win 10 and have never seen anything like that; I've also never heard about this issue (the tool has been shared on another forum since March).
Also, it's not finished and will be updated - maybe even more than once.
Great work... But how can I upload it to the Colmi i30?
Thanks!
EDIT: Learned how to upload it, and everything is OK - thanks again!
Is it possible to easily convert 466px to 390px? Or do I need to export all the frames, resize them, and import them back?
Jean-DrEaD said:
Is it possible to easily convert 466px to 390px? Or do I need to export all the frames, resize them, and import them back?
No. Resizing in ".net" is... not of good quality. You should resize the frames with external specialized tools. I use XnView, for example - it has good enough algorithms and batch processing.
With the next build you'll be able to remove all frames in one click (already implemented in the debugging build). I'll also integrate a local browser (at first) for viewing the graphical resources of local watch face files, and for exporting them or importing them into the current watch face.
Right now the debugging build also supports the PUSH format for viewing (editing is not planned - it's just a source of images). A public build should be available this week.
Some other formats, and a web browser, are also planned.