So I'm in the preliminary stages of an experimental HCI project for a capstone course. I have no Android (Java) coding experience, but I do have a background in C++, Python, and Qt, so I'm hoping to pick up Java relatively fast.
Anyway, the idea of my project relies on taking accelerometer readings to determine device tilt on the X and Y axes (as documented in the SensorEvent section of the Android Reference guide) and piping them into an app such as Google Maps to perform a certain action (in this case I'm trying to pan the map).
I know there are games out there that use the accelerometer to determine tilt (e.g. Minion Rush when collecting bananas on the unicorn), so I'm hoping it's well documented.
Is it possible to make an app that's a "skeleton" over Google Maps, so that when it detects certain orientations of the phone it executes the appropriate function, e.g. panning in the direction the phone is tilted? It looks like there's the CameraUpdateFactory.scrollBy(float, float) function to pan the map, which takes two floats as arguments; can I take the accelerometer readings (X and Y) and plug them into this function? I'm somewhat familiar with Qt signals and slots. Is there something like that in Android app development, so that when the phone is tilted past a certain angle it emits a signal that can be captured and fed into the above function?
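To make the question concrete, here's roughly what I imagine (a minimal sketch, assuming a GoogleMap named map is already set up elsewhere; the threshold, scale factor, and signs are guesses that would need tuning):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import com.google.android.gms.maps.CameraUpdateFactory;
import com.google.android.gms.maps.GoogleMap;

// Listener interfaces are Android's rough analogue of Qt signals/slots:
// the framework "emits" onSensorChanged, and this callback is the "slot".
public class TiltPanListener implements SensorEventListener {
    private static final float TILT_THRESHOLD = 1.5f; // m/s^2; assumed dead zone
    private static final float PAN_SCALE = 10f;       // pixels per m/s^2; needs tuning

    private final GoogleMap map; // assumed to be initialized elsewhere

    public TiltPanListener(GoogleMap map) {
        this.map = map;
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
        float x = event.values[0]; // gravity component along the device's X axis
        float y = event.values[1]; // gravity component along the device's Y axis
        // Only pan once the tilt passes the dead zone, then feed the
        // readings into scrollBy as pixel offsets.
        if (Math.abs(x) > TILT_THRESHOLD || Math.abs(y) > TILT_THRESHOLD) {
            map.moveCamera(CameraUpdateFactory.scrollBy(-x * PAN_SCALE, y * PAN_SCALE));
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
}
```

I assume you'd register it in onResume with sensorManager.registerListener(listener, sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_GAME) and unregister in onPause.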
Also, can anyone point me to any HCI research papers that deal with this topic, or keywords to search for?
Howdo,
I'm just going to try to build my first gesture program in Visual Studio 2008 using VB. Can anyone give me some pointers on how to get this going? I've been looking at InkCanvas, but I'm not sure if this is right.
Many thanks
TheNecroscope said:
Howdo,
I'm just going to try to build my first gesture program in Visual Studio 2008 using VB. Can anyone give me some pointers on how to get this going? I've been looking at InkCanvas, but I'm not sure if this is right.
Many thanks
Hey, I would like to learn that too...
Google is your friend:
http://www.vbaccelerator.com/home/NET/Code/Libraries/Windows_Messages/Mouse_Gestures/article.asp
Fantastic! Looks like a good starting place. Out of interest, is there much advantage to using C# compared to VB?
Just found this as well
http://www.codeproject.com/KB/mobile/MouseGestures.aspx
I'm currently finishing a simple proof-of-concept application launcher using simple stylus gestures (no neural nets, just eight directions supported, somewhat like the Firefox mouse gestures plugin). Right now it works quite well, and the code is easy to port to any language as long as you can record the last few locations of the stylus.
I'll post it as soon as I find out how to launch an application from the C level. The simplest way, system(), doesn't seem to work.
Thank you!
I look forward to seeing it!
As promised:
http://forum.xda-developers.com/showthread.php?t=374375
That's just a proof of concept; the included program doesn't do anything but recognize gestures as you draw them and react if the drawn gesture matches one of the patterns defined in the config file (I still haven't had time to find out how to make it launch an app).
The demo application is in C, but once you understand the method behind it, it should be easy to port to any programming language. Of course it's probably not the best way to do it, and it's definitely not the only way, but it's simple to implement (no neural networks required), supports eight directions (including diagonals, as opposed to the methods described in the links above), and a gesture can consist of many strokes (a single gesture may be drawn as: left, up, up-right, down-right, down, up-left, right...). There's still a lot of room for optimization, but it's a starting point...
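A rough port of the idea to Java might look like this (not the actual demo code, which is in C; the segment length is an assumed jitter filter): classify each movement segment into one of eight compass directions and collapse repeats, so a drawn gesture becomes a short direction string.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal eight-direction gesture recognizer: each stylus movement segment
// is snapped to the nearest 45-degree direction; consecutive identical
// directions are collapsed, so a gesture becomes a string like "L,U,UR".
public class GestureRecognizer {
    // Direction names, counter-clockwise from "right" in 45-degree steps.
    private static final String[] DIRS = {"R", "UR", "U", "UL", "L", "DL", "D", "DR"};
    private static final int MIN_SEGMENT = 10; // ignore jitter below this many pixels

    private final List<String> sequence = new ArrayList<>();
    private int lastX, lastY;

    public void start(int x, int y) {      // stylus down
        sequence.clear();
        lastX = x;
        lastY = y;
    }

    public void move(int x, int y) {       // stylus moved
        int dx = x - lastX, dy = y - lastY;
        if (dx * dx + dy * dy < MIN_SEGMENT * MIN_SEGMENT) return; // too short
        // atan2 gives the segment's angle; screen Y grows downwards,
        // so negate dy to get the conventional math orientation.
        double angle = Math.atan2(-dy, dx);
        int sector = ((int) Math.round(4 * angle / Math.PI)) & 7; // snap to 45 degrees
        String dir = DIRS[sector];
        // Collapse consecutive identical directions into one step.
        if (sequence.isEmpty() || !sequence.get(sequence.size() - 1).equals(dir)) {
            sequence.add(dir);
        }
        lastX = x;
        lastY = y;
    }

    // The finished gesture, e.g. "L,U,UR,DR,D".
    public String finish() {               // stylus up
        return String.join(",", sequence);
    }
}
```

Matching is then just comparing the finished string against the patterns defined in the config file.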
That's really smart! Well done, thanks for coming back to me! I will investigate it over the weekend. Thanks also for the information on the compiler; that's also useful to know!
Hi all,
Since the Touch Diamond has accelerometers on the X, Y and Z axes, is it not possible to try to create an Inertial Navigation System? For those who are not sure, an INS uses the accelerometers to assess the movement of the device. As long as the user states the start point and north orientation, do you think it's possible to create the software? If it is, then we could do without GPS; after all, aircraft have used INS for a long time now (albeit they also used a gyro). It's not as accurate as GPS, but it would certainly work as a navigation device.
Any comments or help in doing this?
Some GPS systems use inertial navigation for better positioning in tunnels or low-coverage areas.
The idea could be simple: a program that reads the GPS data, mixes it with the G-sensor data and (why not!) with the TMC data recovered from the FM radio, and publishes it all on a new virtual serial port.
But doing it is not so simple.
I don't think it's difficult to try; I already posted the idea on another forum: get the GPS data, mix it with the accelerometer data, and start counting speed changes forward and left/right. I think we can get a 600 m bonus when the GPS blacks out.
That's 600 m with simple formulas; complex ones give you much more.
Ikari said:
I don't think it's difficult to try; I already posted the idea on another forum: get the GPS data, mix it with the accelerometer data, and start counting speed changes forward and left/right. I think we can get a 600 m bonus when the GPS blacks out.
That's 600 m with simple formulas; complex ones give you much more.
The idea is simple, yes, but implementing such a system is a lot more difficult than one might imagine. It will only be accurate for a short distance before the errors become too large to be usable: position comes from integrating acceleration twice, so any constant sensor bias grows into a position error proportional to the square of time.
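To illustrate, here's a stripped-down one-axis sketch (it ignores orientation, gravity removal, and filtering, all of which a real INS needs):

```java
// Minimal dead-reckoning sketch: integrate acceleration twice to estimate
// displacement between GPS fixes. A constant accelerometer bias b becomes a
// position error of 0.5 * b * t^2, which is why the estimate degrades fast.
public class DeadReckoning {
    private double velocity;     // m/s along one axis
    private double displacement; // m since the last GPS fix

    // Call once per accelerometer sample; dt is the sample interval in seconds.
    public void update(double accel, double dt) {
        velocity += accel * dt;        // first integration: a -> v
        displacement += velocity * dt; // second integration: v -> s
    }

    // Reset whenever a fresh GPS fix arrives, so errors stop accumulating.
    public void onGpsFix(double gpsSpeed) {
        velocity = gpsSpeed; // re-anchor velocity to the GPS-derived speed
        displacement = 0;
    }

    public double getDisplacement() { return displacement; }
}
```

Even a modest 0.1 m/s² bias gives 0.5 × 0.1 × 60² = 180 m of position error after just one minute, which is roughly why only a short "bonus" distance is realistic.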
Android indoor map using an image: generating a different path from point A to point B depending on the destination input (no use of indoor navigation technologies)
I'm working on an Android app for my final-year project and I'm unsure how to solve this problem. I would like some advice to get me started in the right direction.
The project is to develop an Android app that outputs a direction path from point A to point B on an image representing a two-dimensional indoor map. The direction displayed will depend on the user's input (the destination). The starting point will always be the same location on the image. Once the user selects a specific object, the app should find its location on the map using a database and then draw the path from point A to point B.
I do not want to use any indoor navigation technology for this app. The location of each object on the map should be predefined and stored in a database.
The part I am really unsure about is how to do the following in an Android app: how to predefine points on an image, and how to use that predefined data to display the path to take on the image representing the indoor map. I have been advised to use SVG (Scalable Vector Graphics); I have found androidVG for using SVG with Android, but I haven't found much information on it.
I am currently clueless about which language and techniques to use to implement the above feature on Android.
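To make the question concrete, this is the kind of thing I imagine as an alternative to SVG: waypoints predefined in image-pixel coordinates (here hard-coded; in the real app they would come from the database) and a custom View layered over the map image that draws the path. A hypothetical sketch:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

// Transparent overlay placed on top of the indoor-map ImageView.
// Waypoints are {x, y} pairs in the map image's pixel coordinates.
public class RouteOverlayView extends View {
    private final Paint paint = new Paint();
    private float[][] route; // the path to draw, or null for none

    public RouteOverlayView(Context context) {
        super(context);
        paint.setColor(Color.RED);
        paint.setStrokeWidth(6f);
        paint.setStyle(Paint.Style.STROKE);
    }

    // Called after the destination has been looked up in the database.
    public void showRoute(float[][] waypoints) {
        route = waypoints;
        invalidate(); // trigger a redraw
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (route == null || route.length < 2) return;
        Path path = new Path();
        path.moveTo(route[0][0], route[0][1]);
        for (int i = 1; i < route.length; i++) {
            path.lineTo(route[i][0], route[i][1]);
        }
        canvas.drawPath(path, paint);
    }
}
```

Computing the waypoints themselves would then be a separate graph-search problem (e.g. Dijkstra over a hand-made node graph of corridors), independent of how the path is drawn.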
Questions:
1) What general advice would you give me on how to effectively tackle this problem, given the information I've provided?
2) Was I correctly advised when I was given SVG as one of the technologies to use for the development of this app? If I was, does anybody have any more information on using SVG with Android?
3) If there is a better way to solve this problem, which language or technology should be used?
I really appreciate all the help provided in this community. Hopefully I have been clear enough; I'm hoping to have my questions answered.
Thank you!
Hi there!
I'm looking for a library for Android to recognize objects in real time. It must (in order of importance):
- work with Android 5.0.2 (HTC One),
- be free, or have a trial with the capabilities below available indefinitely,
- not only recognize objects, but also their size,
- recognize objects about half a meter square from about 100 m away,
- be as accurate as possible,
- work as fast as possible,
- be as easy to use as possible.
I found a few libraries, e.g. Catchoom and Vuforia, but I don't know which one fits me best. Please help!
Leftismer said:
Hi there!
I'm looking for a library for Android to recognize objects in real time. It must (in order of importance):
- work with Android 5.0.2 (HTC One),
- be free, or have a trial with the capabilities below available indefinitely,
- not only recognize objects, but also their size,
- recognize objects about half a meter square from about 100 m away,
- be as accurate as possible,
- work as fast as possible,
- be as easy to use as possible.
I found a few libraries, e.g. Catchoom and Vuforia, but I don't know which one fits me best. Please help!
I do not think any library can help as-is. You have very strict requirements and you will need to do a lot of research to meet them... Maybe it is an unreachable goal, especially if you need to measure sizes without any special "marks" close to the objects. So it looks like you need to use the NDK anyway, use some library like OpenCV, and find good developers (with strong math skills).
Thanks for the answer, I'll try to do something with that OpenCV library. But you got me a little wrong: a library doesn't have to meet all of my requirements, but it would be really good if it did. In fact, it could meet only the first two requirements (instead of the HTC One it may work with Android 4.1.1 on an HTC Desire X, but I don't think there's any library that would work on the Desire X but not on the One).
What objects are you trying to recognize?
The Google Play Services come with a Face Detection feature, which is pretty accurate. Another solution would be to look into the Cloud Vision API, also offered by Google (it might not be fast enough for your use case though ...).
The objects to recognize are road signs. And thanks for the information!
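For the record, here's the kind of first pass I'm imagining with OpenCV's Java bindings (the HSV thresholds and size filters are guesses; a real detector would need a trained classifier on top of this):

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

// Rough sketch: isolate red regions in HSV and keep roughly square/circular
// blobs as road-sign candidates for a later classification stage.
public class SignDetector {
    public static List<Rect> findRedSignCandidates(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // Red wraps around hue 0 in HSV, so threshold two ranges and combine.
        Mat lower = new Mat(), upper = new Mat(), mask = new Mat();
        Core.inRange(hsv, new Scalar(0, 100, 100), new Scalar(10, 255, 255), lower);
        Core.inRange(hsv, new Scalar(160, 100, 100), new Scalar(180, 255, 255), upper);
        Core.bitwise_or(lower, upper, mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> candidates = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            double aspect = (double) box.width / box.height;
            // Signs are roughly square or circular; drop tiny blobs and odd shapes.
            if (box.area() > 400 && aspect > 0.7 && aspect < 1.4) {
                candidates.add(box);
            }
        }
        return candidates;
    }
}
```

Estimating the real-world size from the bounding box still needs the distance to the sign, which is exactly the part with no off-the-shelf answer, as said above.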
I have experience in VLSI, shell scripting (bash, Windows PowerShell) and basic programming languages like C and Python (MATLAB too, which uses a similar syntax).
I want to get into app development, but I am completely new to this and unfamiliar with the whole structure, especially the I/O and rendering libraries. I mainly want to develop for Windows 11 (but it would be nice if I could make it cross-platform, say by using something like Vulkan as the rendering library).
I wish to make a stylus-based note-taking app (similar to OneNote, Drawboard PDF, etc.), so I want to know about the libraries available for taking input from the Surface Pen (or other pens). I found that there are a few APIs available for Windows (RealtimeStylus, Windows Ink, etc.) but I am unable to find anything cross-platform. I would like to know if there are open-source or cross-platform alternatives. Alternately, I would like to know if it is possible to bypass these and create a custom API myself (including my own algorithms for tracing curves and predicting handwriting; at present I am left to use whatever was done in these APIs, I think), possibly with lower latency within the app. To some extent I realized that pen position is very similar to trackpad input (on the data-input-to-PC side), and then there is tilt and pressure-sensitivity data, which I'm not sure how to access and use. I remember reading a little about libsdl some time ago, and I would like to know if there are alternatives to libsdl, or if Vulkan works with any alternative libraries.
I would also like to know how to write a program that works on both x64 and AArch64 on Windows 11 (not 32-bit, as I believe my tool will use more than 4 GB of RAM anyway), and, as mentioned above, it would be fantastic if I could make it cross-platform. What I got from this page ( https://docs.microsoft.com/en-us/windows/arm/ ) is that if I write my program in C++ it should be possible to compile it for both x64 and AArch64 (and make optimizations for each of them separately). I am not sure how the whole development environment works: what is .NET, what is Unity, what is Xamarin, and what are the differences between them? I found a few properties in .NET that help in rejecting certain inputs (could be useful for palm rejection etc.): ( https://docs.microsoft.com/en-us/do...s.uielement.isinputmethodenabled?view=net-5.0 ) ( https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.ishittestvisible?view=net-5.0 ). As far as I am aware, .NET is cross-platform. I might want to make instruction-level optimizations to the software (like SSE, AVX, or certain 64-bit instructions, if that gives any hint) and would like to know if the .NET environment/toolkit allows sufficiently low-level coding to access these. Also, I am curious whether it supports Vulkan or OpenGL. Vulkan is written in C++ and supports multiple platforms, so I am more inclined to try it.