Hello all,
I've written a simple audio player app and am testing it on a Samsung Galaxy S.
It is a simple app that can create a playlist and has some buttons to play MP3 files.
However, there's a funny bug in it.
After starting the app, whenever I rotate the phone, another instance of the app is launched. If I rotate again, a third instance is started, so you can hear three copies of the same audio playing simultaneously.
What could be wrong with it?
Is there some code which can prevent program re-entry?
Thanks.
It's not a bug; that's the way the Android OS works. You need to account for this in onCreate() for your activity. I assume you are spawning off the actual playing of the MP3 in a thread or service. You need to check in onCreate() whether the service is already running and attach to it, else spawn a new one.
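Something along these lines (just a rough sketch - PlayerService and R.raw.song are placeholder names, not from your app):
Code:
import android.app.Service;
import android.content.Intent;
import android.media.MediaPlayer;
import android.os.IBinder;

// Rough sketch: a service that owns the MediaPlayer, so recreating the
// activity on rotation never creates a second player. PlayerService and
// R.raw.song are placeholder names.
public class PlayerService extends Service {
    private MediaPlayer player;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (player == null) {                 // only ever create one player
            player = MediaPlayer.create(this, R.raw.song);
            player.start();
        }
        return START_STICKY;                  // restart the service if it gets killed
    }

    @Override
    public void onDestroy() {
        if (player != null) {
            player.release();
            player = null;
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;                          // no binding needed for this sketch
    }
}
The activity then calls startService(new Intent(this, PlayerService.class)) in onCreate(); a rotation-triggered onCreate() just hits the null check instead of starting a second copy of the audio.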
Hello Gene,
How do I check if an activity is already running?
I could not find the answer on the Android developer pages (the topic on application fundamentals).
And no, I haven't converted it to a service yet, as this is my first app.
I haven't explored services yet.
Thanks.
I read in another article that a simple way to prevent this is to override the onConfigurationChanged() method:
@Override
public void onConfigurationChanged(Configuration newConfig) {
//ignore orientation change
super.onConfigurationChanged(newConfig);
}
and modify the AndroidManifest.xml:
<activity android:name="selectCategories" android:configChanges="orientation|keyboardHidden"></activity>
But it still launches multiple instances of the app.
Thanks.
This is a good question and should be in an FAQ somewhere.
As already mentioned, changing the display orientation basically restarts your app. Read the Android dev page on Activity lifecycle for more info.
A short answer to your question: the Bundle parameter passed to onCreate() will be null when your app is first launched. When the activity is destroyed and recreated (for example on an orientation change), that Bundle will be non-null. You can store data in it by overriding onSaveInstanceState(), then check for that data in onCreate(). It's worth learning how to save and restore app data this way. Once you start testing apps by rotating the display at various times, you'll find a lot of them force close (FC) at unexpected places.
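A minimal sketch of what that looks like inside the activity (startPlayback() and updateButtons() are just placeholder helpers, not real API calls):
Code:
@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putBoolean("isPlaying", true);   // remember that playback has already started
}

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    if (savedInstanceState == null) {
        // first launch: safe to start playback
        startPlayback();
    } else {
        // recreated (e.g. after rotation): playback is already running,
        // just restore the UI from the saved state
        boolean playing = savedInstanceState.getBoolean("isPlaying", false);
        updateButtons(playing);
    }
}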
This is indeed a topic that keeps surprising people who are new to Android (ahem, like me 4-5 months ago).
The solutions above are perfect; however, in certain situations there's another trick that might make your life much easier. I suspect it won't help you in this case, but it might help others who tackle the problem and see this thread.
Certain applications, mainly games, should be fixed in a single orientation. I.e. you won't be playing Angry Birds in portrait - that game is locked to landscape, as it should be. In order to lock your activity to a certain orientation you add this attribute to the manifest under the activity tag. So a standard activity might look something like this, combined with the ignore tag from the previous posts:
Code:
<activity android:screenOrientation="landscape" android:configChanges="orientation|keyboardHidden"
android:name="whatever" />
The great thing about this combination is that your activity will not restart at all when the phone is rotated. Of course, for a standard activity you'd usually want to support both landscape and portrait, but if you need your app to be in a fixed orientation this is the way to go - no weird FCs or annoying bugs.
Hi,
How do I check the bundle?
And also, what should I do if the bundle is not null? Just return?
Thanks.
regards
r_p_ang said:
This is a good question and should be in an FAQ somewhere.
As already mentioned, changing the display orientation basically restarts your app. Read the Android dev page on Activity lifecycle for more info.
A short answer to your question: the Bundle parameter passed to onCreate() will be null when your app is first launched. When the activity is destroyed and recreated (for example on an orientation change), that Bundle will be non-null. You can store data in it by overriding onSaveInstanceState(), then check for that data in onCreate(). It's worth learning how to save and restore app data this way. Once you start testing apps by rotating the display at various times, you'll find a lot of them force close (FC) at unexpected places.
I have a user of my app who is having a problem running it. My code launches another activity in the same app, and he is saying it is stopping before it should & returning to the previous activity, and he doesn't see any Force Close warnings.
I have run my code in the emulator & on my phone, and I can't reproduce the error. We both run Android 2.2 on our phones; his is an HTC EVO & mine is an HTC Wildfire. As far as I can tell his specs are better than mine, so that shouldn't cause an issue - I deliberately chose a low spec for my dev work so the code ought to run on anything.
As a bit of an Android dev noob (but been coding for years), is there any easy way I can make a special build of the app to send to him that would log any errors that happen? I'd like to get a stack dump as well if possible, as I'm not sure exactly what routine in the activity it's crashing out in. The activity that crashes is a Gallery with 9 images in it; he can't flick through them or select one. I'm stumped as to what's causing it, so any assistance would be gratefully received.
Thanks.
Why not point to your app and let others here try it on their phones? It could simply be other apps installed on his phone interfering with your app.
Long time programmer here too, and when I get to where you're at (and I'm sure you've put some hours into this LOL), I go back to STEP 1.
I comment out any and all code but the bare minimum; break it down to the Intent, startActivity and maybe a Toast message in the second activity. Even pare down your XML files to the bare minimum.
See if that works. Then, ADD BACK ONE LINE OF CODE AT A TIME. Run the program and make sure it works. Yeah, it's painful, but in my 20 years of coding, I've learned to put my pride aside and to not "pretend" all the code I've written is correct.
Sometimes on bigger projects, I'll change or add a couple of lines of code, run a backup and test. Rinse and repeat LOL. That way, I know I'm only a couple of lines of code from what "used" to work.
Good Luck!
Thanks both of you.
old_dude - It's a paid app. Only £0.99, but I don't think people would pay to help me. There is a free version of the same app (with less functionality) that this guy can get to work. If you're really interested, the 2 versions are -
Plink Log - Free Version
Plink Log Pro - Paid version
Rootstonian - agreed, that's the approach I'd normally take if I was having problems on my dev phone or the emulator. The problem is that it's OK on my HTC Wildfire/Android 2.2, but on this guy's HTC EVO/Android 2.2 it's having problems. I don't really want to keep sending him .apks with 1 or 2 lines extra enabled just to see if that fixes his specific issue. I was hoping there was something I could code to catch whatever crashes the activity & log it somewhere for me to analyse. When I do PC dev work, I have a global exception handler that catches anything I don't explicitly handle and dumps the full call stack into a log file I can read later.
I think I'll just have to take the existing app & put loads of debug code into it to save messages into a log file, see which bits of code are being called & which aren't, and then get him to email me the results.
Thanks for the ideas guys, it's always useful to get input from another perspective.
Dave
Hmmmm, just discovered setDefaultUncaughtExceptionHandler - might be able to use that with printStackTrace. Sounds interesting.
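Something like this rough sketch should do it (LoggingApplication and crash.log are names I've made up; the class would need registering via android:name in the manifest):
Code:
import android.app.Application;
import java.io.File;
import java.io.FileWriter;
import java.io.PrintWriter;

public class LoggingApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        final Thread.UncaughtExceptionHandler previous =
                Thread.getDefaultUncaughtExceptionHandler();
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread thread, Throwable ex) {
                try {
                    // append the full stack trace to a private file for later analysis
                    PrintWriter out = new PrintWriter(
                            new FileWriter(new File(getFilesDir(), "crash.log"), true));
                    ex.printStackTrace(out);
                    out.close();
                } catch (Exception ignored) {
                    // never let the crash handler itself crash
                }
                if (previous != null) {
                    // hand over to the default handler so the normal FC behaviour still happens
                    previous.uncaughtException(thread, ex);
                }
            }
        });
    }
}
The user could then email me the crash.log file from the app's files directory.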
Hi all,
If you've read the text that USED to exist here before, scratch that. Big thanks to @Sunius1 for clarifying what I thought was a win. Due to this, I DID find something interesting in regard to the ExtensibilityApp class (Windows.Phone.System.LockscreenExtensibility.ExtensibilityApp). I also happened to find a hidden capability, "ID_CAP_SHELL_DEVICE_LOCK_UI_API" (it seems to be a locked CAP because it only works on the emulator; I get a deployment error on my device if I try including this capability). I suspected that these two worked together, but I wanted to make sure of this.
Before we get started, read through the documentation from this site: http://msdn.microsoft.com/en-us/lib...lockscreenextensibility.extensibilityapp.aspx.
We have the following methods:
BeginUnlock
EndUnlock
GetLockPinpadHeight
IsLockScreenApplicationRegistered
IsSystemOverlayApplicationRegistered
RaiseToastNotifications
RegisterLockScreenApplication
RegisterSystemOverlayApplication
UnregisterLockScreenApplication
UnregisterSystemOverlayApplication
EDIT: After the release of the Live Lock Screen app, my speculations about the ID_CAP_SHELL_DEVICE_LOCK_UI_API capability and the ExtensibilityApp object were correct. Thanks to @jessenic for finding out a good bit of info on this with me.
It seems that in order to get this working, we have to add an Extension to the WMAppManifest.xml
<Extension ExtensionName="LockScreen_Application" ConsumerID="XXXXX" TaskID="_default" ExtraFile="Extensions\\LockAppExtension.xml" />
In the LockAppExtension.xml:
<?xml version="1.0"?>
<x:Extension xmlns:x="urn:LockApp">
<AppID>AppNameForLockScreen</AppID>
</x:Extension>
As usual, Microsoft doesn't really give us much in terms of documentation, probably because it isn't meant to be used by the normal developer. Confirmed: for now we have to actually ask for permission in order to use the cap. As to whether we'll get that granted? Who knows...
All of these methods have no parameters at all, but I can almost guarantee this has to do with having an application that can control the lock screen.
This thread will be for efforts in breaking this open and seeing whether we can create lockscreen applications..
Homebrew Lockscreen Apps:
Lockscreen App by @-W_O_L_F-
There are actually two Windows.winmd files in the Windows Phone SDK, one for Silverlight 8.1 apps and one for Jupiter 8.1 phone apps (located in C:\Program Files (x86)\Windows Phone Silverlight Kits\8.1\ and C:\Program Files (x86)\Windows Phone Kits\8.1\). There's only one on the phone. And some APIs support only one app type (it seems to be a phone limitation: faking the .winmd file results in a Platform::InvalidOperationException saying you cannot use that API from this app type). That explains why the one on the phone has more APIs available than either of the single-app-type ones.
As for LockscreenExtensibility - it's documented, just not available for Jupiter apps:
http://msdn.microsoft.com/en-us/lib...ows.phone.system.lockscreenextensibility.aspx
Sunius1 said:
There are actually two Windows.winmd files in the Windows Phone SDK, one for Silverlight 8.1 apps and one for Jupiter 8.1 phone apps (located in C:\Program Files (x86)\Windows Phone Silverlight Kits\8.1\ and C:\Program Files (x86)\Windows Phone Kits\8.1\). There's only one on the phone. And some APIs support only one app type (it seems to be a phone limitation: faking the .winmd file results in a Platform::InvalidOperationException saying you cannot use that API from this app type). That explains why the one on the phone has more APIs available than either of the single-app-type ones.
As for LockscreenExtensibility - it's documented, just not available for Jupiter apps:
http://msdn.microsoft.com/en-us/lib...ows.phone.system.lockscreenextensibility.aspx
Well that is very good to know! Thanks for the clarification. The best part is that I was actually able to compile without receiving an error (somehow).
I found something that may be of use in order to get the LockscreenExtensibility working (I just tried on a Silverlight 8.1 app and got access denied).
<Capability Name="ID_CAP_SHELL_DEVICE_LOCK_UI_API" /> <-- can't be used OOTB
EDIT: I just tested this in the Emulator and it really IS the capability that the LockscreenExtensibility needs in order for it to work.
snickler said:
I found something that may be of use in order to get the LockscreenExtensibility working (I just tried on a Silverlight 8.1 app and got access denied).
<Capability Name="ID_CAP_SHELL_DEVICE_LOCK_UI_API" /> <-- can't be used OOTB
EDIT: I just tested this in the Emulator and it really IS the capability that the LockscreenExtensibility needs in order for it to work.
I assume this is the thing Rudy Huyn used to create the lockscreen app at Build?
TheInterframe said:
I assume this is the thing Rudy Huyn used to create the lockscreen app at Build?
I speculate that this is what he's using. I bet there's more going on that we have yet to figure out. It also could be that the base class EXISTS, but the full implementation isn't available yet. Who knows.
snickler said:
I speculate that this is what he's using. I bet there's more going on that we have yet to figure out. It also could be that the base class EXISTS, but the full implementation isn't available yet. Who knows.
Ah, yes, that makes sense. I wonder if there are any other "half-baked" APIs in the SDK?
Edit: I know it sounds stupid, but honestly I think we should have a thread dedicated to finding odd APIs (just found one: Windows.Phone.System.SystemProtection, nothing terribly useful though).
TheInterframe said:
Ah, yes, that makes sense. I wonder if there are any other "half-baked" APIs in the SDK?
Edit: I know it sounds stupid, but honestly I think we should have a thread dedicated to finding odd APIs (just found one: Windows.Phone.System.SystemProtection, nothing terribly useful though).
There are also some hidden APIs in the current SDK for 3D Touch-enabled apps!
From WP Central:
Some of the features include APIs for gestures, side interactions and even heat maps.
Crazy stuff.
Believe it or not, some of these APIs for developers are in the current SDK, they're just not visible. What this means, though, is that developers will have access to this 3D Touch technology for their apps. It also means that Microsoft will have a small batch of third-party apps supporting this 3D Touch technology on launch day.
source: http://www.wpcentral.com/microsofts-next-flagship-windows-phone-november-3d-touch
Yea, even though those 3D touch APIs may be available, they're not particularly useful, as they require special hardware to work.
Sunius1 said:
Yea, even though those 3D touch APIs may be available, they're not particularly useful, as they require special hardware to work.
That is true. Sort of a side question though: has anyone made an OEM account and looked over the API documentation there? There may be some useful things we could learn about WP, and maybe it could further a jailbreak for all WP devices...
TheInterframe said:
That is true. Sort of a side question though: has anyone made an OEM account and looked over the API documentation there? There may be some useful things we could learn about WP, and maybe it could further a jailbreak for all WP devices...
The API isn't much use as long as you can't really use most of the functions due to policies.
ultrashot said:
The API isn't much use as long as you can't really use most of the functions due to policies.
Ah, yes, that makes sense...
http://www.wpcentral.com/joe-belfiore-announces-new-updates-sheds-details-lock-screen-app
Sounds like there will be a dev preview update to enable lockscreen functionality quite soon. Joe also mentioned keeping the lock screen in memory. So 512 MB devices won't get the functionality soon....
Good stuff. Another question: can apps show the action center? Because I want to code an app to show notifications on the lock screen. Thanks.
Marocco2 said:
Good stuff. Another question: can apps show the action center? Because I want to code an app to show notifications on the lock screen. Thanks.
Something to force the volume/music control on the lock screen to open automatically would be really useful as well.
Updated the first post with some more data since the Live Lockscreen App debuted yesterday. There's more I didn't get into, but I want others to dig in and find out.
I suppose we can only speculate how it works at this point, but if I had to guess, it goes like this:
1. You have 2 projects in your LockScreenApp solution, one for the application to register the lockscreen, and the second one for the actual lock screen application.
2. The former would use ExtensibilityApp APIs to register the the second one, coupled with the manifests so it's all "valid".
3. The second application is just another app that is able to process input and draw whatever it wants on the screen. That would explain why there's a delay in it starting when you press the lock screen button while the phone is sleeping (probably it's the time for .NET to start up? A Direct3D app should be able to start much faster).
Although this is only speculation, I think this makes sense, because that's how background tasks work on Windows, at least. I wonder though, why Microsoft is not releasing the APIs to be used in public - are they afraid somebody will make a lockscreen application that will drain the battery fast or something?
Sunius1 said:
I suppose we can only speculate how it works at this point, but if I had to guess, it goes like this:
1. You have 2 projects in your LockScreenApp solution, one for the application to register the lockscreen, and the second one for the actual lock screen application.
2. The former would use ExtensibilityApp APIs to register the the second one, coupled with the manifests so it's all "valid".
3. The second application is just another app that is able to process input and draw whatever it wants on the screen. That would explain why there's a delay in it starting when you press the lock screen button while the phone is sleeping (probably it's the time for .NET to start up? A Direct3D app should be able to start much faster).
Although this is only speculation, I think this makes sense, because that's how background tasks work on Windows, at least. I wonder though, why Microsoft is not releasing the APIs to be used in public - are they afraid somebody will make a lockscreen application that will drain the battery fast or something?
I don't think it's that, but most likely the fact that the API is unoptimized, plus some of the things you stated (i.e. slow start-up, lacking documentation), etc. There's also the fact that the OS needs to be updated to show a section telling the user which lock screen app has taken over (since the settings page doesn't now).
Edit: Remember what Joe said about keeping the lockscreen in memory, and that 512MB devices might not be supported for that reason? Yeah, it seems like they aren't doing that, since you can see the resume time for the lock screen is way too long.
Sunius1 said:
I suppose we can only speculate how it works at this point, but if I had to guess, it goes like this:
1. You have 2 projects in your LockScreenApp solution, one for the application to register the lockscreen, and the second one for the actual lock screen application.
2. The former would use ExtensibilityApp APIs to register the the second one, coupled with the manifests so it's all "valid".
3. The second application is just another app that is able to process input and draw whatever it wants on the screen. That would explain why there's a delay in it starting when you press the lock screen button while the phone is sleeping (probably it's the time for .NET to start up? A Direct3D app should be able to start much faster).
Although this is only speculation, I think this makes sense, because that's how background tasks work on Windows, at least. I wonder though, why Microsoft is not releasing the APIs to be used in public - are they afraid somebody will make a lockscreen application that will drain the battery fast or something?
You are correct. Two projects: One is the settings page, which is the main entrypoint of the app when it's opened from the start menu and the second one is the actual lockscreen app.
The settings page uses the ExtensibilityApp APIs to register the second one as a lock screen application. That second application is another 8.1 Silverlight app that uses a LockScreen_Bridge WinRT component that has native access to read what is shown on the lockscreen from the WP Settings item.
It then uses some storyboards to make it do different things as you're swiping up and down on the LayoutRoot grid. It does use a timer so that's where that little lag comes from.
The only background stuff it's doing is latching on to system events ("Start button being touched for example").
I can see where MS would be protective of this. They DID say that they would be releasing a public version of the API at some point. I'm hoping it's not one of the situations that leaves it public only when they've approved you to be able to use it.
It does suck that it's restricted to 8.1 Silverlight though. I could see some Music Apps wanting to take advantage of the lockscreen like this.
snickler said:
You are correct. Two projects: One is the settings page, which is the main entrypoint of the app when it's opened from the start menu and the second one is the actual lockscreen app.
The settings page uses the ExtensibilityApp APIs to register the second one as a lock screen application. That second application is another 8.1 Silverlight app that uses a LockScreen_Bridge WinRT component that has native access to read what is shown on the lockscreen from the WP Settings item.
It then uses some storyboards to make it do different things as you're swiping up and down on the LayoutRoot grid. It does use a timer so that's where that little lag comes from.
The only background stuff it's doing is latching on to system events ("Start button being touched for example").
I can see where MS would be protective of this. They DID say that they would be releasing a public version of the API at some point. I'm hoping it's not one of the situations that leaves it public only when they've approved you to be able to use it.
It does suck that it's restricted to 8.1 Silverlight though. I could see some Music Apps wanting to take advantage of the lockscreen like this.
Quite interesting...!
The API in itself is quite powerful, custom lockscreens with weather animations are possible! http://wmpoweruser.com/wp8-1-live-l...amazing-lock-screen-weather-animations-video/
First of all, forgive me if this is not the right forum to ask this question, because I'm not sure what is.
Hi everyone,
So the company that provides the TV channels in my country (like the cable companies in the US) has a streaming service that streams most of these channels online to phones, tablets, computers.
The problem is that their app is, according to them, "not supported on hacked devices". Just so we're clear, we're talking about Android here, and hacked = root/custom ROM, which this stupid company considers illegal. On some devices they check both root and custom ROM, on some only one of them, and on some the app will work even if you have both. For example, on my Nexus 4 the app worked with a stock ROM that was rooted. Now that I am running a custom ROM, trying to hide root using various apps does not work. So obviously the problem, with my device at least, is running the custom ROM.
I'm currently learning Java & Android development and have decided to use the little knowledge that I have to try to find the lines of code responsible for this idiotic check.
I looked up many tools for decompiling apps and have finally found a good one, called JadX.
http://androidcracking.blogspot.co.i...ler.html#links
This decompiler is excellent, but gives me a scary amount of code files to look through. Even so, trying to search all of them (JadX has that functionality) for the code that checks for root/custom ROM has turned up nothing. I have also tried to search for the message they give me when I open the app (about hacked devices not working), but I found nothing, again.
One more thing - a developer who also tried to solve this problem said he traced the problem back to DxDrmDlcCore. I searched for it and found it in some class, but I'm not sure what to do now (delete the entire class and recompile?).
Can someone here direct me towards what I need to be looking for?
OR
Is the solution really simple, such as editing my build.prop? Someone suggested it once, but did not know what lines to edit.
If someone is ready to step up to the challenge, I can upload the apk.
Thank you!
Hi Guys,
I managed to root my FireTV a few days back, and yesterday decided to look at the voice search to see if I could use it for other things.
After reversing some code, I found the actual voice search is handled by com.amazon.vizzini.apk
The SearchOverlay.class has this piece of code, which calls back to the amazon fireTV UI SearchResultsActivity upon completion with the search result returned as a string.
I replaced the Fire TV UI with my own code, which receives the search text, then sends a JSON-RPC call to Kodi's web interface and brings Kodi to the front after the search is completed.
Here is a video of it in action.
http://youtu.be/hpgKci_gJYY
android studio project
http://uptobox.com/ccykod7zua1l
mirror
http://www107.zippyshare.com/v/mAhl3UuM/file.html
***** I have a FireTV v1 updated to fire os 5 ******
I have no idea if this will work on older versions of software.
To make it work you will require ROOT, and you will have to uninstall or move the existing fireTV amazon UI.
The reason for this is that vizzini.apk calls back to
localIntent.setComponent(new ComponentName("com.amazon.tv.launcher", "com.amazon.tv.launcher.ui.SearchResultsActivity"));
So your activity has to be in that package, and called SearchResultsActivity.
The only other way to possibly get round this would be to modify vizzini.apk to call a different package instead, but then voice search wouldn't work on the Amazon UI anyway. I wasn't really bothered about the Amazon UI working myself, which is why I did it the way I did.
I just moved the original system/priv-app/com.amazon.tv.launcher/com.amazon.tv.launcher.apk to /system then installed my code via android studio.
mount -o rw,remount /system
mv /system/priv-app/com.amazon.tv.launcher/com.amazon.tv.launcher.apk /system/
You will also need to change KODI's settings to enable web interface control on port 8080.
On my setup it was under System > Services > Webserver - tick the box that says "Allow control of Kodi via HTTP" and make sure the port is set to 8080.
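In rough outline the replacement activity boils down to something like this (just a sketch, not the actual project code - the Input.SendText JSON-RPC call and the org.xbmc.kodi launch intent are illustrative choices; the real project may do it differently):
Code:
package com.amazon.tv.launcher.ui;   // class must match what vizzini starts

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SearchResultsActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // vizzini puts the recognised speech into the "text" / "term" extras
        final String query = getIntent().getStringExtra("text");

        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    // Kodi's JSON-RPC endpoint on the web interface enabled above (port 8080).
                    // Input.SendText is just one way to hand the text over.
                    HttpURLConnection conn = (HttpURLConnection)
                            new URL("http://127.0.0.1:8080/jsonrpc").openConnection();
                    conn.setRequestMethod("POST");
                    conn.setRequestProperty("Content-Type", "application/json");
                    conn.setDoOutput(true);
                    String body = "{\"jsonrpc\":\"2.0\",\"id\":1,"
                            + "\"method\":\"Input.SendText\","
                            + "\"params\":{\"text\":\"" + query + "\",\"done\":true}}";
                    OutputStream out = conn.getOutputStream();
                    out.write(body.getBytes("UTF-8"));
                    out.close();
                    conn.getInputStream().close();   // actually fire the request
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();

        // bring Kodi to the front once the text has been handed over
        Intent kodi = getPackageManager().getLaunchIntentForPackage("org.xbmc.kodi");
        if (kodi != null) {
            startActivity(kodi);
        }
        finish();
    }
}
The app would also need the android.permission.INTERNET permission in its manifest for the JSON-RPC call.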
edit: 07/07/2016
I did start work a few months back on an Xposed module (it works, but not 100%). I was going to add a settings page, but I think there were issues with Xposed on the Android version that the Fire TV runs on, due to permissions etc.
You can set a prefix in the code (currently hard-coded to KODI), so if you say "kodi star wars" it would pass the param of "star wars" to Kodi.
If you just say "star wars" without the prefix it would pass this to the normal amazon UI.
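The routing part of it is basically just this (a simplified sketch; sendToKodi() and forwardToAmazonUI() are placeholders for what the hooked code actually does):
Code:
// Simplified sketch of the routing: strip the hard-coded prefix and decide
// where the recognised text goes. sendToKodi() and forwardToAmazonUI() stand
// in for the real calls in the module.
private static final String PREFIX = "kodi ";

private void routeSearchResult(String spokenText) {
    if (spokenText.toLowerCase().startsWith(PREFIX)) {
        // "kodi star wars" -> pass "star wars" to Kodi
        sendToKodi(spokenText.substring(PREFIX.length()));
    } else {
        // no prefix -> hand the untouched text back to the stock Amazon UI
        forwardToAmazonUI(spokenText);
    }
}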
If anyone wants to take it further, it's attached on the link below.
http://uptobox.com/ensqll4a7r65
Mick
That's actually pretty great.
Conventional wisdom was that both the voice recognition and the response "results packaging" were done server-side - and the only return you could get was pre-existing Amazon database references, not the result of the initial voice recognition.
I've actually read that in multiple places - but as it turns out, no one had bothered reversing the process, I guess.
Major props, and thank god for overlays containing strings... Finally - that microphone might not be so useless after all...
Next step - implement it to interface with Google search. ("Weather in ...")
edit: Actually - there are three interesting use cases I can think of off the top of my head...
1. Launch other apps (maybe even with fixed keywords if app names aren't known to Amazon - "App 1" for example would work; edit: "Number 1" delivers better results) - so that's text > launcher app - see if sphinx02 has any interest in coding that as part of Firestarter..
2. Google search ("What's the weather in...") - text > Google search app (if possible)
3. Direct text input as seen in the POC video
edit: 4. Forward to Amazon - so as not to "break" their implementation
This could be realized by a quick 4-way select screen (just confirm with the direction button) after you select the string in the overlay.
Great find and implementation. I would love to see more expansion as the above post mentions. Are you planning to share the Fire TV UI code?
I was playing with voice search strings and found that Amazon tends to strip out "Google" in front of search queries - so instead of "Google how is the weather tomorrow", only "How is the weather tomorrow" will get returned.
But "Alexa" as a trigger word will be returned fine, so I vote for using "Alexa" as a trigger word to forward all search queries to the Google search app.
Ok - maybe not - but conceptually, this would be a great "work of art".
An even better idea than harlekinrains' would be to check the foreground app and do different things based on that. For example, if Kodi is open it could send the intent to Kodi. If the Fire TV launcher is open it could fall back to default functionality, etc. Simulating keystrokes could also cover 99% of the other applications. I'm excited for this. It really makes the Fire TV so much less of a novelty.
I've just updated the original post with a copy of the android studio project, and a few more details.
Mick
Great, mate, thanks! Gonna check it out soon!
One question (as I'm not familiar with intents and such): isn't there a possibility to listen for intents sent to the Amazon UI and catch them? Personally, I don't care too much about it right now as I'm using nothing but Kodi, but maybe some day when I want to use Prime or similar stuff?
dafunkydan said:
Great, mate, thanks! Gonna check it out soon!
One question (as I'm not familiar with intents and such): isn't there a possibility to listen for intents sent to the Amazon UI and catch them? Personally, I don't care too much about it right now as I'm using nothing but Kodi, but maybe some day when I want to use Prime or similar stuff?
I don't think so, as the actual code in the vizzini apk is as follows:
Intent localIntent = new Intent();
localIntent.setComponent(new ComponentName("com.amazon.tv.launcher", "com.amazon.tv.launcher.ui.SearchResultsActivity"));
localIntent.putExtra("identifier", null);
localIntent.addFlags(402653184);
localIntent.putExtra("term", str);
localIntent.putExtra("text", str);
localIntent.putExtra("source", "VOICE");
localContext.startActivity(localIntent);
So basically, when it receives the voice search response from Amazon's server, it starts the activity com.amazon.tv.launcher.ui.SearchResultsActivity from the package com.amazon.tv.launcher.
Mick
Integrating the search attempt based on the previous foreground app would mean that "voice search" could never "initiate a new attempt". This would prevent the "let's just ask Google, or let's launch an app" impulse use of the feature. Still - in the long term it might turn out to be the right approach - f.e. if
"People tend to use the Google search only, let's say, half a dozen times, and in the majority of cases just want text input in Kodi." Also - without having a select screen with four (don't make it too many) predefined "use options", discoverability is pretty non-existent. People would have to read readmes to find out which interactions are supported.
Also, Amazon wants you to "be able to always reach the Amazon content search from anywhere" - so, political implications.
Also - each time a new app would want to integrate the voice-to-text feature, they would have to contact the devs of this project - if you don't use "just text input" as a default in the "use the previously open app as an indicator of intent" approach. So make sure you default to "just text input" in that case. (Amazon launcher > forward query to Amazon (do not break functionality), ....)
Don't implement it as a mixed approach though, as "Kodi is open most of the time" probably holds for most people - and the intent (message) gets "confusing".
--
What you probably shouldn't do, regardless, is use "trigger words" as an "indicator of intent" - because Amazon can start blocking them. "Number 1" is probably generic enough that they won't try to block it - but in principle, they can. Also, Amazon's "voice to text" engine is optimized for short phrases - the longer your input query gets, the more error-prone their results become. Leave the actual "voice input" as natural as possible (don't embed logic there). Imho.
I'm in the USA and uptobox.com is not available in our country.
I want to give the source a look and see what I can do to contribute.
Can you put it on github or somewhere else?
Much appreciated.
kratosjohn said:
I'm in the USA and uptobox.com is not available in our country.
I want to give the source a look and see what I can do to contribute.
Can you put it on github or somewhere else?
Much appreciated.
I've just uploaded it to zippyshare too. Link added to original post.
Mick
It would be nice if this could be a generic way of entering text in any app, similar to the FTV Remote App that has keyboard support. I am surprised Amazon has not done this already. If you are in a text field, voice search would fill in the text; otherwise it could continue with normal functionality, letting you "always reach the Amazon content search from anywhere". It should be an easy solution for Amazon to implement. I'm not sure if it could be added with the progress here, but it would really make the voice control so much more useful.
Hey Mick, as I'm still very excited about this (it's so frustrating to enter search phrases with a d-pad...), I thought about what your great find could develop into. I hope you don't get me wrong, as I unfortunately can't contribute any programming (if you need something made with Tasker, let me know); it's not that I want to demand anything - rather share my thoughts or ideas...
- I think it's a good thought not to 'blow it up' by adding many keywords and making it more complicated, and to avoid Amazon locking features down, even if that may be an abstract fear. I think with a well-structured menu in Kodi, all the apps one will need to start are just 2-3 clicks away.
- If I got it right, Xposed can hook into every module and alter it, right? So basically it should be possible to avoid exchanging the Amazon UI and/or altering vizzini.apk, and just modify the SearchOverlay.class? Wouldn't it (on top of that) be the most convenient way to decide whether one is in e.g. Amazon Prime or in Kodi, and either hand the result over to the original module, or hand it over to Kodi?
So please understand my post as a mixture of suggestions and questions, not as a demanding 'please make it how I want it ASAP'. I'm really excited about your finding, appreciate your sharing it, and want to contribute what I can - unfortunately that's rather thoughts and suggestions. Cheers mate!
Great work!!! I totally understand this is for Amazon Fire products only. My question is: would we be able to port this to a generic Android TV box like the NVIDIA Shield, which also has mic capability and comes with root support?
Good job @is0-mick, it's great to see you accomplish something that Netflix doesn't even want to be bothered with in their app.
harlekinrains said:
No, you don't understand. You haven't even read or understood the first posting - but you have bought another device and now want others to move in your direction on your behalf.
Seriously mate if XDA bothers you so much, you need to take a break from it for your own sanity!
fach1708 said:
Seriously mate if XDA bothers you so much, you need to take a break from it for your own sanity!
No, I really don't think he needs to take a break; his argument is completely understandable. This is not a Shield forum (btw, before we make judgements, I own both devices). We need to get this fixed for one device before even mentioning whether other devices are an option.
is0-mick said:
To make it work you will require ROOT, and you will have to uninstall or move the existing fireTV amazon UI.
The reason for this is that vizzini.apk calls back to
localIntent.setComponent(new ComponentName("com.amazon.tv.launcher", "com.amazon.tv.launcher.ui.SearchResultsActivity"));
So your activity has to be in that package, and called SearchResultsActivity.
The only other way to possibly get round this would be to modify vizzini.apk to call a different package instead, but then voice search wouldn't work on the Amazon UI anyway. I wasn't really bothered about the Amazon UI working myself, which is why I did it the way I did.
Wouldn't it be useful to create an Xposed mod for this? I thought Xposed was made for this...
Perhaps you should talk with rbox in his thread about integrating it into version 1.5 of his mods; look here.
is0-mick said:
I just moved the original system/priv-app/com.amazon.tv.launcher/com.amazon.tv.launcher.apk to /system then installed my code via android studio.
mount -o rw,remount /system
mv /system/priv-app/com.amazon.tv.launcher/com.amazon.tv.launcher.apk /system/
So does that mean that while you are using your proof of concept you cannot use the normal launcher UI?
Really great work! I hope we get more.
Sadly I am still on the old FW with my Fire TV Stick (hardware-rooted, SuperSU) and my Fire TV box gen 1 (hardware-rooted, SuperSU + custom recovery + unlocked BL). I'm waiting for custom ROMs.
Greetings by Idijt
EDIT:
I forgot to ask you something. Did you get the kind of search request back?
Can you see if the Amazon voice recognition knows whether it is an app, a video or a movie star?
This looks pretty cool since I don't like typing the name of the movie in the search box... now I can just speak it.
As a novice I have one question. You wrote:
is0-mick said:
I just moved the original system/priv-app/com.amazon.tv.launcher/com.amazon.tv.launcher.apk to /system
Mick
The code I downloaded was in .RAR format, not .apk. Do I need to convert it to .apk or rename it to .apk before replacing the original?
I've not rooted my device yet so I haven't looked at the file structure, but I'd be willing to root to get this feature. Or is adding this code better done by someone other than a novice?
Thanks
carpenter940 said:
The code I downloaded was in .RAR format, not .apk. Do I need to convert it to .apk or rename it to .apk before replacing the original?
It's the source code. You have to compile it and create the APK with Android Studio.