Hi all,
A few weeks ago I started taking apart the LiveView software and manager. I'm really unhappy with the current plugin system, the menu structure and more. So, I started to reverse-engineer the Bluetooth protocol. I'm at the very beginning but it's looking promising.
Here's the repo: https://github.com/BurntBrunch/LivelierView
The protocol is not very difficult - just request-acknowledge-response serial communication over RFCOMM. Also, the kind people from SE didn't run the manager through Proguard (wink, wink, nudge, nudge).
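For anyone who'd rather see it in Java (which is where an Android port would end up anyway), here's a minimal sketch of how a single request frame could be built. The field layout here (one message-ID byte, a header-length byte, a 4-byte big-endian payload length, then the payload) is an assumption for illustration only - verify it against the disassembly before relying on it:
Code:
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative only: the frame layout below is an assumption, not confirmed protocol.
// Assumed frame: [msgId:1][headerLen:1][payloadLen:4, big-endian][payload bytes]
public class FrameSketch {
    static byte[] buildFrame(int msgId, byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(msgId);
        out.writeByte(4);             // assumed fixed header-length field
        out.writeInt(payload.length); // DataOutputStream always writes big-endian
        out.write(payload);
        out.flush();
        return bos.toByteArray();     // send over the RFCOMM socket, then wait for the ack
    }
}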
I also have what I *think* is a dump of the firmware but it seems either compressed or encrypted. Binwalk didn't find anything in it. If someone would be kind enough to take apart the software updater, we might figure out what's running on the actual device as well.
Overall, I'm just starting, but so far it's looking good (got time syncing working - it's at least a watch, if nothing else!).
Any help would be greatly appreciated (pull requests are more than welcome!).
Thinking of doing something similar with one of my gadgets.
What did you use to reverse-engineer the Bluetooth protocol - just Wireshark and a Bluetooth dongle?
Neither - I did it from disassembly of the manager; much easier than sniffing and guessing.
If you don't have that option and said gadget connects to an Android phone, put on a decent ROM with the full BlueZ stack (e.g., Cyanogen) and use hcidump. It's really, really useful!
Come to think of it, Wireshark might be good enough - the only thing I found useful about hcidump was the SCO audio dump.
Nice effort. I've already forked your work on GitHub and might have a look at it soon. I've got some geeky ideas for myself as well, and I think integrating this functionality natively into CyanogenMod, or even a custom app to replace SE's, would be great to have.
Nice,
I was disappointed by the LiveView manager myself; I hope something good emerges from your work.
I've also decompiled the APK, and it seems that everything that displays on screen comes from the application, which means everything could be customized. It seems SE is using a PNG library, LodePNG, to convert images and push them to the phone. Also, when it comes to strings, I've found some useful references in JerryProtocol that might indicate the correct text encoding (not that we can push it right now, but just for the record):
Code:
private static final String mEncoding = "iso-8859-1";
private static final char cCarriageReturn = '\r';
private static final char cLineFeed = '\n';
Controlling the LED seems quite simple too; it seems the message's data is divided into 3 parts:
[RGB] [DELAY = integer number] [ON STATE = 0|1]
LED request ID is 40 and LED response ID is 41 (I hadn't figured out the IDs when I first posted this). Hope this is enough for you to get started on that one too.
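To make that concrete, here's a rough sketch of how the LED payload could be assembled. Only the three-part [RGB][DELAY][ON STATE] layout and the request/response IDs above come from the decompiled manager; the individual field widths in the sketch are just my guess and still need to be verified:
Code:
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Guesswork sketch: only the three-part layout and IDs 40/41 are from the manager;
// the individual field widths below are assumptions.
public class LedSketch {
    static final int MSG_LED_REQUEST = 40;   // response should come back as ID 41

    static byte[] buildLedPayload(int r, int g, int b, int delayMs, boolean on)
            throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(r);            // assumed: one byte per colour channel
        out.writeByte(g);
        out.writeByte(b);
        out.writeShort(delayMs);     // assumed: 16-bit big-endian delay
        out.writeByte(on ? 1 : 0);   // ON STATE = 0|1 as described above
        return bos.toByteArray();
    }
}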
I've not yet tested the app, but I've read your code and had a go at decompiling to see what I could dig up. I'll try it later (I'm not very used to running Python scripts, so I'll have to see how to install pyserial first and all that).
pedrodh said:
it seems that everything that displays on screen comes from the application
Yup, the main stuff is on the phone - the state machine is clearly isolated (on a side note, the manager is rather well-written, thankfully). On the other hand, I'm somewhat confused by all the constants - it almost feels as if the device has native navigation or icon cache or something.
pedrodh said:
Controlling the led seems quite simple to, it seems message's data is divided in 3 parts:
[RGB] [DELAY = Integer Number] [ON STATE = 0|1]
LED request ID is 40 and LED response ID is 41. Hope this is enough for you to get started on that one too
Thanks for the interest and the tip - I'll look into it soon. I need to figure out a good way to send commands from stdin. It seems that I'll need to figure out non-blocking reading in Python anyway (good news for you - I might drop pyserial!)
In any case, I'll add it to protocol.txt, unless you beat me to it!
Lastly, the only reason it's in Python is 'cause I'm productive in it *and* it has good, fast bindings (I try to stay away from gobject in C!). Whatever comes out of this effort would be running on the phone, surely
Edit: You *did* beat me to it!
Edit: Implemented LED, vibration, and a pretty good scheme for sending commands from the CLI
Nice work - I saw quite a few commits in a short amount of time.
I've not yet been able to run it successfully. I think I have installed pyserial correctly, but maybe the problem is that the BlueZ that comes with my Ubuntu is somewhat newer than the one you used. Anyway, here's as far as I got: http://pastebin.com/uVRdr5T3 - if you can tell what it is just by looking at it, that would be great.
I've started an Android application project in hopes of porting this to an Android application as well, but I'm somewhat new to Bluetooth handling on Android and still working it out. I'm already able to connect and pair with the device (noob stuff), but it fails to READ from it. I've used Java's DataOutputStream and DataInputStream since they deal with data in big-endian notation, but I haven't yet understood how the initialization process goes. I've looked at your code; I get some parts but not the whole thing yet. Do you have to wait for the LiveView to say something back, or can you just start sending commands right away? Also, does the script act as a Bluetooth server or a client? (They seem to be distinct when coding for Android; I've chosen to connect as a client, and yes, I used the same UUID that you got from decompiling, so at least that part I guess is correct.)
Anyway, it's just a bunch of very ugly code at the moment; after I get it to do something useful I'll clean up the project and host it on GitHub as well.
Hmm, that error is rather suspicious. Looking at the docs, Connect() is not even supposed to throw org.bluez.Failed, let alone with that message. And service discovery supposedly finished successfully..
Was the device in pairing mode (with the arrows/circle turning)? Was the computer the last thing it paired with (once you pair with the computer, the phone shouldn't be able to connect to it, since the device only remembers the last authorization)?
Install d-feet, the DBus browser, go under System bus, org.bluez, find the device, verify that it has the org.bluez.Serial interface and try calling Connect() with the proper UUID from there. Other than that, I've really no idea what it's on about.. Do you have more than one LiveView device by any chance (weird things might happen then)?
I don't actually think it's the difference in bluez versions (the Serial interface hasn't changed in the past 2 and something years) but it might be a (driver) bug you're hitting. I *think* I'm doing everything right as far as communication with BlueZ is concerned. Try running `hciconfig hci0 reset`.
Sorry I couldn't be more helpful..
Regarding your Java effort, if I recall my Bluetooth terminology correctly, you are a client, since the server is the thing advertising the service. You should *not* be reading immediately from the device. The phone/computer sends the first message - in my case, my first message is always STANDBY. Then and only then can you start reading back.
Also, I hope Android abstracts the whole RFCOMM pipe thing, 'cause it's a pain to use (and the reason I still need pyserial) - select() would sporadically tell me it has data to read, and when I try to read it, I get EIO :/ I suspect RTS triggers select().
Make sure you're only reading as many bytes as you know are in the next packet (take a look at consume() - it returns the number of bytes it expects next) and not more than that - it would either block or throw an exception. I've not done any Bluetooth work on Android, so that's as much as I can help, I'm afraid.
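That said, the reading part is plain java.io - something like this sketch (DataInputStream.readFully() blocks until it has read exactly the requested number of bytes, which is what you want here):
Code:
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: read exactly 'expected' bytes from the RFCOMM socket's InputStream,
// where 'expected' is whatever your equivalent of consume() says comes next.
public class PacketReader {
    static byte[] readExactly(InputStream in, int expected) throws IOException {
        byte[] buf = new byte[expected];
        new DataInputStream(in).readFully(buf); // blocks until buf is full, or throws EOFException
        return buf;
    }
}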
Lastly, as big as the temptation is, do not under any circumstances reuse code from the official manager. "Sony" is in the name of the company after all. I'm half-expecting a Cease & Desist any moment now
Edit: Implemented Display Properties Request and Clear Display Request (doesn't do anything). I think I'm out of low-hanging fruit
Really interesting work, guys. The Liveview is a fantastic idea and is almost brilliant - if only it worked properly! If you could get the basics working properly so we don't have to use the Sony software that would be fantastic, it's got so much potential.
Cheers,
Tim
So, I had a brilliant idea today. You know how the LiveView Manager app is full of debug messages? Turns out, they are disabled by means of a constant in ElaineUtils. My idea was to change that constant, put the apk back on my phone and rejoice at all the extra info I'd have.
Turns out, that's not how it works. I changed the constant (bumped it to 0x100 - literally a single bit change) and re-signed the apk. I got some output out of it, but not all, and none of the useful ELEMENT_ID_* messages.
Any help on that front would massively speed up the reverse-engineering effort.
EDIT: Scratch that, I'm stupid. I forgot that the .field annotations are not executable code - I was changing the wrong bit, so to speak. Changed the value in <clinit> and voila, proper logcat!
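In Java terms, the patch amounts to this (purely illustrative - the actual edit is done on the smali, and the field name below is made up):
Code:
// Why editing the .field line did nothing: the value that matters is the one
// assigned in the static initializer (<clinit>), which is what actually runs
// when the class is loaded.
public class ElaineUtils {
    static final int LOG_MASK;   // hypothetical field name, for illustration only
    static {
        LOG_MASK = 0x100;        // this is the assignment that has to be patched
    }
}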
EDIT: Here's some food for thought - http://pastebin.ca/2099804 - it's the log from startup + a bit of moving around and opening/closing the mediaplayer control.
Very cool project.
I believe, for the damn thing to be usable, focusing on improving Bluetooth performance would be quite good. By "performance" I mean "power consumption." Having to give up on the watch after two hours of light use is really unacceptable.
I would love it if you got this thing working efficiently like SmartWatchm/OpenWatch did for my MBW-150. I ordered my LiveView from the UK when it first released there instead of waiting for the US release. The darn thing disappointed the hell out of me and has been sitting in my garage for almost a year now.
Hopefully you get something going on with this.
archivator said:
So, I had a brilliant idea today. You know how the LiveView Manager app is full of debug messages. Turns out, they are disabled by means of a constant in ElaineUtils. My idea was to change that constant, put the apk back on my phone and rejoice from all the extra info I'd have.
Turns out, that's not how it works. I changed the constant (bumped it to 0x100 - literally a single bit change) and re-signed the apk. I got some output out of it but not all, and none of the useful ELEMENT_ID_* messages
Any help on that front would massively speed up the reverse-engineering effort.
EDIT: Scratch that, I'm stupid. I forgot that the .field annotations are not executable code - I was changing the wrong bit so to speak. Changed the value in <clinit> and voila, proper logcat!
EDIT: Here's some food for thought - http://pastebin.ca/2099804 - it's the log from startup + a bit of moving around and opening/closing the mediaplayer control.
Wow, that's very useful, thank you. I've been very busy and haven't worked on the Android-side application since my last post; I intend to return to it soon enough, though. That output is very welcome when it comes to understanding when the icons are sent and the whole mechanism itself.
I've been doing a bit of reverse-engineering work on the LiveView as well, and I think I have a complete (although I fear possibly slightly corrupt) firmware dump!
What I have been able to extract so far is some PNG images from the firmware (thanks to their rather distinctive %PNG header and IEND ending).
It would appear that the menus and stuff are in fact definitely transferred over Bluetooth!
I've attached the images I've extracted if anyone's interested in seeing them!
I'm currently trying to work through it in IDA to disassemble it, which is a pain in the arse!
Is anyone else also interested in completely rewriting the firmware?
@aj256, nice work! I thought I had a dump as well but mine looked compressed :\ Mind uploading yours somewhere for all to see? (edit: sorry, saw it in the archive)
aj256 said:
It would appear that the menus and stuff are in fact definitively transferred over bluetooth!
That's correct - I almost have that part of the protocol figured out but I'm low on spare time.
aj256 said:
Is anyone else also interested in completely rewriting the firmware?
Well.. I'd be interested in modifying it and isolating the Bluetooth stack but don't really have the time OR the chops to write the whole firmware from datasheets and disassembly.
As for where I'm standing, I know what I need to decompile next (renderShowUi) but it's a couple of thousand lines of smali. There are so many branches, it's easy to get lost. I need to write better tools for decompiling smali first
Just bought a Live View! I know it may not be the best but I got it cheap and mainly want the Caller ID portion of it. I hope this reverse engineering pays off. Once I get mine I may start poking around and see if I can help out! Thanks for the post OP!
Hi,
Do you guys have an IRC channel or anything else? I just got my LiveView and want to help you with this...
I've quickly put together a project website at openliveview (dot) com (apparently I don't have enough posts for an external link!) with some forums as well, to help document people's progress!
I've done a quick writeup on my progress so far (which isn't very much!)
@archivator, glad you found the firmware in the zip, I was just about to reply that it was there!
aj256 said:
I've quickly put together a project website at openliveview (dot) com (apparently I don't have enough posts for an external link!) with some forums as well to help to document peoples progress!
I've done a quick writeup on my progress so far (which isn't very much!)
@archivator, glad you found the firmware in the zip, I was just about to reply that it was there!
Nice. I've been to your website and the documentation is getting into good shape. When I get some free time I'll read it more carefully and complement the Android project.
Talking about that, I've uploaded my progress so far to github: https://github.com/pedronveloso/OpenLiveView
Bear in mind that apart from pairing with the device not much is actually working right now; contributions are welcome, of course.
I know some people have already discussed this in the noRefresh App thread.
But I've got a new idea.
I don't want to create a module and embed it into applications.
Instead, I created a new input device in the kernel.
After that, I set the device's permissions (/dev/input/event3) to allow read/write for others.
Then I wrote an app using JNI to catch the input device and set the refresh mode.
The prototype proved that this idea works - I've already got it working, but it's not perfect yet and still experimental.
I'm just having some threading problems, as I am not a professional programmer.
This is what I have now:
http://www.youtube.com/watch?v=GkFyvRR6In8
NEW!! : http://www.youtube.com/watch?v=cucG03rg3tg
I know that my method is a bit dirty, but at least it works XD
I hope that someone would like to help.
I'll upload the code soon.
By the way, why can't I change the contents of init.rc?
My changes are removed after a reboot..... How do I solve that?
wheilitjohnny said:
By the way, why can't i change content of init.rc ?
It removes my changes after reboot..... How to solve?
You need to modify uRamdisk to change init.rc.
Could you tell me how to unpack the uRamdisk? I use both Windows and Ubuntu; any method on either platform is OK. Thanks!
Try the script from this post http://forum.xda-developers.com/showpost.php?p=24135886&postcount=72
This is simply amazing. Since you already have it working, polishing it shouldn't take too long (I think, I'm still a beginner programmer).
I don't think /dev/input is a good place to park that.
There are any number of things that could show up there and affect order.
Why not put it in its own directory?
Maybe I'm missing something, but if we use your method you still expect applications to have to be modified to read/write from event3 to control display mode?
As it is now, you only need a single function call to switch display modes. Yes, there is a little bit of housework to do before that, but what could be simpler than a single function?
I think no other place is more suitable than /dev/input, as /dev/input is the Linux kernel's input system.
In the driver, I would only use the input reporting system and not the I/O system.
Maybe I'll even set up an input-event searching function later to solve the ordering problem.
The situation now is that the touch screen originally just provides event2, but if we need to extract the move and finger-up information at the upper level, it may waste some time.
In order to make the whole algorithm easier and faster, I added one more event device in the zForce driver that only outputs the FingerMove and FingerUp states.
As the reporting starts from the kernel, this hack needs changes to uImage + uRamdisk + an added app. The project is quite big.
My final target is to make the application remember your choice of which mode you need for the focused application: AlwaysOn / OnlyWhenDragging / AlwaysOff, and so on.
Of course the click-to-call function will still work.
Isn't that a better workflow and more intuitive - isn't it the thing that we would expect?
I already have a /dev/input/event3 on my system, and sometimes event4, 5, 6.
That's why I don't think that whatever it is you are doing belongs there.
Is the point to allow coders for user applications to interface with your driver?
Is this supposed to work without modifying user applications?
What information would be going in/out of event3?
Clearly I am missing something here.
Actually it doesn't necessarily have to be event3; the system will automatically create a new eventX. It's just that in the normal case, without any extra USB devices, a Nook should only have up to event2, so I use event3 as an example here.
We can later make the program automatically search for which eventX is the one needed.
Yes, you're right, we don't need to modify any other applications, and this is exactly why I am creating this!
Anyone know how to call a FullScreen Refresh in a service?
Always on?
Looks really great. Since the blinking after scrolling is uncomfortable, is it possible to also have an "always on" mode using your new ideas?
My final target is to make the application remember your choice of which mode you need for the focused application: AlwaysOn / OnlyWhenDragging / AlwaysOff, and so on.
Of course the click-to-call function will still work.
Also an override mode, for more flexible use.
But the most basic part needs to be handled well first, as it is not perfect yet.
I still haven't figured out how to call the e-ink driver to refresh the screen =-=
wheilitjohnny said:
I still don't figure out how to call the e-ink driver to refresh the screen =-=
Have you tried
Code:
echo 1 > /sys/class/graphics/fb0/epd_refresh
?
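If you need to trigger the same thing from inside an Android service rather than a shell, a minimal sketch would be something like this (assuming the sysfs node exists on your kernel and that its permissions allow your app's process to write to it):
Code:
import java.io.FileWriter;
import java.io.IOException;

// Sketch: trigger a full e-ink refresh by writing "1" to the epd_refresh node.
// Assumes the node exists and is writable by this process.
public class EpdRefresh {
    public static void fullRefresh() {
        FileWriter w = null;
        try {
            w = new FileWriter("/sys/class/graphics/fb0/epd_refresh");
            w.write("1");
        } catch (IOException e) {
            // node missing or not writable from this process
        } finally {
            if (w != null) {
                try { w.close(); } catch (IOException ignored) { }
            }
        }
    }
}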
wheilitjohnny said:
Yes, u r right, we no need to modify any other applications and this is exactly the point y i am creating this!
Are you saying that your idea does not require modifying user applications?
If it doesn't, then there is no need to have a public interface.
It will only be your code talking to your code.
What is the point of this /dev/input/event3? You say that it will be writable. What's going in and out?
Some apps will be using gestures, some dragging. How are you going to keep track of all that?
I have one application that works perfectly fine now: one activity uses swipe gestures to page up/down, while another activity uses drag, with a user choice of A2 and display-while-dragging, or else only panning at ACTION_UP.
All this requires less than 10 lines of code.
With multitouch, many applications don't even need A2. Even normal panning in Opera Mobile works much better now that Opera doesn't try to display while panning.
Maybe my English is too bad and I cannot express the idea well.
I know we can make such an application, with noRefreshDrag working well on its own.
But what about other applications? It is impossible for us to change all of them.
So, my idea is to make it system-based.
My prototype is very good now, after several adjustments.
It's not limited to only one application, but covers the whole system.
The approach is like this:
1. The zForce driver provides extra information via a new input event device
2. A JNI layer catches the input events
3. A service gets the data and sets the update mode
We only need to write one application to handle the settings for this chain.
This is what I mean - I hope you get what I mean now.
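Roughly, the service side of steps 2-3 could look like this. It's just a sketch: the device path and event codes below are placeholders, and it assumes a 32-bit kernel where each input_event record is 16 bytes:
Code:
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of the service side: read raw input_event records from the extra
// event node the driver creates and switch the refresh mode accordingly.
// The device path and event codes below are placeholders, not the real values.
public class RefreshModeWatcher implements Runnable {
    static final String DEVICE = "/dev/input/event3"; // placeholder path
    static final int EV_CUSTOM = 3;                    // placeholder event type
    static final int CODE_FINGER_MOVE = 0;             // placeholder codes
    static final int CODE_FINGER_UP = 1;

    public void run() {
        byte[] record = new byte[16]; // struct input_event on a 32-bit kernel
        try {
            FileInputStream in = new FileInputStream(DEVICE);
            while (in.read(record) == record.length) {
                ByteBuffer b = ByteBuffer.wrap(record).order(ByteOrder.LITTLE_ENDIAN);
                b.position(8);                // skip the 8-byte timestamp
                int type = b.getShort() & 0xffff;
                int code = b.getShort() & 0xffff;
                int value = b.getInt();
                if (type == EV_CUSTOM && code == CODE_FINGER_MOVE) {
                    // switch to the fast/partial (A2) update mode here
                } else if (type == EV_CUSTOM && code == CODE_FINGER_UP) {
                    // restore the normal mode and trigger a full refresh here
                }
            }
            in.close();
        } catch (IOException e) {
            // node not present or not readable from this process
        }
    }
}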
mali100 said:
Have you tried
Code:
echo 1 > /sys/class/graphics/fb0/epd_refresh
?
Let me try it now!
Wow, it works great!
wheilitjohnny said:
zForce driver provide extra information to InputEvent
I guess that this is the part that I don't understand.
What is this extra information?
Renate NST said:
I guess that this is the part that I don't understand.
What is this extra information?
It creates an extra event device - I called it event3 before.
The driver reports only move and finger_up to event3.
It just provides a channel to pass information from the driver to user space.
You may ask why not directly use the existing event device instead of creating another one.
That is because the original one only has touch and position information, and parsing that back into move information needs a bit of work. Since the hardware already provides the move information, why waste it?
Code:
public boolean dispatchTouchEvent(MotionEvent event)
{
    switch (event.getAction())
    {
        case MotionEvent.ACTION_DOWN:
            break;
        case MotionEvent.ACTION_MOVE:
            break;
        case MotionEvent.ACTION_UP:
            break;
    }
    return (true);
}
Isn't that everything that you could ask for?
I want to add a custom method to my server-side code because I want the server to do some of the work. As I understand it, it is not possible to do something like this in the entity code:
Code:
public void setCustomTimestamp(long customTimestamp) {
    this.customTimestamp = new Date().getTime();
}
It simply gets ignored by Android Studio, which, in response to that code, generates this:
Code:
public void setCustomTimestamp(long customTimestamp) {
    this.customTimestamp = customTimestamp;
}
So there must be some kind of convention to follow, but at this point I don't know which one.
I need to do this because, for example, the time is not the same between different devices, so I want the server to figure out the date and modify the timestamp variables.
So the question is: where should I put code that I want executed by the server and that is bound to a single entity (like setting the updatedAt timestamps)? I need to do this because the date on different devices may not be the same. Maybe they like doing maths and figuring out the current day every time.
Google shows only examples of really simple entities, and on the web I found no examples of doing this inside the Google App Engine framework.
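For illustration, what I'm after is roughly this - stamping the entity in a server-side endpoint method instead of in the generated setter (the method and entity names here are just my example, not generated code):
Code:
import java.util.Date;

// Hypothetical server-side endpoint method (names are my example, not generated
// code): the server stamps the entity itself, so client clocks don't matter.
public MyEntity insertMyEntity(MyEntity entity) {
    entity.setCustomTimestamp(new Date().getTime()); // server time, not client time
    // ... persist the entity here with whatever the project uses (JPA/Objectify) ...
    return entity;
}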
It's a really simple question, but I was surprised not to find anything on Google about it. Also, I'm just starting to look into this App Engine world, so please forgive me if the question is not well posed. Just tell me what you need to know and I'll post it here. Thank you.
No one has any idea?
If it's your website and/or you have access to it, make your posts follow a specified template. Then call the same method for all of them from your app.
Or you can check the source from your browser, and make a custom parser for it like I did for this a few days ago.
I have successfully ported a Quectel M95 GSM modem to Android 4.0.4 and I can receive and send SMS, make calls - all the phone functionality. But to turn on the radio I am making this change in the RIL.java file:
Code:
if (cm.isNetworkSupported(ConnectivityManager.TYPE_MOBILE) == false)
I changed this condition to true:
Code:
if (cm.isNetworkSupported(ConnectivityManager.TYPE_MOBILE) == true)
so that my radio hardware turns on, and it works for me, but I know it's not the proper way. My first question is why isNetworkSupported() ever returns false even though I have now ported the GSM modem, and my second question is how I can enable the phone functionality so that my tablet is configured as a phone and this check:
Code:
if (cm.isNetworkSupported(ConnectivityManager.TYPE_MOBILE) == false)
is no longer needed (i.e., isNetworkSupported() actually returns true).
A quick response would be highly appreciated.
If I'm understanding this correctly - you're trying to get an external USB modem working on a Wifi-only tablet, to make it behave as a device with a mobile radio?
Interesting. A lot of the mobile radio stuff is intentionally left out of builds for wifi devices to save space/build time/reduce number of running services. You might need to take a look at the device tree for a mobile-enabled tablet for hints.
It's only recently that building wifi-only and mobile-capable variants of a device from one tree has become possible (I believe this is the case with Nexus 9 builds) - you might want to look at that, and possibly at the recently unified castor+castor_windy trees in Sony AOSP (Omni does not support these devices, but they can be found in the sonyxperiadev github).
Entropy512 said:
If I'm understanding this correctly - you're trying to get an external USB modem working on a Wifi-only tablet, to make it behave as a device with a mobile radio?
Interesting. A lot of the mobile radio stuff is intentionally left out of builds for wifi devices to save space/build time/reduce number of running services. You might need to take a look at the device tree for a mobile-enabled tablet for hints.
It's only recently that having wifi-only/mobile-capable variants of a device were possible (I believe this is the case with Nexus 9 builds) - you might want to look at this and possibly the recently unified castor+castor_windy trees in Sony AOSP (Omni does not support these devices, but they can be found in sonyxperiadev github)
Thanks for your response.
Yes, you understood it correctly; the only difference is that I am using the UART of my modem.
I didn't understand where the device tree for a mobile-enabled tablet is.
Is there any simple way where I just change some flag to get the whole mobile functionality, like by changing TYPE_MOBILE, etc.?
Thanks
Regards,
Adeel
adeelkhan09 said:
Thanks for your response.
Yes you understand it correctly but only difference is that I am using UART of my modem.
I didn't understand that where is the device tree for mobile-enabled tablet..?
Is there any simple way where I just change some flag to get whole mobile functionality? like by changing TYPE_MOBILE etc;
Thanks
Regards,
Adeel
I used debug logs inside the isNetworkSupported function, and now I know that in x210_ics_rtm_v13/out/target/common/obj/JAVA_LIBRARIES/framework_intermediates/src/core/java/android/net/IConnectivityManager.java the isNetworkSupported function is returning false. The following is part of that code:
Code:
public boolean isNetworkSupported(int networkType) throws android.os.RemoteException
{
    android.os.Parcel _data = android.os.Parcel.obtain();
    Log.d("FLOW", "mservice.isNetworkSupported " + _data); // added
    android.os.Parcel _reply = android.os.Parcel.obtain();
    Log.d("FLOW", "mservice.isNetworkSupported " + _reply); // added
    boolean _result;
    try {
        _data.writeInterfaceToken(DESCRIPTOR);
        _data.writeInt(networkType);
        mRemote.transact(Stub.TRANSACTION_isNetworkSupported, _data, _reply, 0);
        _reply.readException();
        _result = (0 != _reply.readInt());
    }
    finally {
        _reply.recycle();
        _data.recycle();
    }
    Log.d("FLOW", "mservice.isNetworkSupported " + _result); // added
    return _result;
}
In the last log above I got a false value for _result, so if you could tell me how this function gets its parcel values and how it is connected with the parcels, that would be very helpful for me.
Waiting for your response.
Thanks
adeelkhan09 said:
Thanks for your response.
Yes you understand it correctly but only difference is that I am using UART of my modem.
I didn't understand that where is the device tree for mobile-enabled tablet..?
Is there any simple way where I just change some flag to get whole mobile functionality? like by changing TYPE_MOBILE etc;
Thanks
Regards,
Adeel
It's exactly what I said it was - look at the device trees for other tablets with mobile radios (LTE or GSM) for examples. flounder (Nexus 9) is an example of one that I believe supports both LTE and wifi-only in a single build. In fact it's the only one I know of. Nearly all other devices need a special build for LTE vs wifi-only - see device-sony-castor vs. device-sony-castor_windy in sonyxperiadev's github.
Entropy512 said:
It's exactly what I said it was - look at the device trees for other tablets with mobile radios (LTE or GSM) for examples. flounder (Nexus 9) is an example of one that I believe supports both LTE and wifi-only in a single build. In fact it's the only one I know of. Nearly all other devices need a special build for LTE vs wifi-only - see device-sony-castor vs. device-sony-castor_windy in sonyxperiadev's github.
Today I followed the android_device_sony_castor device tree and made some changes to my system.prop, BoardConfig.mk, and device.mk files according to castor, but unfortunately I was not successful. Then I tried to follow the device tree of android_device_moto_everest and made some changes accordingly, but didn't have any success either.
adeelkhan09 said:
I have successfully ported Quectel M95 GSM modem with android 4.0.4 and I can receive and send sms, make calls all phone functionality but to turn on radio I am doing this change in RIL.java file
if (cm.isNetworkSupported(ConnectivityManager.TYPE_MOBILE) == false)
I changed this condition to true
if (cm.isNetworkSupported(ConnectivityManager.TYPE_MOBILE) == true)
so that my radio hardware may turn on and it works for me but I know its not a proper way , my question is why isNetworkSupported() function ever returns false even I have now ported gsm modem and my second question is how I can enable phone functionality so that my tablet will configure as phone so that this
if (cm.isNetworkSupported(ConnectivityManager.TYPE_MOBILE) == false)
returns true.
Quick response will be highly appreciated.
I need your help. I am also using Lineage 16 on a Grand Prime and I have a problem receiving messages. Please help me figure out the RIL - I also want to fix it. Please tell me how to port the RIL from another ROM, and tell me the file name.
Hi guys, could you tell me how to open a file for writing in the app's LocalStorage on a non-unlocked handset (a regular Store app)?
The code below doesn't work:
Code:
FILE *tmp;
auto tmpPath = Windows::Storage::ApplicationData::Current->LocalFolder->Path + "\\tmp.txt";
auto tmpErr = _wfopen_s(&tmp, tmpPath->Data(), L"w");
Any suggestions?
Try looking through the MSDN articles. I found it somewhere in there, but I have forgotten where now.
Sent from Board Express on my Nokia Lumia 1020. Best phone ever!!
Note to noobs: DON'T PM ME WITH QUESTIONS. POST IN THE FORUMS. THAT'S WHAT THEY ARE HERE FOR!
@wcomhelp, please keep your RTFM advice to yourself, OK? I'm not a noob, and of course I searched MSDN, Google, CodePlex, GitHub, etc. before posting here. If you don't know how, it's much better to stay silent (like the others who read this post but have no idea what I'm talking about).
I've tried a few possible methods, including the ugly "MS way" with task & lambda syntax (see below), but nothing worked as it should (the code below works if no file exists and fails if the file already exists - the CreationCollisionOption::ReplaceExisting option doesn't work / isn't implemented / is buggy / billgates_knows_only).
Code:
auto folder = Windows::Storage::ApplicationData::Current->LocalFolder;
Concurrency::task<Windows::Storage::StorageFile^> createFileOp(
folder->CreateFileAsync(CONFIG_FILE_NAME, Windows::Storage::CreationCollisionOption::ReplaceExisting));
createFileOp.then([=](Windows::Storage::StorageFile^ file)
{
return file->OpenAsync(Windows::Storage::FileAccessMode::ReadWrite);
})
.then([=](Windows::Storage::Streams::IRandomAccessStream^ stream)
{
auto outputStream = stream->GetOutputStreamAt(0);
auto dataWriter = ref new Windows::Storage::Streams::DataWriter(outputStream);
// data save code skipped
return dataWriter->StoreAsync();
})
.wait();
BTW, I've used a workaround: saving the ported C++ app's data to LocalSettings instead of a text file (as it was in the original code).
"Doesn't work" doesn't give us a lot to go on, troubleshooting-wise. Can you tell us what error you get?
Only thing I see in the code that looks a little weird is that the
Code:
"\\tmp.txt"
part isn't explicitly a wide-character string, but I'd expect string concatenation to take care of that.
Also, out of curiosity, why libc functions instead of Win32? Obviously, the code you're writing here isn't intended for much portability...
@GoodDayToDie, there is no error code at all - the standard POSIX-style function returns a NULL FILE*, and ::GetLastError() also returns 0.
I'm porting an old C-style app to the WinRT platform and don't care about portability (the code in the first post is just a simplified example, nothing more).
POSIX (libc) functions work pretty well for reading but not for writing - that's the problem...
As I said before, I resolved my issue with a workaround, but I'm still curious why the POSIX calls fail for file writing in the app storage.
buuuuuuuuuuuuuuuuh
No need for lambdas
https://paoloseverini.wordpress.com/2014/04/22/async-await-in-c/
You may also want to rethink your strategy
You can't create files at arbitrary locations, so your method is kinda redundant. All the locations you are allowed to create and read files in are available through the KnownFolders and ApplicationData classes. These return StorageFolders, which in turn can create files with CreateFileAsync (used both for creating and for opening existing files) and get files with GetFilesAsync (I recommend against this one, though) and similar methods.
@mcosmin222, could you please re-read my posts one more time? I'm not trying to create files at "arbitrary locations"; I want to create/write a simple text file in the app's local storage (which should be available for reading and writing). And the problem is not in the lambdas or task usage (yes, it looks ugly, but it works as it's supposed to).
Could you provide a working example instead of words? Then I'll be glad to say "thanks a lot"; I can't say it now...
sensboston said:
@mcosmin222, could you please re-read my posts one more time? I'm not trying to create files at "arbitrary locations"; I wanna create/write simple text file at the app's local storage (which one should be available for reading/writing). And the main problem not in the task (async execution).
Could you provide a working example instead of words? And I'll be glad to say you "thanks a lot"; can't say now...
Sure, just gimmie a few hours till I can get near a compiler that is capable of doing that
Of course, no rush at all, take your time. It's not a showstopper for me now (actually, my workaround with AppSettings is the preferable way - at least for a universal app and roaming settings), but the issue still holds an "academic interest" and may be useful in future projects porting old C/C++ code to WinRT.
sensboston said:
Of course, no rush at all, take your time. It's not a showstopper for me now (actually, my workaround with AppSettings is more preferable way - at least for universal app and roaming settings) but the issue still has an "academic interest" and maybe will be useful in the next projects for porting old C/C++ code to WinRT.
Hi,
In VS 2015:
#include <pplawait.h>
Something like the following should work.
Code:
Concurrency::task<void> WriteSomeFile() __resumable
{
    auto local = ApplicationData::Current->LocalFolder;
    auto file = __await local->CreateFileAsync("some file", CreationCollisionOption::OpenIfExists);
    __await FileIO::WriteTextAsync(file, "this is some text");
}
However, as of right now, in VS 2015 RC, you have a host of limitations when dealing with this, but I do not believe this will be of any issue to you.
Code:
Cannot use Windows Runtime (WinRT) types in the signature of resumable function and resumable function cannot be a member function in a WinRT class. (This is fixed, but didn't make it in time for RC release)
We may give a wrong diagnostic if return statement appears in resumable function prior to seeing an await expression or yield statement. (Workaround: restructure your code so that the first return happens after yield or await)
Compiling code with resumable functions may result in compilation errors or bad codegen if compiled with /ZI flag (Edit and Continue debugging)
Parameters of a resumable function may not be visible while debugging
Please see this link for additional details
http://blogs.msdn.com/b/vcblog/archive/2015/04/29/more-about-resumable-functions-in-c.aspx
you should also note that this works with native, standard C++ types.
@mcosmin222, it looks like unbuffered writing (i.e., without streams) works fine, but it's still not an answer to my initial question.
I'm curious why the standard libc write operations do not work in the app's local storage (while reading from files works fine). Actually, it's all about porting old C/C++ code to WinRT; of course for a new app it's not a problem, but rewriting old code to FileIO would be a huge pain in the ass. What I did: I "mechanically" changed all the libc formatted output from file to string, and I use the LocalSettings class (actually an XML file) to store that string (I'm also planning to change LocalSettings to RoamingSettings, to provide settings consistency between the WP & desktop apps).
P.S. <pplawait.h> is not available in my VS 2015 (release Pro version), so I've tested using the lambda pattern.
OK, first things first, LIBC != POSIX! The POSIX way to do this would be to call the open() function and get back an int as an "fd" (file descriptor), which is of course not implemented on Windows Phone because Windows Phone is not a POSIX platform (you might find the Windows compatibility functions _open() and _wopen(), but I doubt it). You are attempting to use the standard C library functions, which are portable but implement kind of a lowest common denominator of functionality and are generally slightly slower than native APIs because they go through a portability wrapper.
Second, sorry to be all RTFM on you but you should really Read The Manual (or manpage, or, since this is Windows, the MSDN page)! Libc APIs set errno (include errno.h) and use different error values than Windows system error codes (or HRESULT codes, or NTSTATUS codes, or...). Error reporting in C is a mess. If you were calling CreateFile(), you would check GetLastError(), but since you're calling _wfopen(), you check errno (not a function).
@GoodDayToDie, _wfopen_s returns 0 (i.e., "no error"), but the tmp pointer also receives 0 (NULL). Could you explain why the libc file functions work for reading (in the app installation & local data folders, of course) but not for writing? Any logical ("MSDN-based") explanation? Or do you just... not know, heh?
sensboston said:
@GoodDayToDie, _wfopen_s returns 0 (i.e. "no error") but tmp pointer receives also 0 (NULL) Could you explain why libc file functions are working for reading (at the app installation & local data folders of course) but not for writing? Any logical ("msdn based") explanation? Or you just... don't know, heh?
LIBC functions will most likely work just in debug mode. The moment you try to publish the app it will fail. You can do lots of crazy stuff on your developer device with basic C functions, but if you try publishing, it won't pass the marketplace verification.
Most C APIs are simply not supported, since they do not comply with the sandbox environment of the Windows Runtime.
The code I gave you is tested with VS 2015 RC. You should be able to include <pplawait.h> just fine, if you are targeting toolchains newer than November 2013.
mcosmin222 said:
The moment you try to publish the app it will fail. You can do lots of crazy stuff on your developer device with basic C functions, but if you try publishing, it won't pass the marketplace verification.
Hmm... are you sure, or is that just your assumption? My app is still under development, but (just for a test!) I made a Store app package for WP and it passed the local Store verification. I also uploaded the package to the Store (via the browser) and it passed as well. I don't have time to create all the tiles and fill in all the fields to complete a beta submission (actually, I don't know how to mark an app as beta in the new dashboard), but to me it looks like the app doesn't have any problem and will pass Store certification easily. And you may be sure - it uses A LOT of libc calls, 'cause it was originally written for Linux (or some kind of UNIX system).
sensboston said:
Hmm... Are you sure or it's just your assumption? My app is still under development but (just for test!) I've made store app package for WP and it passed local store verification I also uploaded package to the store (via browser) and it also passed. I don't have time to create all tiles and fill all fields to complete beta-submission (actually, I don't know how to mark app as beta in the new dashboard) but for me it looks like app don't have any problem and will pass store certification easily. And you may be sure - it uses A LOT of libc calls 'cause originally it was written for Linux (or kind of UX system)
Once usage reports get up to Microsoft, you will be given a notice to fix the offending API (it happened to me once). You are much better off using the platform-specific tools: not only are they much faster, they are also much safer, and you won't have problems later on.
You might get away with reading stuff (since reading is not that harmful), but you should be using the WinRT APIs whenever they are available.
Simply uploading your app to the marketplace just reruns the local tests on their cloud servers; once you submit the actual app (not a beta, not tests) for consumers, it will be checked much more aggressively. This is because the Store allows specific scenarios for distributing apps in closed circles that may break the usual validation rules.
@mcosmin222, one more time: is that your assumption or personal experience? I don't know how many apps you have in the Store (I have a lot), but I've never heard of what you describe. I've used C++ libraries with WP hacks in some of my published apps and never had any problem with "aggressive checks". What I do know: if you use some "prohibited" calls, your app will not pass uploading to the Store (uploading, not certification).
P.S. I'll send you a link personally when I publish the release. Hope you'll like it.
sensboston said:
@mcosmin222, one more time: is it your assumptions or personal experience? I don't know how many apps you have in store (I do have a lot) but I never heard that you said. I've used C++ libraries with WP hacks in some of published apps but never had any problem with "aggressive checks". What I know: if you are using some "prohibited" calls, your app will not pass uploading to the store (uploading, not a certification).
P.S. I'll send you personally a link when I publish release Hope, you'll like it
By "hacking" you mean recompiling the code to fit the windows phone toolchain? if so, then you shouldn't have to worry about too many things.
but even so, calling stuff like fopen in locations other than local storage will get your app banned. Even if it makes past the first publication, you can get noticed weeks later or even months (yes, it did happen to me personally).
In most cases, calling C APIs that can potentially break the sandbox (like opening a file in doc library with fopen) will always fail the marketplace verification, eventually. If it hasn't happened to you yet, then you may have not been using such APIs.
No, my C++ code is not accessing anything other than approved locations, but the app has a lot of libc (and of course other C/C++ library) calls; I'm 99.9% sure it's legitimate and will not be the source of any problems. Otherwise, what would be the advantage of having a C++ compiler?!
As far as I know, only some of the APIs are prohibited, and you will notice it right after the local Store compatibility test run...
As for "hacks", I mean usage of undocumented ShellChromeAPI calls (including the loading hack).
P.S. I've found out why the <pplawait.h> header is missing. I initially created the solution with the 12.0 toolset and now I can't (or don't know how to) change it to 14. However, creating a new empty universal solution in VS 2015 also gives me toolset 12 by default. What is toolset 14 for? Windows 10?
sensboston said:
No, my C++ code is not accessing other than approved locations but the app has a lot of libс (and of course other C/C++ libs) calls; I'm 99.9% sure it's legitimate and will be not a source of any problem. Otherwise what is the advantages of having C++ compiler?!
As far as I know, just some of API's are prohibited but you will notice it right after local store compatibility test run...
As for "hacks" I mean usage of undocumented ShellChromeAPI calls (including loading hack).
P.S. I've found why <pplawait.h> header is missing. Initially I've created solution with the 12.0 toolset but now I can't (or don't know how to) change it to 14. However creating the new empty universal solution in VS 2015 also gives me toolset 12 by default. What is the toolset 14 for? Windows 10?
The advantage of C++ is the obvious versatility: the standard C++ APIs will work fine for you as long as you stay inside the sandbox (this means you can't access files with them even in locations that are outside the sandbox but that you have permission for, such as the music library). You can use most classic C/C++ libraries without issues as long as you do the interfacing with the runtime broker yourself. That means using Windows Runtime APIs instead of classic C APIs when dealing with things such as file access, for example. This is a pretty extensive topic and it is rather difficult to explain it all with 100% accuracy, especially when there are lots of docs floating around.
You also get deterministic memory management, which is huge in specific scenarios.
Long story short:
You will be fine with standard C/C++ for:
any in-memory functionality supported by the compiler (you can manipulate data types, strings, mutexes, etc.)
file I/O in isolated storage only (the ApplicationData folder)
threads (although you are better off using the thread pool or the like - it is much easier and cleaner); you can also use futures and std::this_thread
You will have to use the WinRT replacements for:
file system access in any location other than application data (you must use the Windows::Storage APIs)
sockets, internet access and the like
anything hardware-related: music & video playback must be interfaced through WinRT (although the underlying decoders can be classic C/C++), and the same goes for messing around with the device sensors
retrieving system properties (internet connection state, etc.)
cross-process communication
communicating with other apps
There are also Win32 equivalents for:
mutexes, threading, file I/O (isolated storage only)
media playback with a custom rendering pipeline
Check this link out for more information on the toolchains. You should be able to use this in VS 2013 as well, with Windows 8 (this is a compiler feature; it has nothing to do with the supported platform):
https://paoloseverini.wordpress.com/2014/04/22/async-await-in-c/