Question "This app can't run on your PC" - Windows 10, 8, 7, XP etc.

This is the popup that occurs when you try to run a program. Maybe it's been more prevalent since the 22H2 update?
Windows (10) will be the death of me: opaque error messages and plenty of online advice telling you to do a dozen unrelated things.
There was a time (long ago) when you could look in the Event Viewer and actually find something concrete.
Why for goodness sake is there not a "Details" button that gives an exact description of the problem?
"Wacky heuristics has determined your program is evil" or "This program requires "boring.dll" that has been discontinued since Windows 3.1"
Instead it just says, "I dunno, just search on the internet and try all the fixes".
I don't want to get bogged down in the specifics of this particular incident. Suffice it to say that it was a C program that I built myself, that has worked fine for years and that the last build causes this popup. The previous build works fine. As a test I rebuilt 100 other programs and none of them flagged this.

It should work fine, being a C program. I have made some modifications in my applications too and I'm getting the same message on the last build.

@Eric_Mello
So, I narrowed it down and even have a work-around.
The first thing I did was to try to run the program through CreateProcess().
I got a GetLastError() of 0x000000c1, 193, "%1 is not a valid Win32 application."
Then I built my app without any resources. It ran (such as it could without resources).
BeginUpdateResource(), UpdateResource(), EndUpdateResource() have been around for over 20 years.
There have always been reports of problems with it.
When resources get inserted, the fixup table gets moved further along in the file.
I see on my problem program that the fixup table is exactly 0x1000 bytes.
When the resources get inserted it gets moved 0x600 bytes to make room.
I've checked, the fixup table does not need to get modified when it moves.
The workaround: I took a case statement that I was using and added a case 99999: that will never be hit.
That actually dropped the size of the fixup table to 0xffc.
And the program works.
Edit: Ok, I have the real skinny on it now.
The linker is dumb: when the raw data size of the .reloc is 0x1000, it sets the virtual size to 0x2000.
Because the linker calculates the (virtual) image size the same way, everything stays consistent.
Now we throw in the resource linker and it adds the resources.
It recalculates the (virtual) image size from the .reloc raw data size (rather than just using the .reloc virtual size).
The .reloc is moved, the section table is still the same with only the offset for the .reloc changed.
So now we have a discrepancy between the (virtual) image size and the sum of the virtual sizes for the sections.
There are two fixes: increment the (virtual) image size by 0x1000 or decrement the .reloc virtual size by 0x1000.
I've tried both and they both work.
If I get to it I'll write a little utility to check this.
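A minimal sketch of that kind of checker (not the author's actual utility), assuming a 32-bit PE and the standard windows.h structures; it lists each section and compares SizeOfImage in the optional header against the sections' aligned virtual sizes:
Code:
/* Sketch: compare SizeOfImage against the sections' aligned virtual sizes.
 * 32-bit PE only, minimal error handling. */
#include <stdio.h>
#include <windows.h>

int main(int argc, char **argv)
{
    FILE *f;
    IMAGE_DOS_HEADER dos;
    IMAGE_NT_HEADERS32 nt;
    IMAGE_SECTION_HEADER sec;
    DWORD align, calc;
    WORD i;

    if (argc < 2) { printf("usage: pecheck file.exe\n"); return 1; }
    f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }

    fread(&dos, sizeof(dos), 1, f);
    fseek(f, dos.e_lfanew, SEEK_SET);
    fread(&nt, sizeof(nt), 1, f);
    if (nt.Signature != IMAGE_NT_SIGNATURE ||
        nt.OptionalHeader.Magic != IMAGE_NT_OPTIONAL_HDR32_MAGIC) {
        printf("Not a 32-bit PE\n"); fclose(f); return 1;
    }
    align = nt.OptionalHeader.SectionAlignment;
    /* The headers occupy the first aligned chunk of the image. */
    calc = (nt.OptionalHeader.SizeOfHeaders + align - 1) & ~(align - 1);

    /* Section headers start right after the optional header. */
    fseek(f, dos.e_lfanew + sizeof(DWORD) + sizeof(IMAGE_FILE_HEADER)
             + nt.FileHeader.SizeOfOptionalHeader, SEEK_SET);
    for (i = 0; i < nt.FileHeader.NumberOfSections; i++) {
        DWORD vsize, end;
        fread(&sec, sizeof(sec), 1, f);
        vsize = sec.Misc.VirtualSize ? sec.Misc.VirtualSize : sec.SizeOfRawData;
        end = sec.VirtualAddress + ((vsize + align - 1) & ~(align - 1));
        if (end > calc) calc = end;
        printf("%-8.8s VA %08lX VSize %08lX Raw %08lX\n", (char *)sec.Name,
               (unsigned long)sec.VirtualAddress,
               (unsigned long)sec.Misc.VirtualSize,
               (unsigned long)sec.SizeOfRawData);
    }
    printf("SizeOfImage in header: %08lX\n",
           (unsigned long)nt.OptionalHeader.SizeOfImage);
    printf("Calculated:            %08lX\n", (unsigned long)calc);
    if (calc != nt.OptionalHeader.SizeOfImage)
        printf("Error: discrepancy in size of image\n");
    fclose(f);
    return 0;
}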

Before adding resources (runs, kind of):
Code:
C:\>peclean v.exe
v.exe App, Intel 32 bit
Warning: Discrepancy in size of .reloc section
Raw size: 00001000
Calculated: 00001000
In table: 00002000
After adding resources (can't run):
Code:
C:\>peclean v.exe
v.exe App, Intel 32 bit
Warning: Discrepancy in size of .reloc section
Raw size: 00001000
Calculated: 00001000
In table: 00002000
Error: Discrepancy in size of image
Calculated: 00029000
In header: 00028000

Ho, hum.
So I added some code to my custom resource linker that fixes things when UpdateResource breaks things.

lol... windows is like having an entitled & spoiled 12 y/o, who got sent to live with you without an explanation... And you have survived 35+ years , leading a very independent lifestyle.... but here's the kid all of a sudden...
1. He wakes you up yelling at 7 am, cuz he wants cereal for breakfast and all you have is Whole Milk, and he can't do whole milk, cuz it upsets his stomach, so you have to rush out and buy other milk so he eats before school.......
BUT
2. You race BACK to the house, risking tickets and wrecks, to save the day, and there he is, pouting and screaming at you NOW because you got 2% Milk, but you bought BORDEN, and Borden is pasteurized, which might be made near a peanut company and he has a DEADLY peanut allergy! .... You are kinda angry... but more annoyed and head out again.
NOW
3. You got lucky cuz you remembered the corner store DID have organic milk, but it's OUTRAGEOUSLY PRICED... though r/n you don't care as long as the kid leaves happy, and you enjoy some peace! MONEY WELL SPENT!.... ... As you open the door, he is running out, with milk running down his face and his mouth full saying BYE...
BUT YOU AINT HAVING THAT!... YOU SPIN AROUND AND ASK HIM WTF?? HOW DID HE EAT HIS CEREAL? And he tells you, that since the cereal was sweetened it actually weakened the lactose enough that it was fine and he ate 3 bowls while you were gone! ....
WHAAAAAAT! YOU HAD WHOLE MILK SITTING THERE TO BEGIN WITH, AND NONE OF THIS CIRQUE DU SOLEIL was necessary, so you scream and walk inside. Then a few min later the kid walks in and reminds you it's Saturday... no school!
The moral of this story is, NEVER GIVE IN TO ANY DEMANDS THAT WINDOWS MAKES by throwing you the almighty blue screen, because chances are, you can edit a registry entry ... modify a file (like you did) .... or just FORCE windows to run the program and crash repeatedly until you can observe the activity monitor, and see the cpu spike that always happens directly before the crash... like sometimes 2 seconds... then trace back the process that threw a temper tantrum and sometimes just increase or decrease the priority of the process, and windows will first SLOWLY go past the error the 1st time... then faster and faster until it realizes that the Milk you already had was perfectly fine to use, and it better NEVER INTERRUPT YOU ON A SATURDAY AGAIN, IF IT VALUES ITS LIFE! ... lol

@Eric_Mello So, did you find that you had the same problem?
Since the virtual page size is 4096 bytes and a .reloc entry is 4 bytes you have a 0.1% chance of this happening.

Renate said:
@Eric_Mello So, did you find that you had the same problem?
Since the virtual page size is 4096 bytes and a .reloc entry is 4 bytes you have a 0.1% chance of this happening.
I really haven't found it yet.
I put my friend on this case and we're still continuing the tests.
I shared this thread with him to discuss the information above.
Thanks and good luck!

Eric_Mello said:
I put my friend on this case and we're still continuing the tests.
If you want a simple test/fix, try this. Use a hex editor on your .exe
Look at the little-endian 32-bit value at 0x250. The hex value should end in three zeroes, like 0x00123000.
Add to that the hex value 0x1000 and write it to that location. Try running it.
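For anyone nervous about poking a raw offset (0x250 just happens to be where SizeOfImage lands in this particular exe), here is a rough sketch that walks the DOS/NT headers instead and bumps SizeOfImage by one page. 32-bit PE assumed; back up the file first. The optional-header checksum is left stale, which is normally only enforced for drivers, not ordinary exes:
Code:
/* Sketch: locate SizeOfImage via the headers and add one page to it. */
#include <stdio.h>
#include <windows.h>

int main(int argc, char **argv)
{
    FILE *f;
    IMAGE_DOS_HEADER dos;
    IMAGE_NT_HEADERS32 nt;

    if (argc < 2) { printf("usage: bumpimage file.exe\n"); return 1; }
    f = fopen(argv[1], "r+b");
    if (!f) { perror(argv[1]); return 1; }

    fread(&dos, sizeof(dos), 1, f);
    fseek(f, dos.e_lfanew, SEEK_SET);
    fread(&nt, sizeof(nt), 1, f);
    if (nt.Signature != IMAGE_NT_SIGNATURE ||
        nt.OptionalHeader.Magic != IMAGE_NT_OPTIONAL_HDR32_MAGIC) {
        printf("Not a 32-bit PE\n"); fclose(f); return 1;
    }

    printf("SizeOfImage: %08lX -> %08lX\n",
           (unsigned long)nt.OptionalHeader.SizeOfImage,
           (unsigned long)(nt.OptionalHeader.SizeOfImage + 0x1000));
    nt.OptionalHeader.SizeOfImage += 0x1000;

    fseek(f, dos.e_lfanew, SEEK_SET);   /* write the headers back in place */
    fwrite(&nt, sizeof(nt), 1, f);
    fclose(f);
    return 0;
}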

Related

CRC calculation

I have been trying to get a better understanding of CRC calculations. There is a lot of table-based code freely available on the net, but I want to implement my own basic technique first to get a fuller understanding. I have made a class to do the calculations and manually calculated an example to verify the code, and they both agree. The problem is that based on this http://www.createwindow.com/programming/crc32/crcverify.htm my answer is wrong. If anyone could look over the attached txt file I would be grateful. I can't post it here because the formatting would be lost and it would be impossible to read (very long division). The text I CRC is "resume" (only those chars). The polynomial I use is the standard one used in zip, 0x04C11DB7 when computing, or 0x104C11DB7 when doing it manually (the msb always cancels out).
It could just be that a table driven implementation generates a different crc, but I doubt it.
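For comparison, here is a minimal table-free sketch of the CRC-32 variant that ZIP actually uses. When a hand calculation with 0x04C11DB7 disagrees with a table-driven result, the usual culprits are the initial value (0xFFFFFFFF), the bit reflection of input and output, and the final XOR, rather than the table itself; the reflected form of the polynomial is 0xEDB88320:
Code:
/* Bitwise (table-free) CRC-32 as used by ZIP/PNG: reflected 0x04C11DB7
 * (0xEDB88320), initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF.
 * A straight long division of the raw bits gives a different answer even
 * with the same polynomial. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

uint32_t crc32_bitwise(const unsigned char *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    size_t i;
    int b;
    for (i = 0; i < len; i++) {
        crc ^= data[i];
        for (b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    const char *s = "resume";
    printf("CRC-32(\"%s\") = %08X\n", s,
           (unsigned)crc32_bitwise((const unsigned char *)s, strlen(s)));
    return 0;
}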

Windows Mobile RGB565 format problem

Hello there, to all developers.
I have this very weird problem for which I couldn't find any answer. It goes like this:
I'm developing a graphical app for Pocket PC devices using the DirectDraw API with VS8 (.NET 2005) and the Windows Mobile 6 SDK. Since I don't have native double-buffer support, I simply use 2 LPDIRECTDRAWSURFACE objects, one as the primary buffer and one as a backbuffer.
As a test, I fill the backbuffer surface with a color and display it. This works fine.
Then I create a compatible bitmap using CreateDIBSection, read the color bytes from the (void**)lpByte I receive into a new bitmap, and blit it to the background. Basically I am copying the background and then blitting it back.
The problem: I get wrong data. When I initially fill the background, I use a WORD value (since color is displayed using the 16-bit RGB16 format), for example 65,535, which means all 16 bits set to 1. So I get a white background. So far, so good.
When I read the values from that lpByte pointer of type LPBYTE, I get bytes with values of 255 and 127, which means every second byte loses its most significant bit. So when I build the WORD value by reading every 2 bytes, I get a value of 32,767, and when I display the bitmap, I get a yellowish color.
It actually gets worse if I do this with textures.
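For reference, a small sketch of the 5-6-5 packing, assuming the common layout with red in the high bits (some surfaces use BGR order instead); losing bit 15 of the 16-bit value clips the top bit of whichever channel occupies the high bits:
Code:
/* RGB565 reference: 5 bits red, 6 bits green, 5 bits blue in one 16-bit
 * word (red assumed in the high bits). */
#include <stdio.h>

static unsigned short pack565(unsigned r, unsigned g, unsigned b) /* r,b 0-31, g 0-63 */
{
    return (unsigned short)((r << 11) | (g << 5) | b);
}

static void unpack565(unsigned short c, unsigned *r, unsigned *g, unsigned *b)
{
    *r = (c >> 11) & 0x1F;
    *g = (c >> 5)  & 0x3F;
    *b =  c        & 0x1F;
}

int main(void)
{
    unsigned r, g, b;
    printf("white packs to 0x%04X\n", pack565(31, 63, 31));
    unpack565(0xFFFF, &r, &g, &b);   /* full white */
    printf("0xFFFF -> R=%u G=%u B=%u\n", r, g, b);
    unpack565(0x7FFF, &r, &g, &b);   /* white with the top bit lost */
    printf("0x7FFF -> R=%u G=%u B=%u\n", r, g, b);
    return 0;
}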

[Q] Accelerometer question / algorithm

I want to detect different levels of acceleration in a vehicle. A requirement is a fixed mount in the vehicle. The mount can be mounted at any angle that is preferable to the user. Also, the mount doesn't have to face forward in alignment with the vehicle.
Now, what I've done so far is, calibrate the "stand still" situation, by reading the x, y, z g-forces of the accelerometer, during a certain amount of time, and then getting the averages of it. I will call these variables gBaseX, -Y & -Z
Once that is done, I want to calibrate the "straight line forward" situation, what I do there is show a screen with a button "start acceleration" and "end of acceleration". This now works by reading the accelerometer, directly subtracting the gBaseXY&Z. During the acceleration, I will search for the highest total G-force. When that's done I save the result in variables gAccelerationDirectionX, -Y & -Z.
Now comes the hard part (and maybe I should ask this on a math forum or something; does anyone have suggestions?).
In the main logic of the app I want to use the gAccelerationDirectionX,Y&Z to
'rotate' the axis of my read actual values.
E.g. I have determined in my app that:
gBaseX = 0.15
gBaseY = -0.27
gBaseZ = -9.57
After that I accelerate over a straight line (having the phone mounted at a weird angle) and I would get for instance:
gAccelerationDirectionX = -0.55
gAccelerationDirectionY = -0.70
gAccelerationDirectionZ = -0.03
When I move the phone in a straight line forward in the same position, I get those readings:
gReadX = -0.38 (After it's corrected with gBaseX)
gReadY = -0.94 (After it's corrected with gBaseY)
gReadZ = -0.27 (After it's corrected with gBaseZ)
How can I correct the "axis" of this? It must be a formula which use the gAccelerationDirectionX,Y&Z variables. I don't want to use anything like the phone orientation.
What the result must be is something like this:
gCorrectedX = 0 (meaning no left or right movement)
gCorrectedY = -? (meaning acceleration)
gCorrectedZ = 0 (meaning no up or downward movement)
I hope I've made myself clear
Please help on this algorithm.
The easiest way I can think of to do this would be to convert to a spherical coordinate system. Your reference acceleration vector will then be a magnitude and direction relative to the orientation of the reference frame by two angles. You can then convert any other acceleration vector back to a relative Cartesian system by normalizing your new vector with these reference angles.
See:
http://en.wikipedia.org/wiki/Spherical_coordinate_system
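An equivalent route that skips the explicit angle conversion is plain vector projection: build an orthonormal basis from the calibrated gravity vector and the calibrated forward push, then take dot products. A rough sketch using the variable names and sample values from the post above (sign conventions are assumptions and may need flipping):
Code:
/* Sketch: derive forward/up/right axes from the calibrated vectors and
 * project each gBase-corrected reading onto them. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 c = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    return c;
}

static Vec3 norm(Vec3 a)
{
    double l = sqrt(dot(a, a));
    Vec3 n = { a.x / l, a.y / l, a.z / l };
    return n;
}

int main(void)
{
    Vec3 gBase   = {  0.15, -0.27, -9.57 };  /* reading at stand-still (gravity) */
    Vec3 gAccDir = { -0.55, -0.70, -0.03 };  /* calibrated straight-line push    */
    Vec3 gRead   = { -0.38, -0.94, -0.27 };  /* one corrected reading            */

    Vec3 up = norm(gBase);                   /* gravity axis in device coords */
    double f_up = dot(gAccDir, up);
    /* Remove the gravity component from the forward calibration, then normalize. */
    Vec3 fwd = { gAccDir.x - f_up * up.x, gAccDir.y - f_up * up.y, gAccDir.z - f_up * up.z };
    fwd = norm(fwd);
    Vec3 right = cross(fwd, up);             /* completes the basis */

    printf("forward  = %+.3f\n", dot(gRead, fwd));    /* acceleration/braking */
    printf("lateral  = %+.3f\n", dot(gRead, right));  /* left/right           */
    printf("vertical = %+.3f\n", dot(gRead, up));     /* up/down              */
    return 0;
}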
Why would you not use LBS (Location Based Services)? Very powerful stuff in there
There is really no such thing as "straight-line" movement...as infinitesimal as it seems, we are always moving in some type of arc. It's the result of living on a spherical object.
Some good reading here: http://en.wikipedia.org/wiki/Great-circle_distance
Calculating instantaneous speed is Calculus I reading. Calc I is why Jeff Gordon lost the Bristol race and Brad K. won it
Rootstonian said:
Why would you not use LBS (Location Based Services)? Very powerful stuff in there
Since the OP is using the accelerometers, I would assume that he or she needs a lot finer resolution than you can get with a GPS.
Rootstonian said:
Why would you not use LBS (Location Based Services)? Very powerful stuff in there
There is really no such thing as "straight-line" movement...as infinitesimal as it seems, we are always moving in some type of arc. It's the result of living on a spherical object.
Some good reading here: http://en.wikipedia.org/wiki/Great-circle_distance
Calculating instantaneous speed is Calculus I reading. Calc I is why Jeff Gordon lost the Bristol race and Brad K. won it
Oops. You added stuff. I imagine that the OP could safely ignore the earth's curvature in the sort of calculations I think he or she is trying to measure. I remember back in the 1980's when I used to drag race, there was a very expensive device that some racers used when the track was closed that could very accurately measure quarter mile times (as accurately as the traps at the track) all based on accelerometers.

[Info] Boot Time per Paging Pool size depending on imgfs compression

I want to share with you some findings on how the size of the paging pool influences the usability of a WM device.
Setup:
Device: HTC Tornado (Smartphone, original WM5, 64MB RAM, 64MB ROM, TI OMAP 850, DiskOnChip from M-Systems)
ROM: my WM6 build (based on Nitrogen's WM6 - see my signature)
ROM Kitchen: OS Builder 1.48
Paging Pool changer: PPSmartchanger (adapted for use without ULDR partition)
Method:
Create builds with OSB selecting the relevant imgfs compression options:
lzx and xpr
with and without having the code sections of modules uncompressed
Change Paging Pool with PPSmartchanger (works only for devices with M-Systems DiskOnChip - so old stuff only - changed with pdocwrite)
retrieve boot-time via devhealth call after device has rebooted to selected standard (for all tests) - 2 repeats have proven enough as there is very little deviation between reboots.
I have taken the boot time because this is easily repeatable and devhealth delivers standardized results.
Results:
lzx delivers the smallest ROM builds but also takes the longest time to boot
xpr delivers reasonably sized ROM builds and is the fastest
lzx with uncompressed code sections of modules has large ROMs (larger than xpr) and takes a longer time to boot
xpr with uncompressed code sections of modules has giant ROMs (I had to cut out all optional parts not impacting the procedure) and is not faster than xpr for all modules
Discussion of the results:
You may ask why that setup first? To learn about the paging pool, look also here.
The Paging Pool serves as a dedicated (limited) area of RAM where the current executed code sections of the loaded modules reside. During boot a lot of modules are loaded. While the boot process continues, the demanded use of the paging pool will intermittently exceed its size and pages (parts of the code sections) are discarded to make room for new code to be loaded to the paging pool. As the discarded pages can easily be re-read from their original location on the imgfs - this is no loss of information.
The time it takes to read a module from imgfs, decompress it (lzx, xpr or no compression) and load it to the paging pool for execution determines the total of the boot time. So if you have an unlimited paging pool or you can afford to switch off the demand paging - then your boot time is limited only by read + decompress.
If now the paging pool is smaller than the maximum sequentially loaded modules need it to be, then the discarding of pages in the paging pool will eventually require a re-load of pages for active modules from the imgfs again. The probability for this to happen will increase with a smaller paging pool, so the boot-times will rise with a smaller paging pool. This is what you see for all 4 setups.
Why is lzx imgfs slower than xpr? lzx files are smaller than xpr on imgfs, so they should be read faster from imgfs! True, but obviously the overhead of the decompression to the paging pool is taking more time over all here for my device. So lzx takes more CPU than xpr for decompression.
Why is xpr + uncompressed code sections slower than if the whole imgfs is xpr? The saved overhead of decompressing the code sections should give an advantage here! So here you see that there needs to be balance of imgfs-read performance and CPU power to decompress. For this device it seems that uncompressed larger code sections are no advantage as they take longer to read from imgfs.
Your device's results may vary, but it seems that you only need to measure for one paging pool setup to judge the different options of OS Builder.
Nice comparison. Which boot time marker did you look at?
Just the first in the summary:
<Perf Gates>
<Gate Name=Bootup>27813</Gate>
<Gate Name=Memory>33894400</Gate>
<Gate Name=Storage>13049856</Gate>
</Perf Gates>
This is the same as later:
Bootup Time Markers (Milliseconds):
Available RAM on boot = 34242560
Calupd: Exiting = 30226
Initialized appt ssupdate = 29832
Initialized msgcount ssupdate = 29764
SSUpdate: Thread now in message loop = 29231
Calupd: Starting = 28781
SSUpdate: Shell Ready = 28717
Start Menu Icons Fetched = 28687
Appman: Finished Fetching Icons = 28664
Appman: Started Fetching Icons = 27831
Home Painted = 27813
Cprog: Done Booting = 25188
Last Boot Type = 2
I intend to use that setup also to look a little deeper in the devhealth memory report later, to possibly learn about differences that other OSB options are setting to relevant memory allocation of modules - especially dealing with code that can be paged out or not.
From the MSDN/Kernel Blog articles I have read, there are also other (internal) properties of the code that determine if paging may occur or not. Usually SW production tools set the relevant flags of those automatically so you don't have to care yourself.
I am not enough expert to judge here, but from a glance at the devhealth reports I see that e.g. gwes.exe process has no pageable parts (due to OSB setting it like that?) while other processes (e.g. home.exe) do have pageable code.
I was thinking of e.g. setting rilgsm.dll on the list of modules to prevent demand paging (to have the device immediately respond to incoming calls), but I noticed in the report that it already has no pageable (p) pages listed.
To avoid urban legends I wonder if there is a way to determine or even guide which modules/files are candidates for the list of advanced operations in OSB. It could also help to indicate which are clearly not taking any benefit of those.
From what I understand, OSB has a bunch of options that deal with demand paging one or the other way:
Set Kernel Flag 0x00000001 (MSDN: Demand paging is disabled when the first bit of ROMFLAGS is set. When demand paging is disabled, a module is fully loaded into RAM before running. For OEMs that need real-time performance, turning off demand paging eliminates page faults that can cause poor performance.)
Disable demand paging list (need to find out how this is done - dump directory files are still identical to the SYS folder) - I suspect that relevant code section properties are set here. gwes.exe is internally added automatically (see your build log).
imgfs compression options (see the initial compare in the first post) with the related setting to not compress code sections in modules. It may be helpful to balance the ROM size with potential benefit here if there was option to select the modules where the code sections should stay uncompressed. As seen from below comparison it is no benefit for me - but it may for others.
paging pool size (bigger is better - if you can afford it - even set to "unlimited")
In the context of above I still think that wealthy devices with variable use (like smartphones) better profit from PP = 0 than from Kernelflag = 0x00000001 because an "endless" PP still has the potential recovery of paging out when RAM limit is reached while disabled demand paging prevents further loading of modules at all. The latter may be suitable for fixed purpose devices like navigation devices
Nice, I found this useful. I'm looking forward to your future benchmarks.
I'm wondering how much lzx does impact loading applications after the device has booted-up.
Jackos said:
I'm wondering how much lzx does impact loading applications after the device has booted-up.
As long as applications just have to start it does not matter much imho. For such cases I even prefer to UPX these to get even more compression than with the imgfs itself. Here just one time read+decompress has to be taken into account.
The key to the boot-time (and related other module loading and discarding pages later from the paging pool) is that this will happen repeatedly and automatically without you able to steer it.
tobbbie said:
I am not enough expert to judge here, but from a glance at the devhealth reports I see that e.g. gwes.exe process has no pageable parts (due to OSB setting it like that?) while other processes (e.g. home.exe) do have pageable code.
I was thinking of e.g. setting rilgsm.dll on the list of modules to prevent demand paging (to have the device immediately respond to incoming calls), but I noticed in the report that it already has no pageable (p) pages listed.
I looked up the devhealth report of an old build and the gwes.exe is just the same (from first glance). So I suspect that devhealth is just depicting the current state of the device and not which pages could be paged out (or not). On first fast search I have not found resources to clarify on that yet.
You don't need to run devhealth to get the boot-time, just look at HKCU\Performance\millisecondstoidlethread.
I actually run a mortscript that records each time I reset my device, and how long it takes (there's a shortcut to it in the startup folder). This is the script, if you want to try something similar. It's kind of nice; I've had a few times where my startup time increased a lot, and I like being able to tell when it happened.
Code:
MkDir("\Performance")
If(FileExists("\Performance\bootlog.txt"))
writefile("\Performance\bootlog.txt","^NL^",True)
writefile("\Performance\bootlog.txt",RegRead("HKLM","Comm","BootCount"),True)
writefile("\Performance\bootlog.txt"," boots, booted ",True)
writefile("\Performance\bootlog.txt",FormatTime("m:d:Y:H:i:s a"),True)
writefile("\Performance\bootlog.txt",", reboot time ",True)
sleep(30000)
writefile("\Performance\bootlog.txt",RegRead("HKCU","Performance","Millisec to idle thread"),True)
else
writefile("\Performance\bootlog.txt",RegRead("HKLM","Comm","BootCount"))
writefile("\Performance\bootlog.txt"," boots, booted ",True)
writefile("\Performance\bootlog.txt",FormatTime("m:d:Y:H:i:s a"),True)
writefile("\Performance\bootlog.txt",", reboot time ",True)
sleep(30000)
writefile("\Performance\bootlog.txt",RegRead("HKCU","Performance","Millisec to idle thread"),True)
endif
If(FileExists("\Performance\ramlog.txt"))
writefile("\Performance\ramlog.txt","^NL^",True)
writefile("\Performance\ramlog.txt",RegRead("HKLM","Comm","BootCount"),True)
writefile("\Performance\ramlog.txt"," boots-reset, RAM ",True)
writefile("\Performance\ramlog.txt",FreeMemory(MB),True)
writefile("\Performance\ramlog.txt"," MB, ",True)
writefile("\Performance\ramlog.txt",FormatTime("m:d:Y:H:i:s a"),True)
else
writefile("\Performance\ramlog.txt",RegRead("HKLM","Comm","BootCount"))
writefile("\Performance\ramlog.txt"," boots-reset, RAM ",True)
writefile("\Performance\ramlog.txt",FreeMemory(MB),True)
writefile("\Performance\ramlog.txt"," MB, ",True)
writefile("\Performance\ramlog.txt",FormatTime("m:d:Y:H:i:s a"),True)
endif
This is what the output looks like (the time is in milliseconds, so we're looking at ~45 seconds per boot):
2 boots, booted 08:11:2011:12:24:24 pm, reboot time
3 boots, booted 09:04:2011:08:04:31 am, reboot time 49245
4 boots, booted 09:04:2011:09:18:14 am, reboot time 46642
5 boots, booted 09:04:2011:13:39:35 pm, reboot time 42994
6 boots, booted 09:12:2011:15:31:33 pm, reboot time 45625
7 boots, booted 09:12:2011:17:30:11 pm, reboot time 45564
Edit: Oops, I forgot, only the top part writes the boot log. The second part logs the device free ram whenever I wake it up. I just like to follow that for the hell of it. It's pretty cool to have that log when you go 24 days between soft resets, lol.
...thanks for the scripts - maybe useful as these take very little time to execute. I do a reset every night on my device - so long time RAM "loss" is easily worked around this way. Need to adjust for the keys present in my \performance path though (old WM5 Native Kernel with WM6 OS on top) - have not seen the one you read from there.
I wanted to utilize the devhealth logs later as well to compare more between different settings - this is why I had chosen devhealth to read it for me.
I noticed that some performance gates are linked to the amount of appointments or tasks. These log entries are after "Home painted". So for my compare I simply picked the devhealth selection of "Home painted" as the performance gate for bootup.
This is not the true time until you can use your device, but a good one to compare the influence of a single parameter (the PP size or the imgfs compression).
BTW: I checked whether the latest XIP compression from OSB has an influence on boot time - it does not. I have only applied it, as recommended, to the 2 big ones in my XIP - not sure yet if I want to squeeze the remaining bytes from there.
The following post was initially done in the OSB thread, but I thought it is off topic there, so moved it here:
I read about the use of the paging pool in this blog post, but in essence I thought that mostly module code would be a candidate for that memory region. Looking at the output of devhealth there are also other parts (aside from modules) marked as "p", which I suspect to be either "paged out" or "pageable" (I have not found a description here).
For me the relevant question are:
Can normally loaded executables (from imgfs as files or elsewhere, e.g. storage card) be utilizing the paging pool? I guess yes - as the OS acts according to the information present in the executable's structure, just like it does for the modules. So if this is the case - what is difference then regarding the use of the paging pool if a certain dll/exe is in "module" format or as a file? Are sections' attributes in a file incarnation set differently regarding the use of paging pool?
Does a potential UPX compression of a file (exe/dll) then impact the utilization of the paging pool and later performance of the exe/dll when paging applies? In a wild speculation and linked to the analogy of imgfs decompression needed when reading LZX compressed code sections parts to be paged in again: would a upx-ed exe/dll need to be re-read and decompressed from storage in case paging of code would apply? I guess no as this would severely impact performance and I have not noticed it for my UPXed (usually BIG) files.
So I wonder if I sacrifice OS memory management flexibility when I UPX compress a large executable, like Opera, that it e.g. needs more RAM and that parts maybe page-able if not UPX-ed and not page-able when UPX-ed. My guess (and hope) is that dll/exe files do not have page-able sections that are compressed. Possibly something to discuss with the UPX-team.
Can I do any tests myself to find out? I could use e.g. Opera and load it one time as UPX-compressed file and another time as normal file. Will the devhealth memory report be suitable to compare these cases? Is the "p" in the devhealth report telling that these pages in memory are "pageable" (so could be paged out if needed) or that they are "paged out" at the time of report?
tobbbie said:
Can I do any tests myself to find out? I could use e.g. Opera and load it one time as UPX-compressed file and another time as normal file. Will the devhealth memory report be suitable to compare these cases? Is the "p" in the devhealth report telling that these pages in memory are "pageable" (so could be paged out if needed) or that they are "paged out" at the time of report?
...well I just tried it (using PocketPlayer from Conduits) and the differences are there - and they are as expected:
The normal exe delivers a map that has most memory parts marked either as "-: reserved" or "p-page" and the exe ends at a size of 2 882 088 bytes
The UPX-ed exe delivers a map where executable parts are marked as "E" but ending at a size of 2 775 552 bytes
Strange to notice that the devhealth dump delivers maps that are missing certain memory regions within the sequence of the report. So for the normal report the lines for 2e160000, 2e200000, 2e240000, 2e470000, 2e850000 are missing - compared to the UPX-ed. No clue what these "gaps" mean actually - probably memory that is simply not allocated to the process.
I cannot interpret any detail here, but the obvious difference is that the UPX-ed version seems to allocate the memory in a fixed way (i.e. it cannot by dynamically reclaimed by the OS), while the "normal" version seems to allocate memory in a way that allows "paging" to happen.
UPX:
plus side: no paging happens for the main executable parts - much similar to the OSB module option "exclude from demand paging".
minus side: in total more RAM is required as the whole executable size is loaded (to RAM?).
Normal:
plus side: less RAM is needed as the executable seems to utilize the paging pool
minus side: the paging pool is used, so it should be dimensioned to cope with the parallel use of all normal running applications, not just the modules from ROM.
My speculations are based on the assumption that the "p" indication of the DevHealth report is depicting the utilization of the paging pool. If so then the total number of "p" must not exceed the defined paging pool size. I have no easy way to count these "p" and skip others in the report - so no way to confirm that.
Take-home message: UPX your dll/exe if you want to exclude them from demand paging.
See attachment with the relevant excerpts from the DevHealth reports.
tobbbie, too long way to disable paging for selected exe. You just have to set "unpageable" flag on all sections using apps like LordPE.
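For reference, the flag such tools toggle is presumably the standard PE section characteristic IMAGE_SCN_MEM_NOT_PAGED (0x08000000). A rough sketch of setting it on every section of a 32-bit PE, under that assumption; back up the file first:
Code:
/* Sketch: mark every section of a 32-bit PE as not pageable by setting
 * IMAGE_SCN_MEM_NOT_PAGED on its characteristics. */
#include <stdio.h>
#include <windows.h>

int main(int argc, char **argv)
{
    FILE *f;
    IMAGE_DOS_HEADER dos;
    IMAGE_NT_HEADERS32 nt;
    IMAGE_SECTION_HEADER sec;
    long secpos;
    WORD i;

    if (argc < 2) { printf("usage: nopage file.exe\n"); return 1; }
    f = fopen(argv[1], "r+b");
    if (!f) { perror(argv[1]); return 1; }

    fread(&dos, sizeof(dos), 1, f);
    fseek(f, dos.e_lfanew, SEEK_SET);
    fread(&nt, sizeof(nt), 1, f);

    secpos = dos.e_lfanew + sizeof(DWORD) + sizeof(IMAGE_FILE_HEADER)
           + nt.FileHeader.SizeOfOptionalHeader;

    for (i = 0; i < nt.FileHeader.NumberOfSections; i++) {
        fseek(f, secpos + (long)i * sizeof(sec), SEEK_SET);
        fread(&sec, sizeof(sec), 1, f);
        sec.Characteristics |= IMAGE_SCN_MEM_NOT_PAGED;
        fseek(f, secpos + (long)i * sizeof(sec), SEEK_SET);
        fwrite(&sec, sizeof(sec), 1, f);
    }
    fclose(f);
    return 0;
}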
ultrashot said:
tobbbie, too long way to disable paging for selected exe. You just have to set "unpageable" flag on all sections using apps like LordPE.
Well - for the pro's that only want to have demand paging disabled your suggested way to hack the exe/dll is viable as well. The UPX method has a positive side effect however - the main purpose of UPX is to make the exe/dll smaller - and it does it very well. So you have smaller footprint, faster load time and exclusive RAM with just one activity - and this can be done by anyone, without hacking!
The real downside for those with tight memory (like myself) is that such exclusive memory allocation eats much more precious RAM than if you have the exe/dll allow their code to be paged. So you have to balance the snappy application behavior (UPX) with the number of concurrent running applications demanding memory.
For my part (small RAM) I have taken the following conclusions:
Free RAM is not the goal to optimize for. A small paging pool makes free RAM grow, but it limits the amount of paged-in parts of concurrent running programs.
UPX-ing dll/exe is only useful in case you have plenty of RAM to always keep the whole code in (virtual) memory. So if you have LARGE programs (like my examples Pocket Player or Opera) it is advisable NOT to UPX them, even though their storage footprint will be larger. Get these programs on the memory card instead and accept longer loading times. Their RAM footprint will be lower as the OS loads the code to the paging pool (not normal RAM) and it can discard non-accessed code parts from the paging pool and reload them from the file again if needed.
Optimize the size of the paging pool to the usually running concurrent programs. This does not mean to minimize the paging pool so that you can still somehow "use" your device, but find a balance for the case that all your concurrent applications are loaded. Mind that any UPX-ed programs (or such marked to be excluded from demand paging) do not count here on the paging pool as their code is put elsewhere! So make sure first that your programs are loaded from a non UPX-format file.
Sigh - all my efforts to get the smallest ROM footprint were done in vain as the price paid is simply the amount of RAM used when such exe/dll are loaded. I need to re-think many optimizations and do a better balancing of ROM size with RAM use.
It is really worth to note that all normal loaded applications have their CODE part loaded to the paging pool, while their other demand for memory (data, heap) is put to other virtual memory. The OS demand paging utilizes this limited amount of RAM (real RAM, not just virtual memory) in an optimal way for the concurrent running programs. So while these execute (and the instruction pointer passes along the memory pages reading the code for execution) the pages have to be in RAM. The overall minimal RAM use (without paging activity) is thus defined by the active code-traces along the memory pages only. All code that is NOT accessed when the programs run can be on discarded pages (so they occupy no RAM).
Now for a mysterious part (to me): The allocations of memory are done for "virtual memory". As you know, each process has 32MB of contiguous virtual memory in its assigned slot - but how does that map to used RAM? So how can I tell how much of precious real RAM is utilized for a certain process? I guess the RAM use for the paging pool is easy understood and also visible in the devhealth report (the "p" - if my assumptions are true), but what about RAM outside the paging pool? Is it just that any part marked in the devhealth report (except the "-" for reserved, as I would count this for discarded code pages) is mapping virtual memory to real RAM 1:1 - so the advantage for the application is just the contiguous address space which in reality may be put on RAM pages that are located anywhere in real RAM address space. From other OS you may remember that there are "paging files" which could also be utilized for mapping virtual memory - but this is not present in WinCE - so is it just 1:1 VM allocation to RAM allocation?
I remember reading that modules loaded from several processes are not loaded several times in the relevant process slot but just referenced from all the processes to their slot1 - so what about the applications data in their slot?
I would really like to learn more about the DevHealth report meaning for the various symbols used when drawing the virtual memory maps. Any hints for me to MSDN or other sources?
I remember reading that modules loaded from several processes are not loaded several times in the relevant process slot but just referenced from all the processes to their slot1 - so what about the applications data in their slot?
All module sections except r/w occupy only one space in VM (higher slots on 6.5 kernel, slots 1/0 for older kernels). R/W section is copied to every process using this dll (if that r/w section isn't shareable), but, unfortunately, every process NOT using it will still have this memory reserved.
btw, if you want to read some more interesting reading, here is a nice link
ultrashot said:
All module sections except r/w occupy only one space in VM (higher slots on 6.5 kernel, slots 1/0 for older kernels). R/W section is copied to every process using this dll (if that r/w section isn't shareable), but, unfortunately, every process NOT using it will still have this memory reserved.
...well but reserved pages eat no real RAM. It just keeps the process from allocating contiguous memory where the reserved pages are located in VM. As the dlls go top-down and the process bottom up - they have 32MB of VM in the middle. A very nice tool to see this visualized is here: http://www.codeproject.com/KB/windows/VirtualMemory.aspx The "source code" link at the top of the article also contains compiled versions - otherwise I could not have used it.
ultrashot said:
btw, if you want to read some more interesting reading, here is a nice link
...oops - looks like a source rip of an old wiki of 2007 - quite hard to get something from it. I found however what I needed on DevHealth here:
https://evnet.svn.codeplex.com/svn/C9Classic/WebSite/Wiki/WikiBases/CEDeveloper/BSPVMDevHealth.wiki
tobbbie said:
...well but reserved pages eat no real RAM.
exactly.
10 char
ultrashot said:
exactly.
10 char
instead of 4k page size, or?
Ultrashot, if you feel bored sometime you could further give insight to the VM use of a running device's memory snapshot with your devhealthanalyzer tool. The following could as well be discussed in the linked thread of yours if you prefer that. For me it is still linked to the paging pool mainly - so I have it here for a start.
I guess the sum of all "p" gives the currently committed pages from the paging pool, so you can see the current percentage of paging pool utilization (as you know the total size from the header of the report). Usually this should be 100% (or close), but if you have huge paging pools it may be less.
I noticed that the devhealth output already reports on the pages utilized per module and process - even giving totals for the system. The use for a 5MB paging pool (1285 pages) is 1267 pages, so 98.6% for a sample I took.
Now the problem is to put that utilization in ratio to the potential demand. You could tell this by comparing the total code size of all loaded exe/dll with the given paging pool how the ratio is between these two.
Evaluating even per loaded exe/dll the ratio of "total code / code loaded to the paging pool" could tell how much RAM you would lose/gain if you decide to exclude it from demand paging.
Furthermore the ratio of code (utilizing the paging pool) and other data (stack, heap) eating remaining memory may give advice on the recommended ratio of paging-pool to RAM for a given devhealth snapshot. I think that on average this ratio should be a guide for dimensioning the paging pool - like "reserve 20% or your available RAM for the paging pool".
I don't know which pages from the report should be put in relation to estimate on the above. I had guessed the ratio of "- reserved" to "p" could do it, but the reserved pages are also listed outside the areas which I would attribute to potential paging pool use.
Would not the following be a way to guide a potential finding?
I can grow the paging pool as long as no other memory demand would be limited for my use-case. I know that some applications (video playback, web browsing) have dynamic needs for RAM, but that would be visible in the snapshot taken with devhealth. For an honest report you must not use any memory cleanup tools before taking the snapshot, obviously.
Growing the paging pool beyond the size that all loaded code needs makes no sense and is wasting RAM for a pool that is not exploited.
Shrinking the paging pool to sizes where even the OS (and fancy UI for newer devices) fight for allocated RAM in the paging pool on their own behalf is the lower limit and makes no sense either.
My fault was to assume that you should go as small as possible - but what does it help to have a small paging pool which makes your device slow due to a lot of OS demand paging activity if you still have RAM available that your loaded processes do not utilize for heap and data?!
There are several means to manipulate memory use (mark R/W sections as shared) and avoid the paging pool use (kernel flags, paging pool size = "0", exclude from demand paging - via section flags or simply UPX). I think that only a tool based evaluation of the devhealth output allows to discuss the consequences. Side-by-side compare of different devhealth reports are hard to get insight from - at least for non-pros like myself.
Your module report already gives something but I wonder what conclusion to take from what you list there (module - in use - size - pageable[Y,N,Partly]). Not that much detail as I would need to answer the questions raised further up.

LA10 smartwatch (Linwear/Kingwear/Lige/etc...). tools, watchface structure, direct connect via BT

hi all.
at first, sorry for my english - it's not my first language.
so...
You can find a few Chinese vendors with an identical device named "LA10 smartwatch" (Lige doesn't even try to name this model - just "smartwatch"); you may even order a lot with your own logo.
But all of them have identical characteristics, use one app, "ffit", and identify via BT as "LA10_XXXX" where "XXXX" is the last 2 bytes of the MAC.
All of the vendors also use identical promo images, so you'll easily recognize it.
As I understand it, the original developer was Linwear ("LA" seems like it may mean "Linwear AMOLED"); Linwear also has the usual description in the promo images, the manual, and a link to all of the applications. If you're interested, you can learn more here https://www.linwear.com/article/1321.html
Btw, Lige has a variant with a stainless steel strap (I have exactly this one).
Ok, I think that's enough for an intro, so let's start...
Spoiler: about watchfaces.
While the default watchfaces have clock hands, the uploadable ones seem to be digital only, or they must have a different structure. As I don't have a single watchface (further "wf") with clock hands, I can only talk about the digital ones.
The structure is fairly simple:
0000-0009: header with bits indicating which options are used
000A-019F: parameters of the options; every option has its own offset and a similar structure
01A0-...: graphical resources in RGB565 format (2 bytes per pixel)
The structure of one parameter block is:
2 bytes: X position
2 bytes: Y position
2 bytes: frame width
2 bytes: frame height (not of the texture)
1 byte: frames count
4 bytes: texture offset. The numerics for steps, pulse and calories have several offsets (one per maximum possible digit), which in theory could use different textures, but with identical parameters.
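For illustration, a rough C view of one parameter block as laid out above; the field names are mine, and little-endian byte order plus tight packing are assumptions:
Code:
/* Hypothetical struct for one option's parameter block as described above.
 * The numeric layers (steps, pulse, calories) repeat the texture offset
 * once per possible digit. */
#pragma pack(push, 1)
typedef struct {
    unsigned short x;            /* X position                        */
    unsigned short y;            /* Y position                        */
    unsigned short frame_w;      /* frame width                       */
    unsigned short frame_h;      /* frame height (not of the texture) */
    unsigned char  frame_count;  /* number of frames                  */
    unsigned int   tex_offset;   /* offset of the RGB565 texture data */
} WfOptionParam;
#pragma pack(pop)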
While investigating the wf structure, I made a little Excel sheet with full descriptions and a wf model. It is all automated; all you need to do is insert the sequence of the first hex bytes, 0000-01F9.
Feel free to use it if you want.
Spoiler: images
excel
Spoiler: wf editor
A little tool that allows you to modify a wf or make your own.
I did not try to make an integrated Photoshop, so you must import/export bitmaps via the clipboard.
Keep in mind:
- pure black (0, 0, 0) is treated as a transparent color by the watch, while the editor does not support transparency.
- all unused resources are not saved in the wf file.
- parameters have no correctness check (frame count, for example), so you may set anything, but it may work incorrectly.
- the background layer must be exactly 454x454, or it will look damaged. But you may disable the bg layer and get just a black screen as the background.
- the file open dialog appears on right click of a text field.
Spoiler: images
LA10WFTool.zip
Spoiler: direct working via BT
original "ffit" app will not allow you to upload any wf, so you must upload it manually.
Spoiler: upload algorithm
First, you must enable notifications on the "00001603-0000-1000-8000-00805f9b34fb" characteristic (further "1603").
All packets must be sent to the "00001602-0000-1000-8000-00805f9b34fb" characteristic (further "1602").
- prepare the header:
00-02: first 3 bytes are zero
03-06: 4 bytes of the wf file size
07: MTU size
08-09: size of a "page"
0A: must be = 1
- increase the MTU. The original app uses 244.
- split the file:
- - first, split it into "pages". In the original the page size is 4096, so you must split your file into chunks of 4096 bytes.
- - now split the pages into blocks of (MTU - 2) bytes, i.e. 242 bytes (244 - 2 = 242). Keep in mind that the last chunk in a page is 224 bytes instead of 242, because 4096 mod 242 = 224.
- - for every block, prepend a 2-byte counter (starting from "01").
Now you are ready to send the wf. All packets are sent to 1602; all callbacks are caught on 1603.
- send the header.
- wait for 4 changes on 1603. The last two will be "00 00 13 00", then "00 00 12 00".
- start sending the blocks sequentially.
- after every 17 blocks, before sending the next one, you must send "00 00 17" to 1602 and wait for a response on 1603. Usually it's "00 00 14", but I don't check it, I just wait for any response.
- after sending the last part, you must send "00 00 17" to finish.
If all is ok you'll see your wf; otherwise, the default wf.
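To make the packet math concrete, here is a small sketch of building the header and walking the file in pages and blocks as described above. The byte order of the multi-byte header fields and of the 2-byte block counter is an assumption here, as is whether the counter restarts at each page; check against a sniff of the original ffit app before relying on it:
Code:
/* Sketch of the upload packaging: 11-byte header, then the file split into
 * 4096-byte pages, each page split into (MTU-2)-byte blocks that get a
 * 2-byte counter in front. */
#include <stdio.h>
#include <string.h>

#define MTU       244
#define PAGE_SIZE 4096
#define BLOCK     (MTU - 2)              /* 242 payload bytes per block */

static void build_header(unsigned char hdr[11], unsigned int file_size)
{
    memset(hdr, 0, 11);                  /* bytes 00-02 stay zero */
    hdr[3]  = (unsigned char)( file_size        & 0xFF);  /* 03-06: wf size (LE assumed) */
    hdr[4]  = (unsigned char)((file_size >> 8)  & 0xFF);
    hdr[5]  = (unsigned char)((file_size >> 16) & 0xFF);
    hdr[6]  = (unsigned char)((file_size >> 24) & 0xFF);
    hdr[7]  = MTU;                                        /* 07: MTU          */
    hdr[8]  = (unsigned char)( PAGE_SIZE       & 0xFF);   /* 08-09: page size */
    hdr[9]  = (unsigned char)((PAGE_SIZE >> 8) & 0xFF);
    hdr[10] = 1;                                          /* 0A: must be 1    */
}

int main(void)
{
    unsigned int file_size = 300000;     /* example wf size in bytes */
    unsigned char hdr[11];
    unsigned int sent = 0, counter = 1, in_page = 0;

    build_header(hdr, file_size);        /* would be sent to 1602 first */

    while (sent < file_size) {
        unsigned int page_left = PAGE_SIZE - in_page;
        unsigned int chunk = page_left < BLOCK ? page_left : BLOCK;
        if (chunk > file_size - sent) chunk = file_size - sent;

        /* A real sender prepends the 2-byte counter to these chunk bytes,
         * writes the packet to 1602, and sends "00 00 17" after every 17
         * blocks (i.e. at each page boundary). */
        printf("block %4u: %3u bytes\n", counter, chunk);

        sent += chunk;
        in_page = (in_page + chunk) % PAGE_SIZE;
        counter++;
    }
    return 0;
}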
of course you need an app.
The algorithm is described, so you can make your own app, or use mine. But actually I'm not a programmer (just an angry customer), so don't blame me for the rough code, I wrote it as well as I could. In my defense, I had never even heard about Kotlin before((
So I took an existing sample as a base and added the required functionality. The best one I found is the "BLE Starter" by Punch Through (team?). Many thanks to them for it.
Sources are here https://github.com/PunchThrough/ble-starter-android and a big article about working with BT is here https://punchthrough.com/android-ble-guide/
Spoiler: the app
First, unbind your smartwatch in ffit, because the device can be bound to only one app at a time.
- bind via the app and look at the 1602 characteristic. You don't need to enable 1603 manually in the app; it is enabled automatically. The MTU is set automatically too.
- tap the 1602 characteristic and choose any write method.
- in the payload dialog, type just the single letter "w" (w/o quotes). Tap "ok" and select your wf in the open file dialog.
- now just wait. I do it with callbacks, so the process takes a long time, ~15 minutes per megabyte. I'll try to rewrite it later from callbacks to blind writes (write with no response, via a timer), but not soon. I just have no time for it now.
the app BLE_Starter_base.zip
source ble-starter-android_srs.zip
Spoiler: View and download watch faces directly from ffit server
I've finished a little tool that allows you to discover watch faces directly on the ffit server.
At this time it has only 100 of them, in different sizes. Useful as resources only, and editable via my LA10WFTool.
Now you can scan the ffit server for watch faces by ID. This allows you to get over 45k items, but a lot of them are duplicates. They don't seem to be compatible with the LA10, but they may be used as resources for customizing.
FFitWFDownloader.zip
I have figured out how to send messages, so if someone is interested, I'll add that later.
All this stuff will also be available on my gdrive.
That's all. Have fun. I hope it will be useful for someone.
*********
LA10WFTool - updated (removed check for import image more than 454x454)
*********
LA10WFTool - minor bug fixes and ui improvement (mostly scrolls over nud's)
*********
16/11/22 - FFitWFDownloader
- added "Investigation" ability, to "discover" watch faces by ID. over 45k, but a lot of duplication.
*********
23/11/22 - FFitWFDownloader
- reworked "Investigation" section.
- added "Download Range" ability.
Hey VXSX, great stuff! I was looking in exactly this direction but I still hadn't had much time and was just getting started. I believe many of those smartwatches come from the same manufacturer or seem to have very similar HW. Your watch and mine seem very similar to those from dtno1.com
Anyway, first of all: can you say something about the filenames in use? In the app (FitCloudPro) used with my phone I can see a set of 4 files for every wf; they seem to be in pairs based on their sizes. One pair is quite large (e.g. hundreds of KB) and one is much smaller. Any idea? How is it for you?
Will look into our files and will try to use your findings as well.
Thanks and keep posting!
the_cipo said:
Can you say something about the filenames in use? In the app (FitCloudPro) used with my phone I can see a set of 4 files for every wf; they seem to be in pairs based on their sizes. One pair is quite large (e.g. hundreds of KB) and one is much smaller. Any idea? How is it for you?
At first I tried to replace the wf file for the ffit app "on the fly" via a proxy, but that failed. The app downloaded my file but didn't upload it to the watch; it seems the app checks a CRC and treated my file as corrupted. For manual uploading the file name doesn't matter. Anyway, the watch doesn't even know about the name, since you upload only the file data, not the name.
vxsw said:
At first I tried to replace the wf file for the ffit app "on the fly" via a proxy, but that failed. The app downloaded my file but didn't upload it to the watch; it seems the app checks a CRC and treated my file as corrupted. For manual uploading the file name doesn't matter. Anyway, the watch doesn't even know about the name, since you upload only the file data, not the name.
But do you also have more files in the app dir for every single wf?
The app has no accessible dir, but it seems to have some sort of hidden cache, because it doesn't re-download a wf every time. As I understood from sniffing, a wf is stored on the server as a "body" and a "preview" image separately. Both are downloadable via a direct http link and both have names, but that only matters for the vendor app. The device itself requires only the content of the wf file (the "body"), which must be correctly prepared and uploaded with the algorithm I've described above. The device does not get any info about names and/or the preview.
The app has an accessible dir on the phone. In my case it is "\Internal shared storage\Android\data\com.topstep.fitcloudpro\files\Download"
In this dir I can find 4 files for every wf:
Here I selected 4 belonging to the same wf. I did a few tests and this dir gets 4 more files for every new wf.
ffit has no similar dirs, and/or I haven't found any method to get access to them.
It looks like the pairs of files have identical sizes. What about the content? Please attach something, I would like to look at it. Some sort of preview would be nice too.
Sure, this is the smallest I found.
With its 4 files following. As you can see, the wf (above pic) has a reported size of 192KB, which matches the two files below. So this data is consistent.
And these are the small ones (related to the same wf). Maybe this file is the picture of the wf presented in the UI (attached above).
CONFIRMED: both of the 2 small files are the PNG of the wf (res 368x368). Probably the one used in the preview.
vxsw said:
ffit has no similar dirs, and/or I haven't found any method to get access to them.
It looks like the pairs of files have identical sizes. What about the content? Please attach something, I would like to look at it. Some sort of preview would be nice too.
Do you have the phone USB set in debug mode? It might help
The first two look like a wf, but they seem to have some sort of compression, because even 368*368 = 135424 and that's only the count of bg pixels, while the file size is only 186496.
Also the pairs are identical, so I'm not sure the name has any meaning for the device. Maybe it's only for the app, as some sort of check.
the_cipo said:
Do you have the phone USB set in debug mode? It might help
I use wi-fi debugging, but why?
Yeah, the two pairs are identical. Probably the name is used for some check.
I am looking with a HEX editor and you can see some blocks (in the wf file). E.g. from 00x0 to 1ffx0 and from 200x0 to 5ffx0. One more starting at 4f00x0 (or 4f20x0) with many zeros before that.
The block at 0600x0 starts with ASCII "TBUI" and ends with the last 4 ASCII bytes in the file (at 2d87fx0), again the "TBUI" string.
vxsw said:
I use wi-fi debugging, but why?
Just thinking about what would cause you not to see hidden files.
vxsw said:
The first two look like a wf, but they seem to have some sort of compression, because even 368*368 = 135424 and that's only the count of bg pixels, while the file size is only 186496.
Why would you expect it to be exactly 135424? I have bin wf files of almost 1 MB. I don't think it has to do with screen resolution. Can you try to identify the protocol and blocks inside the bin?
the_cipo said:
Why would you expect it to be exactly 135424?
It's just the pixel count. You said:
the_cipo said:
(res 368x368)
so I assumed that the wf has a background with a 368x368 resolution, and that's exactly a square of 135424 pixels (368x368 = 135424). If it were a plain bmp, it would have to have [size] = [number of pixels] x [bytes per pixel]. So even at only 2 bytes per pixel, that's 368x368x2 = 135424x2 = 270848 bytes just for the bg texture, plus the header plus some parameters. So since it is only ~184k bytes, it must have some sort of compression, or it's something else that at least has no bg texture.
the_cipo said:
Can you try to identify the protocol and blocks inside the bin?
I can try, but I don't have enough time right now. Try to gather 5-10 wf with previews, each wf in its own folder. Preferably with simple elements on the textures (few colors, even solid).
It would be easy if it were a plain bmp (as in my case), but it seems it's not, for your type of wf.
So I won't promise anything, but I'll try.
I don't think so. I agree on the pixel count, but the bin contains other info besides the bg data, so it must be bigger than 135424 bytes (considering 1 byte per pixel). As you found out, there are sections related to other clock info. In fact, I picked an example with a simple bg image with a few colors to help with that. Will look for more.
I liked your Excel but I still cannot figure out all the info there, for example, the column "S" in the second tab (to start with).
the_cipo said:
the column "S" in the second tab (to start with).
It's the offset of the first byte of a parameter. One parameter is red + green (+ gray if it exists). Red is the parameters themselves, green is the main offset of a texture, grey is the "sub-offsets" (offsets of the "sockets" - independent parts of a layer; i.e. the layer "steps num" is the step counter, it has 5 digits ("sockets"). Sockets are drawn sequentially from left to right; the position defined in the red field is the position of the first socket.)
the_cipo said:
So it must be bigger than 135424 bytes (considering 1 byte per pixel
I was trying to explain exactly that)) that 186k is too small a size for a wf with a 368x368 resolution)) 186k is not enough even for the bg texture.
Here are two sets of simple wf, will do more later.
the_cipo said:
Here are two sets of simple wf, will do more later.
That's just a png, not a wf file.
--------------
Sorry, found it. I didn't see the previous post.
As I understand it, it has only 3 clock hands and nothing more?
