I have the TMOUSA version, but I think this question would apply to all versions, and in fact to other phones as well.
I was just re-reading the excellent guide to storage card optimization by the great Windows Mobile guru (and XDA member) who writes under the name Menneisys:
http://www.smartphonemag.com/cms/forum/topic/17921?&TOPIC_ID=17921
That article was written a few years ago, though, with older WM versions, and older storage cards.
I am wondering if the info is still relevant, to a new phone like the HD2, with WM 6.5 and Sense, and the newer storage cards?
The 16 GB storage card that comes with the HD2, although the newer SDHC type, is only Class 2, and therefore relatively slow compared to Class 4 and Class 6 cards. I am wondering if using any of the tweaks suggested in the article by Menneisys would speed up the card.
For instance, changing from FAT32 to FAT16? (FAT16 is really ancient now though, don't know if it would work well at all on newer cards and devices.)
Eliminating the FAT backup?
Also, by changing to a larger cluster size? (Which of course, would reduce the storage space, by adding more slack. But would it speed up the card's performance enough to make it worth it?)
Of course defragmentation is always a good idea, with any disk or card, old or new. That part of his advice is not in question, then or now.
But I am wondering about the other stuff--like changing to FAT16, eliminating the FAT backup, and changing the cluster size?
Anyone know? (Menneisys, are you reading? Others?)
Thank you.
well, without reading the link, (i'll save that till the kids are in bed) i can say that fat16 can't address 16gb, however re the cluster size, yes, that can deff help, especially if you have lots of fairly large files. if your card is mostly music images and video, then you can deff benefit from setting the size as large as it will go. it does mean tiny files will take up a whole block, of course, but if its mostly big files then go for it.
samsamuel said:
well, without reading the link, (i'll save that till the kids are in bed) i can say that fat16 can't address 16gb, however re the cluster size, yes, that can deff help, especially if you have lots of fairly large files. if your card is mostly music images and video, then you can deff benefit from setting the size as large as it will go. it does mean tiny files will take up a whole block, of course, but if its mostly big files then go for it.
Very interesting article. Yes, do read it when you have a chance.
Yes, SDHC cards probably did not exist at the time of the article; nothing was larger than 2 GB. So it sounds like the FAT16 option is out for current cards.
Do you know if that option of formatting a card with "no FAT backup" still makes sense on current cards? Is it a risky thing to do?
Regarding the cluster size-- most of us probably have both small and large files, not only one or the other. So, it is a trade-off between speed and storage space. What cluster size do you think is a good balance between the two?
never read anything about fat backup, so i couldn't say. as for block size, i use 16k on a 2gb card, which has 1gb of music and about 300meg images.
i would say the lost space is negligible on sdcards, even if you have a thousand 1k files, you only waste 16meg, so that's maybe 1/2 an mp3 album,, its only really an issue when dealing with hundreds of gig hard disks with tens of thousands of tiny system and program files. (just checked mine, theres only 250 files smaller than 32k, and only 120 less than 5k)
course, its a matter of preference, and i'm sure there are loads of people will say i'm wasting space and should be disowned from the community,, hehe
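If anyone wants to sanity-check that slack estimate, here is a rough worst-case calculation (the numbers are illustrative, not measured from a real card - worst case assumes every tiny file burns a whole 16 KB cluster):
Code:
files=1000
cluster=16384    # 16 KB clusters
echo "worst-case slack: $(( files * cluster / 1024 / 1024 )) MB"
That prints about 15 MB, which lines up with the ~16 MB figure above.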
samsamuel said:
never read anything about fat backup, so i couldn't say. as for block size, i use 16k on a 2gb card, which has 1gb of music and about 300meg images.
i would say the lost space is negligible on sdcards, even if you have a thousand 1k files, you only waste 16meg, so that's maybe 1/2 an mp3 album,, its only really an issue when dealing with hundreds of gig hard disks with tens of thousands of tiny system and program files. (just checked mine, theres only 250 files smaller than 32k, and only 120 less than 5k)
course, its a matter of preference, and i'm sure there are loads of people will say i'm wasting space and should be disowned from the community,, hehe
Read the Menneisys article, and he says it makes the card run a lot faster, to eliminate the FAT backup. (Something you can do with SK Tools.)
However, I would wonder if that would make the card less stable, more prone to data loss. Or, even whether a non-standard cluster size might make the card more flaky?
does wm 6.5 support exfat?
using 16G thumbdrive on win 7, exfat is wayyyy faster than ntfs.
I used the 8 GB card at the beginning, then switched to a 16 GB Class 6 card and then to a 32 GB Class 2, and didn't find the slightest difference in speed, not even when recording videos with the cam at max resolution.
me said:
Read the Menneisys article, and he says it makes the card run a lot faster, to eliminate the FAT backup. (Something you can do with SK Tools.)
However, I would wonder if that would make the card less stable, more prone to data loss. Or, even whether a non-standard cluster size might make the card more flaky?
let's take a step back and think about what the FAT backup is. i believe it is another table that mirrors the contents of the primary table. essentially, it can be used to recover your file system table in case the primary is corrupted/lost. now let's think about WHEN this table is read. to the best of my knowledge, the backup is read ONLY if the primary is found to be corrupted. similarly, the backup is UPDATED/WRITTEN only when the primary is UPDATED/WRITTEN.
thus, any speed gain due to disabling the backup should be seen in WRITE operations ONLY. read speed should not be affected by that tweak. i could be wrong though!
if i am correct, then try disabling the backup if you desire write speed. however, you will lose some of the "robustness" of the file system. and FAT (and its variants like FAT12, FAT16, FAT32) are already fairly fragile file systems.
regarding cluster sizes, a smaller cluster size means LESS wastage when having many SMALL files. a larger cluster size means MORE wastage when having many SMALL files. however, a smaller cluster size means MORE clusters to address, which means a LARGER allocation table, which means MORE TIME spent looking up/updating the table's contents. conversely, a larger cluster size means LESS clusters to address, which means a SMALLER allocation table, which means LESS TIME spent looking up/updating the table's contents. so the sweet spot would be somewhere in the middle. HOWEVER, most modern operating systems load the allocation table in MEMORY so i imagine the speed gain would be negligible. the fact that the table is managed in memory and periodically updated back to the disk is the reason behind most corruptions in a non-journaling file system like FAT.
i've over simplified things a bit, but it should give you an idea of what kind of gains to expect by such tweaking (i.e. little to none in my opinion!).
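To put some rough numbers on the "bigger table with smaller clusters" point, here is a quick sketch for a 16 GB FAT32 card (purely illustrative; FAT32 keeps roughly one 4-byte entry per cluster, and normally keeps two copies of the table, which is exactly what the "no FAT backup" tweak drops):
Code:
card_kb=$(( 16 * 1024 * 1024 ))    # 16 GB card expressed in KB (illustrative)
for kb in 4 16 32 64
do
    clusters=$(( card_kb / kb ))
    echo "${kb} KB clusters: ${clusters} entries, one FAT copy is about $(( clusters * 4 / 1024 / 1024 )) MB"
done
So moving from 4 KB to 64 KB clusters shrinks one FAT copy from roughly 16 MB to about 1 MB, which is where the lookup/update savings come from.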
Again, I'd suggest reading the Menneisys article.
ASCIIker said:
let's take a step back and think about what the FAT backup is. i believe it is another table that mirrors the contents of the primary table. essentially, it can be used to recover your file system table in case the primary is corrupted/lost. now let's think about WHEN this table is read. to the best of my knowledge, the backup is read ONLY if the primary is found to be corrupted. similarly, the backup is UPDATED/WRITTEN only when the primary is UPDATED/WRITTEN.
thus, any speed gain due to disabling the backup should be seen in WRITE operations ONLY. read speed should not be affected by that tweak. i could be wrong though!
if i am correct, then try disabling the backup if you desire write speed. however, you will lose some of the "robustness" of the file system. and FAT (and its variants like FAT12, FAT16, FAT32) are already fairly fragile file systems.
regarding cluster sizes, a smaller cluster size means LESS wastage when having many SMALL files. a larger cluster size means MORE wastage when having many SMALL files. however, a smaller cluster size means MORE clusters to address, which means a LARGER allocation table, which means MORE TIME spent looking up/updating the table's contents. conversely, a larger cluster size means LESS clusters to address, which means a SMALLER allocation table, which means LESS TIME spent looking up/updating the table's contents. so the sweet spot would be somewhere in the middle. HOWEVER, most modern operating systems load the allocation table in MEMORY so i imagine the speed gain would be negligible. the fact that the table is managed in memory and periodically updated back to the disk is the reason behind most corruptions in a non-journaling file system like FAT.
i've over simplified things a bit, but it should give you an idea of what kind of gains to expect by such tweaking (i.e. little to none in my opinion!).
These tests might be of some interest to you.
http://forum.xda-developers.com/showthread.php?t=756781&highlight=card+speed+test
This is probably missing a lot of facts that we haven't uncovered yet. When we learn more, we can update what we know here.
Background
All data is stored on an 8 GB or 16 GB MoviNAND chip, of which 2 GB is 'system data' and the rest is for user storage. The MoviNAND is one of the first mobile 'smart SSD' chips. That means the MoviNAND handles all operations such as data wear leveling and physical data lookup, as well as having its own internal buffers. This cleverness is both good... and very bad.
FSYNC
When writing data to disk, your system and apps will make a call to the driver to 'write some data to file X'. This data will then be placed into kernel filesystem buffers and streamed off as commands to the MoviNAND. The MoviNAND will then slowly accept these commands and place them into its own buffer, and the disk controller itself will then go about its business writing this data to disk, using lookup tables to determine where to write the data to ensure maximum NAND lifetime, etc. It does a lot of work.
The system or apps also have an extra tool, called FSYNC. When this is used, the kernel and filesystem will clear the buffer for the affected file, and ensure it is written to disk. The current thread will block, and wait for the fsync call to return to signal that the data is fully written to disk. The kernel itself will wait for an event from the MoviNAND to signal that the data has been completely written.
In a 'dumb' disk, this fsync is fairly quick - the kernel buffer will be written directly to where the kernel has directed, and the round trip time (RTT) will be as long as it takes for data to be written.
In a 'very smart' desktop SSD, the fsync can return instantly - the disk controller will take the data, place it in its battery-backed cache, and then go about its wear leveling and writing in the background without bothering the system.
In the 'smart' MoviNAND, the fsync will take a very, very long time to return - sometimes fsync on MoviNAND will take several seconds (to be confirmed). This is because the MoviNAND may have a long line of housekeeping tasks waiting for it when an fsync is called, and it will complete all of its tasks before returning.
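A crude way to feel this on your own device is to time a buffered write against one that forces a sync before dd exits. The path and sizes are just examples, and this assumes your dd build supports conv=fsync (GNU dd and recent busybox builds do):
Code:
# buffered write: returns as soon as the data sits in kernel buffers
time dd if=/dev/zero of=/sdcard/fsync_test bs=4096 count=2048
# same write, but dd calls fsync before exiting, so you wait for the MoviNAND to finish
time dd if=/dev/zero of=/sdcard/fsync_test bs=4096 count=2048 conv=fsync
rm /sdcard/fsync_test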
RFS
RFS has a fairly badly written driver that will call an fsync on file close.
Basically, RFS runs in 'ultra secure' mode by default. This security may not really be needed - I personally don't want it if it means enormous slowdowns. It also doesn't help data security if the system/app is holding a file open, only when it closes the file. The MoviNAND is also fairly smart, and appears to write its cache to disk before turning off, and also appears to have capacitors to keep it alive for a little bit of time in the event of a power cut.
SQLite
Most Android apps use SQLite - a fairly simple database that is easy to embed. SQLite has 'transactions' - not real transactions, but a transaction in SQLite is where the database is locked for the duration of a database write, and multiple database writes can be included in one transaction. At the end of a transaction, SQLite will call FSYNC on the database file, causing a possibly long wait while the MoviNAND does its thing. Certain applications will not bunch up writes into a single transaction, and will do all of their writes in new transactions. This means that fsync will be called again and again. This isn't really a problem on most devices, as fsync is a very fast operation. This is a problem on the SGS, because MoviNAND fsync is very slow.
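To make the transaction point concrete, compare one statement per transaction with a batch wrapped in BEGIN/COMMIT. The sqlite3 command-line tool is used here purely for illustration (apps do the same thing through the API), and the database path is made up:
Code:
sqlite3 /sdcard/demo.db 'CREATE TABLE t(x INTEGER);'
# one implicit transaction - and one fsync - per statement
sqlite3 /sdcard/demo.db 'INSERT INTO t VALUES(1);'
sqlite3 /sdcard/demo.db 'INSERT INTO t VALUES(2);'
# one explicit transaction: a single fsync at COMMIT covers both writes
sqlite3 /sdcard/demo.db 'BEGIN; INSERT INTO t VALUES(3); INSERT INTO t VALUES(4); COMMIT;'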
The various fixes and why they work
Native EXT4 to replace RFS (Voodoo)
By replacing RFS with EXT4, the 'sync on fileclose' problem is removed. The EXT series of filesystems is also more efficient at allocating information into blocks than RFS/FAT32 is. This means less real writes to MoviNAND, which means that the MoviNAND buffer should be smaller, and when a sync is called, fewer commands have to be run. When a sync is called on EXT4, it will still be very slow, as the MoviNAND's sync is still slow.
Basically, EXT4 improves filesystem grouping which leads to less commands, and does not have the broken 'sync on file close' that RFS does. It will not heavily improve sqlite database access in certain apps, as the full fsync on transaction end will still have to go through MoviNAND, and will be slow.
When pulling out the battery, there is a chance to lose data that has been written to a file but has not yet been told to sync to disk. This means that EXT4 is less secure than RFS. However, I believe the performance to be worth the risk.
Loopback EXT2 on top of RFS (OCLF)
By creating a loopback filesystem of EXT2, the 'sync on fileclose' problem is removed as well. Since the loopback file is never closed until the EXT2 is unmounted, RFS will not call fsync when a file in the EXT2 loopback is closed. Since a single large file is created on RFS instead of multiple small files, RFS is unable to mis-allocate the file, or fragment it. The actual allocation of filesystem blocks is handled by EXT2. As a note, care should be taken in making the large file on RFS - it MUST align correctly with the MoviNAND boundaries, or operations will be slowed down due to double disk accesses for files, etc. It is unknown whether OCLF is aligning this correctly (how to determine this? 4KB block size gives double the performance of 2KB block size, so it might be aligning it correctly already).
Loopback also has the benefit of speeding up Sqlite databases (at the expense of a transaction being lost in power outage, as it could still be in ram). As always, this is a performance tradeoff between data security when the battery is pulled out, and performance. When pulling a battery out while using the loopback filesystem, there is a chance to lose the last few seconds of database writes. In practice, this isn't a huge deal for a mobile phone - most lost data will be resynced when the phone reboots. In my opinion, the performance is worth it because of the very slow speed of a sync on MoviNAND.
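For the curious, the loopback setup boils down to something like the following sketch. The paths, sizes and device nodes are illustrative only - OCLF automates all of this and may well do it differently - and it assumes losetup and mke2fs are available on the device:
Code:
dd if=/dev/zero of=/data/ext2.img bs=1048576 count=512    # 512 MB container, written in 1 MiB chunks
losetup /dev/block/loop0 /data/ext2.img                   # expose the file as a block device
mke2fs -b 4096 /dev/block/loop0                           # ext2 with 4 KB blocks, per the alignment note above
mkdir -p /data/ext2data
mount -t ext2 -o noatime /dev/block/loop0 /data/ext2data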
Loopback EXT2 on top of EXT4
All of the above for normal loopback EXT2 applies. In addition, when the loopback flushes data, it will be flushed to EXT4 instead of RFS. This will probably be better than flushing to RFS, as the RFS driver is not as well written as the EXT4 driver. The difference should not be very large, though.
Journaling
Journaling on an SSD is not required. Your data will not be lost, your puppy will not die. Here is a post made by Theodore Tso - http://marc.info/?l=linux-ext4&m=125803982214652&w=2
But there will be some distinct tradeoffs with omitting the journal, including possibility that sometimes on an unclean shutdown you will need to do a manual e2fsck pass.
Not using a journal is not a big deal, as long as you take care to do a full e2fsck pass when an unclean shutdown has occurred. That is the main reason for a journal - to avoid the need for a full disk check: the journal can be read quickly instead, and the full disk check avoided.
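If you do want an ext4 volume without a journal, the switch and the manual check look roughly like this (the device node is only an example, and the volume has to be unmounted while you do it):
Code:
tune2fs -O ^has_journal /dev/block/mmcblk0p2    # strip the journal from an existing ext4 volume
e2fsck -f /dev/block/mmcblk0p2                  # full check; run this after any unclean shutdown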
EXT2 vs EXT4
EXT2 appears to work better on the SGS than EXT4. This is because EXT4 has more CPU overhead than EXT2. Journaling is also very bad on MoviNAND. Why? It appears to be the command buffer in the MoviNAND controller. A call to update the journal will use a command slot in the MoviNAND's buffer that could otherwise have been used for a real disk write. This means that journaling on MoviNAND is a VERY expensive operation compared to journaling on a 'dumb' disk.
Well, you could technically use EXT4 and simply disable the high cpu and other features until you are left with EXT2, since EXT4 and EXT2 are basically the same thing.
At any rate, the difference between EXT4 and EXT2 is not very large, and there's no need for flamewars over it - it comes down to a choice of 'running' performance vs 'startup' performance, with EXT2 edging out EXT4 for everyday speed, while EXT4 does not require a long disk check at boot.
Future Work
Rewrite the firmware for the MoviNAND's flash controller to handle fsyncs properly and not bring the system to its knees. I joke, but this is really the true solution.
Other solutions include hacking EXT's fsync method to return instantly, and ensuring that the real fsync is called when the system shuts down. Or doing nothing, fsync is there for a reason, I guess, and would be fine if MoviNAND's fsync wasn't so very slow.
There are probably a lot of small details missing from this writeup. They'll be updated when we learn more. Thanks for all the useful discussions and arguments, everyone!
Thanks a lot, RyanZA - it's a good thread for all SGS users to understand what we're running!
Keep on going!
Thanks for breaking it down for large-scale consumption! Loved reading this post.
Excellent post, it seems like you enjoy figuring this stuff out. Reading about it like this even gets me interested. Samsung would do well in hiring more people like you.
Interesting.. How did you work these behaviors out, by checking the code?
Thanks RyanZA. You are an impressive coder with so much information.
Thanks for sharing, and I hope we can get this fixed for good and get the Desire HD ROM for us.
RFS has been around for a bit and is used on other phones - do those phones have the same lag issues as the SGS?
Not sure if it helps but I stumbled on this:
http://www.samsung.com/global/busin...ionmemory/downloads/RFS_130_Porting_Guide.pdf
http://movitool.ntd.homelinux.org/trac/movitool/wiki/RFS
Thanks dude...
being a techy guy, enjoyed reading your post and very nice to know the details of the file system...
Looking forward to your future work and updates
ryanza, u crazy guy (again!! ), u did a good job. it should be clear enough for ppl to decide which fs is a better choice for their particular uses.
and, in fact, i've tried all of them. ext4 is far more CPU intensive, and caused a lot of lag when i was listening to mp3s while surfing the internet. ext3 is the modest one, while ext2 is very fast at the expense of "possible data loss".
for the ext fs over loop devices, it seems there is no impact on performance, and the same goes for the noatime and nodiratime mount options, although theoretically they should increase performance a bit by skipping the atime and diratime updates
Thanks for the huge breakdown. Very informative. Hopefully someone sorts out this nonsense in the near future. Looking forward to seeing what happens.
Great post !
Thanks for this !
ykk_five said:
ryanza, u crazy guy (again!! ), u did a good job. it should be clear enough for ppl to decide which fs is a better choice for their particular uses.
and , in fact, i've tried all of them. ext4 is far more cpu extensive, and caused a lot of lags when i was listening to mp3s while surfing the internet. ext3 is the modest one, while ext2 is very fast with the expense of "possible data loss".
for the ext fs over loop devices, it seems there is no impact on performance issue, as well as the noatime and nodiratime mount options, although theoritcally they should increase the performance a bit by skipping the atime and diratime jobs
It really doesn't seem that ext2 has any "possible data loss". ext2 DOES have possible "long boot up time while doing filesystem checks", but the actual data itself will be the same across ext2, 3 and 4. You should only use ext2 if the long bootup isn't a problem for you; if you reboot your phone frequently then ext3/4 would be a better choice since the bootup will be far quicker! An EXT2 partition can take over 5 minutes to boot if things go badly, while an EXT3/4 should never take longer than about 10 seconds.
Data loss, if any, would be identical between EXT2,3,4 though, so don't worry about the data, only the boot up time!
dakine said:
RFS has been around for a bit and is used on other phones do those phones have the same lag issues as the sgs?
Not sure if it helps but I stumbled on this:
http://www.samsung.com/global/busin...ionmemory/downloads/RFS_130_Porting_Guide.pdf
http://movitool.ntd.homelinux.org/trac/movitool/wiki/RFS
The issue is fairly specific to Linux+RFS+MoviNAND - it is the way the RFS linux drivers interact with MoviNAND that seems to cause the big black screens. I don't have any other RFS devices though, so I can't test it myself.
andrewluecke said:
Interesting.. How did you work these behaviors out, by checking the code?
I checked the sqlite code... but as far as the rest, it's mostly from reading the MoviNAND spec, and investigations and tests by myself and others on the RFS filesystem properties, etc. So not so much the code, because the code itself for the RFS driver is practically illegible and I barely understand it. (Magic numbers everywhere! What do they mean?)
EDIT: I'd like to add that there are no doubt missing facts in what I've written, as well as errors as to the cause of certain things. We'll eventually get this all worked out though. This doc represents the current 'All we know' about the RFS lag issue. That doesn't mean there isn't more we can still learn - I'm sure there is. If you find any inconsistencies in this, please share them so we can try and work out the truth!
There are also several things that can be done to speed up RFS.
One obvious thing is to remount the partitions using the "noatime" option instead of the default "relatime". This should reduce writes-after-reads.
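The noatime change is just a remount, something along these lines (mount points as on the SGS; adjust for your device):
Code:
mount -o remount,noatime /data
mount -o remount,noatime /dbdata
mount -o remount,noatime /cache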
Tweaking the CFQ io scheduler helps tremendously. I have:
for i in /sys/block/stl* /sys/block/mmc* /sys/block/bml* /sys/block/tfsr*
do
    echo "0" > $i/queue/rotational
    echo "1" > $i/queue/iosched/low_latency
    echo "1" > $i/queue/iosched/back_seek_penalty
    echo "1000000000" > $i/queue/iosched/back_seek_max
    echo "0" > $i/queue/iosched/slice_idle
    echo "50" > $i/queue/iosched/slice_sync
    echo "20" > $i/queue/iosched/slice_async
done
Just for testing, I've tried remounting the /cache RFS partition as VFAT (FAT32), and it makes sequential writes to the same partition about 2x faster. But I think we cannot remount the /data and /dbdata partitions as VFAT.
RyanZA said:
It really doesn't seem that ext2 has any "possible data loss". ext2 DOES have possible "long boot up time while doing filesystem checks", but the actual data itself will be the same across all ext2,3,4. You need to use ext2 if the long bootup isn't a problem for you, but if you reboot your phone frequently then ext3,4 would be a better choice since the bootup will be far quicker! An EXT2 partition can take over 5 minutes to boot if things go badly, while an EXT3,4 should never take longer than about 10 seconds.
no, ext2 did cause some data loss - i ran fsck on an ext2 partition once and it returned some unfixable blocks!
as for the boot-up time, i used to include the fsck when the phone booted up, but i removed it since, "for me", those "possible data losses" are insignificant. and no matter whether i pick ext2/3/4, the reboot time is about 30 secs
ykk_five said:
no, ext2 did caused some data loss. i ran fsck on a ext2 parition once and it returned some unfixable blocks!
for the boot up time, i used to have include the fsck when the phone boot up b4, but i removed it already since, "for me", those "possible data losses" are insignificant. and no matter whether i pick ext2/3/4, the reboot time is about 30 secs
Those unfixable blocks would be unfixable under any EXT variant - the unfixable block is basically a block that was partially written to disk before power was cut, and therefore the checksum doesn't add up. This can happen regardless of ext2/3/4, and the data is lost under all of them (because the data was never fully written to disk). The only way to avoid this is to do a proper shutdown of the system, and not pull the battery. In EXT3/4 with journaling, the journal would simply indicate that the write to that block did not complete. In EXT2, there is no journal, so the filesystem check must trawl it's way through the entire disk and discover for itself that the data wasn't written. In both cases, the data is gone (since it was never there), but in EXT3/4 the process is much quicker. In EXT2 you'd be sitting waiting for the phone to boot up while it checks it.
The tradeoffs are very very straight forward: fast boot + slower speed vs slow boot + slightly faster speed. Not much to it.
EDIT: Not running the fsck at all on EXT2 could be bad, eventually the disk may become unmountable, and your phone won't boot. I'd say either do the check on boot and suffer the wait, or use journaling.
hardcore said:
There are also several things that can be done to speed up RFS.
One obvious thing is to remount the partitions using the "noatime" option instead of the default "relatime". This should reduce writes-after-reads.
Tweaking the CFQ io scheduler helps tremendously. I have:.
I already had the noatime trick on my SGS, but didn't know about the CFQ tweaks. Can you explain what exactly they do?
hardcore said:
Tweaking the CFQ io scheduler helps tremendously. I have:
Tweaking the CFQ io scheduler helps a lot, but it has a problem: when an FSYNC is called, the app will wait until the fsync returns. No matter how you tweak the scheduler, you won't be able to get around the app sitting there waiting for all pending disk operations in the MoviNAND to complete. This gets worse and worse the more applications are running at once, since there is more for the MoviNAND to do on each sync.
So yeah, scheduling does help a bit, but it doesn't defeat the core problem.
BTW, if you're using Froyo and want a quick way to put in some sane CFQ scheduler settings, just set the scheduler option in the OCLF app to 'CFQ' and tick 'set on boot' - when the scheduler gets changed, Linux will put its defaults back in and override Samsung's strange settings, which means you end up with something fairly close to what hardcore is setting here.
I have 1 question.......:
WHY HASN'T SAMSUNG HIRED YOU YET?!
Seriously, Samsung should be able to figure that out for their own hardware and software, so why would they proceed this way if they were aware of these issues?
RyanZA said:
Those unfixable blocks would be unfixable under any EXT variant - the unfixable block is basically a block that was partially written to disk before power was cut, and therefore the checksum doesn't add up. This can happen regardless of ext2/3/4, and the data is lost under all of them (because the data was never fully written to disk). The only way to avoid this is to do a proper shutdown of the system, and not pull the battery. In EXT3/4 with journaling, the journal would simply indicate that the write to that block did not complete. In EXT2, there is no journal, so the filesystem check must trawl it's way through the entire disk and discover for itself that the data wasn't written. In both cases, the data is gone (since it was never there), but in EXT3/4 the process is much quicker. In EXT2 you'd be sitting waiting for the phone to boot up while it checks it.
The tradeoffs are very very straight forward: fast boot + slower speed vs slow boot + slightly faster speed. Not much to it.
EDIT: Not running the fsck at all on EXT2 could be bad, eventually the disk may become unmountable, and your phone won't boot. I'd say either do the check on boot and suffer the wait, or use journaling.
yes, i know the risk of not fscking the disks, but it is "under control" and i am prepared to do a reflash when needed
it's this awesome community and its work that keeps me from selling this phone lol.
Background:
Ok, so I write flash drivers for a living, so I would consider myself somewhat knowledgeable regarding flash technology.
The flash is erased in 128k blocks and written in smaller pages. Data, once written, cannot be changed until you erase, so the FS will write to another page and invalidate the current page. The 100k program/erase cycle count is on a per block basis. It is not being erased every time you write a file, so calm down, your phone isn't going to die. The 10 year data retention time that people are quoting has nothing to do with this. It is how long once programmed...and not changed...data is guaranteed to be valid for.
The only thing that you need to remotely consider...and needs to actually be verified, is whether RFS actually writes to the file system more or less than EXT4, and how much more. The data wear leveling is done on a lower layer than the file system and Dameon87 already confirmed both RFS and EXT4 are using the same sector translation layer.
Sources:
XDA Post linking RFS documentation: http://forum.xda-developers.com/showthread.php?t=801223
Reliability: http://www.samsung.com/global/busin...s/fusionmemory/Products_FAQs_Reliability.html
Datasheet: http://www.datasheetcatalog.org/datasheets2/12/1248447_1.pdf
Attached are app notes on RFS.
Regarding RFS:
RFS Programming Guide said:
STL Block Device Driver: This block device driver is used to provide driver functions for the device files /dev/stl0/*, /dev/stl1/* and so on. Since there is FTL between this block device driver and BML, it is allowed to perform random write requests and write requests are handled atomically. Thus any read-write file system (e.g. RFS) can run on this block device driver.
STL (Sector Translation Layer): translates a logical address from the file system into the virtual flash address. It internally has wear-leveling during the address translation.
Regarding EXT4:
EXT4 supposedly buffers more data before writing, and thus in theory should require fewer program/erase cycles. This could in theory explain why people claim better battery life using EXT4. To program/erase flash, you must temporarily raise the flash voltage... this is why flashing ROMs and using ODIN drains your battery like crazy... and why you should always flash with a battery near 100%. This point is of course moot if there is no wear protection. If EXT4 is using the Samsung STL driver, the wear leveling should be implemented exactly the same as in RFS.
Regarding Bad Blocks:
It is typical to have some bad blocks in large flash arrays direct from the factory. It is normal and part of the manufacturing/validation process.
http://www.samsung.com/global/business/semiconductor/products/fusionmemory/Products_FAQs_Reliability.html said:
SAMSUNG guarantees the first block will operate properly during the 100K P/E cycle under normal conditions. On the other hand, other blocks can be invalid as long as the total number of bad blocks doesn't exceed 2% of all blocks.
Samsung Datasheet said:
The device may include invalid blocks when first shipped. Additional invalid blocks may develop while being used. The number of valid blocks is presented with both cases of invalid blocks considered. Invalid blocks are defined as blocks that contain one or more bad bits. Do not erase or program factory-marked bad blocks. Invalid blocks are defined as blocks that contain one or more invalid bits whose reliability is not guaranteed by Samsung. The information regarding the invalid block(s) is so called as the invalid block information. Devices with invalid block(s) have the same quality level as devices with all valid blocks and have the same AC and DC characteristics. An invalid block(s) does not affect the performance of valid block(s) because it is isolated from the bit line and the common source line by a select transistor. The system design must be able to mask out the invalid block(s) via address mapping. The 1st block, which is placed on 00h block address, is fully guaranteed to be a valid block.
Within its life time, additional invalid blocks may develop with the device. Refer to the qualification report for the actual data. The following possible failure modes should be considered to implement a highly reliable system. In the case of status read failure after erase or program, block replacement should be done. Because program status fail during a page program does not affect the data of the other pages in the same block, block replacement can be executed with a page-sized buffer by finding an erased empty block and reprogramming the current target data and copying the rest of the replaced block.
Check your bad datablocks by doing this...
Code:
adb shell
su
cat /proc/L*/bmlinfo
You will probably have a few. I have 3. The block size is 128 KB. 512 MB / 128 KB = 4096 blocks (that's why they are using the bottom blocks in the 4000 range for the 2% coverage). 2% of 4096 is approximately 81 bad blocks. But don't worry: you would have to get about 3 bad blocks per month for 2 years straight before a failure.
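Quick sanity check of those numbers, purely arithmetic:
Code:
blocks=$(( 512 * 1024 / 128 ))    # 512 MB array / 128 KB erase blocks
echo "$blocks blocks, 2% threshold = $(( blocks * 2 / 100 )) blocks"
That prints 4096 blocks and an ~81 block threshold, matching the figures above.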
Conclusion:
Now of course the best way to extend the life of the flash is to use the SD card for partitions that get continually written to like /data...and don't flash new roms 100 times a day for 2 years. But you really don't have to worry about the dreaded flash fairy coming and breaking your phone after a week. Since FAT writes to the fixed location file allocation table over and over, Samsung already has the wear leveling in place. Moreover, RFS adds journaling and posix commands to FAT and was mounted with atime. Most likely, it was doing MORE file IO than EXT4. Below is a link to some info on EXT4 disk writes. Clearly using noatime and journaling off is the best option for flash with regard to longevity, however, the difference isn't as big as you would think.
Further Reading:
The Truth About RFS (warning lacks citation): http://forum.xda-developers.com/showthread.php?t=799931
EXT4 Performance testing: http://thunk.org/tytso/blog/2009/03/01/ssds-journaling-and-noatimerelatime/
Moved to general section for discussion.
Again, no flaming...
So in your opinion, even if Ext4 degrades the NAND, let's say, 2-3 times faster than RFS, would the Epic still function normally with heavy daily use for 2-3 years?
Whosdaman said:
Moved to general section for discussion.
Again, no flamming...
Yeah, I should have put it in general. I wasn't trying to flame, but generally speaking, quoting oneself doesn't count as a valid citation.
lviv73 said:
So in your opinion even if Ext4 degregades nand lets say 2-3 times faster than RFS,would Epic still function normaly with heavy daily use for 2-3 years?
Code:
EXT4_lifecycle = RFS_flash_lifecycle * (EXT4_write_cycles/RFS_write_cycles)
plapczyn said:
Code:
EXT4_lifecycle = RFS_flash_lifecycle * (EXT4_write_cycles/RFS_write_cycles)
Thank you so much for your expertise on this matter plapczyn and for clearing up this dispute! I would add a thanks to you but alas, I have used all of mine today hehe! I suppose I may be one of the few (or many, who knows?) people who really enjoy reading technical responses that are beyond my personal understanding as they generally allow me to glean information that was previously out of my grasp! I really appreciate your detailed response!
Great information plapczyn.
But I think a lot of this stems to how ext4 behaves compared to rfs.
To be frank, I know very little about rfs beyond the fact that it's basically vfat with journaling support. To be honest, that sounds horrible...
ext4 on the other hand, I've got a decent grasp on it:
The real danger of writes with ext4 on NAND flash comes from the meta-data blocks. Luckily in ext4 (unlike ext3), the meta-data blocks can be (and are) moved. The 128 MB per block group (in a 4 KB block file system) restriction is removed (each 128 MB block group required dedicated meta-data blocks).
Meaning, mke2fs (part of ext4) can MOVE the meta-data blocks around outside the large virtual block group as they are grown and shrunk. Which means that the meta-data blocks aren't constantly written to the same spots, spreading the meta-data writes out across the storage.
The delayed allocation feature of ext4, in addition to the block allocator (mballoc), significantly reduces fragmentation - in addition to vastly increasing performance. Decreased fragmentation means fewer move(); rename(); write(); delete(); operations to fit your data in the allocated blocks, thus decreasing wear on the NAND (re-writing and updating meta-data) - at least in comparison to ext2/ext3. See the part above on how I know very little about RFS; I can't speak on how RFS handles fragmentation and block allocation. But considering how fragmented vfat gets...
But let's put some stuff into perspective:
Does ext4 create more I/O overhead (delete(); / write(); operations specifically) than rfs? Possibly. Some very valid questions were raised -- and questions like this NEED to be raised and debated.
journaling doesn't need to be enabled on your phone. That will alleviate a great deal of the writes if you are worried about it.
Is ext4 a good idea for nand flash on Linux running a database from a reliability stand point? Hell. No.
But a lot of writes in android's /system directory running ext4? Not likely. Sure it would wear out, but probably after a few years. Besides, doesn't samsung have wear-leveling in the controller to the nand? All Android and ext4 sees is the logical level. Which would render this whole argument moot.
msponsler said:
Great information plapczyn.
But I think a lot of this stems to how ext4 behaves compared to rfs.
To be frank, I know very little about rfs beyond the fact that it's basically vfat with journaling support. To be honest, that sounds horrible...
ext4 on the other hand, I've got a decent grasp on it:
The real danger of writes with ext4 on nand flash comes from the meta-data blocks. Luckily in ext4 (unlike ext3), the meta-data blocks can be (and are) moved. The 128MB per block (in a 4KB block file system) restriction is removed (each 128MB block required a dedicated meta-data block).
Meaning, mke2fs (part of ext4) can MOVE the meta-data blocks around outside the large virtual block group as the are grown and shrank. Which means that the meta-data blocks aren't constantly written to the same spots, spreading out the meta-data writes across the storage.
The delayed allocation feature of ext4, in addition to the block allocator (mballoc) significantly reduces fragmentation -- in addition to vastly increasing performance. Decreased fragmentation means less move(); rename(); write(); delete(); operations to fit your data in the allocated blocks, thus decreasing wear on the nand (re-writing & updating meta-data) -- atleast in comparison to ext2/ext3. See the part above on how I know very little about rfs, I can't speak on how rfs handles fragmentation and block allocation. But considering how fragmented vfat gets...
But let's put some stuff into perspective:
Does ext4 create more I/O overhead (delete(); / write(); operations specifically) than rfs? Possibly. Some very valid questions were raised -- and questions like this NEED to be raised and debated.
journaling doesn't need to be enabled on your phone. That will alleviate a great deal of the writes if you are worried about it.
Is ext4 a good idea for nand flash on Linux running a database from a reliability stand point? Hell. No.
But a lot of writes in android's /system directory running ext4? Not likely. Sure it would wear out, but probably after a few years. Besides, doesn't samsung have wear-leveling in the controller to the nand? All Android and ext4 sees is the logical level. Which would render this whole argument moot.
To clarify a single point (and I apologize if this is a stupid question): I have read and heard that disabling journaling does increase the risk of corrupted data, but how real is that risk (meaning, how much of a danger is it really)? And if data is corrupted, would it affect the ability of the system to function, or would it merely be 'cosmetic' (for lack of a better word) corruption?
Wear leveling for the OneNAND is implemented in software, not in hardware, in the STL as plapczyn mentioned in his post. From what I can tell, STL is responsible for exposing the OneNAND chip as a block device and from there, you can format the device using RFS, YAFFS, EXT4 or whatever.
Another note here is that the type of wear-leveling that RFS and YAFFS do is called dynamic wear leveling which means wear-leveling is only done on write ops as I understand it. SSDs use static wear leveling that is capable of moving, for example, data from blocks that change rarely over to blocks that change frequently in an effort to give said blocks a chance to "rest". The wear-leveling in RFS and YAFFS doesn't do this.
In addition, SSDs include extra NAND chips and only expose some percentage of the total capacity so that extra blocks are available for wear leveling. From what I understand, the OneNAND chip has no "extra" capacity for this purpose. This means that the more total space you allocate with static data, the quicker you'll run into problems because you'll be reusing a smaller set of blocks over and over for writes. This can be overcome with careful partitioning and, of course, maintaining a reasonable amount of free space.
But, dynamic wear leveling is used in USB memory sticks and most flash memory cards as well (including Micro SDHC cards which is why they can stay on FAT/FAT32 without issues). Lots of folks run, for example, web browsers off USB memory sticks for years - I have an old 1GB drive that's several years old and I keep a copy of PortableChrome on it. All the transient data like the browser cache and history files are kept on the stick and I haven't run into any problems yet. Also note that the MLC NAND chips typically found in these devices are only rated for 10,000 P/E cycles instead of the 100,000 for SLC chips like the OneNAND.
I'm sure that someone can concoct some nightmare-scenario or torture test that will easily result in blowing past the P/E cycle limit on some blocks but realistically speaking, it would need to be a LOT of continuous activity to run into those limits. Overall, even with EXT4, the OneNAND chip is going to be far more durable than your average USB memory stick or memory card for your camera. Granted, the usage patterns aren't exactly the same but then again, OneNAND is good for an order of magnitude more P/E cycles vs. the MLC chips found in these solutions.
Journaling is there in order to rebuild data in the event of a power loss mid write.
A journal will rebuild the last write operation, staving off data corruption.
But let me ask you this...do you routinely start copying files on your phone and pull the battery? Probably not. Which is why journaling isn't very important on phones. You just have to wait the extra 10 seconds for the phone to shut down.
msponsler said:
Journaling is there in order to rebuild data in the event of a power loss mid write.
A journal will rebuild the last write operation, staving off data corruption.
But let me ask you this...do you routinely start copying files on your phone and pull the battery? Probably not. Which is why journaling isn't very important on phones. You just have to wait the extra 10 seconds for the phone to shut down.
Wonderful! Thank you for clearing that up!
So, a question: since Ext4 is efficient for stuff like USB-style flash etc, and "bad" (note the quotes, because of the claims) for phone-style flash, wouldn't it be beneficial if the kernel/initramfs supported a multi-format setup? Essentially a partial "lagfix" where /cache and /data get partitioned to a mount on the SD card for the frequent writes/reads, the kernel and /system live on the OneNAND, while moving the datadb partition to RFS?
More or less a hybrid where you gain the advantages of each, potentially a performance boost, and reduce the wear and tear, so to speak?
** Just thinking it through - I would love an explanation if I'm wrong in my thinking of its behaviors.
*** Also, because CW 3.0.0.5 supports the ability to partition however you tell it, you could multi-partition this way; also, couldn't we technically mimic Google and shift to YAFFS pretty easily as well (same way we did with ext4)?
art3mis-nyc said:
So question, since Ext4 is efficient for stuff like usb style flash etc, and "bad"(note the quotes cuz of the claims) for phone style flash, wouldnt it be beneficial if the kernel/initramfs supported it to multi-format? essentially a partial "lagfix" where /cache and /data get partitioned to a mount on the SD and the multi writes/reads and the kernel and /system lives on the onenand, while moving the datadb partition to rfs?
more or less a hybrid where you gain the advantages of each, potentially a performance boost and reduce the wear and tear so to speak?
** just thinking of it all would love an explanation if im wrong in my thinking of behaviors of it.
*** Also because CW 3.0.0.5 supports the ability to partition however you tell it, yopu could multi partition this way, also couldnt we technically mimic google and shift to yaffs pretty easy as well(same way we did with ext4)?
Exactly. It shouldn't matter if the RO partitions (system, kernel, radio) are formatted as EXT4, because the whole concern regarding program/erase cycles is moot there. We should use whatever gives the best performance. Also, if it is truly that big of a deal, we could always go to YAFFS the same way we did with EXT4. Since Google announced EXT4 as the default FS for 2.3+, I doubt YAFFS really was 'necessary' on the NAND. It is possible... that they chose to use a different low-level device driver (faster?) and do the wear leveling in the FS layer.
While still on the subject, how come none of these devs use ReiserFS in their ROMs? ReiserFS is supposed to be really fast with small files. There are a few Evo/Nexus ROMs/kernels with ReiserFS implemented, and users report a big speed boost over Ext4.
lviv73 said:
While still on the subject,how come none of these devs use RaiserFS in their roms?Raiserfs suposed to be real fast with small files.There are few Evo/Nexus roms/kernels with Raiserfs implemented and users report big speed boost over Ext4.
There must be a reason devs have not ported this. Perhaps there are issues?
Look, I'm pretty techie myself - I've been a server administrator/architect for 25+ years... Linux, VMware, Windows, etc. So I'm not dumb lol. Just wanted to understand a bit more... please bear with me on this... but can someone line-item out what benefit ext4 is going to give me on my Epic running Froyo?
maybe something like:
epic with dk28 (froyo) non-ext4 : blah blah
epic with DK28 (froyo) on ext 4 : blah blah PLUS blah BLAH brrrb BLAH and its faster or whatever.
You could move to ReiserFS, though. I used it for many years, and I greatly enjoyed it.
However... ReiserFS is a dead project. Hans Reiser is in jail for murdering his wife. ReiserFS isn't without its problems either. It works well with files under 4 KB, but it still uses the "big kernel lock", which is not the way to go IMHO. And ReiserFS does suffer from degradation over time.
As far as the benefits of ext4:
Extents replacing block mapping improves I/O performance. It uses h-trees in the meta-data to drastically improve file location time. Actually mathematical genius...
Multiple block allocators improve I/O time.
Ext4 is multi-threaded...yaffs2, rfs, ext2/3 are not
Allocate-on-flush, meaning blocks aren't written until the data is ready. You'll sacrifice CPU/memory for I/O throughput, but it does improve performance while reducing fragmentation.
15 years as a Linux and Solaris admin and engineer here. Always nice to meet another SA / Engineer!
hmmm ext4 is multi-threaded how?
let me throw a bit of what i understand here into the equation and ask differently....vmfs3 (esx filesystem) is comparable to ext3 right ? seems like I can multi-thread like a maniac on that filesystem with umpteen vms right? so basically ext4 is just another filesystem, like fat16 and ntfs, etc , and so we are just seeing the benefits of something written for newer SSD hardware in ext4?
Not trying to be any more newbish lol decided to just research more and found this little article....
http://www.phoronix.com/scan.php?page=article&item=ext4_btrfs_nilfs2&num=7
change the last number in the link to review the whole 7 page article.
It looks to me that ext4 is an upgrade over ext3 and shouldn't be such a big worry - as you said, it seems to have fewer fragmentation problems, can write faster, and writes in a different manner in order to get better throughput.
RHEL 6 pushes ext4 as a default, with btrfs as an option - which the above link basically proves to me with their testing. In short, ext4 is ready for primetime and works well for Linux systems. And the others might not be.
So if you want a little more out of your phone, go ext4. want to be safe, stay where you are.
Since most of us won't have this phone in even 1 more year, more performance is the reason we are here anyways lol... so ext4 here I come I guess )
and a big SUP! to a fellow SA ) thanks for the info, I appreciate it.
msponsler said:
You just have to wait the extra 10 seconds for the phone to shut down.
Does this mean that I shouldn't be using QuickBoot while running an ext4 file system with journaling disabled?
aal1 said:
hmmm ext4 is multi-threaded how?
Click to expand...
Click to collapse
EXT4 isn't multi-threaded as such, but it supports file-level locking. YAFFS only supports partition-level locking. That means that only a single thread can write to a YAFFS partition at any moment in time. So long as that thread has a lock open, all other write operations to that partition are blocked until the lock is released.
I hear about this Data2ext that is supposed to increase the IO of the phone. Is it possible to do on the GT540?
it's very easily possible from what i understand - it would just involve mounting the ext partition to /data. the kernel would need a bit of editing, but it should be less to change than data2system.
I think rc.conf must be edited for this operation, and an ext partition must be created on the SD card.
So, if I wanted to tackle creating Data2ext for the GT540, where would be the best place to start getting the required information?
Granted, I'm no programmer. But I've done some hex and file editing, and understand how operating systems, partitions, and all that stuff work. I used to do a lot of work with DOS back in the day, and I'm sure this can't be much harder.
edit: After a quick bit of research, Data2ext just places the data folder on the SD card, with some other frills. But in order for this to be beneficial, the SD card would need to be faster than the internal memory. So my SD card is 12MB/s write and 18MB/s read, how fast is the internal memory?
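One crude way to answer that yourself is a quick dd test against the internal memory and the SD card. Paths and sizes are only examples, it assumes your dd build supports conv=fsync, and the test file is removed afterwards:
Code:
# sequential write, forcing the data out of the buffers before dd exits
time dd if=/dev/zero of=/data/speedtest bs=1048576 count=32 conv=fsync
# sequential read of the same file
# (note: this may be served from RAM cache; reboot first for an honest read number)
time dd if=/data/speedtest of=/dev/null bs=1048576
rm /data/speedtest
Run the same pair against a file on the SD card and compare the timings.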
No idea, but you would need to decompile the kernel and edit some of the files, I think. I would suggest you use HTC kitchen. It works with our phone, and there's an option to decompile the kernel.
I came across something interesting. I use AnTuTu System Benchmark, My GT540 scores pretty high, but not the highest. The only difference between me and the people who score higher is the IO score. I get a little above 100, they get close to 200. Which means, there must already be a hack out there to improve the IO score. It would be easier to install a working hack than to create a new one.
So I'll spend a bit of time looking for the IO speed increase, before I start working on data2ext. If anyone knows what they used to increase IO speed, let me know.
Are you talking about the whole data partition or the data folder of apps?
Link2SD can move app data(lib files) with the newer version.
The whole data partition, or at least most of it (more than just the apps). But the new SwiftDroid M5 has improved IO performance. So, I will abandon this idea and just use the new version of SwiftDroid.
You may have noticed the ~1 GB cache partition on 3rd gen HDX devices that was used as temporary work space for chunky FireOS OTA updates. Typically <5% is used by Android, which leaves a sizable block of space completely wasted. While it is possible to adjust partition boundaries to expand either the System or Data partition, that task is not for the faint of heart on an Android based device.
One option is to utilize a portion of the Cache partition for eMMC backed swap, especially since the stock kernel does not support zRAM. This can be attractive for those who run large or numerous apps that consume the 1.8 GB of available RAM. While Android's LMK will typically prevent OOM (out-of-memory) conditions under heavy pressure, the constant recreation/reloading of killed activities can be annoying.
It is pretty easy to create a swap file in the Cache partition with an app like App2SD (just an example; not an endorsement). Suggest starting with 128 or 256 MB. You may want to crank down the swappiness value (default on most ROMs is 60) to limit write activity to eMMC which has a finite lifespan. Tuning LMK is also part of the game; lots of apps can handle that including the fan favorite L Speed or any of the popular Kernel Manager apps (EK Kernel Manager, Kernel Adiutor, etc).
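If you prefer to work without an app, the manual equivalent of such a setup is roughly the following (needs root; the path and size are just examples):
Code:
dd if=/dev/zero of=/cache/swapfile bs=1048576 count=256   # 256 MB file inside the Cache partition
mkswap /cache/swapfile
swapon /cache/swapfile
echo 40 > /proc/sys/vm/swappiness                         # tame write pressure on eMMC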
eMMC backed swap has its pros and cons. While experimenting won't hurt you'll probably want to do a little research before making swap a permanent part of your config.
Enjoy!
edit: A tool like DiskInfo can help illuminate how partitions are allocated/utilized on your device.
FWIW - the following values returned acceptable results for my typical usage scenario:
- LMK thresholds (in MB): 16/32/48/64/80/96
- swappiness: 40
- vfs cache pressure: 70
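For reference, those values map onto the kernel interfaces roughly as follows; the LMK thresholds are fed to the driver in 4 KB pages rather than MB (paths per the usual lowmemorykiller module, which I am assuming the stock kernel exposes):
Code:
# 16/32/48/64/80/96 MB expressed in 4 KB pages
echo "4096,8192,12288,16384,20480,24576" > /sys/module/lowmemorykiller/parameters/minfree
echo 40 > /proc/sys/vm/swappiness
echo 70 > /proc/sys/vm/vfs_cache_pressure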
Edit 04/18: Over time I have stopped twiddling with most VM parameters (accepting default values) as there was not a sustained, meaningful difference in performance to justify maintaining custom settings. However, I have found increasing the LMK "empty app" threshold provides a noticeable increase in response time with light to moderate multi-tasking. New LMK settings as follows:
LMK thresholds (in MB): 16/24/32/48/64/128.
I have found these values work well on most devices equipped with ~2 GB of RAM. In fact setting appropriate LMK values can all but eliminate the benefits of file based swap on this device. Obviously YMMV.
Quick follow-up: The config outlined in the OP remains on my daily driver and continues to enhance the overall enjoyment of this device. Over time I refined a few tunings for my workflow. Difference are subtle but yield better resource utilization. YMMV.
- swappiness: reduced to 20 to further discourage cache file writes
- VFS cache pressure: restored device default (100)
Davey126 said:
Quick follow-up: The config outlined in the OP remains on my daily driver and continues to enhance the overall enjoyment of this device. Over time I refined a few tunings for my workflow. Difference are subtle but yield better resource utilization. YMMV.
- swappiness: reduced to 20 to further discourage cache file writes
- VFS cache pressure: restored device default (100)
Hello Dave, I've followed your posts and managed to get 256 MB of swap space, but only about 50 KB is used. Is it working or not? How can I check whether swap helps a system like Android?
BR!
iksel said:
Hello Dave, I've follow your posts and managed to get 256MB for swap space but used only about 50KB. Is it work or not? How to check does a swap helps a system as android?
BR!
Click to expand...
Click to collapse
Likely working...give it time. You will see swap file utilization slowly creep up, but it will likely remain at a small fraction of the available space. Note: the swap file is reset (flushed) on reboot. If you want to keep an eye on the numbers yourself, a quick sketch is below.
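This is just a convenience sketch; running 'free' or 'cat /proc/meminfo' in a terminal/adb shell gives the same numbers directly, so treat the snippet as optional.
Code:
# Hedged sketch: report swap usage by parsing /proc/meminfo (values are in kB).
def swap_usage_kb():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])  # first field after the colon
    return values["SwapTotal"], values["SwapTotal"] - values["SwapFree"]

total, used = swap_usage_kb()
print(f"swap: {used} kB used of {total} kB total")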
Setting swappiness to 20 discourages the use of the swap file except under high memory pressure. In most cases that is what you want, as RAM is several orders of magnitude faster than eMMC. The benefit kicks in under high memory loads:
- older content in the memory cache can be (quickly) written out to the swap file, freeing up RAM for current demands
- the context of loaded but less frequently accessed apps is likely to be fully/partially retained, avoiding a complete restart
Bumping swappiness to 40 or higher will increase swap file utilization and also change the composition of swapped contents. The default on many devices, especially on low-RAM configs and/or those with zRAM, is 100. That aggressive setting will likely hurt overall performance on a 2 GB device with no zRAM support (like the HDX).
Keep in mind the swap file resides in an area of permanent storage that goes largely unused on a HDX fitted with a custom ROM (FireOS uses this area for processing OTA updates). If that file were taking space away from the data partition this tweak would be of questionable value.
Davey126 said:
Likely working...give it time. You will see swap file utilization slowly creep up, but it will likely remain at a small fraction of the available space. Note: the swap file is reset (flushed) on reboot.
Setting swappiness to 20 discourages the use of the swap file except under high memory pressure. In most cases that is what you want, as RAM is several orders of magnitude faster than eMMC. The benefit kicks in under high memory loads:
- older content in the memory cache can be (quickly) written out to the swap file, freeing up RAM for current demands
- the context of loaded but less frequently accessed apps is likely to be fully/partially retained, avoiding a complete restart
Bumping swappiness to 40 or higher will increase swap file utilization and also change the composition of swapped contents. The default on many devices, especially on low-RAM configs and/or those with zRAM, is 100. That aggressive setting will likely hurt overall performance on a 2 GB device with no zRAM support (like the HDX).
Keep in mind the swap file resides in an area of permanent storage that goes largely unused on a HDX fitted with a custom ROM (FireOS uses this area for processing OTA updates). If that file were taking space away from the data partition this tweak would be of questionable value.
Click to expand...
Click to collapse
Good to know, thanks again!
Davey126 said:
Quick follow-up: The config outlined in the OP remains on my daily driver and continues to enhance the overall enjoyment of this device. Over time I refined a few tunings for my workflow. Differences are subtle but yield better resource utilization. YMMV.
- swappiness: reduced to 20 to further discourage cache file writes
- VFS cache pressure: restored device default (100)
Click to expand...
Click to collapse
Yet another update. After making modest tweaks to the dirty/dirty background ratios I noticed a subtle increase in momentary (<1 sec) lag when switching between previously loaded apps. Such behavior is symptomatic of increased memory cache pressure and potentially unnecessary swapping and/or LMK activity. Flushing the cache cured that (for a while) but is clearly not the ideal solution. Ultimately, bumping swappiness to 40 addressed the problem. Guessing the previous value (20) allowed stale application pages to remain in memory a bit too long, increasing cache pressure, which became evident when actively multitasking. A rough sketch of the knobs involved is below.
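For reference, these knobs all live under /proc/sys/vm and can be set the same way as swappiness. The dirty ratio numbers in the sketch are placeholders only - the post above deliberately doesn't pin them down - so treat everything except swappiness=40 as illustrative.
Code:
# Hedged sketch: write a handful of VM tunables (root required).
import subprocess

vm_settings = {
    "swappiness": 40,             # the value that cured the lag described above
    "dirty_ratio": 20,            # placeholder - not a recommendation from this thread
    "dirty_background_ratio": 5,  # placeholder - not a recommendation from this thread
}

for knob, value in vm_settings.items():
    subprocess.run(["su", "-c", f"echo {value} > /proc/sys/vm/{knob}"], check=True)

These settings do not persist across reboots unless reapplied by an init script or a kernel manager app.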
Bumping this thread as a reminder/reinforcement of the beneficial effects for some workflows. In short, a small static swap file significantly improves the multitasking UX if returning to a previous app's context is important. Newer devices leverage zRAM for this purpose; the HDX kernel doesn't support that.
Over time I have gravitated back to defaults for swappiness, dirty timeouts, cache pressure, etc., as custom values did not yield enough measurable (or subjective) improvement to warrant keeping them. Fewer knobs to turn/tweak, which is always a good thing in my book!
This is what I usually do - just resize and move the extra space to the userdata partition. If the days of ROMs that installed by simply extracting files onto the system partition had continued, we could squeeze some space out of the system partition too :silly:
pipyakas said:
This is what I usually do - just resize and move the extra space to the userdata partition. If the days of ROMs that installed by simply extracting files onto the system partition had continued, we could squeeze some space out of the system partition too :silly:
Click to expand...
Click to collapse
Yep - done that too on some devices. Resizing partitions is not for the faint of heart, which is why I opted to exclude it from the guide.
I had worked out how to deal with it: we can resize our disk partitions so the extra space goes to data or system!!!
Davey126 said:
Yep - done that too on some devices. Resizing partitions is not for the faint of heart, which is why I opted to exclude it from the guide.
Click to expand...
Click to collapse
Why not provide that info? It is no different than flashing custom ROMs. You are warned by the devs that doing so brings a risk of bricking your device...proceed at your own risk.
I think it would be valuable to those who want to use that wasted space or optimize the use of the storage space available.
Hopefully you will reconsider.
droiduzr2 said:
Why not provide that info? It is no different than flashing custom ROMs.
Click to expand...
Click to collapse
lol - not comparable, at least with the vast majority of mobile devices that I have been exposed to. Including this one. Those who wish to muck with resizing Android partitions will find copious detail on the net...usually from those who have spent 100X the initial resizing effort trying to recover their device. Because, ya know, it is no different than flashing custom ROMs.
Davey126 said:
lol - not comparable, at least with the vast majority of mobile devices that I have been exposed to. Including this one. Those who wish to muck with resizing Android partitions will find copious detail on the net...usually from those who have spent 100X the initial resizing effort trying to recover their device. Because, ya know, it is no different than flashing custom ROMs.
Click to expand...
Click to collapse
I am pretty sure everyone on here who goes to flash a ROM (change from stock) reads the disclaimer and assumes the risk that they can brick their device. If there were a tool or clear directions to optimize the use of storage space, considering we are stuck with 16 GB (no USB OTG support, no external SD card), then being able to use every MB, much less GB, seems like a helpful thing imo.
Also, it's not about mucking around with just any Android device; it's about this specific device and what one would have to do.
If you are saying it is not an easy task, then so be it.