[GUIDE] [STARTER] Custom Kernel Features Explained! [5/20/2013] - Galaxy Note 10.1 General

This is a simple STARTER GUIDE to kernel features/parameters and everything you need to know about custom kernel goodies before you consider flashing them. Now that there are a few custom kernels out there for our device, you may want to know about these.
I’d be glad if you could help me complete this guide.
First of all, I'd like to thank all the kernel developers who put countless hours into bringing us the features explained below.
Overview:
Post 1:
A.: What you want to know about the CPU/GPU of your device
B.: Custom Kernel Features
Post 2:
Coming Soon!!!
A: What you may want to know about the CPU/GPU of your device:
The Galaxy Note 10.1 features a 1.4 GHz quad-core CPU (Exynos 4412) and a 400 MHz GPU (Mali-400).
More Data will be added soon.
B.1: CPU/GPU/IO Features that come with Custom Kernels:
OC/UC (short for OverClock/UnderClock):
As you may know, the CPU (like any other processing unit) runs at a clock frequency. Overclocking/underclocking simply means raising/lowering that clock frequency.
Reason: Why would one need to overclock? Because one needs more processing power; for example, to get smoother gameplay in graphically demanding games.
Why would we need to underclock? Higher processing power demands more battery, so underclocking helps us conserve it. Browsing the web and texting, for example, don't need much processing power, so we can limit the clock and save battery during light use of the device.
•Note1: OC/UC is not limited to the CPU; the GPU is also capable of being over/underclocked. However, the interface for that is NOT available in the current custom kernels for the Note 10.1.
•Note2: Gamers should avoid GPU underclocking; limiting GPU processing power significantly impacts your gaming experience.
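If you prefer a root shell over an app, the same limits live behind the standard Linux cpufreq sysfs interface that most kernel-tweaking apps write to. A minimal sketch (paths are the generic cpufreq ones; the available frequency steps depend on your kernel):
Code:
# List the frequency steps your kernel offers, then check the current limits
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
# "Underclock" by lowering the ceiling (value in kHz, e.g. 1000000 = 1.0 GHz)
echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq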
UV (short for Undervolt):
Every frequency step of a processing unit demands a certain supply voltage. Undervolting, to put it simply, means decreasing the voltage for a certain frequency (or for all of them).
Reason: The more voltage the CPU/GPU gets, the more heat it generates, so we mainly undervolt to reduce the heat produced by the CPU/GPU.
•Note1: Each frequency needs a certain minimum voltage to perform correctly and keep the system stable; undervolting beyond that point will cause instability.
•Note2: UV improves battery life by using less power. See this.
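Some custom kernels (by no means all) expose the per-step voltage table through sysfs; on many Exynos custom kernels it appears as UV_mV_table, but the path and format are kernel-specific, so treat this purely as an illustration:
Code:
# Illustrative only: check whether your kernel exposes a voltage table at all
cat /sys/devices/system/cpu/cpu0/cpufreq/UV_mV_table
# Typical output is one "frequency: voltage" pair per step; apps such as
# TricksterMod or STweaks write adjusted values back to the same file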
CPU Governors:
Frequency scaling is the means by which the Linux kernel dynamically adjusts the CPU frequency based on usage of the device. Governors refer to schemes which dictate to the kernel how it should do these adjustments. (From rootzwiki)
To put it simply, governors define how the CPU frequency is adjusted according to the demands of the operating system.
Selecting a proper governor for your CPU is crucial to the performance and battery life of your device. For example, during light use you may pick a more battery-friendly governor, while for gaming you may pick a more power-hungry performance governor.
•Note: See Droidphile’s Great Guide about Governors to be familiar with each one of them and the ones that you should use in different situations here.
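For reference, governors are switched through the same cpufreq sysfs interface; a quick root-shell sketch (which governors are listed depends on what your kernel was built with):
Code:
# See what governors the kernel ships and which one is active
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Switch to another listed governor, e.g. ondemand
echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor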
I/O Schedulers (short for Input/Output):
Input/output (I/O) scheduling is the method that computer operating systems use to decide in which order block I/O operations will be submitted to storage volumes. I/O scheduling is sometimes called 'disk scheduling'. (From Wikipedia)
To put it simply, schedulers define how reads from and writes to storage (e.g. the SD card) are managed.
The same considerations from the governors section apply here, too.
•Note: See Droidphile’s Great Guide about Schedulers here.
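Schedulers are set per block device through sysfs; on most devices the internal eMMC is mmcblk0, but check yours. A minimal root-shell sketch:
Code:
# The active scheduler is shown in [brackets]
cat /sys/block/mmcblk0/queue/scheduler
# Switch to another scheduler that appears in the list, e.g. deadline
echo deadline > /sys/block/mmcblk0/queue/scheduler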
ReadAhead buffer size:
When reading data from the SD card, the kernel prefetches data into a cache that is used as a buffer. The size of that cache is the readahead buffer size, and it has a direct impact on your storage read speed, so choosing the right value is crucial.
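The buffer is exposed per block device as read_ahead_kb; a quick sketch (the 512 kB value is only an example, not a recommendation):
Code:
cat /sys/block/mmcblk0/queue/read_ahead_kb
echo 512 > /sys/block/mmcblk0/queue/read_ahead_kb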
File System "X" R/W (short for Read/Write):
Android doesn't support all file systems by default (what are file systems?! See here), so some kernels add read/write support for certain additional ones. The most popular unsupported file system is NTFS.
B.2 Features of Custom Kernels (AKA Goodies!!!):
MultiCore PowerSaving:
This feature tries to group tasks onto as few cores as possible. To put it simply, it focuses on using the fewest cores needed to get your tasks done. Fewer active cores means better battery life, but it also decreases performance.
•Note: To enable it, use the TricksterMod app: 0 disables it and 2 is the most aggressive setting.
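If you'd rather not use an app, the same knob is exposed by kernels of this era (when built with multi-core power-saving support) as sched_mc_power_savings; a hedged sketch from a root shell:
Code:
# 0 = disabled, 1 = moderate, 2 = most aggressive task packing
# The node only exists if the kernel was built with this feature
cat /sys/devices/system/cpu/sched_mc_power_savings
echo 2 > /sys/devices/system/cpu/sched_mc_power_savings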
CIFS:
In order to use CIFS/NFS network shares on your Android device you need proper, working kernel modules. With them you can mount/unmount your network-accessible file shares and access your data.
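As an illustration of what those modules let you do, here is a hedged sketch of mounting a Windows/Samba share from a root shell; the server address, share name and credentials are placeholders, and some setups need the cifs module loaded first (or an app such as CifsManager instead):
Code:
# Load the module if it isn't built in (path varies per kernel package)
# insmod /system/lib/modules/cifs.ko
mkdir -p /mnt/share
busybox mount -t cifs -o username=guest,password= //192.168.1.10/share /mnt/share
# When finished:
busybox umount /mnt/share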
B.3 Other Features:
Init.d Support:
Scripts placed in /etc/init.d run every time your device boots up; these are called init.d scripts. One of the most popular init.d scripts available for the Note 10.1 is this.
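To give an idea of what such a script looks like, here is a hypothetical minimal init.d script (the filename 99tweaks and the values are examples only; the file must be executable and your ROM/kernel must actually run init.d):
Code:
#!/system/bin/sh
# /etc/init.d/99tweaks - runs once at boot on ROMs with init.d support
echo deadline > /sys/block/mmcblk0/queue/scheduler
echo 512 > /sys/block/mmcblk0/queue/read_ahead_kb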
Eco Mode - NEW:
Eco Mode is an optimized dual core solution for quad-core SOC (System on Chip) like the Qualcomm S4-pro. This should allow for Maximum battery life without sacrificing performance. - Paul Reioux
TCP Congestion Control:
The choices in this section address how the operating system kernel manages flows of information in and out of the kernel, which is at some level the "switchboard operator" of your handset. More info here.
It's better to leave this option as is: Cubic or Westwood, whichever is the default of your kernel.
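If you do want to poke at it, the setting is an ordinary sysctl; a quick sketch (the algorithms listed depend on what your kernel compiles in):
Code:
cat /proc/sys/net/ipv4/tcp_available_congestion_control
cat /proc/sys/net/ipv4/tcp_congestion_control
echo westwood > /proc/sys/net/ipv4/tcp_congestion_control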
Dynamic FSync - NEW:
fsync is a system call in Unix/Linux. "man fsync" says:
fsync() transfers ("flushes") all modified in-core data of (i.e., modified buffer cache pages for) the file referred to by the file descriptor fd to the disk device (or other permanent storage device) so that all changed information can be retrieved even after the system crashed or was rebooted. This includes writing through or flushing a disk cache if present. The call blocks until the device reports that the transfer has completed. It also flushes metadata information associated with the file (see stat(2)).
So it's something embedded in programs after a related set of write operations to ensure that all data has been written to the storage device. The "call blocks" part is what makes it interesting for some to disable: "the call blocks" means the calling program waits until the write is finished, and this can create lag. The downside is that if the system crashes, the data on the storage device may be inconsistent, and you may lose data. (From here.)
Dynamic FSync makes the fsync operation asynchronous while the screen is on, and synchronous when the screen is off. What does asynchronous mean? It means the OS issues the fsync call, but not necessarily immediately at commit time for each transaction; the fsync is delayed for a certain amount of time. In case of a crash, the transactions not yet synced in the last delay window before the crash may be rolled back, but the state of the data is always consistent. (From here.)
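On kernels that include faux123's dynamic fsync driver, the toggle is commonly exposed under /sys/kernel/dyn_fsync, but the path is kernel-specific, so verify it exists before relying on this sketch:
Code:
# Kernel-specific: only present if the dynamic fsync driver is compiled in
cat /sys/kernel/dyn_fsync/Dyn_fsync_active    # 1 = enabled, 0 = disabled
echo 1 > /sys/kernel/dyn_fsync/Dyn_fsync_active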
zRAM:
In order to explain zRAM more precisely first we need other terms defined clearly:
Swap can be compared to the page file on Windows. When memory (RAM) is almost full, data that is not actively used (e.g. background applications) is moved to the disk to free up RAM. If needed, that data is simply read back from there. This preserves multitasking performance with no real loss (the main reason we use swap).
With zRAM, rarely used data is compressed and then moved to a reserved area of RAM (the zRAM area). In other words, zRAM is a kind of swap that lives in memory. Because the data is compressed, not much memory needs to be reserved for zRAM; however, the CPU has to work harder, because compressed data has to be unpacked again when it is needed. The advantage clearly lies in speed, since a swap area in RAM is much faster than a swap partition on a hard drive.
In itself a great thing. But Android does not normally use a swap partition, so zRAM on Android does not bring the same performance gain it would on a normal PC. (From here, with some editing.)
What we need to know essentially lies here:
zRAM off = rarely used data is stored as-is in memory. This causes no extra CPU load, but needs more RAM.
zRAM on = rarely used data is stored compressed in memory. This causes extra CPU load when storing or restoring data, but preserves more free RAM.
The main use of zRAM is when you are using a heavy ROM that eats up all your RAM. This will allow multitasking to be more functional. On light ROMs, or for those who don't multitask much, this is not necessary.
Note: There are also methods to use part of the internal storage as swap space. This is not as fast as zRAM, but no RAM is used at all and the CPU load is lower. I'm not sure this has been brought to our device yet; will add more data on this soon.
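Whichever kernel you run, you can check from a root shell what swap (zRAM or otherwise) is actually active:
Code:
# Active swap devices and how much of each is in use
cat /proc/swaps
free
# zRAM shows up as /dev/block/zram0 (and so on) when it is enabled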
Work in progress, will add more info soon.


csec said:
•Note2: UV does not impact battery life (or it is not noticeable).
First of all, I would like to praise this guide for its information and depth, secondly I would like to help improve it;
emprize said:
the only difference for me is temperature; as for battery saving, it's never noticeable for me, placebo effect most likely
AndreiLux said:
This is a nonsensical argument; temperature, aka heat, is power. If you are getting a less heated phone, then your battery life is improving. If this weren't the case then you would be holding the solution to the world's energy problems in your hands.
Just thought I would share some thoughts from a well-respected and knowledgeable developer.
http://forum.xda-developers.com/showthread.php?p=36723175#post36723175
Regards
Jack

JSale said:
First of all, I would like to praise this guide for its information and depth, secondly I would like to help improve it;
just thought i would share some thoughts of a well respected and knowledgeable developer.
http://forum.xda-developers.com/showthread.php?p=36723175#post36723175
Regards
Jack
Wow, that was a hell of an argument. It seems to be theoretically true. What I referred to above was real-life experience with my phone, and what the kernel guys over at the HOX forums reported.
However, the information is really interesting. I will add it to the OP.
Thanks.
Sent from my GT-N8000 using Tapatalk HD

Recently Added to the OP:
- Eco Mode
- Dynamic FSync

zRAM added to the guide.

Related

[Guide] I/O scheduler

If you don't know what an I/O scheduler is, or want to know how each scheduler works, read this: http://forum.xda-developers.com/showpost.php?p=23616564&postcount=4 Thanks to droidphile.
Thanks to @KNIGHT97 for his tips and experience (http://forum.xda-developers.com/showthread.php?t=2784750); he inspired me to do this.
Introduction
Every scheduler has its advantages and disadvantages, so it's difficult to find what suits you. I have put together this guide to help people decide on the right I/O scheduler according to their usage.
This guide is based on my observations, so the information below may or may not be accurate. If a scheduler doesn't work as well as described below, then maybe your kernel has a different version of that scheduler.
People most often use benchmarking apps to find a good I/O scheduler, but that's the wrong way to go; first of all, the benchmark results are similar for every scheduler.
Let me tell you how a scheduler can affect your performance.
Some schedulers try to handle multiple operations at a time (using more CPU and battery) while others do a few, or only one, at a time (using less CPU and saving battery).
Let's take an example: copying a few big files one at a time is faster than copying them all at once, so in a situation like that an I/O scheduler that does one operation at a time is good.
But when there are many small files, copying them one at a time is very slow while copying many at once is faster, so here an I/O scheduler that handles multiple operations is better.
When I mention a light I/O operation, I mean the file is small, or a big file is being written without needing high speed (like a long non-HD video recording); a heavy I/O operation means reading/writing a big file at speed.
Whether an app issues read/write requests one at a time or several at once, its performance will be affected by the scheduler accordingly.
I/O scheduler
CFQ (Completely Fair Queuing)
CFQ is said to work well with multi-core processors and to balance writes and reads. It handles multiple operations really well, but can be slow if most operations are reads with only a little writing (maybe because it reserves room for further write operations?).
Many people say that media scanning is really slow with it, and I agree. (I have around 300 apps on my 64 GB SanDisk microSD and it is loaded with many big and small files, so I can easily tell the difference: it takes about 1:30 minutes before the external-SD app icons load, while on other schedulers it takes around 30 seconds to 1 minute.)
Ideal if you download files a lot and use apps that mainly read files (like music/videos), or if you use apps that read and write equally.
ROW (Read Over Write)
It is just like CFQ but, as the name implies, it gives priority to reads. It also handles multiple operations really well.
It's ideal when you run apps with more read operations and little writing.
So if you mostly run read-heavy apps (like the gallery), ROW is the best scheduler for you.
I wanted to use a scheduler that does one operation at a time, but some apps like Tumblr do reads and a little cache writing at the same time, so I use ROW for internal storage instead.
BFQ (Budget Fair Queueing)
BFQ is just like CFQ, but you could say it's better for heavy processes or lots of parallel processes, and it uses the CPU even more. Due to its high CPU usage when doing lots of parallel I/O operations, other non-I/O operations can become laggy.
Ideal for burst shots, HD recording and other heavy, parallel reads and writes.
Most computers use BFQ.
Noop
Noop is a much simpler scheduler compared to those above. There is no priority for reads or writes; it handles requests in a first-in-first-out (FIFO) manner, so it basically does things one at a time.
Ideal for those with light storage usage and for people who want to save battery or want less CPU usage.
Many apps don't need to read and write at the same time, so it's ideal for those apps.
Ideal for doing one thing at a time, like only downloading, playing video or music, or taking photos/video.
Sometimes can be slower than deadline, SIO or zen
Not ideal for some games
Deadline
It's a bit similar to noop but handles requests a little differently: it creates FIFO queues (batches of around 5) and gives each a turn by assigning requests a deadline. It handles multiple light I/O operations really well.
Ideal for those with medium storage usage.
Might suit better than ROW/CFQ for those who don't have heavy or parallel I/O operations.
It is said that when the CPU is overloaded, the set of requests that may miss their deadline is largely unpredictable.
SIO
It's a mix of noop and deadline. It's known for the simplicity with which it handles I/O requests, and it handles operations one at a time.
Ideal for those with light-to-medium storage usage.
Many apps don't need to read and write at the same time, so it's ideal for those apps.
Ideal for doing one thing at a time, like only downloading, playing video or music, or taking photos/video.
I use SIO for the external storage because that's where I keep my games, photos, videos and music, and I naturally can't play music while playing a game or watch a video while listening to music, which suits a scheduler that does one operation at a time.
Zen
Zen is also said to be simple, and is the closest to noop. From my observation it has notably more speed for I/O reads and writes (it's the only one with a noticeable difference in benchmarks), but it uses the CPU a lot if the I/O operation is heavy, which can cause more lag than BFQ. It does one operation at a time.
Might be good for games that read a lot (like graphics data) since it's fast. You don't have to worry much about Zen's high CPU usage making a game lag, because most I/O operations are short, and once they are done it won't use the CPU until the next operation (need to test more).
Ideal for burst shots and HD recording as well.
Many apps have short/light I/O operations, so you won't face lag but rather an increase in speed compared to SIO or noop (sometimes faster than the others, too).
Not ideal for multitasking like browsing and downloading at the same time
VR
It's said to treat each request fairly by giving it a deadline, but its performance fluctuates, so sometimes it's the fastest scheduler and sometimes just plain slow.
Not sure if it handles multiple operations well or one at a time... I need to research more.
Not sure what it's ideal for, hmm... maybe for those who don't have a fixed usage pattern.
FAQ
When deciding on an I/O scheduler for each storage, treat their usage separately. E.g. I listen to music (external with SIO) and use Tumblr (internal with ROW); even though SIO does things one at a time, there is no music stuttering. But if my music were on internal storage with SIO, there would be stuttering.
If you have too many different uses and can't stick with one I/O scheduler, use Performance Profile by h0rn3t. It can change the I/O scheduler and other settings depending on the app: http://forum.xda-developers.com/xposed/modules/xposed-performance-profile-t2723739
Since most external SD cards are slow, I recommend a scheduler that does operations one at a time.
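Putting that internal/external split into practice: schedulers are per block device, so each storage can get its own. A sketch with the usual device names (check yours, they can differ):
Code:
# Internal storage (usually mmcblk0): a scheduler that juggles multiple operations
echo row > /sys/block/mmcblk0/queue/scheduler
# External SD card (usually mmcblk1): a simple one-at-a-time scheduler
echo sio > /sys/block/mmcblk1/queue/scheduler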
As for battery usage, it doesn't make much of a difference, because most I/O operations aren't that long unless you are downloading or something. Battery usage depends on the CPU usage caused by the scheduler, so:
SIO < Noop < Deadline < VR < ROW <= CFQ < Zen <= BFQ
SIO supposedly has the least battery usage while Zen and BFQ have the most.
And that’s the end of my guide
Do comment and share your opinions or preference
If you find any info incorrect or that I could add, do tell
Nice info mate, maybe you can add more info on governors, and which scheduler suits which governor. Just a tip!
Sent from my GT-N7100 using Tapatalk

[GUIDE] Tips for Getting the Most from Your Custom Kernel

These tips are based on my study and experience of using different programs and techniques to get solid and stable performance from custom kernels. It's a long read, but a good read if you really want to minimize heat and maximize performance. This information works on any custom kernel that can be used with Synapse. This guide is aimed at beginner and intermediate users who are interested in custom kernel settings but don't know where to start. These are the most commonly optimized settings on Android.
***DVFS Rule #2 is exclusive to Samsung devices (so if you don't have a Samsung device don't use it!)***
Rule 1: What works for some won't work for others - This has always been the most important rule of custom kernels. One person using the Barry Allen governor, overclocked to 3.07 GHz and not using intelliplug, may never hit 70°C, while your phone on those same settings might immediately reboot. All Synapse changes made to a device must be tested by you to ensure compatibility with your hardware. Some of you may be curious, if this is the case, why Google would have everyone use the interactive governor by default. I would respond that they originally assigned everyone to OnDemand while they worked to make a better and more compatible Interactive governor. But as you know there are a LOT of governors out there, and the only way to know what is best for your device is to read and to test. The good news is that once you find something that works well you can stick with it as long as you own the device.
Rule 2: DVFS (Dynamic Voltage and Frequency Scaling) Disabler - This is mentioned in a few threads, but bears repeating. Samsung uses DVFS to control your CPU frequencies and voltage levels. This isn't an issue when you're running a stock kernel because it works in conjunction with MP-Decision to decide on the proper frequency level. When you're running Emotion kernel you don't want DVFS to control anything. You want it to let Faux's Intellithermal work together with the kernel routines to shut down cores it doesn't need.
R2 Part 1: If your phone is running hot: Install Xposed, download Wanam Xposed (enable the module), Open Wanam, click "Advanced" menu, Uncheck "Enable TouchWiz DVFS" and reboot.
R2 Part 2: Following the reboot open Synapse, scroll over to CPU Drivers scroll down and put a checkbox in "Enable Intelli-hotplug", leave the rest alone, and click the check mark at the top of the screen to save changes. Scroll over one more screen to "Thermal" and put a check in the box, "Enable or Disable Intelli Thermal Control" and leave the settings below as they are unless you know what you are doing.
Note: The phone will still be warm for a while, but if you give the phone a break and allow the heat to dissipate it should run cooler and more consistently, with the added bonus of shutting down 3 out of the 4 cores, unlike MP-Decision which always leaves 2 cores live. After doing this, if you notice performance loss or stuttering it's time to find a new governor that makes the most of these settings. Which leads me to Rule 3....
Rule 3: Choosing a new governor - This part is going to require some reading on your part, but I can assure you that it is worth your time. This guide http://forum.xda-developers.com/general/general/ref-to-date-guide-cpu-governors-o-t3048957 by @gsstudios is OUTSTANDING (please thank him while you are there) and carries a LOT of information about the governors and other performance topics. So find something you like from the guide descriptions to try out and leave it in place for 24 hours or more, as long as you aren't experiencing excessive lag or reboots. If it performs well under normal usage the next step is to watch battery life. There won't be a single governor that does everything perfectly, but finding something that fits your needs is key. And it's OKAY if you decide to stick with Interactive; it is a very well made governor and many are based on that design. I equate it to finding a good pair of shoes: lots of things will protect your feet from the ground, but shoes that you enjoy and that work well make all the difference in the world.
NOTE: You can use Antutu for stability testing but do not pick a new governor and then run benchmarks and expect top score unless you chose the performance governor. Antutu specifically maxes out your device voltage and frequency to see what the device can handle which is not what intelliplug is for. If you are following this guide benchmarks are the least of your concerns.
Rule 4: Don't forget your Scheduler - Believe it or not, everything runs off of your phone's storage. So using an optimized scheduler for your device can make great strides in battery and performance improvements. Unlike choosing a governor, you can absolutely benchmark your flash memory to find out which scheduler works best for you. I only use one tool, and I am going to shamelessly plug it here: I use Android Tuner to benchmark my flash speed with different schedulers and it works great for me. How it works:
Note: To help with choosing where to start with schedulers you can use the guide I linked to above, which will give you an idea of what to use. That guide has an exceptional amount of information about how these types of schedulers work and what you should use to get what you want from your phone. What follows is my advice for testing benchmark write performance; those steps are completely optional, but it never hurts to see how your device does with your chosen scheduler.
R4 Part 1: Launch Android Tuner, from the main menu accept root, acknowledge that root is powerful, scroll all the way to the far left screen, choose "SD Card Read Speed", click "Benchmark" in the center, and then notice at the top you have /storage/emulated/0, which is your faster internal memory (which you should optimize first), and then /storage/ExtSdCard, which you can benchmark second.
R4 Part 2: When you are ready to benchmark choose your "I/O Scheduler" from the drop down menu and click "Run". Your results will be listed with the fastest read-ahead size on the left and the speed at which it read on the right. Higher = Better -- Then you would choose the other test size (10MB in my case) and you see which readahead value is the fastest for those files. If you manage to get a readahead that the 100MB and 10MB test highest values are the same, then you've won the lottery. Since that is almost impossible try to remember what your top 5 fastest are and choose the one that performs well in both tests. If you want to keep testing and check other schedulers for raw read speed just switch them at the top and re-run both tests.
After you've found which scheduler gives you the performance you want, go back to Synapse, scroll to the "I/O" tab, pick your scheduler and readahead value, and finally click the check mark to apply it.
Rule 5: Undervolting (IMO) does more harm than good - I'll let you Google it but it follows the rule of diminishing returns. You might be running Synapse and are able to undervolt 75mV and still cruise through menus and apps like it's no problem. But I promise you at some point the CPU is going to call for that voltage you starved it of and it will reboot. It's not a matter of if it will happen, but when it will happen. This comes down to your personal silicon and I do not recommend undervolting at all. Other more advanced users may disagree with me, but for the average user there are better ways to save battery and negate heat than starving your CPU and GPU at a kernel level. Especially with DVFS disabled.
And that's all I have for you. These tips work for any custom kernel that uses Synapse to manage its settings. I've personally tested my setup with Emotion and the kernel we are not to speak of on XDA. If you like the idea of getting the most out of your device then a custom kernel might be for you, but it requires a lot of patience. If you just need something that works I definitely recommend the stock kernel, because they really did a good job with the Lollipop release of our kernel.
I hope this helps some of you and I hope that everyone can take something away from this guide. Feel free to let me know if this helped you in some way or if you need some guidance on the topic. I'll do my best to backup my claims with proper research and documentation.
Thanks
Google - For research help and a great phone OS
Linux - The foundation of this OS
gsstudios - For his outstanding guide and work on researching this information (Definitely hit 'Thanks' on his post)
Note 4 ROM and Kernel Devs - we are very lucky to have a lot of great devs for our device; their work is appreciated and gives me something to write about
XDA and their members - where I've learned most everything about one of my favorite operating systems

[ROOT] Advanced Tuning

This thread is for the fortunate subset of 5th Gen Fire devices that are rooted and rocking a custom ROM. It should also work on rooted FireOS (5.3.1 and below) that have both ads and OTA updates blocked.
There have been numerous posts regarding uneven performance while multitasking along with sluggish response after waking the device from a long slumber. Most recognize this is due to excessive swapping associated with limited user addressable RAM. While there are a number of incremental 'tweaks' that can marginally improve this behavior my objective was to realize a more substantial improvement with minimal effort, knob turning and side effects. To date I have realized the benefit (minimal lag; responsiveness approaching devices with twice the RAM; woohoo!) but still working on the automation that will make it largely transparent. Lacking the time to work on the latter I thought it best to toss out the high level config and let others, if interested, work through both validation and implementation details.
As an aside, I have used the same technique on a 2nd gen HD running CM 11 that had been shelved for many months due to the same issues. It now hums along at a respectable pace and is once again a joy to use.
The secret sauce is simple: expand zram space allocation and add a small, secondary swap file in a normally unused location in permanent storage.
Tools (or adb/shell/terminal commands for those with furry chests):
- EX Kernel Manager (EXKM) or other tool/technique that can manage zram parameters (note: I find current builds of Kernel Adiutor too unstable for this work)
- Apps2SD Pro or other tool/technique that can create/manage traditional swap files and swap space priorities
- BusyBox Installer (v1.27.2+) or other tool/technique to insure startup scripts are properly executed
- L Speed (optional) - for ease of implementing a few discretionary performance tweaks
- DiskInfo PRO (optional) - visualize partition utilization
- RAM Truth (optional) - simple app to visualize RAM utilization
Technique (highly abbreviated):
- boot device to rooted ROM; install above tools or equivalents
- use EXKM to resize zram to 128 MB (note: zram must be temporarily disabled)
- use Apps2SD to:
* add a static, 128 MB swap file in the cache partition which remains largely unused with custom ROMs
* important: reassign swap file priorities (button at top right): 0 for the static file; 1 for zram
* increase swappiness to 100 if necessary (EXKM can also be used to set swappiness and other VM parameters)
* verify both swap spaces are enabled via sliders
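For anyone who prefers a shell to the apps above, here is a rough sketch of the same configuration done by hand. It assumes root, BusyBox, and an existing /dev/block/zram0; the sizes and priorities mirror the steps above, but double-check the paths and whether your BusyBox build supports swapon -p:
Code:
# Resize zram to 128 MB (it must be reset while unused), then re-enable at priority 1
swapoff /dev/block/zram0
echo 1 > /sys/block/zram0/reset
echo $((128 * 1024 * 1024)) > /sys/block/zram0/disksize
busybox mkswap /dev/block/zram0
busybox swapon -p 1 /dev/block/zram0
# Create a static 128 MB swap file in the (mostly unused) cache partition, priority 0
busybox dd if=/dev/zero of=/cache/swapfile bs=1048576 count=128
busybox mkswap /cache/swapfile
busybox swapon -p 0 /cache/swapfile
# Bias the kernel toward using swap
echo 100 > /proc/sys/vm/swappiness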
Note to geeks: I understand how swappiness, vcache pressure and other virtual memory tunings really work; let's not debate that here. Same with the merits of running a static swap file in combination with zram or the 'dangers' of placing that file in the volatile cache partition. We're talking a hand held device with very modest resources...not the server room with a 99.9x SLA. Yes, zswap would be better. However ...
Optional tweaks:
- use EXKM or L Speed to set LMK parameters to: 24, 32, 40, 48, 56, 64
- use EXKM or L Speed to set write deferral (aka 'laptop mode') to 5 sec
- toggle KSM off/on in L Speed (sets performance enhancing parameters)
- with zram disabled, enable the zram tweak in L Speed, which will establish a 96 MB space along with other optimizations; I find the smaller size ideal for my workflow; YMMV. zRAM size can be set with EXKM or another kernel manager.
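The same optional tweaks can be applied directly from a shell if you'd rather skip the apps; a hedged sketch, assuming the LMK values above are in MB (as most kernel managers display them) and that the usual sysctl paths exist on your kernel:
Code:
# Low memory killer thresholds: the parameter takes 4 kB pages, so
# 24,32,40,48,56,64 MB become MB * 256 pages
echo 6144,8192,10240,12288,14336,16384 > /sys/module/lowmemorykiller/parameters/minfree
# 'Laptop mode' write deferral (value treated as seconds on typical builds)
echo 5 > /proc/sys/vm/laptop_mode
# Dirty-ratio alternative mentioned later in the thread
echo 30 > /proc/sys/vm/dirty_ratio
echo 15 > /proc/sys/vm/dirty_background_ratio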
Challenges:
While the options exist, none of the tools noted above can reestablish custom zram space or automatically create a static swap file on boot. I believe this is a kernel issue but have not ruled out interference by Lineage 12.1, which is the ROM I have been testing with. Unfortunately, I lack the time (and quite frankly motivation) to toss Nexus or another ROM onto a spare device to verify the culprit. I might do a bit more testing on my HD 7, which uses a different kernel and ROM. --> Turns out an old version of BusyBox was the culprit; updating to 1.27.2 solved the problem, allowing the suggested configuration to be automatically reestablished on reboot. I added my favorite BusyBox installer to the prerequisite tools.
Another issue is the potential for maintaining 'stale' anon pages in zram for a period of time, but that's a left-field item that probably won't affect most users. A quick fix is occasionally swiping away all apps.
Provide discussion/feedback in this thread. I may or may not respond depending on available time. I love a deep dive (shared above) but once the goal has been reached my interests move elsewhere.
Edit: struck-out references to L Speed after developer/maintainer acknowledged "cooperation" with Kingo Root team (borderline malware).
Quick follow-up: I continue to enjoy benefits noted in the OP with a dual cache configuration. Device remains responsive after waking and typically returns to 'full' performance within a few seconds. I can easily switch between a handful of apps (browser, mail, Play Store, XDA labs, etc) with minimal lag and context preservation; no reloading web pages after switching away. No notable impact on battery life. Really no disadvantages at all - at least with my work flows.
Regardless of tuning one has to keep in mind the modest hardware resources on Fire 7s. Load up a game or two or a couple heavy Amazon/Google apps and things go south pretty quick. That said, responsiveness far better than any stock config, even when the device is clearly overburdened.
Another quick note. Simply adding a classic swap file (suggest 128 MB) to the largely unused cache partition can yield a decent improvement in multi-tasking performance without the complexity of tinkering with zRAM. All steps can be accomplished with the free tool Apps2SD or equivalent. Happy to document if there is sufficient interest.
Note: Be sure to change zRAM swap priority to "1" so it receives preferential treatment over the classic swap file. zRAM will almost always be faster than classic swap, but there is only so much of it. The swap file will be used once zRAM is fully utilized (not entirely accurate but generally true).
FWIW - deprecated references to the L Speed app in the OP after the developer acknowledged "cooperation" with the Kingo Root team. While nefarious behavior is unlikely, there are other options that avoid any potential conflict of interest.
Davey126 said:
...
Technique (highly abbreviated):
- boot device to rooted ROM; install above tools or equivalents
- use EXKM to resize zram to 128 MB (note: zram must be temporarily disabled)
- use Apps2SD to:
* add a static, 128 MB swap file in the cache partition which remains largely unused with custom ROMs
* important: reassign swap file priorities (button at top right): 0 for the static file; 1 for zram
* increase swappiness to 100 if necessary (EXKM can also be used to set swappiness and other VM parameters)
* verify both swap spaces are enabled via sliders
Note to geeks: I understand how swappiness, vcache pressure and other virtual memory ...
When you say Cache partition for the swap file are you referring to "/cache" or the second partition for app2sd?
rjmxtech said:
When you say Cache partition for the swap file are you referring to "/cache" or the second partition for app2sd?
"/cache" partition which resides on faster internal storage. Anything on external storage will be significantly slower due to interface limitations.
@Davey126 it has been about a day or two and I can confirm that by following these instructions it has brought new life into my KFFOWI 5th gen. This paired with some L Speed Tweaks (even though you say not to trust them, I opted to use it for a few performance tweaks) and the Lineage ROM from @ggow makes my user experience on the device quite pleasing.
rjmxtech said:
@Davey126 it has been about a day or two and I can confirm that by following these instructions it has brought new life into my KFFOWI 5th gen. This paired with some L Speed Tweaks (even though you say not to trust them, I opted to use it for a few performance tweaks) and the Lineage ROM from @ggow makes my user experience on the device quite pleasing.
Thanks for the feedback.
As for L Speed I don't distrust the current developer/maintainer but no longer feel comfortable providing an implicit endorsement. Who you associate with makes a difference IMHO. Each person needs to make their own call. There is no magic in L Speed; it simply offers a convenient UI to various well publicized system 'tweaks' that can be implemented using other tools/techniques.
Davey126 said:
Optional tweaks:
- use EXKM or L Speed to set LKM parameters to: 24, 32, 40, 48, 56, 64
- use EXKM or L Speed to set write deferral (aka 'laptop mode') to 5 sec
- toggle KSM off/on in L Speed (sets performance enhancing parameters)
- with zram disabled enable zram tweak in L Speed which will establish a 96 MB space along with other optimizations; I find the smaller size ideal for my workflow; YMMV zRAM size can be set with EXKM or another kernel manager.
Thanks for the guide, it already seems to have helped a lot with smoothness but I wanted to know how to set these options using EXKM.
I'd never heard of the app before today and I've had a good look through the menus but can't seem to find somewhere to set these values. I'm guessing these are the usage % values used by the CPU governor to jump up and down power states?
NeuromancerInc said:
Thanks for the guide, it already seems to have helped a lot with smoothness but I wanted to know how to set these options using EXKM.
I'd never heard of the app before today and I've had a good look through the menus but can't seem to find somewhere to set these values. I'm guessing these are the usage % values used by the CPU governor to jump up and down power states?
No, governor tuning is a different beast not addressed in the OP (although I do that on some higher end devices).
With regard to EXKM:
- LMK values can be set under memory -> low memory killer
- KSM toggle can also be found in the memory section
- it appears laptop mode can not be set in EXKM (not that important)
As an alternative to laptop mode you can twiddle 'dirty ratio' and 'dirty background ratio' in EXKM. Suggest setting to 30 and 15, respectively.
Edit: you may also want to take a peek at Kernel Adiutor (correct spelling). While I find it a bit flaky it exposes more controls vs EKKM and costs less too.
Davey126 said:
No, governor tunning is a different beast not addressed in the OP (although I do that on some higher end devices).
With regard to EXKM:
- LMK values can be set under memory -> low memory killer
- KSM toggle can also be found in the memory section
- it appears laptop mode can not be set in EXKM (not that important)
As an alternative to laptop mode you can twiddle 'dirty ratio' and 'dirty background ratio' in EXKM. Suggest setting to 30 and 15, respectfully.
Edit: you may also want to take a peek at Kernel Adiutor (correct spelling). While I find it a bit flaky it exposes more controls vs EKKM and costs less too.
Ah, LMK, not LKM. Thanks again.
Also, just a small suggestion, but wouldn't it be better to remove the references to L-Speed and leave an edit message at the bottom rather than having the red, struck-through text in the middle?
NeuromancerInc said:
Ah, LMK, not LKM. Thanks again.
Also, just a small suggestion but wouldn't it be better to remove the references to L-Speed and leave an edit message at the bottom rather than having the red, striked through text in the middle?
Thanks for noting LKM/LMK typo in OP - fixed that.
I will likely clean-up the OP at some point as there are other refinements (eg: tweaking dirty ratios) that may prove beneficial to a larger community.
Davey126 said:
Thanks for noting LKM/LMK typo in OP - fixed that.
I will likely clean-up the OP at some point as there are other refinements (eg: tweaking dirty ratios) that may prove beneficial to a larger community.
I was wondering what differences need to be made for a 7th gen hd 10. I know this guide is written for a 5th gen (1gig RAM, 8 gig drive), but I have a 7th Gen (2gig RAM, 32GIG drive) with 2gig zram (priority 1) and 4 gig swap on the /data partition (priority 2). What would be the best LMK values? Also, is it ok to have the swap on /data vs /cache (my /cache only has 400mb)?
Thanks for any help!
edit: in the OP, it says to set laptop mode using L-speed, and then L-speed is crossed out (I understood why), but no alternative is listed for doing this. I just wanted to add that you can use kernel adiutor to change laptop mode. It's on virtual memory settings.
mistermojorizin said:
I was wondering what differences need to be made for a 7th gen hd 10. I know this guide is written for a 5th gen (1gig RAM, 8 gig drive), but I have a 7th Gen (2gig RAM, 32GIG drive) with 2gig zram (priority 1) and 4 gig swap on the /data partition (priority 2). What would be the best LMK values? Also, is it ok to have the swap on /data vs /cache (my /cache only has 400mb)?
Thanks for any help!
edit: in the OP, it says to set laptop mode using L-speed, and then L-speed is crossed out (I understood why), but no alternative is listed for doing this. I just wanted to add that you can use kernel adiutor to change laptop mode. It's on virtual memory settings.
It appears you have priorities reversed. Higher values receive preference. The magnitude of the difference is irrelevant. zRAM is considerably faster than eMMC based storage; the latter should only be used when zRAM is exhausted or momentarily unavailable for whatever reason.
The container sizes also seem excessive. 2 GB of zRAM effectively leaves no uncompressed memory on a HD 10, which is highly inefficient. I wouldn't go over ¼ of available RAM or ~½ GB. Toss in a 500 MB eMMC-based (overflow) swap file and you're good to go. If you regularly use more than 1 GB of swap on a relatively low-end Android device then something else is amiss.
I am aware Kernel Adiutor can set laptop mode but did not want to introduce another tool into the mix...especially one that has demonstrated inconsistent behavior. FWIW - recent testing suggests 1-2 sec may be a better choice vs the 5 sec mentioned in the OP as the latter may trigger lockouts during sustained writes (eg: large file download on a fast connection). I currently use 1 sec and happy with the results. I will likely update the OP with this info once satisfied that the benefit is worth the effort.
All things being equal I see no reason to change LMK values suggested in the OP. Especially given the availability of zRAM and swap.
Thanks for these instructions, Davey126!
I just tried this process on my 5th Gen Fire 7" which I recently installed with the LineageOS ROM. I was not familiar with the EX Kernel Manager and Apps2SD Pro tools, but it was reasonably clear how to make the settings changes you recommend.
I added the 128 MB swap under /cache and increased the zram swap to 128 MB, setting it to priority 1. Maybe it's my imagination, but my device does seem a lot snappier when switching between running applications, and better at returning to previously displayed data in applications instead of reloading pages.
Cheers!
Matrey_Moxley said:
Thanks for these instructions, Davey126!
I just tried this process on my 5th Gen Fire 7" which I recently installed with the LineageOS ROM. I was not familiar with the EX Kernel Manager and Apps2D Pro tools, but it was reasonably clear how to make the settings changes you recommend.
I added the 128Mb swap under /cache and increased the zram swap to 128Mb, setting it to priority 1. Maybe it's my imagination but my device does seem a lot snappier when switching between running applications, and better at returning to previously displayed data in applications instead of reloading pages.
Cheers!
Thanks for sharing first impressions. Time will tell if the benefits are durable; certainly have been for me with no adverse side-effects.
Another suggestion to reduce wake lag: install Greenify (or similar tool) and add commonly used apps to the action list even if not flagged as background abusers (you may need to override Greenify's sensible defaults via the gear icon). This prevents multiple apps from becoming simultaneously 'active' on wake which is a huge contributor to lag on lower end devices with limited resources (CPU and RAM). Hibernated apps will launch when needed with minimal delay and NO loss of context. Works a treat.
Be sure to add your favorite browser, mail, messaging and social media apps to the hibernation list, as all like to 'check in' after a long slumber.
Although Greenify can auto-hibernate apps on most devices (works best with Xposed Framework) I use an automated approach that invokes Greenify's widget when the screen goes off. There's still some momentary lag on wake but the device remains responsive which is a huge improvement.
Hi Davey126,
thx for the guide, it seems to work awesome.
However, I have one problem: the settings in EXKM regarding "zRAM Size", "dirty ratio" and "dirty background ratio" are lost after rebooting the device. Is there a way to make the settings reboot-proof? Interestingly, for the "LMK" settings there is an option "Apply at boot time", which does the trick for me, but only for the LMK options.
Kind regards,
Stephan
IronMan1977777 said:
Hi Davey126,
thx for the guide, it seems to work awesome.
However, i have the one problem thats the settings in EXKM regarding to "zRAM Size", "dirty ratio" and "dirty background ratio" are lost after rebooting the device. Is there a way to make the settings reboot proof? Interestingly for the "LKM" settings there is an option "Apply at bootime", which does the trick for me, but only for the LKM options.
Kind regards,
Stephan
Likely BusyBox is missing or outdated. Try installing this (I use the pro version).
Davey126 said:
Likely BusyBox is missing or outdated. Try installing this (I use the pro version).
Ok. I bought BusyBox Pro and updated to Version 1.28.1-Stericson. Still all settings in EXKM besides LMK get lost after rebooting the device ...
IronMan1977777 said:
Ok. I bought BusyBox Pro and updated to Version 1.28.1-Stericson. Still all settings in EXKM besides LMK get lost after rebooting the device ...
- verify BusyBox is properly installed w/no conflicting builds
- uninstall/reinstall EXKM
- test if behavior can be duplicated with another (free) kernel manager like KA

[MODULE] KTweak - Backed by evidence

Another "kernel optimizer"?
No. Well, yes. However, a "kernel optimizer" is a poor way to put it. KTweak performs kernel adjustments based on facts and evidence, unlike other optimizers with poorly written or heavily obfuscated code. For example:
LSpeed is almost 4000 lines long; completely unnecessary.
NFS Injector uses compiled binaries that are closed source... yuck. Not to mention the typos in the README. This one is hard to look at.
LKT sets random nonsensical build.props that likely don't even exist.
MAGNETAR uses (you guessed it) compiled binaries that install themselves to your /system/etc/ directory (???). Great idea, install an external closed source, compiled binary to the system partition.
Need I go on?
What's different about KTweak?
Unlike other "kernel optimizers", KTweak is:
Concise, at around 200 lines long,
Entirely open source with no compiled components,
Backed by logic and evidence,
Designed by an experienced kernel developer,
Non-intrusive, being completely systemless.
Benchmarks
The following benchmarks were performed on a OnePlus 7 Pro running the stock kernel provided by the OEM on Android 10.
hackbench -pTl 4000 (lower is better)
Without KTweak: ~20-50 seconds on average
With KTweak: ~4-6 seconds on average
perf bench mem memcpy (lower is better) (average of 50 iters)
Without KTweak: 14.01 ms
With KTweak: 10.40 ms
synthmark (voicemark) (higher is better)
Without KTweak: 374.94
With KTweak: 383.556
synthmark (latencymark little) (lower is better)
Without KTweak: 10
With KTweak: 10
synthmark (latencymark big) (lower is better)
Without KTweak: 12
With KTweak: 10
The Tweaks
In order to remain genuine, I have committed to explaining each and every kernel tweak that KTweak applies. Grab your coffee, this could take a while.
kernel.perf_cpu_time_max_percent: 25 --> 5
This is the maximum CPU time, as a percentage, that lengthy perf event processing may consume. If this percentage is exceeded (meaning perf event processing used too much CPU time), the polling rate is throttled. This is reduced from 25% to 5%. We can afford inaccuracies with perf events in exchange for more time that a foreground task can use.
kernel.sched_autogroup_enabled: 0 --> 1
The Linux Kernel scheduler (CFS) distributes timeslices to each active task. For example, if the scheduling period is 10ms, and there are 5 tasks running, CFS will give each task 2ms of runtime for that scheduling cycle. However, this means that a SCHED_OTHER task may compete with a SCHED_FIFO task. Autogrouping groups task groups together during scheduling. For example, if the scheduling period is 10ms, and there are 6 SCHED_OTHER tasks running and 4 SCHED_FIFO tasks running, the SCHED_OTHER tasks will get 50% of the runtime and the SCHED_FIFO tasks will get the other 50%. For each task group, the timeslices are once again divided. The SCHED_FIFO tasks will get 12.5% runtime and the SCHED_OTHER tasks will get ~8.3% runtime. This usually offers better interactivity on multithreaded platforms. See scheduling priority documentation: https://man7.org/linux/man-pages/man7/sched.7.html See autogrouping off: https://www.youtube.com/watch?v=uk70SeGA7pg See autogrouping on: https://www.youtube.com/watch?v=prxInRdaNfc
kernel.sched_enable_thread_grouping: 0 --> 1
To my knowledge, based on the limited documentation of this tunable, this is basically autogrouping for thread groups.
kernel.sched_child_runs_first: 0 --> 1
When forking a child process from the parent, execute the child process before the parent process. This usually shaves down some latency on task initializations, since most of the time the child process is doing some form of heavy lifting.
kernel.sched_downmigrate: 20 20
Do not allow tasks to migrate back down to a lower-power CPU until the estimated CPU utilization would go below 20% on said CPU. This means tasks will stay on higher-performance CPUs for longer than usual.
kernel.sched_upmigrate: 80 80
Similar to the previous tunable, do not allow CPUs to migrate to the higher-performance CPUs unless the utilization goes above 80%.
kernel.sched_group_downmigrate: 20
The same as kernel.sched_downmigrate, except for whole task groups.
kernel.sched_group_upmigrate: 80
The same as kernel.sched_upmigrate, except for whole task groups.
kernel.sched_tunable_scaling: 0
This is more of a precaution than anything. Since the next few tunables will be scheduler timing related, we don't want the scheduler to scale our values for multiple CPUs, as we will be providing CPU-agnostic values.
kernel.sched_latency_ns: 10000000 (10ms)
Set the default scheduling period to 10ms. If this value is set too low, the scheduler will switch contexts too often, spending more time internally than executing the waiting tasks.
kernel.sched_min_granularity_ns: 1000000 (1ms)
Set the minimum task scheduling period to 1ms. With kernel.sched_latency_ns set to 10ms, this means that up to 10 tasks may execute within the 10ms scheduling period before it is exceeded.
kernel.sched_migration_cost_ns: 500000 (0.5ms) --> 1000000 (1ms)
Increase the time that a task is considered to be cache hot. According to RedHat, increasing this tunable reduces the number of task migrations. This should reduce time spent balancing tasks and increase per-task performance. See RedHat: https://www.redhat.com/files/summit...tuning-of-Red-Hat-Enterprise-Linux-Part-1.pdf
kernel.sched_min_task_util_for_boost: 25
This value affects whether tasks should be migrated to a more performant CPU when their utilization is above this amount. Allow tasks to be migrated upwards if the user is triggering a touch boost and the task is above 25% utilization.
kernel.sched_min_task_util_for_colocation: 50
This value is the same as the former, except it occurs when the user is not touching the screen. We shouldn't upmigrate tasks if the user isn't actively interacting with them (i.e. video streaming).
kernel.sched_nr_migrate: 32 --> 64
When migrating tasks between CPUs, allow the scheduler to migrate twice as many as usual. This should increase scheduling latency marginally, but increase the performance of SCHED_OTHER tasks.
kernel.sched_schedstats: 1 --> 0
Disable scheduler statistics accounting. This is just for debugging, but it adds overhead.
kernel.sched_wakeup_granularity_ns: 1000000 (1ms) --> 5000000 (5ms)
Require the current task to be surpassing the new task in vruntime by 5ms instead of 1ms before preemption occurs. This should reduce jitter due to less frequent task interruptions.
kernel.timer_migration: 1 --> 0
Disable the migration of timers among CPUs. Usually, when a timer is created on one CPU, it would be able to be migrated to another CPU. However, this increases realtime latencies and scheduling interrupts. It can be turned off.
net.ipv4.tcp_ecn: 2 --> 1
Enable Explicit Congestion Notification for incoming and outgoing negotiations. This reduces packet losses.
net.ipv4.tcp_fastopen: 3
Enable data transmission during the SYN exchange in TCP negotiation (TCP Fast Open). This reduces packet latencies. Enable it for senders and receivers.
net.ipv4.tcp_syncookies: 1 --> 0
This tunable, when enabled, prevents denial of service attacks by allowing connection ACKs to be tracked. However, this is more-or-less unnecessary for a mobile device. It is more applicable for servers. Disable it.
net.ipv4.tcp_timestamps: 1 --> 0
RedHat claims that TCP timestamps may cause performance spikes due to time accounting code on high-performance connections. Disable it. See RedHat: https://access.redhat.com/documenta...ml/tuning_guide/reduce_tcp_performance_spikes
vm.compact_unevictable_allowed: 1 --> 0
Do not allow compaction of unevictable pages. With this set to 1, more compactions can happen at the cost of small page fault stalls. Turn this off to compact less but avoid aforementioned stalls.
vm.dirty_background_ratio: 5 --> 10
Start writing back dirty pages (pages that have been modified but not yet written to the disk) asynchronously at 10% memory dirtied instead of 5%. Writing dirty pages back too early can be inefficient and overutilize the storage device.
vm.dirty_ratio: 20 --> 30
This tunable is the same as the former, but it is the ceiling for synchronous dirty writeback, meaning all I/O will stall until all dirty pages are written out to the disk. We usually won't need to worry about hitting this value, as the background writeback can catch up before we reach 20% memory dirtied. But as a precaution (i.e. heavy file transfers), increase this value to a 30% ceiling to prevent visible system stalls. We are sacrificing available memory in exchange for a reduced chance of a brief system stall.
vm.dirty_expire_centisecs: 300 (3s) --> 1000 (10s)
This is the longest that dirty pages can remain in the system before they are forcefully written out to the disk. By increasing this value, we can allow the dirty background writeback to take its time asynchronously, and avoid unnecessary writebacks that can clog the flusher thread.
vm.dirty_writeback_centisecs: 500 (5s) --> 0 (0s)
Do not periodically writeback data every 5 seconds. Instead, leave it to the dirty background writeback to wake up when the dirty memory of the system hits 10%. This allows the dirty pages to stay in memory for longer, possibly increasing cache locality as the page cache is still available in memory.
vm.extfrag_threshold: 500 --> 750
Compact memory more often, even if the memory allocation was estimated to be due to a low-memory status. This lets us put more data into RAM at the expense of running compaction more often. This is a worthy tradeoff, as it reduces memory fragmentation, which is incredibly important for ZRAM.
vm.oom_dump_tasks: 1 --> 0
Do not dump debug information when (or if) we run out of memory. If we have a lot of tasks running, and are OOMing often, then this overhead can add up.
vm.page-cluster: 3 --> 0
Disable reading additional pages from the swap device (in most cases, ZRAM). This is the same philosophy as disabling readahead.
vm.reap_mem_on_sigkill: 0 --> 1
When we kill a task, clean its memory footprint to free up whatever amount of RAM it was consuming.
vm.stat_interval: 1 --> 10
Update /proc/stat information every 10 seconds instead of every second, reducing jitter on loaded systems.
vm.swappiness: 100 --> 80
Swap to ZRAM less often if we don't have to. ZRAM can become expensive due to constant compression and decompression. If we can keep some of the memory uncompressed in regular RAM, we can avoid that overhead.
vm.vfs_cache_pressure: 100 --> 200
This tunable controls the kernel's tendency to reclaim inodes and dentries over page cache. Inodes and dentries are information about file metadata and directory structures, while page cache is the actual cached contents of a file. By increasing this value to 200, we tell the kernel to prefer claiming inodes and dentries over the page cache, increasing the chance of a cache hit when referencing recently used data, while not polluting the RAM with less-important information.
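If you want to experiment with any of the values above outside the module, each one is just an ordinary sysctl; a minimal sketch of applying a couple by hand (KTweak's script does the equivalent, with compatibility checks this sketch omits):
Code:
# Two equivalent ways to set a sysctl; keys your kernel lacks simply fail
sysctl -w vm.swappiness=80
echo 80 > /proc/sys/vm/swappiness
echo 200 > /proc/sys/vm/vfs_cache_pressure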
Next Buddy
By scheduling the last woken task first, we can increase cache locality since that task is likely to touch the same data as before.
No Strict Skip Buddy
Usually, the scheduler will always choose to skip tasks that call yield(). However, these yielding tasks may be more important than the last or next buddy that is available. Do not unconditionally skip the skip buddy if we don't have to.
No Nontask Capacity
The scheduler decrements the perceived CPU capacity the longer the CPU has been idle. This means that an idle CPU may be skipped during task placement, and a task can be grouped onto a busier CPU. Disable this to improve task start latency.
TTWU Queue
Allow the scheduler to place woken tasks on their origin CPU, increasing cache locality when the waking CPU is non-local (i.e. a cache hit there would have been missed anyway). A sketch of toggling these scheduler features follows.
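These four toggles are normally written as feature names to the scheduler's debugfs interface. Below is a minimal sketch assuming debugfs is mounted at /sys/kernel/debug and the kernel still exposes the legacy sched_features file (newer kernels moved it to /sys/kernel/debug/sched/features); the exact token names follow the headings above and may differ on some kernels.
C:
/* Minimal sketch: toggle the scheduler features above via debugfs (as root).
 * Token names mirror the headings above; a NO_ prefix disables a feature. */
#include <stdio.h>

static void set_sched_feature(const char *token)
{
    FILE *f = fopen("/sys/kernel/debug/sched_features", "w");

    if (!f)
        return; /* debugfs not mounted or file not present */
    fprintf(f, "%s\n", token);
    fclose(f);
}

int main(void)
{
    set_sched_feature("NEXT_BUDDY");           /* prefer the last woken task */
    set_sched_feature("NO_STRICT_SKIP_BUDDY"); /* don't always honor the skip buddy */
    set_sched_feature("NO_NONTASK_CAPACITY");  /* ignore idle-decayed capacity */
    set_sched_feature("TTWU_QUEUE");           /* queue wakeups on the origin CPU */
    return 0;
}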
Governor Tweaks
hispeed_load: 90 --> 80: Jump to the higher frequency sooner (at 80% load instead of 90%), before a task nearing the top of the frequency list begins to starve or stutter.
hispeed_freq: maximum available frequency: Set the "higher freq" (referenced by hispeed_load) to the maximum frequency available to take advantage of race-to-idle; see the sketch after the boost tweaks below.
CAF CPU Boost Tweaks
input_boost_freq: 1.4 GHz (or the closest available frequency): a generic, universal boost frequency for the little cluster.
input_boost_ms: 250 ms: long enough to cover important interactive events such as taps, without consuming too much power. A sketch of setting both the governor values and these boost parameters follows.
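A minimal sketch of setting these values, assuming a CAF-style kernel where schedutil exposes per-policy hispeed_load/hispeed_freq nodes and the cpu-boost driver lives under /sys/module/cpu_boost/parameters/; the exact paths and the "cpu:freq" format vary between devices.
C:
/* Minimal sketch of the governor and input-boost values above. The sysfs
 * paths and the "cpu:freq" format are assumptions for CAF-style kernels. */
#include <stdio.h>

static void write_str(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");

    if (!f)
        return; /* node not present on this kernel */
    fprintf(f, "%s\n", value);
    fclose(f);
}

int main(void)
{
    char max_freq[32] = "0";
    FILE *f = fopen("/sys/devices/system/cpu/cpufreq/policy0/cpuinfo_max_freq", "r");

    if (f) {
        fscanf(f, "%31s", max_freq);
        fclose(f);
    }

    /* Jump to hispeed_freq at 80% load, and make hispeed_freq the max frequency. */
    write_str("/sys/devices/system/cpu/cpufreq/policy0/schedutil/hispeed_load", "80");
    write_str("/sys/devices/system/cpu/cpufreq/policy0/schedutil/hispeed_freq", max_freq);

    /* Boost the little cluster (CPU 0) to ~1.4 GHz for 250 ms on input events. */
    write_str("/sys/module/cpu_boost/parameters/input_boost_freq", "0:1401600");
    write_str("/sys/module/cpu_boost/parameters/input_boost_ms", "250");
    return 0;
}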
I/O
iostats: 1 --> 0: Disable I/O statistics accounting, which adds overhead.
readahead: 0: Disable readahead, which is intended for disks with long seek times (HDD), whereas mobile devices use flash storage with zero seek time.
nr_requests: 128 --> 512: Allow more I/O requests to be issued before flushing the queue, slightly increasing latencies but allowing more requests to be executed before being put to sleep.
noop / none: Use a scheduler with little CPU overhead to reduce I/O latencies, which is essential for fast flash storage (eMMC & UFS). A sketch of applying these queue settings follows.
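A minimal sketch of applying these queue settings, assuming the internal storage shows up as /sys/block/sda (typical for UFS); eMMC devices usually appear as /sys/block/mmcblk0 instead.
C:
/* Minimal sketch: apply the block-queue settings above to one device.
 * The device name (sda) is an assumption; adjust it for your storage. */
#include <stdio.h>

static void write_queue(const char *node, const char *value)
{
    char path[128];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/block/sda/queue/%s", node);
    f = fopen(path, "w");
    if (!f)
        return;
    fprintf(f, "%s\n", value);
    fclose(f);
}

int main(void)
{
    write_queue("iostats", "0");       /* no per-request statistics */
    write_queue("read_ahead_kb", "0"); /* no readahead on flash storage */
    write_queue("nr_requests", "512"); /* allow a deeper request queue */
    write_queue("scheduler", "none");  /* or "noop" on older kernels */
    return 0;
}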
ZRAM
ZRAM reduces disk wear by reducing disk writes, and also increases cache locality by allowing more data to fit in RAM at once. KTweak configures ZRAM to take up at most half of the available RAM on the system, which is a good ratio of RAM to ZRAM for a mobile device.
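As a rough illustration of that sizing rule (not the actual KTweak code), this sketch reads MemTotal from /proc/meminfo and prints the disksize that "half of RAM" would correspond to; it only computes the number and does not touch the ZRAM device.
C:
/* Illustration only: compute the "half of RAM" ZRAM disksize from MemTotal.
 * This does not resize anything; /sys/block/zram0/disksize takes bytes. */
#include <stdio.h>

int main(void)
{
    char line[128];
    unsigned long long mem_kb = 0;
    FILE *f = fopen("/proc/meminfo", "r");

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "MemTotal: %llu kB", &mem_kb) == 1)
            break;
    fclose(f);

    printf("ZRAM disksize for half of RAM: %llu bytes\n", mem_kb * 1024ULL / 2);
    return 0;
}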
Other Notes
You should know that KTweak applies its tweaks after 60 seconds of uptime so as to prevent Android's init from overwriting any values.
Contact
You can find me on telegram at @tytydraco. Feel free to email me at [email protected].
Downloads
All releases and the entire source code for KTweak are available on GitHub:
XDA:DevDB Information
KTweak, Tool/Utility for all devices (see above for details)
Contributors
tytydraco
Source Code: https://github.com/tytydraco/ktweak
Version Information
Status: Stable
Current Stable Version: v1.0.7
Stable Release Date: 2020-08-16
Created 2020-08-16
Last Updated 2020-08-16
What are the requirements to use this? Root with Magisk is a given - but Linux kernel version, Android OS version, device, etc?
MishaalRahman said:
What are the requirements to use this? Root with Magisk is a given - but Linux kernel version, Android OS version, device, etc?
The script adjusts only the tweaks that are compatible with your kernel version. It contains tweaks for both EAS and HMP, and kernel 3.18 and above is supported in testing; it likely supports even older kernels. Otherwise, it's totally universal.
KTweak now has an official Telegram channel for release information and changelogs: @ktweak
Thank you for your work and the great explanation!
Will this work on a Lineage kernel?
Hello there. I'm using v1.0.9. I updated and tried 1.1.0 but ended up with reboots after boot completed. I am using the NX kernel, which sets vfs_cache_pressure to 100. Might that be the cause?
Now back to 1.0.9 and everything seems fine. However, I had to uninstall and reinstall Magisk, because when I flashed 1.0.9 over 1.1.0, I was still experiencing problems.
lapirado said:
Thank you for your work and the great explanation!
Will this work on a Lineage kernel?
This should work on any kernel
myaslioglu said:
Hello there. I'm using v1.0.9. I updated and tried 1.1.0 but ended up with reboots after boot completed. I am using the NX kernel, which sets vfs_cache_pressure to 100. Might that be the cause?
Now back to 1.0.9 and everything seems fine. However, I had to uninstall and reinstall Magisk, because when I flashed 1.0.9 over 1.1.0, I was still experiencing problems.
Hi! Thanks for the report. Which device and Android version are you using? I have an idea of why this could be happening.
myaslioglu said:
Hello there. I'm using v1.0.9. I updated and tried 1.1.0 but ended up with reboots after boot completed. I am using the NX kernel, which sets vfs_cache_pressure to 100. Might that be the cause?
Now back to 1.0.9 and everything seems fine. However, I had to uninstall and reinstall Magisk, because when I flashed 1.0.9 over 1.1.0, I was still experiencing problems.
Hi myaslioglu,
I've released v1.1.1, which adds an additional 20-second sleep after Android reports that it has been initialized. This should prevent init and any post-boot init scripts from running alongside KTweak. I believe your issue stems from ZRAM resizing itself during bootup, when memory is most scarce, possibly causing your device to think it failed to boot correctly.
Please let me know if v1.1.1 fixes your issue. It is live on GitHub releases and Telegram.
tytydraco said:
Hi myaslioglu,
I've released v1.1.1, which adds an additional 20-second sleep after Android reports that it has been initialized. This should prevent init and any post-boot init scripts from running alongside KTweak. I believe your issue stems from ZRAM resizing itself during bootup, when memory is most scarce, possibly causing your device to think it failed to boot correctly.
Please let me know if v1.1.1 fixes your issue. It is live on GitHub releases and Telegram.
Hello. Sorry, I forgot to mention my device; it is an S8 Exynos, just in case. But 1.1.1 fixed the issue. Works perfectly! Thanks
myaslioglu said:
Hello. Sorry, I forgot to mention my device; it is an S8 Exynos, just in case. But 1.1.1 fixed the issue. Works perfectly! Thanks
1.1.2 got me reboots again. What has changed? Only the swappiness? It worked fine on 1.1.1 and did not work on 1.1.0, as mentioned before.
Could ZRAM be the issue?
1.1.1 made my OP7 Pro freeze with the Weeb kernel, and we have the same device, draco.
It doesn't work on the stock kernel of the Redmi Note 9S; it freezes a few seconds after boot, forcing me to hard restart.
Is it compatible with the J5 (2015)?
To those of you getting freezes, I have identified the cause to be related to ZRAM. I will push an update today that will remove ZRAM tweaking from the script.
The reason I believe this is happening is that KTweak tries to resize the ZRAM device. That requires all data currently in ZRAM to be decompressed and moved back into main memory. If we run out of memory during this process, we will freeze.
The solution is not to adjust the ZRAM size when using KTweak. Sorry for any inconvenience this may have caused. I'll get straight to fixing this as soon as possible.
I heard about your work from the XDA Telegram channel.
I read the info and thought about testing it, but some users reported freezes with the latest update. I'll test once the update with the freeze fix is out.
tytydraco said:
To those of you getting freezes, I have identified the cause to be related to ZRAM. I will push an update today that will remove ZRAM tweaking from the script.
The reason I believe this is happening is that KTweak tries to resize the ZRAM device. That requires all data currently in ZRAM to be decompressed and moved back into main memory. If we run out of memory during this process, we will freeze.
The solution is not to adjust the ZRAM size when using KTweak. Sorry for any inconvenience this may have caused. I'll get straight to fixing this as soon as possible.
I even disabled ZRAM on my OP7 Pro; it feels more fluid.
Is it working on the Mi 10 Pro?
Vivo V9
How can I root the Vivo V9 1723?
I tried many methods but nothing worked for me.
Please help
tyagis777 said:
How can I root the Vivo V9 1723?
I tried many methods but nothing worked for me.
Please help
Really? You created an account for this post?

[KERNEL] EAS Kernel for sagit (Dev discussion)

Hi there,
EAS (mainly the Energy Aware Scheduler) was first proposed by ARM a few years ago and is nowadays used more widely in custom ROMs, although OEMs didn't ship it. The main obstacle in developing an EAS kernel is the busy-cost data (energy model) for a specific CPU. Fortunately, the Pixel 2 uses the same CPU as our Xiaomi Mi 6, and Google shared the data in AOSP. Developers can use it directly for sagit. But EAS is not just a piece of data; we need drivers to make it work better for us.
The cpufreq governor that EAS is built around is the famous /schedutil/ governor. This governor is unique and very different from the others in that it picks frequencies based on the scheduler's view of system load. In our sagit kernel, there is already WALT (Window Assisted Load Tracking) code left by CodeAurora, which can be used to estimate the system load. So far, everything seems ready except for the boosting side of things.
SchedTune was born for that. It creates 5 cgroups for different kinds of tasks, one of which is /top-app/. Settings for this group significantly affect the user experience. However, AFAIK, there are still no userspace applications that control the interface, aka /schedtune.boost/. Its value can be altered from 0 to 100. Kernel developers often hard-code some values to trigger the boost action. But /top-app/ should not be boosted forever; for example, if you're watching a video or listening to music, a boost is not preferred for such sustained tasks. To solve this issue, Joshuous introduced a mechanism called "Dynamic SchedTune Boost" (see Ref). It dynamically sets the boost for each cgroup using a slot-based tracking system. It is perfect in itself. But if we go through all the boost techniques in the kernel, a giant, tightly coupled system appears and disturbs our mind.
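(As an aside before untangling the boost paths below: the interface itself is easy to exercise from userspace. Here is a minimal sketch that assumes the schedtune cgroup is mounted at /dev/stune, as on typical Android setups.)
C:
/* Minimal sketch: set schedtune.boost for the top-app group from userspace.
 * The /dev/stune mount point is an assumption (typical on Android). */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int boost = (argc > 1) ? atoi(argv[1]) : 10; /* valid range is 0..100 */
    FILE *f = fopen("/dev/stune/top-app/schedtune.boost", "w");

    if (!f) {
        perror("schedtune.boost");
        return 1;
    }
    fprintf(f, "%d\n", boost);
    fclose(f);
    return 0;
}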
Let me try to sort it out.
In the original kernel released by Xiaomi, there are already a few boosting methods implemented by QCOM. A boosting interface is created and cooperates with a performance daemon in userspace. The first one is /proc/sys/kernel/sched_boost. The value is received from a daemon and then triggers a bunch of tasks in the kernel. The code for these tasks is deeply embedded in HMP, which for now seems impossible to make coexist with EAS. To save this interface, Josh again wrote a stub function to connect it to the boost in SchedTune, but he didn't account for the value of sched_boost, which is defined from 0 to 3. The problem then becomes simple: we just need to map the ordering [0 3 2 1] to [0..100]. Thus, I made an effort to write the following function:
C:
/* Maps sched_boost (0..3) to schedtune.boost: 0 -> 0, 3 -> 33, 2 -> 66, 1 -> 99. */
return data > 0 ? (4 - data) * 33 : 0;
The factor of 33 is trivial and can be changed as you wish; I use 5 for conservative boosting. The returned value is then written directly to the /schedtune.boost/ interface of /top-app/. A positive side effect is that the code naturally switches the boost on and off.
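As a quick standalone check of that mapping, the small program below prints the boost produced for each sched_boost value (expected: 0, 99, 66, 33 with a factor of 33).
C:
/* Standalone check of the sched_boost -> schedtune.boost mapping above. */
#include <stdio.h>

static int map_boost(int data)
{
    return data > 0 ? (4 - data) * 33 : 0;
}

int main(void)
{
    for (int data = 0; data <= 3; data++)
        printf("sched_boost=%d -> schedtune.boost=%d\n", data, map_boost(data));
    return 0;
}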
The second boost is carried out by code in soc/qcom/msm_performance.c, which sets min/max CPU frequencies upon user touch. Do you remember the original idea behind the /interactive/ CPU governor? xD It's a bit annoying to constantly pull up the minimum frequencies in the background and waste energy. It seems safe to partially disable this boost in the kernel and just keep the sysfs name.
The last one lives in cpu-boost.c and cooperates with the HMP code to boost the CPU based on event notifiers. I disabled this feature entirely.
After reviewing all the boost techniques, we know that CPU frequency is without doubt the most critical factor of all. I don't know why QCOM implemented so many boosts, even coupled with each other; I know that msm8998 performs very well without them. I built an experimental EAS kernel, driven by a combination of /schedutil/ and /schedtune/. Tested for a few days in a small community, it runs well with respect to power efficiency.
Kernel Source
Sorry, no prebuilt kernel here; I'm just sharing the idea about EAS and boosting.
After reading more resources, I gathered new information on this topic. The schedtune.boost value of "top-app" is controlled by the /libperfmgr/ daemon in userspace, which has been used on Google Pixel phones for years. Therefore, kernel-side optimization seems not that necessary with libperfmgr.
@strongst The community admin may consider deleting this thread. Thank you.
THREAD LOCKED
Requested by OP.
Regards,
shadowstep
Forum Moderator
