So I'm doing my very best here... but I'm not ready to start partitioning my PC to run Linux.
Linux is very intimidating to me, as I've tried it on several occasions. I was literally raised on DOS/Windows and my father HATED anything *nix. So trying to UNLEARN all that and re-learn Linux is just really freakin' frustrating.
Is there no other way to compile a kernel than having a PC running Linux?
I just want to enable JIT on the newest FreshToast to see the increase in speed. Just "playing around", I guess.
There's a vast amount of information out there... I just don't know where to start!
what is JIT?
poor_red_neck said:
So I'm doing my very best here... but I'm not ready to start partitioning my PC to run Linux.
Linux is very intimidating to me, as I've tried it on several occasions. I was literally raised on DOS/Windows and my father HATED anything *nix. So trying to UNLEARN all that and re-learn Linux is just really freakin' frustrating.
Is there no other way to compile a kernel than having a PC running Linux?
I just want to enable JIT on the newest FreshToast to see the increase in speed. Just "playing around", I guess.
There's a vast amount of information out there... I just don't know where to start!
JIT isn't compiled into the kernel; it's compiled from the Android source. You can either try patching AOSP to get a libdvm.so with JIT in it, or you can just push the libdvm.so from Dusted Donuts. You don't need Linux to get JIT on your phone.
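For reference, pushing a prebuilt libdvm.so over adb looks roughly like this - a minimal sketch assuming a rooted phone where adb remount works, with file and path names as placeholders (make a nandroid backup first):
CODE:
adb remount
# back up the current Dalvik VM library before overwriting it
adb pull /system/lib/libdvm.so libdvm.so.bak
# push the JIT-enabled libdvm.so taken from a JIT build (e.g. the Dusted Donuts one)
adb push libdvm.so /system/lib/libdvm.so
adb reboot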
Just In Time compiler.
I don't fully understand, but it seems like code that is used often is compiled to run more efficiently before it's actually used... I'm probably WAY off here, but I'm no expert. LOL
http://en.wikipedia.org/wiki/Just-in-time_compilation
poor_red_neck said:
Just In Time compiler.
I don't fully understand, but it seems like code that is used often is compiled to run more efficiently before it's actually used... I'm probably WAY off here, but I'm no expert. LOL
http://en.wikipedia.org/wiki/Just-in-time_compilation
This method of implementing JIT has worked the most consistently for me on FreshToast >> http://forum.xda-developers.com/showthread.php?t=659387
Yeah... JIT will make things a lot faster. Noticeably faster. It's not in the kernel, like DarchStar said, so you get a break there. If you do want to mess with kernels though, there are SOME scripts (very few) written in Perl that can be used to extract the ramdisk and kernel from the boot.img. I just took TrevE's advice and messed with it in Ubuntu Linux, since it was all native that way. You WILL need two tools, mkbootfs and mkbootimg, if you want to repack the kernel and ramdisk back into a boot.img. A rough sketch of that workflow is below.
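This is only illustrative: it assumes the commonly used split_bootimg.pl Perl script plus the AOSP mkbootfs/mkbootimg tools, and some devices need extra flags like --base/--cmdline to boot.
CODE:
# unpack: split boot.img into boot.img-kernel and boot.img-ramdisk.gz
./split_bootimg.pl boot.img
# extract the ramdisk so it can be edited
mkdir ramdisk && cd ramdisk
gunzip -c ../boot.img-ramdisk.gz | cpio -i
cd ..
# repack: rebuild the ramdisk cpio and stitch it back together with the kernel
mkbootfs ramdisk | gzip > newramdisk.gz
mkbootimg --kernel boot.img-kernel --ramdisk newramdisk.gz -o newboot.img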
poor_red_neck said:
Just In Time compiler.
I don't fully understand, but it seems like code that is used often is compiled to run more efficiently before it's actually used... I'm probably WAY off here, but I'm no expert. LOL
http://en.wikipedia.org/wiki/Just-in-time_compilation
It just compiles your apps down to native code as they run, making them faster and more efficient, that's all.
If you want it on FreshToast, try this: http://forum.xda-developers.com/showthread.php?t=637419
Try a virtual Linux install. I have this running on my Win 7 laptop and it's awesome!
http://forum.sdx-developers.com/tip...ubuntu-9-10-vm-dev-station-on-windows-part-1/
Related
Just saw this, would be nice lol
http://androidcommunity.com/nexus-one-running-450-faster-thanks-to-froyo-android-2-2-20100511/
this is being discussed about 4 threads down...
Indeed, but does "running 450% faster" mean overclocking the Snapdragon?
ChillRays said:
Indeed, but does "running 450% faster" mean overclocking the Snapdragon?
No, it means JIT and a very well-written version of Android.
bobdude5 said:
No, it means JIT and a very well-written version of Android.
Thanks, Bob. Now I'll spend the next 30 minutes searching google.com to find out what "jit" means.
ChillRays said:
Thanks, Bob. Now I'll spend the next 30 minutes searching google.com to find out what "jit" means.
30 minutes? Nah!
http://en.wikipedia.org/wiki/Just-in-time_compilation
In computing, just-in-time compilation (JIT), also known as dynamic translation, is a technique for improving the runtime performance of a computer program. JIT builds upon two earlier ideas in run-time environments: bytecode compilation and dynamic compilation. It converts code at runtime prior to executing it natively, for example bytecode into native machine code. The performance improvement over interpreters originates from caching the results of translating blocks of code, and not simply reevaluating each line or operand each time it is met (see Interpreted language). It also has advantages over statically compiling the code at development time, as it can recompile the code if this is found to be advantageous, and may be able to enforce security guarantees. Thus JIT can combine some of the advantages of interpretation and static (ahead-of-time) compilation.
Several modern runtime environments, such as Microsoft's .NET Framework and most implementations of Java, rely on JIT compilation for high-speed code execution.
http://www.androidspin.com/2010/02/18/jit-compiler-for-android-just-in-time-for-google-io/
JIT stands for “just-in-time” compilation or “dynamic translation”. It compiles/translates bytecode into native machine code at runtime before native execution. This allows software to run faster and perform better.
Thanks Paul. Much appreciated.
I love Wikipedia.
All that jibber jabber means a BOOST for Android. Now can we have some mind-blowing 3D apps?
I read somewhere that the speed increase will only really work with some applications and the new JIT compiler won't have much effect on things such as 3D games... Can someone confirm this?
3D games are typically limited by the GPU, and JIT won't help too much in that regard, but if the game is CPU intensive, then it should help.
Few things really seem slow on the N1, so I'd guess this is actually better news for those with older phones. But at least more CPU efficiency means less processing time, which means slightly better battery life, even if a delay is introduced when JIT compiles something during boot/launch.
You can try out an unstable version of JIT on CM 5.0.5.3 or whatever it is. It gave the Linpack benchmark a 200% boost and things did seem a little faster when I used it a few weeks ago (I seem to recall app dock scrolling was much better). Stability was pretty poor which is why Cyan et al have been waiting for Google to fix up JIT.
Won't be better for people with older phones because they won't see 2.2 for months and months and months. Most haven't even seen a 2.1 upgrade.
And the thing that would help 3D games is OpenGL ES 2.0 support and a proper SDL lib.
MODS
Can we get this thread moved to Other Discussion or locked?
LevitateJay said:
I read somewhere that the speed increase will only really work with some applications and the new JIT compiler won't have much effect on things such as 3D games... Can someone confirm this?
Parts of some 3D games are written natively with the NDK. Anything compiled natively cannot be accelerated by JIT (it's already freaking fast).
spazoid said:
Can we get this thread moved to Other Discussion or locked?
You see that little triangle with the exclamation point on the top right of every post? That's how you report something to mods, rather than "Mods!".
spazoid said:
Can we get this thread moved to Other Discussion or locked?
Can you stop being a troll and go find something else to do instead of complaining and trolling the forums?
Pardon me if this has already been discussed extensively, but with the JIT implementation in 2.2, which speed increases will be noticeable in real-life use? Flash support is a given, yes, but where else? Will cold-opening the Market, camera, or browser see any change? Will the initial loading of games be affected? Swiping through home screens?
I'm all for a free performance increase, but it seems everyone is getting over-excited because it can crunch equations faster.
cxdist said:
Pardon me if this has already been discussed extensively, but with the JIT implementation in 2.2, which speed increases will be noticeable in real-life use? Flash support is a given, yes, but where else? Will cold-opening the Market, camera, or browser see any change? Will the initial loading of games be affected? Swiping through home screens?
I'm all for a free performance increase, but it seems everyone is getting over-excited because it can crunch equations faster.
I can get my tip amount to my waiter 450% faster than I can now.
If that doesn't pump blood to your tool, I don't know what will.
cxdist said:
Pardon me if this has already been discussed extensively, but with the JIT implementation in 2.2, which speed increases will be noticeable in real-life use? Flash support is a given, yes, but where else? Will cold-opening the Market, camera, or browser see any change? Will the initial loading of games be affected? Swiping through home screens?
I'm all for a free performance increase, but it seems everyone is getting over-excited because it can crunch equations faster.
The Nexus One does nothing but crunch out equations.
Thing is, we don't know what kind of difference it's going to make, but it will NOT be 450%; anyone expecting that WILL be disappointed.
The best guess we can make comes from running JIT-enabled ROMs. When I ran one on the Hero, the Linpack score jumped, but I did not notice ANY speed boost in real use.
JCopernicus said:
I can get my tip amount to my waiter 450% faster than I can now.
If that doesn't pump blood to your tool, I don't know what will.
How about getting the waiter to tip you?
I am running CM6 A013 with oc-legend-cm-2.6.29.6 kernel. Everything seems to be working just fine except for the wifi. Has anyone gotten the wifi to work with the oc kernels?
I read in another thread that we could take the wlan.ko file from an old ROM. Does anyone have a copy of this file or would anyone be willing to pull the file so that I can test it? I really appreciate it.
Same situation here. Neither kernel works with CM6's wifi and hotspot. Can someone fix this please! I really want my wifi back! I didn't do a backup beforehand!
I'm looking at this, and it looks easier than replacing the entire kernel like you guys did; read this from that guy's post above:
"How to do it for kernel_legend_13be9c9c:
At first, you should read zanfur's post and his patch.
I just modified two tables in acpuclock-arm11.c excluding his having written.
1. modify cpufreq.c to let SetCPU to access freq tables
2. modify acpuclock-arm11.c to let HTC Legend be able to overclocking
3. modify msm7227_defconfig to disable PERFLOCK [optional]
You might not need modify defconfig when you use SetCPU which can purchase in Android Market.
SetCPU can disable PERFLOCK by setting. ([menu] -> [Perflock Disabler])"
So, with that said, I would take that file (or files) and swap them into our current kernel, the one made for our phone... it might work, and your wifi won't be broken.
You guys are running a kernel made for a different device.
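As a sanity check, the freq table that the quoted steps expose to SetCPU ends up visible under sysfs on the phone, so you can verify what an OC kernel actually offers with something like this (the frequency value in the last line is just an example; SetCPU uses the same interface):
CODE:
# frequencies the running kernel's table actually offers
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
# current limits and governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# raise the max frequency by hand (requires root)
echo 787200 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq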
Could someone say, in plain English, the exact steps to get the old kernel back? The one from CM6.
lilhaiti said:
I am running CM6 A013 with oc-legend-cm-2.6.29.6 kernel. Everything seems to be working just fine except for the wifi. Has anyone gotten the wifi to work with the oc kernels?
I read in another thread that we could take the wlan.ko file from an old ROM. Does anyone have a copy of this file or would anyone be willing to pull the file so that I can test it? I really appreciate it.
That kernel is not built for our wifi chip. The kernel needs to be built for the BCM chipset, not the TI one. Further, the wifi module needs to be in sync with the kernel build. Android 101. (A quick way to see the mismatch is sketched below.)
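This is just a sketch assuming a rooted adb shell and the usual /system/lib/modules path; the exact error wording will differ:
CODE:
# kernel version the phone is actually running
adb shell uname -r
# try to load the wifi module, then look at why the kernel rejected it
adb shell insmod /system/lib/modules/wlan.ko
adb shell dmesg | grep -i "version magic"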
I am not sure what we are going to gain from overclocking. There just doesn't seem to be any end goal, other than bragging rights on a benchmark where the Aria can only hope to be among the best of the mediocre CPUs. If it's for Flash, forget it: we don't have the required instruction set in the CPU. The downside is the potential to add instability and confuse test results for mods that can actually increase functionality.
I like overclocking and running a couple benchmarks once in a while. For day to day use I'm more interested in downclocking but only if it can increase battery life.
Sent from my HTC Liberty using XDA App
The extra speed of the OC makes a big difference in games; that's the reason I like it. Games run smoother with the OC kernel. Do the test: run Abduction with the stock kernel for a while and then with the OC, and you'll see the difference. Also, Live Wallpapers are choppy with the stock kernel and smooth with the OC.
Hell, my Nexus One has been overclocked since the day I got it (rooted that same day), not that a 1 GHz phone really needs it. Well, maybe the Nexus One for gaming, but my Captivate and Vibrant, on the other hand, don't need it, period. You won't get better gaming performance from any other Android handset out there to date. Still, my Captivate is overclocked to 1.2 GHz lol. Like I say, there's something fun about pushing the limits.
It's really just another thing to tweak and play around with on your device. It's always fun to push the limits.
There are several reasons why I would like to overclock. 1. The 3D photo gallery loads photos really slowly with the stock kernel, but with an overclocked kernel the pics load very quickly. 2. It's nice to run a 3D game or two without choppiness. Just to name a couple. It would be nice to have an Aria kernel that works with all of the phone's hardware. And showing off benchmark numbers is nice as well.
http://www.pcworld.com/businesscent...x_kernel_patch_delivers_huge_speed_boost.html
http://forum.xda-developers.com/showthread.php?t=844458
could this be worked into Epic 4G kernels as well?
tyl3rdurden said:
http://www.pcworld.com/businesscent...x_kernel_patch_delivers_huge_speed_boost.html
http://forum.xda-developers.com/showthread.php?t=844458
could this be worked into Epic 4G kernels as well?
WOW. I am seriously impressed by your "keeping up with the times" mentality. Good job on noticing this!
So...
"n tests by Galbraith, the patch reportedly produced a drop in the maximum latency of more than 10 times and in the average latency of the desktop by about 60 times. Though the merge window is now closed for the Linux 2.6.37 kernel, the new patch should make it into version 2.6.38."
Along with an overclocked Froyo kernel (once the source is out), this should REALLY improve our experience.
I mentioned in another thread that I am in talks with Paragon software
http://www.paragon-software.com/exp...ocs/technologies/Paragon_UFSD_for_Android.pdf
for NTFS and HFS access. I think it is POSSIBLE that this is actually a software patch, although it may need to be placed into the kernel itself as a driver. I promise to update as soon as they get back to me; I just spoke to the devs there yesterday.
Looks like our experience is about to improve dramatically!
Already in IntersectRaven's latest kernel and wildmonk's latest beta kernels for the Nexus One. Check the threads.
From the other XDA thread, someone mentioned that some kernels have already implemented it. I am sure some of them would be glad to share how it is implemented and how easily it can be done. I know it is different phones/kernels, but the idea behind it should be similar.
Dulanic said:
From the other XDA thread, someone mentioned that some kernels have already implemented it. I am sure some of them would be glad to share how it is implemented and how easily it can be done. I know it is different phones/kernels, but the idea behind it should be similar.
We don't have Froyo kernel source yet to do this with. Someone please correct me if I am wrong.
Edit: I can't find anything mentioning this patch. If anyone has a link, post it. I don't believe this is implemented anywhere yet.
I found the below info here:
http://www.reseize.com/2010/11/linux-kernel-patch-that-does-wonders.html
Below is the video of the Linux desktop running the kernel with the patch in question applied but disabled:
As you can see, compiling the Linux kernel with that many jobs is rather troubling for the Linux desktop experience. At no point in the video was the 1080p sample video paused, but that is just where the current mainline Linux kernel is at with 2.6.37. There was also some stuttering with glxgears and some responsiveness issues elsewhere. This is even with all of the Linux 2.6.37 kernel improvements up to today. If recording a video of an older kernel release, the experience is even more horrific! Now let's see what happens when the patch's new scheduler code is enabled:
It is truly a night and day difference. The 1080p Ogg video now played smoothly a majority of the time while still compiling the Linux kernel with 64 jobs. Glxgears was also better, and the window movements and desktop interactivity were far better. When compiling the Linux kernel with 128 jobs or other workloads that apply even greater strain, the results are even more dramatic, but it is not great for a video demonstration; the first video recorded under greater strain made the "before" look like a still photograph.
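For reference, the load used in that demo is easy to reproduce on any Linux box; something like this, with the video file and job count as examples only (run the compile from inside a kernel source tree):
CODE:
# terminal 1: heavy compile load, 64 parallel jobs as in the video
make -j64
# terminal 2: see how the desktop holds up under that load
mplayer sample_1080p.ogv &
glxgears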
This could potentially be patched into our Eclair kernel if the changes aren't too intrusive, and by the sounds of it they're not.
The mainline patch was against the 2.6.39 kernel, however; our Froyo kernel will be 2.6.32 and Eclair is 2.6.29, so we're several revisions behind on Eclair.
It's definitely interesting, but it's geared toward desktops using the group scheduler - absolutely worth a try if that scheduler works with Android easily (most of the community kernels are using the BFS scheduler, however).
cicada said:
This could potentially be patched into our Eclair kernel if the changes aren't too intrusive, and by the sounds of it they're not.
The mainline patch was against the 2.6.39 kernel, however; our Froyo kernel will be 2.6.32 and Eclair is 2.6.29, so we're several revisions behind on Eclair.
It's definitely interesting, but it's geared toward desktops using the group scheduler - absolutely worth a try if that scheduler works with Android easily (most of the community kernels are using the BFS scheduler, however).
Sniff...
It did sound a little too good to be true. Well, eventually we will get 2.6.38, which has it built in - if the desktop group scheduler can even be used at all, it seems.
But since it's in other people's kernels, can't it be easily ported into ours?
tyl3rdurden said:
But since it's in other people's kernels, can't it be easily ported into ours?
It's very possible to patch in. If it's been done before, anyway.
But, because it is based on the .39 kernel, it might be a little buggy. Or a lot buggy. You wanna link me to a kernel that has it and I'll look into it? I probably will wait for Froyo source for at least the .32 kernel.
Here's what Linus himself had to say about the patch:
Yeah. And I have to say that I'm (very happily) surprised by just how small that patch really ends up being, and how it's not intrusive or ugly either.
I'm also very happy with just what it does to interactive performance. Admittedly, my "testcase" is really trivial (reading email in a web-browser, scrolling around a bit, while doing a "make -j64" on the kernel at the same time), but it's a test-case that is very relevant for me. And it is a _huge_ improvement.
It's an improvement for things like smooth scrolling around, but what I found more interesting was how it seems to really make web pages load a lot faster. Maybe it shouldn't have been surprising, but I always associated that with network performance. But there's clearly enough of a CPU load when loading a new web page that if you have a load average of 50+ at the same time, you _will_ be starved for CPU in the loading process, and probably won't get all the http requests out quickly enough.
So I think this is firmly one of those "real improvement" patches. Good job. Group scheduling goes from "useful for some specific server loads" to "that's a killer feature".
DevinXtreme said:
It's very possible to patch in. If it's been done before, anyway.
But, because it is based on the .39 kernel, it might be a little buggy. Or a lot buggy. You wanna link me to a kernel that has it and I'll look into it? I probably will wait for Froyo source for at least the .32 kernel.
Devin - I agree with waiting until the Froyo source is out before attempting to implement this. I'm not sure that group scheduling is even an option in the Android kernel. But I don't think anyone has done this, so I doubt any links are coming your way.
Edit: Found this here- http://groups.google.com/group/android-kernel/browse_thread/thread/f47d9d4f4e6a116a/ab1a8ab42bb0b84a
Android uses CFS.
It is combined with RT scheduling.
When you play audio or video, the platform changes the scheduling policy and the scheduling priority.
Search the platform code:
Dalvik has policy and priority setting code, and so does the framework code related to audio and video.
Check init.rc and the cutils folder.
You need to search the platform code from after the Eclair release (Froyo).
cicada said:
(most of the community kernels are using the BFS scheduler, however)
Actually, no Epic kernel uses BFS. It isn't stable on our hardware, and it's not worth porting. Android uses CFS by default, and then the CFQ I/O scheduler I think, but most have switched from a CFS/CFQ to a CFS/BFQ combination. I know mine & Devin's kernels have.
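Side note: you can see which I/O scheduler a given kernel is actually using straight from sysfs; the block device name (mmcblk0 here) depends on the phone:
CODE:
# the entry in [brackets] is the active I/O scheduler, e.g. "noop deadline [cfq]"
cat /sys/block/mmcblk0/queue/scheduler
# switch to bfq at runtime, if the kernel was built with it (requires root)
echo bfq > /sys/block/mmcblk0/queue/scheduler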
Geniusdog254 said:
Actually, no Epic kernel uses BFS. It isn't stable on our hardware, and it's not worth porting. Android uses CFS by default, and then the CFQ I/O scheduler I think, but most have switched from a CFS/CFQ to a CFS/BFQ combination. I know mine & Devin's kernels have.
Ok then, so in your professional opinion is this patch a possibility still?
Subject [RFC/RFT PATCH] sched: automated per tty task groups
From Mike Galbraith <>
Date Tue, 19 Oct 2010 11:16:04 +0200
Greetings,
Comments, suggestions etc highly welcome.
This patch implements an idea from Linus, to automatically create task groups
per tty, to improve desktop interactivity under hefty load such as kbuild. The
feature is enabled from boot by default, The default setting can be changed via
the boot option ttysched=0, and can be can be turned on or off on the fly via
echo [01] > /proc/sys/kernel/sched_tty_sched_enabled.
Link to code: http://forums.opensuse.org/english/...ernel-speed-up-patch-file-mike-galbraith.html
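For anyone curious, on a kernel built with this patch the knob described above would be poked like this (paths exactly as given in the patch description; this obviously does nothing on a kernel without the patch):
CODE:
# check whether per-tty task grouping is currently enabled (1 = on, 0 = off)
cat /proc/sys/kernel/sched_tty_sched_enabled
# turn it off on the fly
echo 0 > /proc/sys/kernel/sched_tty_sched_enabled
# turn it back on
echo 1 > /proc/sys/kernel/sched_tty_sched_enabled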
Thanks for the clarification Geniusdog254.
ZenInsight, any chance you can prune down that post and just use a link? The patch is all over the web right now, and it's hard to scroll by on a phone
ZenInsight said:
Ok then, so in your professional opinion is this patch a possibility still?
I'm sure it's possible, I just haven't looked at it yet. Like I stated before, until we get the 2.6.32 FroYo kernel source I'm not doing any devving besides app work (maybe).
EDIT: Devin said on the last page that he'll look into it. I know IntersectRaven's Nexus kernel has it, but I haven't looked into any reports of how much it helps.
Also found this:
Phoronix recently published an article regarding a ~200-line Linux kernel patch that improves responsiveness under system strain. Well, Lennart Poettering, a Red Hat developer, replied to Linus Torvalds on a mailing list with an alternative to this patch that does the same thing, yet all you have to do is run 2 commands and paste 4 lines into your ~/.bashrc file. I know it sounds unbelievable, but apparently someone even ran some tests which show that Lennart's solution works. Read on!
Lennart explains that you have to add this to your ~/.bashrc file (important: this won't work on Ubuntu. See instructions for Ubuntu further down the post!):
CODE:
# for every interactive shell, create a cgroup named after the shell's PID
# and move the shell (and thus its children) into it
if [ "$PS1" ] ; then
    mkdir -m 0700 /sys/fs/cgroup/cpu/user/$$
    echo $$ > /sys/fs/cgroup/cpu/user/$$/tasks
fi
Linux terminal (run once, as root, so the cpu cgroup hierarchy exists before the snippet above is used):
mount -t cgroup cgroup /sys/fs/cgroup/cpu -o cpu
mkdir -m 0777 /sys/fs/cgroup/cpu/user
Furthermore, a reply to Lennart's email states that his approach is actually better than the actual kernel patch:
I've done some tests and the result is that Lennart's approach seems to work best. It also _feels_ better interactively compared to the vanilla kernel and in-kernel cgroups on my machine. Also, it's really nice to have an interface to actually see what is going on. With the kernel patch you're totally in the dark about what is going on right now.
-Markus Trippelsdorf
The reply also includes some benchmarks you can see @ http://lkml.org/lkml/2010/11/16/392
Found all this here (Ubuntu patch info too):
http://www.webupd8.org/2010/11/alternative-to-200-lines-kernel-patch.html
Just thought of something.
Since we have all the working 2.2 builds that use the right drivers, I'm pretty sure the Android 2.3 Gingerbread "lag" is not the SDK build itself - I think it's the OpenGL files (that run SurfaceFlinger/the UI?) being set to "framebuffer" emulation mode (because the ROM is intended for use in the SDK emulator), which uses the CPU to render all the elements, not the GPU hardware. In theory, if we used the 2.2 OpenGL libraries, we should get better performance, as they'd be using the GPU hardware.
Sidenote: the HD2 actually has an AMD/ATI chip in it; consider it a pocket Radeon graphics chip if you like (when the graphics hardware is initializing, you'll see a reference to AMD OpenGL).
Don't quote me word for word on that, but the idea is that someone (maybe m-deejay or whoever) should try to take the OpenGL libraries from a working Android 2.2 ROM and paste them into an Android 2.3 ROM (either mine or m-deejay's, you decide). Of course, make a backup beforehand, so if Android doesn't work after that, we can roll back to the 2.3 drivers. They should be libopengl or libegl or something else, I can't remember 100%.
Happy hacking peeps! Let's get Android 2.3 running butter-smooth on our HD2!
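For anyone who wants to try it, a minimal sketch of that swap over adb, assuming the GL driver blobs live under /system/lib/egl/ (the exact file names vary per ROM, so the froyo-egl directory here is just a placeholder for libs pulled from a working 2.2 build):
CODE:
# back up the 2.3 ROM's GL libraries to the PC first
adb pull /system/lib/egl egl-backup-2.3/
adb remount
# push the egl libraries taken from a working 2.2 ROM
adb push froyo-egl/ /system/lib/egl/
adb reboot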
I'm gonna take a look at it after I get back home from the hospital.
Sent from my Decepticon using XDA App
spbeeking said:
after I get back home from the hospital.
Get well soon... or whoever is in there.
@iced
Pretty nice theory... I also wondered how it can be so laggy when every driver is available.
Nope, the lag is caused by the VM heap in build.prop (raise it to 32) and the excessive amount of resources inside the apks (every apk loaded fills up the memory in no time). Remove all the mdpi and ldpi folders from all the apks (even framework-res) and that should speed things up considerably.
I do this for the Dream every time a new SDK is released and a quick-and-dirty port ensues.
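A minimal sketch of those two tweaks, assuming the standard dalvik.vm.heapsize property and the usual res/drawable-mdpi / res/drawable-ldpi folder layout inside the apks (exact folder names vary per app, so check first and back up every apk you touch):
CODE:
# raise the Dalvik VM heap to 32 MB in build.prop
adb pull /system/build.prop
sed -i 's/^dalvik.vm.heapsize=.*/dalvik.vm.heapsize=32m/' build.prop
adb remount
adb push build.prop /system/build.prop
# strip the unused density resources from an apk (repeat per apk, including framework-res.apk)
zip -d SomeApp.apk 'res/drawable-mdpi/*' 'res/drawable-ldpi/*'
adb push SomeApp.apk /system/app/SomeApp.apk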
jubeh said:
Nope, the lag is caused by the VM heap in build.prop (raise it to 32) and the excessive amount of resources inside the apks (every apk loaded fills up the memory in no time). Remove all the mdpi and ldpi folders from all the apks (even framework-res) and that should speed things up considerably.
I do this for the Dream every time a new SDK is released and a quick-and-dirty port ensues.
Thanks for that info, I'll go and do that.
Our HD2 is an HDPI device, no?
IcedCube said:
Our HD2 is an HDPI device, no?
Yes, it is.
No worries. I did what jubeh said, and I noticed a boost in performance. You're getting a shoutout in my credits list.
Some guys found a huge optimization for the Linux kernel, and for Dalvik as well, on ARM platforms. It isn't achieved by optimizing the code itself but by changing the way GCC compiles it, and it increased performance by 30% to 100%. There is a little video of them running a benchmark on two identical Android development platforms. Here it is:
The ROM used is the same one used by the Galaxy Nexus, and CyanogenMod now uses these optimizations to gain that 30%~100%. What are your feelings about it? Are you pessimistic or optimistic about the implementation, for example for stock Atrix ROMs? Or community ROMs maybe? Also tell us if you have any news about it.
This optimization was made mainly for the Linux kernel on ARM devices, which means it will also be far more efficient on ARM computers/servers. This is a great step forward for Linux on embedded platforms. They also worked on Dalvik, so now even Android apps will run faster.
(Sorry for my grammar if I made some mistakes, just tell me and I'll correct them.)
Very impressive performance increase. Looking forward to seeing these optimizations make their way into custom ROMs.
Can you post more info about how this works? Or a link to the original GCC discovery?
Linaro is a hot topic in the Samsung forums. Even the OG SGS (basically half the specs of the Atrix) users are begging for support for it...
Sent from my MB860 using XDA Premium App
It will be implemented in the AOKP #39 release.
Sent from my Atrix using Tapatalk
AkaGrey said:
It will be implemented in the AOKP #39 release.
Sent from my Atrix using Tapatalk
How do you know that???
facuxt said:
How do you know that???
http://www.droid-life.com/2012/06/1...stem-performance-boosts-are-quite-noticeable/
Sent from my MB860 CM7.2 RC3 36p Radio
It seems it is not easy to get this to work on every CM device. Some people report issues with this patch: http://r.cyanogenmod.com/#/c/17535/
v.k said:
It seems it is not easy to get this to work on every CM device. Some people report issues with this patch: http://r.cyanogenmod.com/#/c/17535/
It may not be easy, but I got a feeling that many people from CM teams everywhere are gonna work round the clock to get this to work on their devices.
I'm definitely not a "benchmark guy", and generally shrug off those topics, but even I was blown away after watching this video...
Sent from my MB860 using XDA Premium App
Wow. Awesome!
Imagine that guy talking one on one with a girl though......
Sent from my MB860 using XDA
rancur3p1c said:
Can you post more info about how this works? Or a link to the original GCC discovery?
Well, it's all about the compilation: they didn't change the code, they changed the way GCC compiles it, so it is now optimized for the ARM instruction set. So why wasn't it optimized before this "discovery"? Well, I think they didn't spend enough time on build optimization when they first made GCC work with ARM. It was originally made for x86, x64, etc. These are different instruction sets, different lists of commands the CPU is able to work with. Imagine instruction sets as different languages: x64, the first one, has a rich vocabulary, and ARM, the second one, has a more restricted vocabulary, but the two languages have the same syntax. The difference is that you need more words with ARM than with x64 to describe something complex, so the compiler now has to be optimized to use as few words as possible to be faster. And that's basically what the Linaro team did.
So the optimization has been applied to the Android system (Linux kernel + Dalvik, etc.), but it can also be used for any other ARM program. This is a great step forward for ARM computers too, and maybe ARM servers, which will continue to use less energy for bigger tasks thanks to the optimization.
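To give a feel for what "the way GCC compiles" means in practice, a cross-compile with ARM-tuned flags might look something like this (the toolchain prefix, CPU/FPU choices and source file are illustrative, not Linaro's exact build settings):
CODE:
# generic build: no ARM-specific tuning
arm-linux-gnueabi-gcc -O2 -o bench_generic bench.c
# tuned build: target the exact core, enable NEON, allow more aggressive optimization
arm-linux-gnueabi-gcc -O3 -mcpu=cortex-a9 -mfpu=neon -mfloat-abi=softfp -o bench_tuned bench.c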
Slymayer said:
Well, it's all about the compilation: they didn't change the code, they changed the way GCC compiles it, so it is now optimized for the ARM instruction set. So why wasn't it optimized before this "discovery"? Well, I think they didn't spend enough time on build optimization when they first made GCC work with ARM. It was originally made for x86, x64, etc. These are different instruction sets, different lists of commands the CPU is able to work with. Imagine instruction sets as different languages: x64, the first one, has a rich vocabulary, and ARM, the second one, has a more restricted vocabulary, but the two languages have the same syntax. The difference is that you need more words with ARM than with x64 to describe something complex, so the compiler now has to be optimized to use as few words as possible to be faster. And that's basically what the Linaro team did.
So the optimization has been applied to the Android system (Linux kernel + Dalvik, etc.), but it can also be used for any other ARM program. This is a great step forward for ARM computers too, and maybe ARM servers, which will continue to use less energy for bigger tasks thanks to the optimization.
You lost me at compilation...lol
Sent from my MB860 using XDA
So they found a way to optimize compilation for the ARM architecture, yielding massive performance boosts over current standards... do want =D
These dudes rock.
Sent from my MB860 using XDA
michaelatrix said:
You lost me at compilation...lol
Sent from my MB860 using XDA
They found a way to talk to the system while saying less. It's like if I used to say to you, "Hello, how are things in your life?" but now I say, "How's things?" and you understand that both phrases mean the same thing. You get to the conclusion faster because you process less information but reach the same outcome. The shorter phrase takes less processing and improves overall response time.
Sent from my MB860 using xda premium
I don't think it'll be easy to use it on our beloved Atrix; the Linaro code uses a 3.2 kernel, and we're still stuck on the crappy Froyo 2.6.32 kernel =/