Hi everyone
For quite a long time I've been thinking about the whole "Galaxy S can do 90 Mpolys per second" thing.
It sounds like total bull****.
So, after many, many hours of googling and some unanswered mails to ImgTec, I'd like to know:
Can ANYONE provide any concrete info about the SGX540?
On one side I see declarations that the SGX540 can do 90 million polygons per second, and on the other side I see stuff like "twice the performance of the SGX530".
...but twice the performance of the SGX530 is EXACTLY what the SGX535 has.
So is the 540 a rebrand of the 535? That can't be, so WHAT THE HELL is going on?
I'm seriously confused, and would be glad if anyone could shed some light on the matter.
I asked a Samsung rep what the difference was and this is what I got:
Q: The Samsung Galaxy S uses the SGX540 vs. the iPhone's SGX535. The only data I can find suggests these two GPUs are very similar. Could you please highlight some of the differences between the SGX535 and the SGX540?
A: SGX540 is the latest GPU that provides better performance and more energy efficiency.
SGX535 is equipped with 2D Graphic Accelerator which SGX540 does not support.
I also tried getting in contact with ImgTec to get an answer, but I haven't received a reply. It's been two weeks now.
Also, the chip is obviously faster than Snapdragon with its Adreno 200 GPU. I don't know if Adreno supports TBDR; I just know it's a modified Xenos core. Also, the Galaxy S uses LPDDR2 RAM, so memory throughput is quite a bit higher, even though that's not *as* necessary given all the memory efficiencies of the Cortex A8 and TBDR on the SGX540.
thephawx said:
A: SGX540 is the latest GPU that provides better performance and more energy efficiency.
SGX535 is equipped with 2D Graphic Accelerator which SGX540 does not support.
Click to expand...
Click to collapse
I think that is the clue: cost saving for Samsung.
Besides, who needs a 2D accelerator with a CPU as fast as this one already is?
The HTC Athena (HTC Advantage) failed miserably at adding an ATI 2D accelerator that no programmers were able to take advantage of; in the end the CPU did all the work.
I'd imagine it's a 535 at 45nm. Just a guess; the CPU is also 45nm.
Having tried a few phones, the speed in games is far better, with much better FPS, though there is a problem: we might have to wait for games that really test its power, as most are made to run on all phones.
This was the same problem with the Xbox and PS2: the Xbox had more power, but the PS2 was king, so games were made with its hardware in mind, which held back the Xbox. Only now and then did an Xbox-only game come out that really made use of its power. Years later the roles reversed, and the 360 held the PS3 back (don't start on which is better, lol); the PS3 has to make do with 360 ports, but when it has a game made just for it you really get to see what it can do. Anyway, it's nice to know the Galaxy is future-proof game-wise, and I can't wait to see what it can do in future or what someone can port onto it.
On a side note, I did read that video runs through the graphics chip, which is causing blocking in dark movies (not HD, lower-quality rips), apparently because it can't distinguish between shades of black. One guy found a way to turn the chip off and movies were all good; I guess the rest of us have to wait for firmware to sort this.
thephawx said:
A: SGX540 is the latest GPU that provides better performance and more energy efficiency.
SGX535 is equipped with 2D Graphic Accelerator which SGX540 does not support.
Click to expand...
Click to collapse
smart move sammy
voodoochild2008-
I wouldn't say we'd have to wait so much.
Even today, Snapdragon devices don't do very well in games, since their fillrate is so low (133 Mpixels/sec).
Even the Motorola Droid (SGX530 at 110MHz, roughly 9 Mpolys/sec and 280 Mpixels/sec at that frequency) fares MUCH better in games, and actually runs pretty much everything.
So I guess the best hardware isn't being pushed to its limit yet, but weaker devices should be hitting theirs soon.
bl4ckdr4g00n- Why the hell should we care? I don't see any problem with 2D content and/or videos, everything flies at lightspeed.
Well, I can live in hope, and I guess Apple's mess (aka the iPhone 4) will help now that firms are heading more towards Android. I did read about one big firm in the USA dropping marketing for Apple and heading to Android, and well, that's what you get when you try to sell old ideas. It always made me laugh that the first iPhone took 1-megapixel photos when others were on 3 megapixels, then it had no video when most others did, then they hyped it when it moved to a 3-megapixel camera and finally did video... OK, I am going to stop, as it makes my blood boil that people buy into Apple. Yes, they started the ball rolling and good on them for that, but then they just sat back and started counting the money as others moved on. Oh, and when I bought my Galaxy the website did say "able to run games as powerful as the Xbox" (the old one), so is HALO too much to ask for? lol
Wait, so what about the Droid X vs the Galaxy S GPU? I know the Galaxy S is way more advanced spec-wise... but the Droid X does have a dedicated GPU. Can anyone explain?
The Droid X still uses the SGX530, but in the Droid X, as opposed to the original Droid, it runs at the stock 200MHz (or at least 180MHz).
At that clock it does 12-14 Mpolys/sec and can push out 400-500 Mpixels/sec.
Not too shabby
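(For anyone wondering how figures like that are arrived at, here's a rough back-of-the-envelope sketch. It simply assumes the ~9 Mpolys/sec and ~280 Mpixels/sec quoted above for the 110MHz SGX530 scale linearly with clock; real numbers come in a bit lower because memory bandwidth doesn't scale with GPU clock, so treat the output as an upper bound, not datasheet data.)

```c
/* Back-of-the-envelope scaling of SGX530 throughput with clock speed.
 * Assumption: the ~9 Mpolys/sec and ~280 Mpixels/sec quoted for the
 * original Droid's 110MHz SGX530 scale linearly with frequency.
 * Real-world numbers land lower, since memory bandwidth doesn't scale. */
#include <stdio.h>

int main(void)
{
    const double base_mhz  = 110.0;   /* original Droid GPU clock           */
    const double base_poly = 9.0;     /* Mpolys/sec at base clock (quoted)  */
    const double base_pix  = 280.0;   /* Mpixels/sec at base clock (quoted) */
    const double clocks[]  = { 180.0, 200.0 };   /* Droid X estimates       */

    for (int i = 0; i < 2; i++) {
        double scale = clocks[i] / base_mhz;
        printf("SGX530 @ %.0f MHz: ~%.0f Mpolys/sec, ~%.0f Mpixels/sec\n",
               clocks[i], base_poly * scale, base_pix * scale);
    }
    return 0;
}
```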
The 535 is a downgrade from the 540. The 540 is the latest and greatest from the PowerVR line.
Samsung did not cut costs; they've in fact spent MORE to get this chip into their Galaxy S line. No one else has the 540 besides Samsung.
Like I said, it's probably just a process shrink, which means our GPU uses less power and is possibly clocked higher.
P.S. Desktop graphics cards haven't had dedicated 2D acceleration for years; removing it saves transistors for more 3D performance/power!
This worries me as well... It seems like it might not be as great as we thought. HOWEVER, again, this is a new device, and it might be fixed in firmware updates. The hardware is obviously stellar; there's just something holding it back.
Pika007 said:
The droid X still uses the SGX530, but in the droid x, as opposed to the original droid, it comes in the stock 200mhz (or at least 180)
At that state it does 12-14Mpolygons/sec and can push out 400-500Mpixels/sec
Not too shabby
Click to expand...
Click to collapse
http://www.slashgear.com/droid-x-review-0793011/
"We benchmarked the DROID X using Quadrant, which measures processor, memory, I/O and 2D/3D graphics and combines them into a single numerical score. In Battery Saver mode, the DROID X scored 819, in Performance mode it scored 1,204, and in Smart mode it scored 963. In contrast, the Samsung Galaxy S running Android 2.1 – using Samsung’s own 1GHz Hummingbird CPU – scored 874, while a Google Nexus One running Android 2.2 – using Qualcomm’s 1GHz Snapdragon – scored 1,434. "
The N1's performance can be explained by the fact it's 2.2...
But the Droid X, even with the "inferior" GPU, outscored the Galaxy S? Why?
gdfnr123 said:
wait so what about the droid x vs the galaxy s gpu?? i know the galaxy s is way advanced in specs wise... the droid x does have a dedicated gpu can anyone explain??
Click to expand...
Click to collapse
Same here. I want to know which one has the better performance as well.
Besides that, does anyone know which CPU is better between the Droid X and the Galaxy S?
I know the OMAP chip on the original Droid can overclock to 1.2GHz from, what, 550MHz?
How about the CPUs in the Droid X and the Galaxy S? Has anyone compared those chips? Which can overclock higher, and which one is better overall?
Sorry about the poor English. Hope you guys can understand.
The CPU in the Droid X is a stock Cortex A8 running at 1GHz. The Samsung Hummingbird is a specialized version of the Cortex A8, designed by Intrinsity, running at 1GHz.
Even Qualcomm did a complete redesign of the Cortex A8 for the Snapdragon CPU at 1GHz. While the original A8 could only be clocked at around 600MHz with a reasonable power drain, the stripped-down versions of the A8 can be clocked higher while maintaining better power characteristics.
An untouched Cortex A8 can do more at the same frequency than a specialized, stripped-down A8.
If anything, the Samsung Galaxy S is better balanced, leveraging the SGX540 as a video decoder as well. However, the Droid X should be quite snappy in most uses.
At the end of the day, you really shouldn't care too much about obsolescence. I mean, the Qualcomm dual-core Scorpion chip is probably coming out around December.
Smart phones are moving at a blisteringly fast pace.
TexUs-
I wouldn't take it too seriously.
Quadrant isn't too serious a benchmark; plus, I think you can blame it on the fact that 2D acceleration on the SGS is done by the processor, while the Droid X does 2D acceleration on the GPU.
I can assure you: there is no way in hell that the SGX540 is inferior to the 530. It's at least twice as strong in everything related to 3D acceleration.
I say let's wait for Froyo on all devices, let every device shake off its teething problems of any kind, and test again, with more than one benchmark.
Pika007 said:
TexUs-
I wouldn't take it too seriously.
Quadrant isn't too serious of a benchmark, plus, i think you can blame it on the fact that 2D acceleration in the SGS is done by the processor, while the DROID X has 2D acceleration by the GPU.
I can assure you- There is no way in hell that the SGX540 is inferior to the 530. It's at least twice as strong in everything related to 3D acceleration.
I say- let's wait for froyo for all devices, let all devices clear from "birth ropes" of any kind, and test again. with more than one benchmark.
Click to expand...
Click to collapse
The SGS might be falling behind in I/O speeds... It is well known that all the app data is stored in a slower internal SD-card partition... Has anyone tried the benchmarks with the lag fix?
Also, if only Android made use of the GPU to help render the UI... It's such a shame that the GPU only gets used in games...
Using the GPU to render the UI would take tons of battery power.
I prefer it being a bit less snappy but a whole lot easier on the battery.
thephawx said:
At the end of the day. You really shouldn't care too much about obsolescence. I mean the Qualcomm Dual-core scorpion chip is probably going to be coming out around December.
Smart phones are moving at a blisteringly fast pace.
Click to expand...
Click to collapse
Smart phones aren't but batteries are.
IMO the only reason we haven't had huge battery issues is that all the other tech (screen, RAM power, CPU usage, etc.) has improved...
Dual-core or 2GHz devices sound nice on paper, but I worry whether battery technology can keep up.
TexUs said:
Smart phones aren't but batteries are.
IMO the only way we haven't had huge battery issues because all the other tech (screen, RAM power, CPU usage, etc) has improved...
Dual core or 2Ghz devices sound nice on paper but I worry if the battery technology can keep up.
Click to expand...
Click to collapse
I think so. The battery will be the biggest issue for smartphones in the future if it just stays at 1500mAh or even less.
A dual-core CPU could be fast but power hungry as well.
First the G2, now the Lexicon:
http://phandroid.com/2010/09/20/htc-lexikon-looks-to-be-next-verizon-droid/
Sure the clock speed is lower, but reports are saying that the processor is actually faster. And the battery usage will probably be a lot better too.
I'm a sucker for performance and have always said I'd stick with the N1 until the next CPUs come out. Finally... Has the next era in mobile CPU's finally begun?
Next era? No. 1.5GHz+ single cores first, then dual core.
Paul22000 said:
First the G2, now the Lexicon:
http://phandroid.com/2010/09/20/htc-lexikon-looks-to-be-next-verizon-droid/
Sure the clock speed is lower, but reports are saying that the processor is actually faster. And the battery usage will probably be a lot better too.
I'm a sucker for performance and have always said I'd stick with the N1 until the next CPUs come out. Finally... Has the next era in mobile CPU's finally begun?
Click to expand...
Click to collapse
It's faster 'cause the GPU is a logically separate device. I expect Linpack scores to be somewhat slower, but Quadrant scores to be faster. How's it going?
"Next era"? No. 7x30 isn't a direct successor to 8x50, having the same CPU but different GPU and some other internal differences (for example, LPDDR2 support appears on Github). Just read Qualcomm's own product description:
http://www.qualcomm.com/products_services/chipsets/snapdragon.html
It's called "second generation" because of HSPA+, much better GPU, 45nm process, additional video codecs support, newer GPS, and some other bits and pieces. It's an overall better device. But if you count only the CPU area - it loses to Nexus. Same CPU, clocked lower. 8x55 is equal in CPU power.
If you're looking for the real next generation in power - look for 3rd generation devices, with dual core CPUs.
Jack_R1 said:
"Next era"? No. 7x30 isn't a direct successor to 8x50, having the same CPU but different GPU and some other internal differences (for example, LPDDR2 support appears on Github). Just read Qualcomm's own product description:
http://www.qualcomm.com/products_services/chipsets/snapdragon.html
It's called "second generation" because of HSPA+, much better GPU, 45nm process, additional video codecs support, newer GPS, and some other bits and pieces. It's an overall better device. But if you count only the CPU area - it loses to Nexus. Same CPU, clocked lower. 8x55 is equal in CPU power.
If you're looking for the real next generation in power - look for 3rd generation devices, with dual core CPUs.
Click to expand...
Click to collapse
+1. I concur 100% with what he said.
Keep in mind that pure clock speed does not mean something is faster... the 45nm die shrink also means they increased efficiency in a lot of areas and allowed for more cache on the die...
Think of it this way: I built a dual-core PC back in 2006 that ran at 2.8GHz, but it was on 90nm tech... if I buy a new dual-core today on 45nm tech at the same speed, it would blow the old proc out of the water...
I really doubt dual-core procs in phones will make the huge leap everyone is expecting... I mean, how often do you run 4-5 apps simultaneously that are all very stressful on the CPU? The two most stressful things you probably do on your phone are watching a movie (decoding video is stressful) or playing a game on your PSX emulator... do you ever watch a movie and play a game at the same time? Stupid question, right... basic everyday performance is not going to see the huge improvements everyone expects...
If they want to improve phones they should stick with a single core and a dedicated GPU, or go dual and dedicate one of the cores to graphics processing...
Oh, I forgot to mention: the only way you will see strong software performance improvements from dual core is if Google rewrites virtually the entire Android codebase to make use of multiple cores... so while your phone might be dual core, your OS won't care, since it virtually cannot use it correctly... better pray the manufacturer updates the OS for you, because the N1 is single core and guess who's getting all the updates for the next year or so?
Pure clock speed on exactly the same CPU is directly correlated with CPU speed. Yes, there are some things that impact benchmarks, like memory bandwidth etc., but we're not talking about them - and even if we were, the difference still wouldn't cover the gap. 65nm vs 45nm means NOTHING - it doesn't matter what process the CPU was built on, it matters how it functions. We're talking about EXACTLY THE SAME CPU, can you keep that in mind, please? Thanks. CPU cache almost doesn't matter: L1 is limited anyway, L2 is big enough anyway, and the increases add a bare couple of percent to CPU speed, which is nothing compared to a 20% speed loss from the lower clock.
Thanks for your smart suggestions on "improving phones". I guess you might be one of the VPs at Qualcomm. Or maybe you aren't. I'll skip your even smarter comments about a "dedicated GPU" etc. I guess you probably need to google the word "SoC" first and see what it means.
And you should probably educate yourself about multi-threaded applications, and also remember that the Linux kernel (which Android runs on) is built to support multiple cores, and the Dalvik VM (which runs the apps) might very well be multi-threaded too.
Adding a second core with a load-balancing OS results in a ~35-40% performance increase (it depends on some things). And ironically, when you compare "your old 90nm core" with "newer 45nm cores" and say that the newer cores clocked similarly "would blow the old out of the water", you're actually comparing multi-core vs single-core CPUs (with some internal speed-ups too, but the most significant performance boost comes from the additional cores).
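(As a rough illustration of where a figure like "~35-40% from a second core" can come from, here's a minimal Amdahl's-law sketch. The parallel fractions used below, roughly half the workload, are my own illustrative assumption, not measured Android data.)

```c
/* Minimal Amdahl's-law sketch: speedup = 1 / ((1 - p) + p / n),
 * where p is the parallelizable fraction of the work and n the core count.
 * The values of p below are illustrative assumptions only. */
#include <stdio.h>

static double speedup(double p, int cores)
{
    return 1.0 / ((1.0 - p) + p / cores);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.55 };

    for (int i = 0; i < 2; i++) {
        double s = speedup(fractions[i], 2);
        printf("parallel fraction %.0f%%: a 2nd core gives ~%.0f%% more performance\n",
               fractions[i] * 100.0, (s - 1.0) * 100.0);
    }
    return 0;
}
```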
Jack_R1 said:
65nm vs 45nm means NOTHING
Click to expand...
Click to collapse
Correct me if I'm wrong, but won't the 45nm process at least have better efficiency due to smaller gates?
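(Roughly yes, to a first order. A minimal sketch of the usual dynamic-power relation, P ≈ C·V²·f; the capacitance and voltage scaling factors below are made-up illustrative numbers, not measured 65nm vs 45nm data, and real savings also depend heavily on leakage.)

```c
/* First-order dynamic power: P is proportional to C * V^2 * f.
 * A die shrink typically lets you reduce switched capacitance C and core
 * voltage V at the same frequency f. The scaling factors here are
 * illustrative assumptions, not real 65nm -> 45nm figures. */
#include <stdio.h>

int main(void)
{
    const double c_scale = 0.75;   /* assumed capacitance reduction  */
    const double v_scale = 0.90;   /* assumed core-voltage reduction */
    const double f_scale = 1.00;   /* same clock speed               */

    double relative_power = c_scale * v_scale * v_scale * f_scale;
    printf("relative dynamic power at the same clock: ~%.0f%%\n",
           relative_power * 100.0);
    return 0;
}
```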
I'm probably the only person on this planet that would ever download a 20.5-meg, 2426-page document titled "S5PC110 RISC Microprocessor User's Manual", but if there are other hardware freaks out there interested, here you go:
http://pdadb.net/index.php?m=repository&id=644&c=samsung_s5pc110_microprocessor_user_manual_1.00
As you may or may not know, the S5PC110, better known as Hummingbird, is the SoC (System on a Chip) that is the brain of your Epic. Now, when you have those moments when you really just gotta know the memory buffer size for your H.264 encoder or are dying to pore over a block diagram of your SGX540 GPU architecture, you can!
( Note: It does get a little bit dry at parts. Unless you're an ARM engineer, I suppose. )
Why arent you working on porting CM6 or gingerbread via CM7?? lol
now we can overclock the gpu
/sarcasm
cbusillo said:
Why arent you working on porting CM6 or gingerbread via CM7?? lol
Click to expand...
Click to collapse
Hah, because I know exactly squat about Android development. Hardware is more my thing, though if I find some spare time to play around with the Android SDK maybe that can change.
Sent from my SPH-D700 using XDA App
This actually is really exciting news. RISC architectures in general, and the ARM instruction set especially, are great, and honestly it would do the world a lot of good to kick the chains of x86.
Sent from my Nexus S with a keyboard
Interesting - the complete technical design of the Hummingbird chips.
After reading your blog on how Hummingbird got its extra performance, I still wonder at times - did we make the right choice in getting this phone, the Epic 4G (I bought one for $300 off contract and imported it to Canada), knowing that there are ARM Cortex A9 CPUs coming in just a couple of months? We know that in the real world, Hummingbird is more powerful than Snapdragon and the OMAP 3600 series, even though benchmark scores tend not to reflect real-world performance.
Performance-wise: it's known that the out-of-order A9 parts are at least 30% faster clock-for-clock in real-world performance. There will be dual- and maybe quad-core implementations. What's really up in the air is the graphics performance of the A9 parts. There's now the PowerVR SGX545, the Mali 400, and the Tegra 2.
Edit: There is also the successor, the Mali T-604. I don't expect to see this in a phone in the near future. Nor do I expect the Tegra 3. Maybe close to this time next year though.
sauron0101 said:
Interesting - the complete technical design of the Hummingbird chips.
After reading your blog as to how Hummingbird got its extra performance, I still wonder at times - did we make the right choice in getting this phone the Epic 4G (I bought one for $300 off contract and imported it to Canada) knowing that there are going to be ARM Cortex A9 CPUs coming around in just a couple of months? We know that in the real world, Hummingbird is more powerful than Snapdragon and the OMAP 3600 series, while benchmark scores tend to not reflect real world performance.
Performance-wise: It's know that the out of order A9 parts are at least 30% faster clock for clock in real world performance. There will be dual and maybe quad core implementations. What's really up in the air is the graphics performance of the A9 parts. There's now the Power VR SGX 545, the Mali 400, and the Tegra 2.
Edit: There is also the successor, the Mali T-604. I don't expect to see this in a phone in the near future. Nor do I expect the Tegra 3. Maybe close to this time next year though.
Click to expand...
Click to collapse
You're always going to be playing catch-up. I personally think the Epic has great hardware for the time. I mean, on Samsung's roadmap for 2012/13 is their Aquila processor, which is a quad-core 1.2GHz part. It's going to be endless catch-up; every year there will be something that completely overshadows the rest.
gTen said:
Your always going to be playing catchup..I personally think the Epic has great hardware for the time...I mean on Samsung's roadmap for 2012/13 is their Aquila processor which is a quad-core 1.2ghz..its going to be endless catchup..every year there will be something that completely over shallows the rest..
Click to expand...
Click to collapse
No, but I mean, if you buy the latest technology when it's released, you'll be set for quite some time.
For example, if you were to buy one of the first Tegra 2 phones, it's unlikely that anything is going to beat it significantly until at least 2012, when the quad-core parts begin to emerge.
It takes a year or so from the time a CPU is announced to the time it gets deployed in a handset. For example, the Snapdragon was announced in late 2008 and the first phones (HD2) came about a year later. If you buy an A9 dual-core part early on, you should be set for some time.
Well, I got the Epic knowing Tegra 2 was coming in a few months with next-gen performance. I was badly in need of a new phone and the Epic, while not a Cortex A9, is no slouch.
Sent from my SPH-D700 using XDA App
sauron0101 said:
No, but I mean, if you buy the latest technology when its released, you'll be set for quite some time.
For example, if you were to buy the one of the first Tegra 2 phones, its unlikely that anything is going to be beating that significantly until at least 2012 when the quad core parts begin to emerge.
Click to expand...
Click to collapse
That's relative; in terms of GPU performance our Hummingbird doesn't do so badly. The GPU that TI chose to pair with the dual-core OMAP is effectively a PowerVR SGX540, and the dual-core Snapdragon rumored for next summer is also on par with our GPU performance. So yes, we will lose out to newer hardware, which is to be expected, but I wouldn't consider it a slouch either.
It takes a year or so from the time that a CPU is announced to the time that it gets deployed in a handset. For example, the Snapdragon was announced in late 2008 and the first phones (HD2) were about a year later. IF you buy an A9 dual core part early on, you should be set for some time.
Click to expand...
Click to collapse
The first phone was the TG01. That said, I guarantee you that within a year, if not less, of the first Tegra release there will be a better processor out... it's bound to happen.
Edit: Some benchmarks for Tablets:
http://www.anandtech.com/show/4067/nvidia-tegra-2-graphics-performance-update
Though I am not sure if it's using both cores or not... also, I think Tegra 2 buffers at 16-bit while Hummingbird buffers at 24-bit.
gTen said:
Thats relative, in terms of GPU performance our Hummingbird doesn't do so badly..the GPU the TI chose to pair with the dual core OMAP is effectively a PowerVR SGX540..the Snapdragon that is rumored to be in the dual cores next summer is also on par with our GPU performance...so yes we will loose out to newer hardware..which is to be expected but I wouldn't consider it a slouch either...
The first phone was a TG01, that said I guarantee you that a year if not less from the first Tegra release there will be a better processor out...its bound to happen..
Edit: Some benchmarks for Tablets:
http://www.anandtech.com/show/4067/nvidia-tegra-2-graphics-performance-update
Though I am not sure if its using both cores or not...also Tegra 2 I think buffers at 16bit..while Hummingbird buffers at 24bit..
Click to expand...
Click to collapse
AFAIK, dual cores are only fully supported in Honeycomb. But if you feel like buying into NVIDIA's explanation of Tegra 2 performance, check this out: http://www.nvidia.com/content/PDF/t...-Multi-core-CPUs-in-Mobile-Devices_Ver1.2.pdf
Electrofreak said:
AFAIK, dual-core support is only fully supported by Honeycomb. But if you feel like buying into NVIDIA's explanation of Tegra 2 performance, check this out: http://www.nvidia.com/content/PDF/t...-Multi-core-CPUs-in-Mobile-Devices_Ver1.2.pdf
Click to expand...
Click to collapse
I see. I actually read before that Gingerbread would allow for dual-core support, but I guess that was delayed to Honeycomb...
Either way, this would mean that even if a Tegra-based phone comes out, it won't be able to utilize both cores until at least the middle of next year.
I can't open PDFs right now, but I read a whitepaper comparing the performance of Hummingbird and Tegra 2, both single-core and dual-core... is that the same one?
One thing though: NVIDIA and ATI are quite well known for tweaking their graphics cards to perform well on benchmarks... I hope it's not the same with their CPUs :/
gTen said:
Edit: Some benchmarks for Tablets:
http://www.anandtech.com/show/4067/nvidia-tegra-2-graphics-performance-update
Though I am not sure if its using both cores or not...also Tegra 2 I think buffers at 16bit..while Hummingbird buffers at 24bit..
Click to expand...
Click to collapse
Here are some additional benchmarks comparing the Galaxy Tab to the Viewsonic G Tablet:
http://www.anandtech.com/show/4062/samsung-galaxy-tab-the-anandtech-review/5
It's possible that the Tegra 2 isn't optimized yet. Not to mention, Honeycomb will be the release that makes the most of dual cores. However, the graphics performance gains are lackluster - most of the gains seem to be purely on the CPU side.
I'm not entirely sure that Neocore is representative of real-world performance either. It's possible that it may have been optimized for some platforms. Furthermore, I would not be surprised if Neocore gave inflated scores for the Snapdragon and its Adreno graphics platform. Of course, neither is Quadrant.
I think that real-world games like the Quake III-based titles are the way to go, although until we see more graphics-demanding games, I suppose there's little to test (we're expecting more games for Android next year).
Finally, we've gotten to the point for web browsing where it's the data connection (HSPA+, LTE, or WiMAX) that dictates how fast pages load. It's like upgrading the CPU on a PC. I currently run an overclocked Q6600; if I were to upgrade to, say, a Sandy Bridge when it comes out next year, I wouldn't expect significant improvements in real-world browsing performance.
Eventually, the smartphone market will face the same problem the PC market does. Apart from us enthusiasts who enjoy benchmarking and overclocking, apart from high-end gaming, and perhaps some specialized operations (like video encoding, which I do a bit of), you really don't need the latest and greatest CPU or 6+ GB of RAM (which many new desktops come with). Same with high-end GPUs. Storage follows the same dilemma. I imagine that as storage grows, I'll be storing FLAC music files instead of AAC, MP3, or OGG, and more video. I will also use my cell phone to replace my USB key drive. Otherwise, there's no need for bigger storage.
gTen said:
I see I actually read before that Gingerbread would allow for dual core support but I guess that was delayed to honeycomb...
either way this would mean even if a Tegra based phone comes out it wont be able to utilize both cored until at least mid next year.
I can't open pdfs right now but I read a whitepaper with performance of hummingbird and Tegra 2 compared both on single core and dual core..is that the same one?
One thing though is Nvidia and ATI are quite known for tweaking their gfx cards to perform well on benchmarks...I hope its not the same with their CPUs :/
Click to expand...
Click to collapse
Gingerbread doesn't have any dual-core optimizations. It has some JIT improvements in addition to some other minor enhancements, but according to rumor, Honeycomb is where it's at, and it's why the major tablet manufacturers are holding off releasing their Tegra 2 tablets until it's released.
And yeah, that paper shows the performance of several different Cortex A8s (including Hummingbird) compared to Tegra 2, and then goes on to compare Tegra 2 single-core performance vs dual.
Electrofreak said:
Gingerbread doesn't have any dual-core optimizations. It has some JIT improvements in addition to some other minor enhancements, but according to rumor, Honeycomb is where it's at, and it's why the major tablet manufacturers are holding off releasing their Tegra 2 tablets until it's released.
And yeah, that paper shows the performance of several different Cortex A8s (including Hummingbird) compared to Tegra 2, and then goes on to compare Tegra 2 single-core performance vs dual.
Click to expand...
Click to collapse
I looked at:
http://androidandme.com/2010/11/new...u-will-want-to-buy-a-dual-core-mobile-device/
Since I can't access the PDF: does the whitepaper state what Android version they used for their tests? For example, if they used 2.1 on the SGS and Honeycomb in their own tests, it wouldn't exactly be a fair comparison... do they also give the actual FPS rather than just percentages? We are FPS-capped, for example...
Lastly, does the test say whether the Tegra 2 was dithering at 16-bit or 24-bit?
gTen said:
I looked at:
http://androidandme.com/2010/11/new...u-will-want-to-buy-a-dual-core-mobile-device/
Click to expand...
Click to collapse
I'm one of Taylor's (unofficial) tech consultants, and I spoke with him regarding that article. Though, credit where it's due to Taylor, he's been digging stuff up recently that I don't have a clue about. We've talked about Honeycomb and dual-core tablets, and since Honeycomb will be the first release of Android to support tablets officially, and since Motorola seems to be holding back the release of its Tegra 2 tablet until Honeycomb (quickly checks AndroidAndMe to make sure I haven't said anything Taylor hasn't already said), and rumors say that Honeycomb will have dual-core support, it all makes sense.
But yes, the whitepaper is the one he used to base that article on.
gTen said:
since I can't access the pdf..does the whitepaper state what version they used to do their tests? for example if they used 2.1 on the sgs and honeycomb on their tests it wouldn't exactly be a fair comparison...do they also put in the actual FPS..not % wise? for example we are capped on the FPS for example...
Lastly, in the test does it say whether the Tegra 2 was dithering at 16bit or 24bit?
Click to expand...
Click to collapse
Android 2.2 was used in all of their tests according to the footnotes in the document. While I believe that Android 2.2 is capable of using both cores simultaneously, I don't believe it is capable of threading them separately. But that's just my theory. I'm just going off of what the Gingerbread documentation from Google says; and unfortunately there is no mention of improved multi-core processor support in Gingerbread.
http://developer.android.com/sdk/android-2.3-highlights.html
As for FPS and the dithering... they don't really go there; the whitepaper is clearly focused on CPU performance, and so it features benchmark scores and timed results. I take it all with a pinch of salt anyhow; despite the graphs and such, it's still basically an NVIDIA advertisement.
That said, Taylor has been to one of their expos or whatever you call it, and he's convinced that the Tegra 2 GPU will perform several times better than the SGX540 in the Galaxy S phones. I'm not so sure I'm convinced... I've seen comparable performance benchmarks come from the LG Tegra 2 phone, but Taylor claims that was an early build and that he's seen even better performance since. Time will tell, I suppose...
EDIT - As for not being able to access the .pdfs, what are you talking about?! XDA app / browser and Adobe Reader!
Do all the 2.2 ROMs for the Vibrant have JIT optimization enabled?
I heard that JIT would make our Vibrant faster...
Seems not; JIT in 2.2 is only optimized for some CPUs. The CPU the Vibrant uses is optimized for in Android 2.3, same as the Nexus S.
2.2 has JIT. JIT typically improves performance on all phones running it, but JIT's improvements depend on what app you are running.
There are pretty much two reasons JIT's effect isn't apparent on our phones.
1) How Samsung built the CPU in our phones.
While it is ARM A8-based, there are other things companies add to it to make it more targeted. Samsung designed part of this differently than Qualcomm did.
2) Google's coding.
Google implemented JIT with a "specific" generalization (i.e., they kind of based it around Qualcomm CPUs). Since the Vibrant has a different implementation of a specific feature that benefits from JIT, the JIT optimization was rendered "unoptimized", and therefore we get a "slower" JIT effect.
In real-world practice, JIT doesn't help our phones that much; the gains are more synthetic.
geoffcorey said:
the reason jit isn't apparent on our phones is due to pretty much 2 reasons.
1) is due to how samsung built the CPU in our phones.
While it is arm a8 based, there are other things companies add to it to make it more targeted. Samsung designed part of this different than qalcom did.
2) Google's coding
Google implemented jit with a "spcific" generalization for it (i.e, they kinda based it around qalcom cpu's). Since the vibrant had a different implementation of a specific feature that benifited from jit, the jit optimization was rendered "unoptimised" and therefore we have a "slower" jit effect.
In real world practice, jit doesn't help our phones that much, it's more synthetic.
Click to expand...
Click to collapse
I'm curious where you are getting this info... because I can't find anything from Google saying they optimized JIT for the Snapdragon.
Snapdragon and Hummingbird use the same ISA, so while the hardware implementation is different, that shouldn't realistically have much impact on performance.
But that isn't as relevant as the fact that JIT reduces the need to interpret (and reinterpret) code, which is where a lot of JIT's performance benefits come from.
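(To make the "interpret vs JIT" point concrete, here's a toy C sketch, purely illustrative and nothing like Dalvik's actual implementation, of why dispatching every operation through an interpreter loop costs more than running the same work natively. Removing that per-operation dispatch is essentially what JIT compilation buys you.)

```c
/* Toy illustration of interpretation overhead (the thing JIT removes).
 * A tiny "bytecode" loop is dispatched through a switch on every step,
 * while the native version is just a plain C loop the compiler can
 * optimize. This is only a sketch of the principle, not Dalvik's design. */
#include <stdio.h>

enum { OP_ADD, OP_LOOP, OP_END };

static long run_interpreted(long iterations)
{
    unsigned char program[] = { OP_ADD, OP_LOOP, OP_END };
    long acc = 0, i = 0, pc = 0;
    while (1) {
        switch (program[pc]) {          /* dispatch cost paid every step */
        case OP_ADD:  acc += 1; pc++; break;
        case OP_LOOP: if (++i < iterations) pc = 0; else pc++; break;
        case OP_END:  return acc;
        }
    }
}

static long run_native(long iterations)
{
    long acc = 0;
    for (long i = 0; i < iterations; i++)  /* no dispatch, just the work */
        acc += 1;
    return acc;
}

int main(void)
{
    printf("interpreted: %ld, native: %ld\n",
           run_interpreted(1000000), run_native(1000000));
    return 0;
}
```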
funeralthirst said:
i'm curious where you are getting this info... cause i can't find anything from google saying they optimized jit for the snapdragon.
snapdragon and hummingbird use the same isa, so while the hw implementation is different that shouldn't realistically have much impact on performance.
but that isn't as relevant as the fact that jit is reducing the necessity to interpret (and reinterpret) code which is where a lot of the performance benefits of jit are.
Click to expand...
Click to collapse
IIRC, Hummingbird and Snapdragon (well, Qualcomm chips for the most part) implement SIMD with different bit widths.
Qualcomm uses 128-bit while Hummingbird uses 64-bit. Since Google worked with a lot of Qualcomm chips, I'm assuming the 2.2 JIT was optimized more for the wider SIMD than for what Samsung used. With 2.3, Google optimized for both the larger and smaller SIMD widths.
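(To put the SIMD-width point in concrete terms: both CPUs execute the same ARMv7/NEON instruction stream; the difference is how wide the hardware datapath behind it is, i.e. how many cycles one 128-bit quad-float operation takes to retire. A minimal NEON sketch follows; it is ARM-only, built with something like -mfpu=neon, and is only an illustration of a single 128-bit operation.)

```c
/* Tiny NEON example. The same ARMv7/NEON instructions run on both
 * Snapdragon and Hummingbird; what differs is how wide the SIMD hardware
 * behind them is, i.e. how many cycles one 128-bit (4 x float) op takes.
 * Build with an ARM toolchain with NEON enabled, e.g. -mfpu=neon. */
#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float r[4];

    float32x4_t va = vld1q_f32(a);       /* load four floats (128 bits)    */
    float32x4_t vb = vld1q_f32(b);
    float32x4_t vr = vaddq_f32(va, vb);  /* one quad-float add instruction */
    vst1q_f32(r, vr);

    printf("%.1f %.1f %.1f %.1f\n", r[0], r[1], r[2], r[3]);
    return 0;
}
```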
geoffcorey said:
iirc humming bird and snapdragon (well qalcom chips for the most part) implement SIMD is different bit sizes.
qalcom uses 128bit while hummingbird uses 64bit. Since google worked with a lot of qalcom chips, i'm assuming the 2.2 jit was optimized more for the higher bit length SIMD than what samsung used. With 2.3, google optimized for larger and smaller SIMD bit lengths.
Click to expand...
Click to collapse
Yeah, I remember seeing that somewhere about the SIMD bit length. So now, is that something that can be optimized for, or is it simply a hardware limitation/difference?
Yes, Google optimized JIT to work with both the Samsung SIMD and the Qualcomm SIMD in 2.3. But in 2.2, the Samsung SIMD optimization isn't there, and there really isn't a way to work around it.
Sent from my SGH-T959 using XDA App
geoffcorey said:
Yes, Google optimized hit to work with both the Samsung simd and the Qualcomm simd in 2.3. But it 2.2, the Samsung simd optimization isn't there and there really isn't a way to work around it.
Sent from my SGH-T959 using XDA App
Click to expand...
Click to collapse
Seems like it would manifest somewhere, but so far running CM7 I'm getting maybe slightly better performance than on 2.2 ROMs. Same Linpack and Quadrant scores. Granted, CM7 is far from perfect; this is more what I'd expect just from progress on the OS than from an optimization. And what about Moto? Are the OMAPs just screwed until Google decides that Moto is their next Nexus manufacturer?
funeralthirst said:
seems like it would manifest somewhere, but so far running cm7 i'm getting maybe a little better performance than 2.2 roms. same linpack and quadrant. granted cm7 is far from perfect. more what i'd expect just from progress on the os than an optimization. and what about moto? are the omaps just screwed until google decides that moto is their next nexus manufacturer?
Click to expand...
Click to collapse
I think the OMAPs have a wider SIMD width than what Samsung chose, so they saw improvements from the 2.2 JIT.
I think the reason CM7 hasn't shown much JIT optimization is that it still has a lot of debugging enabled at the kernel level.
funeralthirst said:
seems like it would manifest somewhere, but so far running cm7 i'm getting maybe a little better performance than 2.2 roms. same linpack and quadrant. granted cm7 is far from perfect. more what i'd expect just from progress on the os than an optimization. and what about moto? are the omaps just screwed until google decides that moto is their next nexus manufacturer?
Click to expand...
Click to collapse
CM7 Quadrant CPU score: 5800
2.2 ROMs Quadrant CPU score: 1500
But I think it just skips the H.264 decoding test to achieve the high score (like the HTC devices with Qualcomm CPUs).
plane501 said:
CM7 Quadrant CPU score:5800
2.2 Roms Quadrant CPU score:1500
But i think it just skip the H.264 decoding test to achieve the high score(like the HTC devices which have qualcomm cpu)
Click to expand...
Click to collapse
Really? My Quadrant is about 1500 on CM7, with an average of about 14 MFLOPS in Linpack. And MFLOPS should relate directly to the SIMD/NEON implementation.
plane501 said:
CM7 Quadrant CPU score:5800
2.2 Roms Quadrant CPU score:1500
But i think it just skip the H.264 decoding test to achieve the high score(like the HTC devices which have qualcomm cpu)
Click to expand...
Click to collapse
The overall number is not important; it's the breakdown that matters (i.e., what's the number for the I/O portion of that score?).
Disclaimer:
I make no assertion of fact on any statement I make except where repeated from one of the official linked to documents. If it's in this thread and you can't find it in an official document, feel free to post your corrections complete with relevant link and the OP can be updated to reflect the most correct information. By no means am I the subject matter expert. I am simply a device nerd that loves to read and absorb information on such things and share them with you. The objective of this thread is to inform, not berate, dis-credit, or otherwise talk trash about someone else's choice. Take that to a PM or another thread please.
There is a LOT of misconception in the community over which hardware is the more capable kit. They are not the same, so comparing them can be difficult at best. The TI white paper speaks to the many aspects of attempting such a comparison; it is no small undertaking. Therefore I ask you to trust their data before my opinion. However, I felt it necessary to have something resembling a one-stop thread to go to when you are wondering how the hardware differs between the two devices.
One thing you won't see me doing is using pointless synthetic benchmarks to justify a purchase or position. I use my device heavily, but not for games: web browsing, multiple email accounts, etc., and I use LTE heavily. A device can get a bajillion points in <insert your choice of synthetic benchmark> but that doesn't make me any more productive. Those into gaming will probably be more comfortable with the EU Exynos Quad, I'll just say that up front... it has a stronger GPU, but this isn't about that. It would be nice to keep this thread about the technology, not synthetic benchmark scores.
Dictionary of Terms (within thread scope):
SGSIII: Samsung Galaxy S 3 smartphone, variant notwithstanding
Samsung: manufacturer, proprietor of the Galaxy S III smartphone. Also responsible for designing the Exynos CPU used in the International variant of the SGSIII.
ARM: processor intellectual-property company; they essentially own the IP rights to the ARM architecture. The ARMv7 architecture is what many processors are based upon at the root, including the Exynos by Samsung and the Krait S4 by Qualcomm as used in the SGSIII, as well as many others. It's the basic foundation, with the A9 and A15 feature sets being "options" that Samsung and Qualcomm add on.
Qualcomm: like Samsung, a manufacturer of processors; their contribution here is the S4 Krait CPU used in the US/Canadian-market SGSIII.
CPU: processor, central processing unit; the number-crunching heart of your phone. We are interested in two here: Samsung's Exynos and Qualcomm's Krait.
As most everyone knows by now, the EU and US variants of the SGSIII come with two different CPUs. The EU has the Samsung Exynos, the US the Qualcomm S4 Krait. One major reason, if not the only reason I am aware of, is the inability of the Exynos to be compatible with LTE radio hardware. Qualcomm's S4 Krait, however, has the radio built into the package. It's an all-in-one design, whereas the Exynos is a discrete CPU and has to depend on secondary hardware for network connectivity. Obviously there are power implications any time you add additional hardware, because of redundancy and the usual losses.
However, the scope of this thread is to point out some differences between the two very different CPUs so that you, the consumer, can make an educated decision based on more than a popularity contest or the "moar corez is bettar!" stance.
Anyone who is into computers knows fairly well that "core counting" as a measure of performance is risky at best. Just as with the megahertz wars of the 1990s... hopefully by now you all know that not every 2GHz CPU is the same, and not every CPU core is the same. You cannot expect an Intel 2GHz CPU to perform the same as an AMD 2GHz CPU. It's all about architecture.
Architecture for the purpose of this thread is limited to the ARMv7 architecture, and more specifically the A9 and A15 subsets of it. Each architecture supports certain features and instruction sets. Additionally, the internal physical parts of the core vary from one architecture to the next.
A9 is older technology in general, while A15 is much newer. Exynos is A9-based; Krait S4 is A15-based. Let's look at the differences.
When looking at the two, one must understand that some of the documentation available compares what was available at the time it was written. In most cases the A9 info is based on the 40nm manufacturing process. Samsung's Exynos is built using the newer 32nm HKMG manufacturing process. The Qualcomm S4 Krait is built on a different, smaller TSMC 28nm manufacturing process. Generally speaking, the smaller the process, the less heat and power loss. There is also power leakage, etc... not going to get into it because, frankly, I haven't read enough to speak to it much. But don't take my word for it.
There is a lot of information out there, but here are a few links to good sources.
Exynos 32nm Process Info
Qualcomm S4 Krait Architecture Explained
Ti A15 White Papers
ARM Cortex A9 Info
ARM Cortex A15 Info
Samsung Exynos 4412 Whitesheet
Exploring the Design of the A15 Processor
I could link you to all sorts of web benchmarks and such, but to be honest, none of them are really complete, and I have not yet found one that can give an unbiased, apples-to-apples comparison. As mentioned previously, most of them compare S4 Krait development hardware to the older 40nm Samsung Exynos hardware... which really doesn't represent what is in the SGSIII smartphones.
Now, a few takeaways that stood out to me from my own research. If you are unable to read someone's opinion without getting upset, please don't read on from here.
The Exynos EU variant that does not support LTE is, on paper, going to use more power and create more heat, simply because it needs to rely on additional hardware for its various functions, whereas the S4 Krait has the radio built in. This remains to be seen, but battery life would be the biggest implication here, although Samsung's latest 32nm HKMG process certainly goes a long way towards leveling the playing field.
The Exynos variant is built on older A9 core technology and, comparing feature sets, does not support things such as virtualization. Do you need VT on your phone? Only if the devs create an application for it, but I believe the ability to dual-boot different OSes is much easier to achieve with VT available.
In contrast, the S4 Krait core does support this feature. I would like to look at dual-booting Windows Phone 8 and Android, and I hope having the hardware support and additional RAM (the EU version has 1GB of RAM, the US has 2GB) will help in this area. Actual VT implementation may be limited in usefulness; that remains to be seen.
The S4 Krait/Adreno 225 package supports DirectX 9.3, a requirement for Windows RT/Windows Phone 8 (not sure if it's required for the Phone version). In contrast, the Exynos Quad/Mali 400 does not support DirectX 9.3 and may or may not be able to run Windows RT/Windows Phone 8 as a result. From what I understand, Windows Phone 8 may be an option.
Code compiled for the A9-derived Exynos has been around for quite some time, as opposed to A15-feature code. I really don't know much about this myself, but I would expect that the A15-based solution is going to have much longer legs under it, since it supports everything the Exynos Quad does plus some. My expectation is that with time the code will be better optimized for the newer A15 architecture, whereas the Exynos A9 code is likely much more mature already. It could be that we see a shift in performance as the code catches up to the hardware capabilities. Our wonderful devs will be a huge influence on this and where it goes. I know I would want to develop to take advantage of the A15 feature sets, because they are in addition to the A9 feature sets that both support.
My hope is that anyone who is trying to make a good purchasing decision is doing so with some intent. Going with an EU SGSIII when you want to take advantage of LTE data is going to cause you heartache. It cannot and will not work on your LTE network. Likewise, if you live somewhere where LTE doesn't exist, or you simply don't care to have that ability, buying the US SGSIII may not be the best choice all things considered. So in some cases the CPU might not be the gating item that causes you to choose one way or the other.
Today's smartphones are powerful devices. In today's wireless world our hardware choice is often a two-year commitment, no small thing to some. If you have specific requirements for your handset, you should know you have options. But you should also be able to make an educated decision. The choice is yours; do with it what you will.
SlimJ87D said:
Click to expand...
Click to collapse
One thing you won't see me doing is using pointless synthetic benchmarks to justify a purchase or position. I use my device heavily but not for games. Web browsing, multiple email, etc......I use LTE heavily. A device can get a bajillion points in <insert your choice of synthetic benchmark> but that doesn't make me any more productive. Those into gaming will probably be more comfortable with the EU Exynos Quad, I'll just say that up front....it has a stronger GPU, but this isn't about that. It would be nice to keep this thread about the technology, not synthetic benchmark scores.
Click to expand...
Click to collapse
This is not a benchmark comparison thread, as plainly stated in the OP. Please create a synthetic benchmark thread for synthetic benchmark comparisons. Please read the OP before commenting. I was really hoping you were going to offer more technical information to contribute, as you seem to be up to date on things. I expected more than a cut-and-paste "me too" synthetic benchmark from you... congrats, you can now run AnTuTu faster...
Thanks for the info, but Qualcomm's architecture doesn't quite follow ARM's blueprint/guidelines. They did a huge modification on their first AP (Snapdragon 1) to push over 1GHz, and it caused power-efficiency, application-compatibility, and heat issues compared to Sammy's legit 1GHz Hummingbird. And for some reason Qualcomm decided to keep improving their malfunctioning architecture (Scorpion) instead of throwing it away, so its inferiority continues through all Scorpion chips regardless of generation. Their only selling point and benefit was a one-less-chip solution, and nowadays the LTE baseband chip.
Personally, I don't think the S4 is based on the A15 architecture, and it is slower than the international Note's Exynos in many comparison benchmarks/reviews.
The Exynos in the GS3 is made on a 32nm node, which is better than the 45nm one in the Note. I don't think Sammy has figured out the ideal scheduler for the Android system and applications to use those four cores efficiently yet, but it will show a significant performance boost over the coming updates, as it did in the GS2's case.
Sent from my SAMSUNG-SGH-I717 using xda premium
Radukk said:
Thanks for info but Qualcomm's architecture is not quite following ARM's blueprint/guideline. They did huge modification on their first AP(Snapdragon 1) to push over 1GHz and it causes low power efficiency/application compatibility/heat issue compared to Sammy's legit 1Ghz Hummingbird. And for some reason Qualcomm decide to improving their mal-functioning architecture(scorpion) instead of throwing it away and their inferiority continues through all scorpion chips regardless of generation. Their only sales point and benefit was one less chip solution, and LTE band chip nowadays.
Personally I don't think S4 based on A15 architecture and it is slower than International note's exynos in many comparing benchmarks/reviews.
Exynos in GS3 is made on 32nm node, which is better than 45nm one in note. I don't think Sammy figured out yet the ideal scheduler that android system and applications to use those four core efficiently, but it will show significant performance boost over coming updates as shown on GS2 case.
Sent from my SAMSUNG-SGH-I717 using xda premium
Click to expand...
Click to collapse
The fact that both CPUs are modified versions of their ARM-derived designs is captured in the OP, as is the fact that most if not all comparisons reference the 40nm Exynos rather than the newer 32nm process, which is also mentioned in the OP.
Thanks
Why would the Windows environment even matter at this point?
Isn't MS setting the hardware specs for the ARM versions of the devices?
As for LTE compatibility, it's supposedly getting released in the Korean market with LTE and 2GB of RAM, and this was the speculation from the beginning.
Specific discussion of the processors is different from general discussion on comparison.
Thread cleaned. Please keep to this topic.
jamesnmandy said:
When looking at the two, one must understand that some of the documentation available is comparing what was available at the time they were written. In most cases A9 info is based on the 40nm manufacturing process. Samsung's Exynos is built using newer 32nm HKMG manufacturing processes. Qualcomm S4 Krait is built on a newer smaller 28nm manufacturing process. Generally speaking, the smaller the process, the less heat and power a cpu will generate because of the much denser transistor count. There is also power leakage, etc......not going to get into it because frankly, I haven't read enough to speak to it much.
Software written for the A9 derived Exynos has been around for quite some time as opposed to A15 feature code. I really don't know much about this myself, but I would expect that the A15 based solution is going to have much longer legs under it since it supports everything the Exynos Quad does plus some. My expectation is that with time the code will be optimized for the newer A15 architecture better where the Exynos A9 is likely much more mature already. It could be we see a shift in performance as the code catches up to the hardware capabilities. Our wonderful devs will be a huge influence on this and where it goes. I know I would want to develop to take advantage of the A15 feature sets because they are in-addition-to the A9 feature sets that they both support.
Click to expand...
Click to collapse
First of all, Samsung's 32nm HKMG process is superior and more power efficient than TSMC's 28nm that's being used in Krait right now, even if the feature size is slightly bigger. HKMG overall is a big step forward in transistor leakage, and on top of that Samsung integrated a lot of low-level electrical engineering tricks, such as load biasing, to lower power usage. The quad-core with last-generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size, as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient in W/MHz, so this just proves how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for the A15 is absolute hogwash. You don't write software for the A15; the compiler translates it into instruction code for which the A15 is equal to an A9, i.e. ARMv7-A. The only difference is the SIMD length, which is doubled/quadrupled, and again that's something you don't really program for in most cases. The A15 is mostly IPC and efficiency improvements, not a new instruction set that would warrant different software, else nothing would be compatible.
DX9.3 compliance is senseless in that regard too; by the time we need any of that, the next generation of SoCs will be out, so I don't know why we're even bringing it up, as nobody's going to hack Windows 8 onto their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
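(A concrete way to see the "you don't write software for A15" point: the same C source targets ARMv7-A regardless of which core it ends up on, and only the compiler's tuning hints differ. The cross-compiler name and flags below are standard GCC options shown purely for illustration; the exact toolchain is up to you.)

```c
/* The same ARMv7-A source serves a Cortex-A9-class Exynos and a
 * Krait/A15-class part alike. Illustrative builds:
 *   arm-linux-gnueabi-gcc -O2 -march=armv7-a -mtune=cortex-a9  demo.c -o demo_a9
 *   arm-linux-gnueabi-gcc -O2 -march=armv7-a -mtune=cortex-a15 demo.c -o demo_a15
 * Both emit ARMv7-A code; -mtune only changes instruction scheduling. */
#include <stdio.h>

int main(void)
{
    long sum = 0;
    for (long i = 0; i < 1000000; i++)
        sum += i;                  /* nothing core-specific in here */
    printf("sum = %ld\n", sum);
    return 0;
}
```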
AndreiLux said:
First of all, Samsung's 32nm HKMG process is superior and more power efficient than TSMC's 28nm that's being used in Krait right now, even if the feature size is slightly bigger. HKMG overall is absolutely big step in transistor leakage, and on top of that Samsung integrated a lot of low level electrical engineering tricks to lower the power usage as load biasing. The quadcore with last generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient in W/MHz, so it just proves how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for A15 is absolutely hogwash. You don't write software for A15, the compiler translates it into instruction code which the A15 is equal to an A9, i.e. ARMv7-A. The only difference is the SIMD length which is being doubled/quadrupled, again something you don't really program for in most cases. A15 is mostly IPC improvements and efficiency improvements, not a new instruction set to warrant difference in software, else nothing would be compatible.
DX9.3 compliance is senseless in that regard too as until we'll need any of that the new generation of SoCs will be out so I don't know why we're even bringing this up as nobody's going to hack Windows 8 into their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
Click to expand...
Click to collapse
So I am happy to make corrections when unbiased data is presented. I will look into some of your claims myself and update accordingly, but as mentioned in the OP, if you would like to cite specific sources for anything, please include links. Thank you for your input. The entire point of the thread is to document the differences, because a lot of people seem to be looking at the choice as simply four cores versus two, and in similar fashion they gravitate to the bigger number without understanding what they are buying into. Some of your statements claim "hogwash"; as mentioned, I am learning myself and hope to rid the post of any hogwash as soon as possible. I for one will be trying to get Windows Phone 8 to boot on it if possible; I tried to clarify in the OP that I meant Windows Phone 8, while Windows 8 RT certainly looks to be a stretch. Thanks
Sent from my DROIDX using xda premium
AndreiLux said:
First of all, Samsung's 32nm HKMG process is superior to, and more power efficient than, TSMC's 28nm process being used in Krait right now, even if the feature size is slightly bigger. HKMG is a big step forward in reducing transistor leakage, and on top of that Samsung integrated a lot of low-level electrical engineering tricks, such as load biasing, to lower power usage. The quad-core with last-generation architecture is toe-to-toe with the dual-core Kraits in maximum power dissipation if you normalize for battery size, as per Swedroid's tests: http://www.swedroid.se/samsung-galaxy-s-iii-recension/#batteri
And the Kraits and A15 are supposed to be much more power efficient per MHz, so it just shows how much of a manufacturing advantage Samsung has here.
Secondly, that paragraph about software written for the A15 is absolute hogwash. You don't write software for the A15; the compiler translates it into instruction code, and the A15 implements the same instruction set as the A9, i.e. ARMv7-A. The only difference is the SIMD width, which is doubled/quadrupled, and again that is something you don't really program for in most cases. The A15 is mostly IPC and efficiency improvements, not a new instruction set that would warrant different software, or else nothing would be compatible.
DX9.3 compliance is moot in that regard too: by the time we need any of that, the next generation of SoCs will be out, so I don't know why we're even bringing it up, as nobody is going to hack Windows 8 onto their Galaxy S3.
I really don't see the point of this thread as half of it is misleading and the other half has no objective.
Click to expand...
Click to collapse
Which test in that review shows the 32nm 4412 being more efficient than the 28nm 8260 in the One S?
VT has nothing to do with dual-booting. That only requires a bootloader and, of course, operating systems that support sharing their space.
It might, with a lot of work, allow Android to run under WP8, or WP8 to run under Android, in a virtual machine.
The most interesting feature you could achieve with VT is having two copies of the same operating system running, each with its own data, cache and storage partitions. This would allow corporate BYOD to remain more-or-less secure and let corporate policies be enforced on Exchange clients without the restrictions affecting the user's private part of the phone.
However, even after years of development, 3D performance under virtualization on x86 (desktop) platforms is mediocre at best, with Microsoft Hyper-V being, imho, the current winner.
Additionally, you claim that WP8 will work on the Qualcomm chip.
This is simply not true: it MIGHT work if a build for that exact SoC is produced (somewhat unlikely).
Since WP8 is closed source and the hardware support consists of proprietary binaries too, it won't be portable.
This is because ARM, unlike x86, is not a platform with extensible capabilities and plug-and-play hardware discovery, but rather an embedded ecosystem where the software has to be developed for each device individually.
nativestranger said:
Which test in that review shows the 32nm 4412 being more efficient than the 28nm 8260 in the One S?
Click to expand...
Click to collapse
You have the full-load test and the temperature in the link that I posted. Normalize them for battery size, for example to the HTC One S (or the Asus Padfone for that matter, they are similar in their results) at 3.7V * 1650mAh ≈ 6.1Wh, versus the S3 at 3.8V * 2100mAh ≈ 7.98Wh, which is roughly a 31% increase in capacity. Normalize the S3's 196 minutes by that and you get about 150 minutes. Take into account that the S3's screen is bigger and higher resolution, and the result is skewed even further in the S3's favour. So basically a last-generation quad-core at full load on all four cores is arguably toe-to-toe in maximum power dissipation with a next-generation dual-core. The latter should have been the winner here by a large margin, but it is not. We know it's not due to architectural reasons, so the only thing left is manufacturing. HKMG brings enormous benefits in terms of leakage, and here you can see them.
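To make the normalization above easy to reproduce, here is a minimal Python sketch. The battery ratings (3.7 V / 1650 mAh for the One S, 3.8 V / 2100 mAh for the S III) and the 196-minute full-load runtime are the figures quoted in the post above, so treat them as that post's assumptions rather than independently verified numbers.

# Minimal sketch: normalize full-load runtime by battery energy (Wh).
def watt_hours(volts, mah):
    return volts * mah / 1000.0

one_s_wh = watt_hours(3.7, 1650)   # ~6.1 Wh (HTC One S rated capacity, assumed)
s3_wh = watt_hours(3.8, 2100)      # ~7.98 Wh (Galaxy S III rated capacity, assumed)

capacity_ratio = s3_wh / one_s_wh            # ~1.31, i.e. ~31% more stored energy
normalized_runtime = 196 / capacity_ratio    # ~150 min at equal battery capacity

print(f"{one_s_wh:.2f} Wh vs {s3_wh:.2f} Wh (ratio {capacity_ratio:.2f})")
print(f"S III full-load runtime, capacity-normalized: {normalized_runtime:.0f} min")

Running it gives a capacity-normalized runtime of roughly 150 minutes, in line with the figure argued above.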
d4fseeker said:
VT has nothing to do with dual-booting. That only requires a bootloader and, of course, operating systems that support sharing their space.
It might, with a lot of work, allow Android to run under WP8, or WP8 to run under Android, in a virtual machine.
The most interesting feature you could achieve with VT is having two copies of the same operating system running, each with its own data, cache and storage partitions. This would allow corporate BYOD to remain more-or-less secure and let corporate policies be enforced on Exchange clients without the restrictions affecting the user's private part of the phone.
However, even after years of development, 3D performance under virtualization on x86 (desktop) platforms is mediocre at best, with Microsoft Hyper-V being, imho, the current winner.
Additionally, you claim that WP8 will work on the Qualcomm chip.
This is simply not true: it MIGHT work if a build for that exact SoC is produced (somewhat unlikely).
Since WP8 is closed source and the hardware support consists of proprietary binaries too, it won't be portable.
This is because ARM, unlike x86, is not a platform with extensible capabilities and plug-and-play hardware discovery, but rather an embedded ecosystem where the software has to be developed for each device individually.
Click to expand...
Click to collapse
Changed text to read
From what I understand Windows Phone 8 may be an option.
Click to expand...
Click to collapse
and
The actual VT implementation may be limited in usefulness; that remains to be seen.
Click to expand...
Click to collapse
TSMC is struggling with its 28nm node and failed to bring yields up with a high-k metal gate process, so they announced they will stay with 28nm SiON for now. The problem is that as the node becomes denser, leakage increases sharply. HKMG reduces that gate leakage by roughly a factor of 100, as stated by GlobalFoundries and Samsung. That is why 32nm HKMG is far superior to 28nm SiON. You can easily find related articles, PR data and detailed charts about this.
Sent from my SAMSUNG-SGH-I717 using xda premium
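As a rough aside on why the high-k dielectric matters: gate leakage from direct tunnelling falls off roughly exponentially with the physical thickness of the gate dielectric, and a high-k material lets the gate stack be physically thicker while behaving electrically like a much thinner SiO2 layer. A sketch of the standard relation (with the SiO2 dielectric constant taken as about 3.9):

% Equivalent oxide thickness: a high-k film of physical thickness t_hk
% acts electrically like a much thinner SiO2 layer, while the extra
% physical thickness suppresses direct-tunnelling gate leakage.
\mathrm{EOT} = t_{hk}\,\frac{\kappa_{\mathrm{SiO_2}}}{\kappa_{hk}} \approx t_{hk}\,\frac{3.9}{\kappa_{hk}}, \qquad J_{\mathrm{leak}} \propto e^{-\alpha\, t_{\mathrm{phys}}}

This is only a qualitative illustration of the leakage argument above, not a claim about the exact factor Samsung or GlobalFoundries quote.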
Radukk said:
TSMC is struggling with its 28nm node and failed to bring yields up with a high-k metal gate process, so they announced they will stay with 28nm SiON for now. The problem is that as the node becomes denser, leakage increases sharply. HKMG reduces that gate leakage by roughly a factor of 100, as stated by GlobalFoundries and Samsung. That is why 32nm HKMG is far superior to 28nm SiON. You can easily find related articles, PR data and detailed charts about this.
Sent from my SAMSUNG-SGH-I717 using xda premium
Click to expand...
Click to collapse
Will update with links when I can; kinda busy with work. If you have links to share, please do.
Sent from my DROIDX using xda premium
jamesnmandy said:
Will update with links when I can; kinda busy with work. If you have links to share, please do.
Sent from my DROIDX using xda premium
Click to expand...
Click to collapse
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
Radukk said:
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
Click to expand...
Click to collapse
Interesting reading. Thanks! :thumbup:
Sent from my GT-I9300 using xda premium
Radukk said:
http://en.wikipedia.org/wiki/High-k_dielectric
http://www.chipworks.com/en/techni...resents-dualquad-core-32-nm-exynos-processor/
Sent from my SAMSUNG-SGH-I717 using xda premium
Click to expand...
Click to collapse
For future reference, I would never use Wikipedia as a source of fact. Thanks for the other link; I will update as soon as I can.
Sent from my DROIDX using xda premium
jamesnmandy said:
For future reference, I would never use Wikipedia as a source of fact. Thanks for the other link; I will update as soon as I can.
Sent from my DROIDX using xda premium
Click to expand...
Click to collapse
You know, there's always the sources at the bottom of every Wikipedia article...
AndreiLux said:
You know, there's always the sources at the bottom of every Wikipedia article...
Click to expand...
Click to collapse
You are of course correct, which is why I always drill down and link to the sources rather than the article itself. Just personal preference, I suppose, but it isn't only my idea; linking to Wikipedia as a source of fact is generally frowned upon.
no worries