Related
Is it true that Qualcomm's dual-core CPUs will be based on the older ARM Cortex-A8 architecture instead of the modern Cortex-A9, which is used by Apple's A5 chip and Nvidia's Tegra 2?
Source:
http://smartphonebenchmarks.com/for...msm8660-12ghz-dual-core-snapdragon-processor/
The hardware benchmarks on the dual-core MSM8x60 1.2 GHz chip used by the HTC Pyramid (Sensation, Doubleshot) and the Evo 3D do not look very good.
Source:
http://smartphonebenchmarks.com/forum/index.php?showtopic=258
Need a bit of clarification on why they didn't take the Cortex-A9 path.
OK, so I just read this report from Qualcomm explaining the issue:
http://www.qualcomm.de/documents/files/linley-report-dual-core-snapdragon.pdf
Apparently their design is compatible with ARM's instruction set architecture, and they claim it's better than the A9.
"The superscalar CPU uses a 13-stage pipeline to generate faster clock speeds than competing products can achieve using ARM’s Cortex-A8 or Cortex-A9"
Having said that, I'm still not sure why the hardware benchmarks are nowhere near the Cortex-A9 dual-core processors.
The Adreno 220 is pretty good, though, compared to other GPUs.
mjehan said:
Apparently their design is compatible with ARM's instruction set architecture, and they claim it's better than the A9.
Having said that, I'm still not sure why the hardware benchmarks are nowhere near the Cortex-A9 dual-core processors.
Because benchmarks are meaningless, and HTC has yet to put in the work to fiddle them!
Qualcomm has claimed before that their design is better than ARM's Cortex-A8, but apart from a few special cases they are mostly equal at the same clock speed. Since the MSM8x60 is based on those same cores, I don't see how it could be better than the Cortex-A9. In fact, Qualcomm is working on its own "equivalent to A9" design right now.
FYI, the number of pipeline stages doesn't tell the whole story about CPU speed. If not implemented well, a deeper pipeline simply causes longer stall delays. We saw this in the old Pentium 4 architecture.
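The stall-delay point can be sketched with a toy throughput model: a mispredicted branch costs roughly the pipeline depth in flushed cycles, so a deeper pipeline needs its clock-speed advantage to outrun its larger penalty. All numbers here (clocks, branch frequency, miss rates) are illustrative assumptions, not measured figures for Scorpion or any Cortex core.

```python
def effective_mips(clock_mhz, pipeline_depth, branch_freq, mispredict_rate):
    """Millions of instructions/sec, assuming a base CPI of 1 and a branch
    misprediction flush penalty roughly equal to the pipeline depth."""
    penalty_per_instr = branch_freq * mispredict_rate * pipeline_depth
    cpi = 1.0 + penalty_per_instr
    return clock_mhz / cpi

# A 13-stage core at 1.2 GHz vs. a shorter 9-stage core at 1.1 GHz.
# With a poor branch predictor (30% misses), the deeper pipeline loses:
print(round(effective_mips(1200, 13, 0.2, 0.30)))  # deep pipeline
print(round(effective_mips(1100, 9, 0.2, 0.30)))   # shallow pipeline
# With a good predictor (5% misses), the clock advantage wins again:
print(round(effective_mips(1200, 13, 0.2, 0.05)))
print(round(effective_mips(1100, 9, 0.2, 0.05)))
```

The crossover is the whole argument: more stages only pay off if the predictor and the rest of the implementation keep the flush penalty rare.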
I think the 128-bit FPU makes Scorpion equivalent to the A9 in floating-point calculation.
Sent via psychic transmission.
Which is the better GPU and why ?
I'm not sure of the technical reasons; maybe people are just going off benchmarks, but the general consensus is that the Adreno 220 has better gaming performance.
However, unless you are planning some hardcore gaming, the Mali-400 MP or GeForce ULP will be just fine.
The Mali-400 MP is, IMO, a faster GPU, but it really lacks what it takes to be a good GPU. At the low level some of its major 3D scores are even lower than the Adreno 205's: the Mali-400 is slower than the Adreno 205 in the geometric, common, and exponential tests. It also misses many texture compression formats, so compatibility is low. Most games will eventually work around that, but it takes time, and that time could eat into the GS2's life cycle.
The Adreno 220 will be slightly slower in synthetic tests, but it has broader compatibility, better quality at the low level, and more texture compression formats, and it will work with all games from the start, since Adreno-targeted games are already abundant in the Market. So it's as if Samsung delivered the fastest GPU but with major flaws. Here the Adreno 220 is like ATI or Nvidia, and the Mali-400 is like a generic GPU from some other company. And the Galaxy S 2 also shipping with Tegra 2 would really hurt the Mali-400's compatibility story, since the Mali would be in fewer devices and could end up left out.
Right now the trade-off is a slightly faster GPU versus a more compatible GPU. If you can wait, and can accept an uncertain future, go Mali; if you want everything now and in the future, it's Adreno.
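The texture-format compatibility problem described above boils down to a runtime capability check: a game ships textures in a couple of compressed formats and picks whichever one the driver advertises, falling back to uncompressed textures otherwise. The extension names below are real OpenGL ES extension strings, but the per-GPU support sets are illustrative, not an authoritative feature matrix.

```python
# Illustrative extension sets, as a game might read them from glGetString(GL_EXTENSIONS).
GPU_EXTENSIONS = {
    "Adreno 220": {
        "GL_OES_compressed_ETC1_RGB8_texture",
        "GL_AMD_compressed_ATC_texture",
        "GL_EXT_texture_compression_dxt1",
    },
    "Mali-400 MP": {
        "GL_OES_compressed_ETC1_RGB8_texture",
    },
}

def pick_texture_format(gpu, preferred):
    """Return the first preferred format the GPU advertises, or None
    (meaning: fall back to uncompressed textures, costing memory and bandwidth)."""
    exts = GPU_EXTENSIONS.get(gpu, set())
    for fmt in preferred:
        if fmt in exts:
            return fmt
    return None

wanted = ["GL_AMD_compressed_ATC_texture", "GL_OES_compressed_ETC1_RGB8_texture"]
print(pick_texture_format("Adreno 220", wanted))
print(pick_texture_format("Mali-400 MP", wanted))
```

A GPU that advertises fewer formats forces more of these lookups to fall through, which is exactly the "games need a workaround, and that takes time" situation the post describes.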
With CF working on compatibility, I wouldn't be surprised if we're all playing Tegra Zone next month.
bilboa1 said:
With CF working on compatibility, I wouldn't be surprised if we're all playing Tegra Zone next month.
I agree.
Samsung's Galaxy S II Preliminary Performance: Mali-400MP Benchmarked (1 GHz Mali-400)
Dual Core Snapdragon GPU Performance Explored - 1.5 GHz MSM8660 and Adreno 220 Benchmarks (1.5 GHz Adreno 220)
Should give you a rough idea of what to expect.
_dsk_ said:
Samsung's Galaxy S II Preliminary Performance: Mali-400MP Benchmarked (1 GHz Mali-400)
Dual Core Snapdragon GPU Performance Explored - 1.5 GHz MSM8660 and Adreno 220 Benchmarks (1.5 GHz Adreno 220)
Should give you a rough idea of what to expect.
No, it's not even close.
My Galaxy S II scores 42.2 fps in the same benchmark; the Adreno scores an impressive 38 fps, but that's with the CPU at 1.5 GHz.
_dsk_ said:
Samsung's Galaxy S II Preliminary Performance: Mali-400MP Benchmarked (1 GHz Mali-400)
Dual Core Snapdragon GPU Performance Explored - 1.5 GHz MSM8660 and Adreno 220 Benchmarks (1.5 GHz Adreno 220)
Should give you a rough idea of what to expect.
No, this one is not accurate.
Just look at the firmware: the SGS2 was running Android 2.3.1 at the time; it was not a retail device.
A retail SGS2 outperforms everything currently in GLBenchmark.
"Originally Posted by iwantandroid
I cried when I lerned this phone i got from tmobile didnt have Android. Can sum1 help me get Android on my new G1 and then tel me how to jailbroke it please"
LOL OMG
_dsk_ said:
Samsung's Galaxy S II Preliminary Performance: Mali-400MP Benchmarked (1 GHz Mali-400)
Dual Core Snapdragon GPU Performance Explored - 1.5 GHz MSM8660 and Adreno 220 Benchmarks (1.5 GHz Adreno 220)
Should give you a rough idea of what to expect.
These tests are kind of misleading, between the non-final device/software and the capped framerate.
I'm a bit disappointed that it comes from AnandTech, since they usually try to have everything squared away on PCs ;-)
lol, you guys are very defensive about your phones, understandably.
What you should be able to ascertain, though, is that the 1 GHz Mali benchmarks are decent, and you can expect better performance with it clocked at 1.2 GHz.
Conversely, you should be able to see that the Adreno at 1.5 GHz, though impressive, will be less so clocked at 1 GHz as in the Sensation, which also has a higher-resolution screen.
I only provided the links so that people could make up their own mind by using the same logic.
Are you sure the Mali-400 is clocked at 1.2 GHz?
Because when I overclocked my SGS2 to 1.5 GHz I saw a 25% increase in computing performance but almost no increase at all in graphics performance (using GL Benchmark), so I thought the two frequencies were totally unrelated.
I don't know what the clock speeds of the GPU are, but CPU speed bumps will also help with 3D performance.
_dsk_ said:
I don't know what the clock speeds of the GPU are, but CPU speed bumps will also help with 3D performance.
Well, in my case it did not. I guess a dual-core 1.2 GHz CPU is not a bottleneck on a smartphone, lol.
I've heard there are FPS caps on the Galaxy line; not sure how true that is. Benchmarks should usually see an increase when handsets are overclocked.
_dsk_ said:
I've heard there are FPS caps on the Galaxy line; not sure how true that is. Benchmarks should usually see an increase when handsets are overclocked.
It's true for the SGS1 and SGS2 at least: the frame rate is capped between 56 and 66 fps depending on kernel/version, etc.
Many benchmarks hit the cap (like Quadrant).
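A quick sketch of why such a cap hides overclock gains in GL benchmarks: any frame rendered faster than the cap is simply held back, so the reported score saturates. The numbers here are illustrative, not measurements from any particular device.

```python
def reported_fps(uncapped_fps, cap=60.0):
    """Frame rate a capped benchmark would report: frames finished faster
    than the cap allows are held until the next refresh."""
    return min(uncapped_fps, cap)

stock = reported_fps(90.0)          # GPU could do 90 fps at stock clocks
overclocked = reported_fps(90.0 * 1.25)  # a 25% overclock changes nothing visible
uncapped_case = reported_fps(45.0)  # below the cap, gains would show normally
print(stock, overclocked, uncapped_case)
```

This matches the earlier observation that a 1.5 GHz overclock raised CPU scores by 25% while GL Benchmark barely moved: the benchmark was already at the ceiling.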
Many people have said that SE should have put a Tegra 2 dual-core chip inside the Xperia Play instead of the Snapdragon with Adreno 205.
In the real world, the Adreno 205 was a much better choice for complex game effects and battery life.
This is a heavy read, but there are plenty of charts and pictures that tell a fairer story from a game developer's point of view.
http://blogs.unity3d.com/wp-content/uploads/2011/08/FastMobileShaders_siggraph2011.pdf
Here's hoping Tegra 3 is a much better effort.
First of all, Tegra 2 is not a GPU; it's an SoC. So a more valid comparison would be Snapdragon vs. Tegra 2, or Adreno vs. GeForce.
The Adreno 200 really was a poor GPU, and Qualcomm made a mess of it when they purchased the Adreno project from ATI. Although I think we're all agreed that the jump from Adreno 200 to Adreno 205 was massive.
The Adreno 205 is easily on par with the GPU in any single-core SoC. I don't quite think it's a match for the 8-core ULP GeForce inside the Tegra 2.
And IMO Nvidia has proven with some of the Tegra 2 games that the mobile GeForce inside their SoC is in a league of its own compared to our GPU. Although I do think the Adreno 220 is on par with the Tegra 2 GPU. The soon-to-be-released quad-core Tegra 3 comes with such an awesome GPU that it will be hard to beat.
http://www.youtube.com/watch?v=cI-guAGGK3s
AndroHero said:
First of all, Tegra 2 is not a GPU; it's an SoC. So a more valid comparison would be Snapdragon vs. Tegra 2, or Adreno vs. GeForce.
I think I tried to say that in the post; the header could have been a bit clearer.
The Adreno 200 really was a poor GPU, and Qualcomm made a mess of it when they purchased the Adreno project from ATI. Although I think we're all agreed that the jump from Adreno 200 to Adreno 205 was massive.
Agreed. I have tried rudimentary GPU benchmarking on all my phones; the Xperia Play would have been severely weakened if it had gone ahead with an Adreno 200-based SoC.
The Adreno 205 is easily on par with the GPU in any single-core SoC. I don't quite think it's a match for the 8-core ULP GeForce inside the Tegra 2.
And IMO Nvidia has proven with some of the Tegra 2 games that the mobile GeForce inside their SoC is in a league of its own compared to our GPU. Although I do think the Adreno 220 is on par with the Tegra 2 GPU. The soon-to-be-released quad-core Tegra 3 comes with such an awesome GPU that it will be hard to beat.
I also thought the Nvidia GPU would be much better, but after reading the PDF I don't think it is. It looks like, to get the best from the Nvidia GPU, you have to use the CPUs much more than with the Adreno 205, which will hit battery life. Also, the Adreno looks like it has some hidden tricks that help in more complex scenes.
Give the PDF a read.
From the (very little; it is a really technical paper) content I can extract, it seems that the Nvidia Tegra devices follow a "classic approach" and put many more things on the CPU, while the Adreno and PowerVR (i.e., Apple's chip) follow a "smarter" approach, reducing CPU load by pushing work to the GPU and using tricks.
I'd say that, if that is correct, it comes from Nvidia's legacy as a desktop PC GPU maker, and it makes sense that Nvidia is betting on getting multi-core devices out ASAP, since their approach is much more CPU-taxing and multiple cores reduce the CPU stress.
Techdread said:
I think I tried to say that in the post; the header could have been a bit clearer.
Agreed. I have tried rudimentary GPU benchmarking on all my phones; the Xperia Play would have been severely weakened if it had gone ahead with an Adreno 200-based SoC.
I also thought the Nvidia GPU would be much better, but after reading the PDF I don't think it is. It looks like, to get the best from the Nvidia GPU, you have to use the CPUs much more than with the Adreno 205, which will hit battery life. Also, the Adreno looks like it has some hidden tricks that help in more complex scenes.
Give the PDF a read.
I did look at the PDF, but to be honest it's a little over my head, lol.
Sent from my R800i using Tapatalk
Interesting results for the Adreno 205.
Shader Performance
• Normalized to iPad 2 resolution
• From single color:
  • 1.4 ms iPad 2
  • 3.5 ms Xperia Play
  • 3.8 ms Tegra 2
  • 14.3 ms iPhone 3GS
• To fully per-pixel bump spec:
  • 19.3 ms iPad 2
  • 18.4 ms Xperia Play
  • 47.7 ms Tegra 2
  • 122.4 ms iPhone 3GS
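Dividing each time by the Tegra 2's makes the slide's point easier to read. The times are per-frame shader costs, already normalized to iPad 2 resolution; the figures are copied from the list above.

```python
# "Fully per-pixel bump spec" shader times in ms, from the Siggraph slide.
bump_spec_ms = {
    "iPad2": 19.3,
    "XperiaPlay": 18.4,  # Adreno 205
    "Tegra2": 47.7,
    "iPhone3GS": 122.4,
}

# Relative speed vs. Tegra 2 (higher = faster; lower ms = faster frame).
speed_vs_tegra2 = {dev: bump_spec_ms["Tegra2"] / ms for dev, ms in bump_spec_ms.items()}

for dev, x in sorted(speed_vs_tegra2.items(), key=lambda kv: -kv[1]):
    print(f"{dev}: {x:.2f}x Tegra 2")
```

On the heavy shader, the Adreno 205 in the Xperia Play comes out roughly 2.6x faster than Tegra 2, and even edges out the iPad 2's PowerVR, which supports the "hidden tricks in complex scenes" point above.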
hairdewx said:
Interesting results for the Adreno 205.
Shader Performance
•Normalized to iPad2 resolution
•From single color:
• 1.4ms iPad2
• 3.5ms XperiaPlay
• 3.8ms Tegra2
• 14.3ms iPhone3Gs
•To fully per-pixel bump spec:
• 19.3ms iPad2
• 18.4ms XperiaPlay
• 47.7ms Tegra2
• 122.4ms
Hmmmmmmm
Sent from my R800i using Tapatalk
Double post......
AndroHero said:
The soon to be released quad core tegra III CPU comes with such an awesome GPU it will be hard to beat
http://www.youtube.com/watch?v=cI-guAGGK3s
Holy cr*p, that looks amazing!
When are the Tegra 3 and Adreno 220 coming out? Which will be the best? Tablet-only, or on phones too?
FK1983 said:
Holy cr*p, that looks amazing!
When are the Tegra 3 and Adreno 220 coming out? Which will be the best? Tablet-only, or on phones too?
The Adreno 220 is already out in the dual-core Qualcomm chips.
http://www.youtube.com/watch?v=Ehfyxvh2W4k&feature=related
Although the game in the demo is Desert Winds, an Xperia Play (Adreno 205) exclusive.
Comparing the dual-core Qualcomm chips to the Tegra is like comparing our current chip to the Samsung Hummingbird.
The former is more widely supported and better optimized, whereas the latter is not well supported, and although it's supposed to be better on paper, its real-life performance isn't as good.
Sent from my R800
Logseman said:
From the (very little; it is a really technical paper) content I can extract, it seems that the Nvidia Tegra devices follow a "classic approach" and put many more things on the CPU, while the Adreno and PowerVR (i.e., Apple's chip) follow a "smarter" approach, reducing CPU load by pushing work to the GPU and using tricks.
That's the impression I got.
I'd say that, if that is correct, it comes from Nvidia's legacy as a desktop PC GPU maker, and it makes sense that Nvidia is betting on getting multi-core devices out ASAP, since their approach is much more CPU-taxing and multiple cores reduce the CPU stress.
Desktop and handheld have vastly different power and heat requirements. Nvidia was probably rushing their dual-core SoCs to market; the lack of NEON in initial shipments and the poor GPUs seem to confirm this.
Quick question: how does the Tegra 3 stack up to the Nook Tablet's OMAP 4430? I know it's a bump up, but I'd like to know how much of a bump I'm getting with my fancy new bit of gadgetry. I know they're both based on the ARM Cortex-A9 CPU, but that's all...
Mr. Argent said:
Quick question: how does the Tegra 3 stack up to the Nook Tablet's OMAP 4430? I know it's a bump up, but I'd like to know how much of a bump I'm getting with my fancy new bit of gadgetry. I know they're both based on the ARM Cortex-A9 CPU, but that's all...
Well, you have two more cores, so that's up to twice the CPU performance in apps that can use four cores (browsers and some games).
The GPU is much faster on the Tegra 3, though: the GeForce ULP is about 2-3x faster than the GPU on the OMAP 4460 chipset at the same resolution. Not sure how far behind the OMAP 4430 is, but the Tegra 3 is definitely much better.
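The "twice the CPU performance" figure only holds for perfectly parallel work; Amdahl's law gives the more honest ceiling. A rough sketch, with an assumed (not measured) parallel fraction:

```python
def speedup(parallel_fraction, cores):
    """Amdahl's law: speedup over a single core when only part of the
    workload can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Going from 2 to 4 cores for a workload that is 80% parallel (illustrative):
gain_2_to_4 = speedup(0.8, 4) / speedup(0.8, 2)
print(round(gain_2_to_4, 2))
```

With an 80%-parallel workload, two extra cores buy about 1.5x, not 2x; the serial fraction caps the benefit, which is why the GPU gap matters more in practice than the core count.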
Souai said:
Well, you have two more cores, so that's up to twice the CPU performance in apps that can use four cores (browsers and some games).
The GPU is much faster on the Tegra 3, though: the GeForce ULP is about 2-3x faster than the GPU on the OMAP 4460 chipset at the same resolution. Not sure how far behind the OMAP 4430 is, but the Tegra 3 is definitely much better.
Quad-core Tegra 3 vs. dual-core OMAP 4430?
I know... quad core, so the Tegra 3 is the winner, but how long will the dual-core OMAP 4430 (Galaxy Tab 2 7.0) hold up in the gaming market, if you know what I mean?
The Tegra 3 has the better CPU and GPU.
It also gets a better, faster, newer system from Google.
The 4430 is crap. The GNex has the 4460, and it's not all that great either. The CPU in the 8.9, which is the 4470, is the only one that's better than the Tegra 3.
Sent From My N7 via White Tapatalk
Mr. Argent said:
Quick question: how does the Tegra 3 stack up to the Nook Tablet's OMAP 4430? I know it's a bump up, but I'd like to know how much of a bump I'm getting with my fancy new bit of gadgetry. I know they're both based on the ARM Cortex-A9 CPU, but that's all...
Compared to the T3, the OMAP is garbage. I actually write firmware for TI processors (DSPs, not OMAPs, but still...).
So the Qualcomm Snapdragon 820 is quad-core. Why did Qualcomm decide to go with a quad-core design over an octa-core or hexa-core one? How would the GS7/GS7 Edge be affected if it were octa-core or hexa-core? How much of a difference is there between the Exynos and the 820?
Indeed, the Snapdragon 820 is a quad-core SoC, unlike most recent SoCs, which have featured 8 cores (2 clusters of 4). However, most big.LITTLE SoCs like the Exynos 8890 and 7420 use 4 slow, low-power cores and 4 fast but power-hungry cores; they are completely different architectures. This means that while the Exynos 8890, for instance, has 8 cores, only 4 of them are really designed for performance; the other 4 are designed to save power.
The 820 is different. It is also a sort of big.LITTLE setup with 2 clusters of 2 cores, but both clusters are the same architecture; the difference is that one cluster is clocked lower and has a different L2 cache configuration in order to use less power. On top of that, the custom cores in the 820 are faster per core than the Exynos 8890's, so clock for clock the 820 would win against the Exynos's high-power cluster. In heavily multithreaded situations, though, the Exynos can still tap into all 8 cores at the same time, which should give it an advantage in that scenario. The rest of the time, I would imagine the 4 faster cores of the Snapdragon are better suited to everyday work.
As for why they went with only 4: my guess is cost and power efficiency. Kryo is a brand-new architecture; Krait went through many iterations. Kryo will probably see a noticeable reduction in its power envelope in the next iteration, which would make putting more cores on an SoC a more viable option.
As for the GPU, all signs point to the Snapdragon's Adreno beating the Exynos's Mali at the moment. Development will also be better on the Snapdragon variant, as Qualcomm releases the proprietary vendor binaries and Samsung does not, which means the likelihood of seeing CM or AOSP on an Exynos variant is slim. Hope this helps!
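The two-cluster idea described above can be sketched as a trivial placement rule: light tasks land on the low-power cluster, demanding ones on the fast cluster. The cluster names, clocks, and threshold below are illustrative placeholders, not Snapdragon 820 specifications.

```python
# Illustrative asymmetric-cluster layout: same core architecture, different
# clocks/cache, as described for the 820's 2+2 setup.
CLUSTERS = {
    "efficiency": {"max_mhz": 1590, "cores": 2},
    "performance": {"max_mhz": 2150, "cores": 2},
}

def place_task(load_fraction, threshold=0.6):
    """Pick a cluster for a task given its estimated CPU load (0.0-1.0).
    Light work stays on the low-power cluster to save energy."""
    return "performance" if load_fraction > threshold else "efficiency"

print(place_task(0.2))  # e.g. background sync
print(place_task(0.9))  # e.g. a game's render thread
```

Real schedulers (energy-aware scheduling in the Linux kernel) use far richer cost models, but the shape of the decision is the same: the SoC trades raw core count for having the right core awake at the right time.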
Actually, the Kryo cores are (slightly) better at running single-threaded tasks, while the Exynos cores are (slightly) better at running multi-threaded tasks. I doubt everyday users will notice.
The Adreno is also more powerful than the Mali GPU, though everyday users will mostly notice a performance improvement in applications using the Vulkan API vs. regular applications, rather than anything between these two GPUs.
Finally, memory management seems much better on the Exynos 8890 for some reason (about twice as fast); since the same memory chips are used, I wonder whether it's a software or a hardware implementation difference. Both units are plenty fast, though.
The real difference between these SoCs will be seen in power management efficiency. In fact, both variants are overpowered in every respect as far as regular usage goes, so there is little point in comparing which one is fastest. Instead, you need to ask which one is the most conservative with power consumption while achieving equivalent performance.
Both GPUs on these SoCs support the Vulkan API. And while the Adreno is faster in terms of pure benchmark numbers, I very much doubt there will be a noticeable difference in any game or application, Vulkan or otherwise, released during the lifetime of these phones.
Yeah, that does help explain it. Thanks. I just hope there won't be a TSMC vs. Samsung difference like there was with the iPhone 6S/6S Plus SoC.
I was thinking the 8-core Snapdragon 810 was overheating and thermal throttling, so they went with 4 cores on the Snapdragon 820 instead.
Just my thoughts.