Unlimited proof of quad core and battery usage. - XPERIA X10 General

Low-power processing grunt for phones, netbooks, laptops & tablets.
It seems ARM and Nvidia have big plans for the future.
Newer small chipsets like these will sport some serious multi core action.
Here's the basic road map.
Tegra 3 (Kal-El) series
Processor: Quad-core ARM Cortex-A9 MPCore, up to 1.5 GHz
12-Core Nvidia GPU with support for 3D stereo
Ultra Low Power GPU mode
40 nm process by TSMC
Video output up to 2560×1600
NEON vector instruction set
1080p MPEG-4 AVC/h.264 High Profile video decode
The Kal-El chip (CPU and GPU) is to be about 5 times faster than Tegra 2
Estimated release date is now to be Q4 2011 for tablets and Q1 2012 for smartphones, after being set back from Nvidia's prior estimated release dates of Q2 2011, then August 2011, then October 2011
The Tegra 3 is functionally a quad-core processor, but includes a fifth "companion" core. All cores are Cortex-A9's, but the companion core is manufactured with a special low power silicon process. This means it uses less power at low clock rates, but more at higher rates; hence it is limited to 500 MHz. There is also special logic to allow running state to be quickly transferred between the companion core and one of the normal cores. The goal is for a mobile phone or tablet to be able to power down all the normal cores and run on only the companion core — using comparatively little power — during "standby" mode or when otherwise using little CPU. According to Nvidia, this includes playing music or even video content.
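For the curious, here is a minimal sketch (Python, purely illustrative; the thresholds and policy below are assumptions based on the description above, not NVIDIA's actual firmware logic) of how a vSMP-style handoff between the companion core and the main cores could be decided:

```python
import math

# Toy model of a vSMP-style handoff policy -- NOT NVIDIA's implementation.
# The 500 MHz companion cap and 1.5 GHz main-core clock come from the specs
# above; the thresholds themselves are assumptions for illustration.

COMPANION_MAX_HZ = 500_000_000    # companion core is capped at 500 MHz
MAIN_MAX_HZ = 1_500_000_000       # main Cortex-A9 cores go up to 1.5 GHz

def choose_cores(cpu_load, runnable_threads):
    """Return (use_companion, main_cores_online) for one load snapshot.

    cpu_load is total CPU demand in units of one main core
    (0.2 = 20% of one core busy, 3.0 = three cores fully busy).
    """
    # Light, mostly single-threaded work (standby, music, hardware-assisted
    # video decode) fits on the low-power companion core.
    if cpu_load * MAIN_MAX_HZ <= COMPANION_MAX_HZ and runnable_threads <= 1:
        return True, 0
    # Otherwise migrate state to a main core, power-gate the companion core,
    # and bring additional main cores online as demand grows.
    return False, min(4, max(1, math.ceil(cpu_load)))

print(choose_cores(0.2, 1))   # -> (True, 0): companion core handles it alone
print(choose_cores(2.6, 6))   # -> (False, 3): three main cores powered up
```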
Tegra (Wayne) series
Processor: Quad-core/Octa-core ARM Cortex-A15 MPCore (octa core already)
Improved GPU with 24 cores (quad version) or 32 to 64 cores (octa-core version), with support for DirectX 11+, OpenGL 4.x, OpenCL 1.x, and PhysX
28 nm
About 10 times faster than Tegra 2
To be released in 2012
Tegra (Grey) series
Processor: ARM Cortex MPCore
28 nm
Integrated Icera 3G/4G baseband
To be released in 2012
Tegra (Logan) series
Processor: ARM ?
Improved GPU core
28 nm[23]
About 50 times faster than Tegra 2
To be released in 2013
Tegra (Stark) series
Processor: ARM ?
Improved GPU core
About 75 times faster than Tegra 2[24]
To be released in 2014
THE TEGRA 3'S SECRET CORE? What does it do?
There's a not-so-dirty little secret about NVIDIA's upcoming Tegra 3 platform (which will soon find a home in plenty of mobile devices): the quad-core processor contained within has a fifth core for less intensive tasks.
In a paper published by NVIDIA, they provided in-depth details about their Variable Symmetric Multiprocessing (vSMP). Simply put, vSMP as implemented in Kal-El not only optimizes CPU multi-threading and multi-tasking for max performance and power efficiency at a moment's notice, but it offloads background tasks and less intensive CPU activities, such as background syncing/updating, music playback, and video playback, to the fifth core, which runs at a considerably slower 500 MHz and therefore consumes considerably less power. Bottom line: battery life!
All five CPU cores are identical ARM Cortex A9 CPUs, and are individually enabled and disabled (via aggressive power gating) based on the workload. The "Companion" core is OS-transparent, unlike current Asynchronous SMP architectures, meaning the OS and applications are not aware of this core but automatically take advantage of it. This strategy avoids significant software effort and new coding requirements.
The Tegra 3 logic controller also has the power to dynamically enable and disable cores depending on the workload at any given time, making sure not to waste any power from unused cores. So what's the quantitative payoff?
NVIDIA ran the CoreMark benchmark on the Tegra 3 and pitted it against current industry chipsets, such as the TI OMAP4 and the Qualcomm 8x60 (take these with a grain of salt, obviously). They found that when handling the same workload, Tegra 3 consumed 2-3x less power than the competition. When running max performance tests, the Tegra 3 was twice as fast while still using less power.
Compared to the Tegra 2 chipset, the Tegra 3 CPU uses 61% less power while video playback is happening, and 34% less power during gaming activities. While most of the work is done by the GPUs in mobile devices, previous chipsets lacked the ability to ramp down the energy output of unused cores like Tegra 3 is purportedly able to do.
What I'm trying to say is that you should be excited for this mobile quad-core processor to arrive and not scared for your battery life.
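To put figures like "61% less power during video playback" into perspective, here is a back-of-the-envelope battery-life estimate. The battery capacity, baseline SoC draw and rest-of-system draw are assumed values for illustration, not numbers from NVIDIA's paper:

```python
# Rough battery-life estimate from a claimed SoC power reduction.
# All the numbers below are assumptions chosen only to illustrate the point.

BATTERY_WH = 3.7 * 1.5            # ~1500 mAh cell at 3.7 V, about 5.6 Wh

def playback_hours(soc_watts, rest_of_system_watts=0.8):
    """Screen, radios, etc. keep drawing power no matter what the SoC does."""
    return BATTERY_WH / (soc_watts + rest_of_system_watts)

tegra2_soc = 1.0                      # assumed Tegra 2 draw during video (W)
tegra3_soc = tegra2_soc * (1 - 0.61)  # the claimed 61% reduction

print(f"Tegra 2: {playback_hours(tegra2_soc):.1f} h of playback")
print(f"Tegra 3: {playback_hours(tegra3_soc):.1f} h of playback")
# The SoC saving is real, but because the display and radios dominate,
# total battery life improves by well under 61%.
```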

5 cores already?! Here I am sitting with an aluminum foil ghetto heatsink between my x10 and battery to dissipate heat from 1.2ghz of increasingly mediocre temp-bound performance, wondering when to jump ship to dual core. And now 5-core is just around the corner...
Nonetheless... thanks for the info omegaRED

This is going to be huge...
I bet you're going to see a lot of uneducated posts in this thread about more cores and battery usage, etc...
Lol
*waits for the **** storm*

scoobysnacks said:
This is going to be huge...
I bet you're going to see a lot of uneducated posts in this thread about more cores and battery usage, etc...
Lol
*waits for the **** storm*
Click to expand...
Click to collapse
here comes scooby again.lol/come back for everything

josephnero said:
here comes scooby again.lol/come back for everything
Click to expand...
Click to collapse
This is contributing to the thread how?

Let me explain this thing in layman's terms.
It has 5 cores.. 1 runs at 500MHz at all times and handles most background processes.
Video
Media
etc...
Due to its 500MHz speed its battery usage is very low.
The other 4 CPUs running at 1.5GHz can handle all the good stuff..
games
PhysX
media processing
and can be put into an off-like deep sleep when not required for the task.
The 500MHz CPU and ultra low power Nvidia GPU keep everything going when all 4 cores go offline.
Also the 28nm silicon wafer technology.
The power of this design can compete with any console on the market while keeping your device going for much much longer than current chipsets.

OmegaRED^ said:
Let me explain this thing in layman's terms.
It has 5 cores.. 1 runs at 500MHz at all times and handles most background processes.
Video
Media
etc...
Due to its 500MHz speed its battery usage is very low.
The other 4 CPUs running at 1.5GHz can handle all the good stuff..
games
PhysX
media processing
and can be put into an off-like deep sleep when not required for the task.
The 500MHz CPU and ultra low power Nvidia GPU keep everything going when all 4 cores go offline.
Also the 28nm silicon wafer technology.
The power of this design can compete with any console on the market while keeping your device going for much much longer than current chipsets.
Click to expand...
Click to collapse
Ooh I agree completely and already understand this stuff.
I'm in the industry..
this will definitely help those who argue that more cores equals more battery drain, and who don't understand power distribution.

Compared to a core2duo
a hypothetical Tegra 3@1.5GHz would have scored 17,028 points, again beating the Core 2 Duo using the same compiler settings. If we extend the projections to a hypothetical 2.5GHz Cortex-A9 chip, we arrive at 28,380 CoreMarks, which is the very least we should expect from Qualcomm's recently announced Cortex-A15 based chip at 2.5GHz.
There's always a bigger fish.
2.5Ghz.. O_O
The quad core is just a quad core.. but usage is 25% less.
Anyone that takes this over the Tegra3..
Enjoy the 30 second battery life.
It's a massive stride forward.!!
But it's still too hungry.
I wish phone developers would shove their battery usage predictions and add 1 to 2 Amps to the projected figure.
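For reference, the projection quoted above is just linear scaling of a per-core, per-GHz CoreMark figure. A quick sketch of that arithmetic (the 2838 CoreMarks per core per GHz baseline is back-calculated from the quoted numbers, so treat it as an assumption):

```python
# Linear-scaling CoreMark projection, matching the figures quoted above.
# The per-core, per-GHz constant is derived from 17,028 / (4 cores * 1.5 GHz).

COREMARK_PER_CORE_PER_GHZ = 2838

def projected_coremark(cores, ghz):
    return COREMARK_PER_CORE_PER_GHZ * cores * ghz

print(projected_coremark(4, 1.5))  # 17028.0 -- the hypothetical Tegra 3 figure
print(projected_coremark(4, 2.5))  # 28380.0 -- the 2.5 GHz Cortex-A9 projection
# Real silicon never scales perfectly with clock and core count, so treat
# these as ceilings rather than predictions.
```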

sweet, I'll definitely look into new SE products, great hardware

Love the name of the processor, Kal-El

The future of smartphones is bright. Looking forward to purchasing a new phone in, oh.... 2 years. When that time comes I'll make sure to investigate what's the best on the market rather than going in blind.

Related

geforce ULP vs Adreno 220 vs PowerVR 543

This post is not to start a flaming war against any of the devices using any of these chips.
I just wanted to start this post for information.
I'm a bit concerned with the GeForce ULP, which shows only around a 20-30% increase over the previous PowerVR 540 chip.
The PowerVR 543 is supposed to be a multicore version that is much faster than the PowerVR 540 (Also, the PowerVR543 dual core is rumored to be in the ipad 2)
Additionally, the new adreno 220 has been showing off its tech and is rumored to be in the new dual core qualcomm chips.
The problem i see is, that while the dual core CPUs seem to be equally matched in terms of A5 vs tegra 2 vs dual core qualcomm, the geforce ULP seems to be heavily outclassed by the powerVR 543 (perhaps even the single core) and the adreno 220.
Does this worry anyone?
I know that the games coming out for the Tegra 2 are looking quite amazing, but at the same time people are saying there are no low framerates, and when I looked at Dungeon Defenders HD in the store, the framerates, while playable, were not that high..
If the ipad 2 is going to have a dual core PowerVR SGx543 then it is going to crush the geforce ULP. And the adreno 220 looks to be a high performer as well.
I feel apprehensive jumping into a technology (the tegra 2 with geforce ULP) that is already heavily outclassed by its competition (ipad 2's A5 with power VR 543)
I don't want to get into a competition between iOS and Honeycomb - and I agree that it depends fully on how the developers utilize the chip - but with such a huge performance difference, I can see developers making more amazing games with the faster chips.
The Tegra 2 is fine for gaming. It can hold its own. Media playback is an entirely other issue and the Tegra 2 is a failure in that regard.
We haven't seen either the Adreno or the SGX543 benchmarked yet at all, so it's too early to say they are better than the Tegra 2. Also, I feel as though Nvidia is going to be pushing for a lot of games to be made specifically for the Tegra 2. If that is the case it's going to fragment the Android market even further.
It's not fair to look at the Xoom's benchmark due to the much higher resolution skewing results downwards, so let's look at current generation phones:
[benchmark chart of current-generation phones]
You can see the best performer right now is the old SGX540 clocked at 300mhz. I would bet that the SGX543 is a monster.
muyoso said:
The Tegra 2 is fine for gaming. It can hold its own. Media playback is an entirely other issue and the Tegra 2 is a failure in that regard.
We haven't seen either the Adreno or the SGX543 benchmarked yet at all, so it's too early to say they are better than the Tegra 2. Also, I feel as though Nvidia is going to be pushing for a lot of games to be made specifically for the Tegra 2. If that is the case it's going to fragment the Android market even further.
It's not fair to look at the Xoom's benchmark due to the much higher resolution skewing results downwards, so let's look at current generation phones:
You can see the best performer right now is the old SGX540 clocked at 300mhz. I would bet that the SGX543 is a monster.
Click to expand...
Click to collapse
The tegra 2 was rebenched after that using the viewsonic G-tablet and it did better than the SGX 540.
Refer to:
http://www.anandtech.com/show/4144/...gra-2-review-the-first-dual-core-smartphone/8
and
http://www.anandtech.com/show/4054/first-look-viewsonic-gtablet-and-tegra-2-performance-preview/2
(on the last link, look to the bottom of the page with the "updated" benchmarks)
Those are the exact same numbers. 25.2 for the Optimus 2x. 18.9 for the G-Tablet.
Flaunt77 said:
The tegra 2 was rebenched after that using the viewsonic G-tablet and it did better than the SGX 540.
Refer to:
http://www.anandtech.com/show/4144/...gra-2-review-the-first-dual-core-smartphone/8
and
http://www.anandtech.com/show/4054/first-look-viewsonic-gtablet-and-tegra-2-performance-preview/2
(on the last link, look to the bottom of the page with the "updated" benchmarks)
Click to expand...
Click to collapse
Judging by these articles - Tegra 2 is doing great.
Besides, nVidia is excellent when it comes to updating drivers on their GPUs and working with game devs to help them use the GPU's potential.
So, if they do the same for their ULPs, we should expect frequent Tegra 2 updates and great games coming.
Actually, even now, a mere week from release - check Tegra zone and see the amazing games that will be released this/next month. Mindblowing.
DarkDvr said:
Judging by these articles - Tegra 2 is doing great.
Besides, nVidia is excellent when it comes to updating drivers on their GPUs and working with game devs to help them use the GPU's potential.
So, if they do the same for their ULPs, we should expect frequent Tegra 2 updates and great games coming.
Actually, even now, a mere week from release - check Tegra zone and see the amazing games that will be released this/next month. Mindblowing.
Click to expand...
Click to collapse
I agree NOW that it may be the fastest - but in one week when the ipad 2 comes out - it will supposedly do LESS THAN HALF of the ipad 2 graphics capability - that's a huge hit in performance.
This means that the ipad 2 can handle a class of games far above what the tegra can. It's just unsettling is all.
Flaunt77 said:
I agree NOW that it may be the fastest - but in one week when the ipad 2 comes out - it will supposedly do LESS THAN HALF of the ipad 2 graphics capability - that's a huge hit in performance.
This means that the ipad 2 can handle a class of games far above what the tegra can. It's just unsettling is all.
Click to expand...
Click to collapse
The iPad 2 is not going to be 2x faster than the Tegra 2. The graphics processor it uses will handily beat the Tegra 2, but not by 100%. It really depends on how many cores the SGX543 has. Wikipedia says it can be anywhere from 2-16 cores.
muyoso said:
The iPad 2 is not going to be 2x faster than the Tegra 2. The graphics processor it uses will handily beat the Tegra 2, but not by 100%. It really depends on how many cores the SGX543 has. Wikipedia says it can be anywhere from 2-16 cores.
Click to expand...
Click to collapse
i guess we will find out on march 11th
but if the SGX543 has 2 cores and each single core is faster than the tegra 2 geforce ULP, then combined it will likely be 75-85% faster than the tegra 2 (i agree no more than double)
Having said that, that is still a huge upgrade.
Flaunt77 said:
i guess we will find out on march 11th
but if the SGX543 has 2 cores and each single core is faster than the tegra 2 geforce ULP, then combined it will likely be 75-85% faster than the tegra 2 (i agree no more than double)
Having said that, that is still a huge upgrade.
Click to expand...
Click to collapse
Don't forget the Tegra 2 has EIGHT cores in the GPU. Can't wait to see the anandtech review on the iPad2. Gonna be interesting.
well ipad 2 was released and although an apple hater i do have to admit..we gentlemen have been obliterated..its a dual 543..everything android devices had to offer so far has been surpassed by double the amount..was considering the atrix,but certainly not gonna buy a device incinerated already, as it is...xoom is still a great tablet and honeycomb an amazing os...but i think it's time its price is lowered..we have been outclassed twofold.
chris2busy said:
well ipad 2 was released and although an apple hater i do have to admit..we gentlemen have been obliterated..its a dual 543..everything android devices had to offer so far has been surpassed by double the amount..was considering the atrix,but certainly not gonna buy a device incinerated already, as it is...xoom is still a great tablet and honeycomb an amazing os...but i think it's time its price is lowered..we have been outclassed twofold.
Click to expand...
Click to collapse
Its not released yet. We will see the benchmarks when anandtech releases their review. Anandtech released their Xoom review the day before launch. If they do something similar for the iPad2, we will know in 2 days.
muyoso said:
Its not released yet. We will see the benchmarks when anandtech releases their review. Anandtech released their Xoom review the day before launch. If they do something similar for the iPad2, we will know in 2 days.
Click to expand...
Click to collapse
well you kinda know the outcome without waiting...each sgx543 core outperforms a tegra2 by roughly 50% ..with 75% more fillrate,huge shaders etc. ..and since they went dual core on gpu, well..they should be around double in performance terms against xoom,maybe more,given xoom is running on higher res..what IS surprising is that samsung made the chips for them but still on their own products chose the lesser exynos/t2..
Wasn't it pa semi and intrinsity that made the chips for apple? Samsung also works with intrinsity to design their soc.
I've read a rumor on, I want to say, anandtech, that the CPU cores of the ipad 2 may only be 1Ghz Cortex A8 vs our dual A9s. That gives us a hefty lead in the computing department. Also just because a mobile gpu has the potential to be ridiculously fast, doesn't mean it actually will be when used in the soc. It depends on frequencies and bandwidths as well. For instance the TI omap4 uses the same gpu as galaxy s, vr540, but handily wins benchmarking due to its dual A9s and dual memory channels. And if you guys are worried about cores, we basically have 4 split into eight, so eat your heart out, dual core gpus!
The fight definitely ain't over yet fellas! Have some faith in nVidia!
I'm not very sure about the architecture of mobile GPUs, but if it is in any way similar to that of desktop GPUs then the number of cores and the core speeds matter only up to a certain extent. If you look at AMD GPUs, they are filled with a lot of cores (~800 is common) running at pretty good speeds, while the equivalent NVidia GPUs have about a quarter or even fewer cores (~200 is common, with the newer Fermi cards having more cores) at usually lower speeds. But NVidia makes up the performance with its memory bandwidth. The end result -- both GPUs have a similar performance. My guess is it would be the same with these mobile GPUs. Just my 2c.
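A rough way to put that compute-versus-bandwidth point in numbers is a roofline-style estimate: usable throughput is capped by whichever is smaller, raw compute or what the memory bus can feed it. Every figure below is made up purely for illustration:

```python
# Toy roofline model: attainable throughput is limited either by peak compute
# or by memory bandwidth, depending on how much work is done per byte fetched.
# All numbers are invented for illustration, not specs of any real GPU.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

chips = [
    ("many cores, narrow bus", 200, 6),   # lots of ALUs, little bandwidth
    ("fewer cores, wide bus",  120, 25),  # fewer ALUs, lots of bandwidth
]

for name, peak, bw in chips:
    results = [round(attainable_gflops(peak, bw, ai), 1) for ai in (2, 8, 32)]
    print(f"{name}: {results} GFLOPS at 2 / 8 / 32 FLOPs per byte")
# At low arithmetic intensity (typical of texture-heavy game workloads) the
# bandwidth-rich chip wins despite having far fewer "cores".
```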
Flaunt77 said:
i guess we will find out on march 11th
but if the SGX543 has 2 cores and each single core is faster than the tegra 2 geforce ULP, then combined it will likely be 75-85% faster than the tegra 2 (i agree no more than double)
Having said that, that is still a huge upgrade.
Click to expand...
Click to collapse
Actually they have, and it blew the XOOM out of the water, I mean earth. The PowerVR 543 is much better and that is why I am surprised that Nvidia didn't do enough for their GPU. I was also surprised that Samsung went with them instead of building on their accomplishments with the PowerVR.
I still don't think those benchmarks are right because I don't think the software was optimized for Tegra 2.
Flaunt77 said:
This post is not to start a flaming war against any of the devices using any of these chips.
I just wanted to start this post for information.
I'm a bit concerned with the GeForce ULP, which shows only around a 20-30% increase over the previous PowerVR 540 chip.
The PowerVR 543 is supposed to be a multicore version that is much faster than the PowerVR 540 (Also, the PowerVR543 dual core is rumored to be in the ipad 2)
Additionally, the new adreno 220 has been showing off its tech and is rumored to be in the new dual core qualcomm chips.
The problem i see is, that while the dual core CPUs seem to be equally matched in terms of A5 vs tegra 2 vs dual core qualcomm, the geforce ULP seems to be heavily outclassed by the powerVR 543 (perhaps even the single core) and the adreno 220.
Does this worry anyone?
I know that the games coming out for the Tegra 2 are looking quite amazing, but at the same time people are saying there are no low framerates, and when I looked at Dungeon Defenders HD in the store, the framerates, while playable, were not that high..
If the ipad 2 is going to have a dual core PowerVR SGx543 then it is going to crush the geforce ULP. And the adreno 220 looks to be a high performer as well.
I feel apprehensive jumping into a technology (the tegra 2 with geforce ULP) that is already heavily outclassed by its competition (ipad 2's A5 with power VR 543)
I don't want to get into a competition between iOS and Honeycomb - and I agree that it depends fully on how the developers utilize the chip - but with such a huge performance difference, I can see developers making more amazing games with the faster chips.
Click to expand...
Click to collapse
muyoso said:
The iPad 2 is not going to be 2x faster than the Tegra 2. The graphics processor it uses will handily beat the Tegra 2, but not by 100%. It really depends on how many cores the SGX543 has. Wikipedia says it can be anywhere from 2-16 cores.
Click to expand...
Click to collapse
Flaunt77 said:
i guess we will find out on march 11th
but if the SGX543 has 2 cores and each single core is faster than the tegra 2 geforce ULP, then combined it will likely be 75-85% faster than the tegra 2 (i agree no more than double)
Having said that, that is still a huge upgrade.
Click to expand...
Click to collapse
muyoso said:
Don't forget the Tegra 2 has EIGHT cores in the GPU. Can't wait to see the anandtech review on the iPad2. Gonna be interesting.
Click to expand...
Click to collapse
I know this thread is ancient, but I just have to call all of you morons, especially you muyoso.
Not only is all of that wrong, but the benchmark used has glitches with the SGX 540, putting it at a disadvantage, also the galaxy s phones are framecapped.

Adreno 225 vs "New" Mali 400MP4

So I have been doing a lot of research looking for what will be better and I am guessing that the Mali 400 is going to outperform the Adreno 225. I wish Android had a solid GPU test that would give something close to real world results. But if you take a look at these articles you will see on paper the Adreno is the same as the iPhone 4S's PowerVR SGX543MP2.
http://www.anandtech.com/show/5559/...mance-preview-msm8960-adreno-225-benchmarks/3
Now the International version will have the new Exynos 4 Quad (4412) Quad Core Cortex A9 but the US version is rumored to have a Dual Core Qualcomm Snapdragon MSM8960 with the Adreno 225.
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
and here are some other benchmarks just to sum up the difference in performance.
http://www.anandtech.com/show/5810/samsung-galaxy-s-iii-performance-preview
The main thing I am worried about is GPU performance; the CPUs in just about every phone out right now seem overkill. I want to make sure the phone I buy will be able to run FPSE (PlayStation emulator) and N64oid (N64 emulator) smoothly. FPSE now has an OpenGL plugin that needs a hardcore GPU to run well. My Galaxy Nexus is just not cutting it anymore.
So............... get the international version.
cmd512 said:
So............... get the international version.
Click to expand...
Click to collapse
LTE Speed vs 3G Speed = not worth it.
Don't get me wrong I want a phone with a power house GPU but if the mobile connection is slow its just not worth it. I'm on Verizon and I don't want to move away from their LTE.
Zzim said:
LTE Speed vs 3G Speed = not worth it.
Don't get me wrong I want a phone with a power house GPU but if the mobile connection is slow its just not worth it. I'm on Verizon and I don't want to move away from their LTE.
Click to expand...
Click to collapse
I hear ya, LTE is blazing fast. But on my unbranded SGS2, I get downloads of up to 7.5Mbps, pay $10 a month for unlimited HSPA+ data w/ tethering, and everything is plenty fast for what I do on my phone. So, while LTE is tempting for sure, still doesn't outweigh the other benefits.
Now, if I ever need my phone to seed torrents or something, I'll have to look at LTE then... hah.
cmd512 said:
I hear ya, LTE is blazing fast. But on my unbranded SGS2, I get downloads of up to 7.5Mbps, pay $10 a month for unlimited HSPA+ data w/ tethering, and everything is plenty fast for what I do on my phone. So, while LTE is tempting for sure, still doesn't outweigh the other benefits.
Now, if I ever need my phone to seed torrents or something, I'll have to look at LTE then... hah.
Click to expand...
Click to collapse
Are you with AT&T? Because I can get a line through my work for $20 a month unlimited everything. 7.5 would be enough speed to make me switch, and how consistent are these speeds?
How could they just give the US version a dual core? That makes the phone a very slight upgrade to the S2.
Sent from my HTC Sensation Z710e using XDA
@ Op
did you see the date of the article regarding "Mobile SoC GPU Comparison"? It's dated February and they are comparing with the SGS2's Mali 400 GPU, not the one in the SGS3. The new Mali GPU is already beating the current lineup of GPUs in many benchmarks.
bala_gamer said:
@ Op
did you see the date of the article regarding "Mobile SoC GPU Comparison"? It's dated February and they are comparing with the SGS2's Mali 400 GPU, not the one in the SGS3. The new Mali GPU is already beating the current lineup of GPUs in many benchmarks.
Click to expand...
Click to collapse
The other articles were just to show the performance of the 225; this article shows how the new Mali will run: http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
That article also shows that the GS2 (Mali 400) and the GS3 (Mali 400MP4) are different in some way.
Zzim said:
Are you with AT&T? Because I can get a line through my work for $20 a month unlimited everything. 7.5 would be enough speed to make me switch, and how consistent are these speeds?
Click to expand...
Click to collapse
At work (average congestion), it's consistently 7Mbps+. In areas of great congestion (the mall, etc), it does slow down, but again, for work E-mails, surfing the web, youtube, etc, I've never had issues. Of course, I'm in Austin, TX as well, and I've heard HSPA+ speeds are very much region specific.
If you can get a line through work with unlimited everything, they may be able to get you onto the smartphone data plan tier, which some folks have gotten up to 10-11+ Mbps. I'm on the $10 a month unlimited non-smartphone plan, so I think AT&T caps it at around 7.5-8 Mbps. Still though, plenty fast for what I do with my phone.
(And, the unlimited tethering is a blessing when you're in airports and stuff. Our US airports blow as there is almost never free WIFI.)
Zzim said:
The other articles were just to show the performance of the 225; this article shows how the new Mali will run: http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
Click to expand...
Click to collapse
the score charts do show the new Mali 400 topping the chart by a good margin. What else do you need from a GPU?
The SGS3's Mali-400 is just overclocked.
Anyways, if the US SGS3 comes with the S4 Pro (which has the new Adreno 320) then the difference in GPU will probably be minor.
dude have you seen those scores ? it beats the 4s graphics which we cant deny has a great gpu...
this is more than just an overclocked mali400 ... it may still be a mali400/mp4 but its not just overclocked its remade and has much higher clocks by the look of it
also im not sure about it coming with the s4 pro with adreno 320... i heard its not ready till end of year at earliest.. the mali-400 was the best android gpu and now its the best mobile gpu out atm
^^
You are right, it's not just overclocked; there are some changes in the hardware which we will learn about eventually in the upcoming days. I can easily OC my SGS2's Mali 400 to 400MHz, but you people know it won't give the same result as the SGS3, which has many more pixels than the S2.
urmothersluvr said:
How could they just give the US version a dual core? That makes the phone a very slight upgrade to the S2.
Sent from my HTC Sensation Z710e using XDA
Click to expand...
Click to collapse
More cores doesn't = faster...
Look at AMD's Bulldozer CPU with 8 cores vs Intel's Core i5 with 4 cores...the i5 is faster in basically everything except for very specialized applications.
Faster cores > more cores.
The LTE dual core version of the SGS3 will use Krait S4 cores which are faster than A9 Exynos cores.
I wish Samsung had done dual core A15s instead of quad core A9s.
Daemos said:
More cores doesn't = faster...
Look at AMD's Bulldozer CPU with 8 cores vs Intel's Core i5 with 4 cores...the i5 is faster in basically everything except for very specialized applications.
Faster cores > more cores.
The LTE dual core version of the SGS3 will use Krait S4 cores which are faster than A9 Exynos cores.
I wish Samsung had done dual core A15s instead of quad core A9s.
Click to expand...
Click to collapse
Let's be clear on this
CPU vs CPU
Dual core S4 is not quicker than Quad Core Exynos
ph00ny said:
Let's be clear on this
CPU vs CPU
Dual core S4 is not quicker than Quad Core Exynos
Click to expand...
Click to collapse
Hmmm I don't know about that...
Zzim said:
Hmmm I don't know about that...
Click to expand...
Click to collapse
I for one certainly do. The Exynos 4412 uses a 32nm fab process, as opposed to nearly every other A9 architecture based processor (like the 4+1 T3), and High-K metal gate tech, which basically means twice the processing power of the Exynos 4410 dual core with about 20% less power consumption, and that's on a core against core basis. The 4410 was used in the Galaxy S II. So even if the Exynos 4412 was dual core, it's already natively 20% more battery efficient and twice as powerful as last year's model. Clearly we're talking about a lot more than just quad vs dual and 28nm vs 32 or 40. There is a LOT that has gone into the design of the Exynos. For instance keeping it the same size physically as the dual core model, or accepting 128 bit instructions rather than the paltry 64 bit instructions most other mobile processors are limited to.
Trust me, do your research, a Google search of Exynos 4412 brought up instant results that detail what a beast this chip set is.
Like these:
http://www.phonearena.com/news/Exyn...re-processor-in-the-Samsung-Galaxy-S3_id29615
http://www.phonearena.com/news/Sams...nos-to-appear-in-Samsung-Galaxy-S-III_id29494
And of course the official press release. Read through this and then the benchmarks you pointed out in the OP (I'm linking them anyway). Anandtech's benchmark tests were performed on demo units on display to be handled and groped by hundreds of people. There's no telling how many people had used it before they benchmarked it and no telling if they were able to do it clean (reboot device, no other apps running). If not, then they tested it after some fairly heavy use and it still proved itself a beast.
http://phandroid.com/2012/04/25/sam...ynos-4-quad-for-their-next-generation-galaxy/
http://www.anandtech.com/show/5810/samsung-galaxy-s-iii-performance-preview
Research is your best friend. If you're looking for the most powerful CPU and GPU on a phone right now, this is it. And when the devs get a hold of it, it will become even better and will really be utilized to its full.
Sent from my PG86100 using Tapatalk 2
Gene_Bailey said:
I for one certainly do. The Exynos 4412 uses a 32nm fab process as opposed to nearly every other A9 architecture based processor (like the 4+1 T3) and High K metal gate tech which basically means twice the processing power of the Exynos 4410 dual core with about 20% less power consumption and that's on a core against core basis. The 4410 was used in the Galaxy S II. So even if the Exynos was dual core, it's already natively 20% more battery efficient and twice as powerful. Clearly we're talking about a lot more than just quad vs dual and 28nm vs 32 or 40. There is a LOT that has gone into the design of the Exynos. For instance keeping it the same size physically as the dual core model, or accepting 128 bit instructions rather than the paltry 64 bit instructions most other mobile processors are limited to.
Trust me, do your research, a Google search of Exynos 4412 brought up instant results that detail what a beast this chip set is.
Like these:
http://www.phonearena.com/news/Exyn...re-processor-in-the-Samsung-Galaxy-S3_id29615
http://www.phonearena.com/news/Sams...nos-to-appear-in-Samsung-Galaxy-S-III_id29494
And of course the official press release. Read through this and then the benchmarks you pointed out in the OP (I'm linking them anyway). Anandtech's benchmark tests were performed on demo units on display to be handled and groped by hundreds of people. There's no telling how many people had used it before they benchmarked it and no telling if they were able to do it clean (reboot device, no other apps running). If not, then they tested it after some fairly heavy use and it still proved itself a beast.
http://phandroid.com/2012/04/25/sam...ynos-4-quad-for-their-next-generation-galaxy/
http://www.anandtech.com/show/5810/samsung-galaxy-s-iii-performance-preview
Research is your best friend. If you're looking for the most powerful CPU and GPU on a phone right now, this is it. And when the devs get a hold of it, it will become even better and will really be utilized to its full.
Sent from my PG86100 using Tapatalk 2
Click to expand...
Click to collapse
Zzim said:
Hmmm I don't know about that...
Click to expand...
Click to collapse
Outside of floating point tests such as linpack, CPU benches will even have quad core tegra3 well ahead of the dual core S4
Quadrant, Antutu, etc will all show the exact same performance gap and it's a big one
Let's get this straight
Main selling points for Dual Core S4 setup = battery life from 28nm die size and integrated LTE
Spartoi said:
The SGS3's Mali-400 is just overclocked.
Anyways, if the US SGS3 comes with the S4 Pro (which has the new Adreno 320) then the difference in GPU will probably be minor.
Click to expand...
Click to collapse
You are wrong, the Mali 400MP4 is a quad core GPU while the Mali 400 is a dual core GPU.
I want to see the Mali 400MP4 against the SGX543MP4

Plp are saying the Nexus 7 processor is faster than the 10's. Is this true?

Thx for any feedback
Sent from my SGH-I747 using xda app-developers app
Who's saying that? Lol. Must be mis-informed
Sent from my EVO using Tapatalk 2
Nexus 7: 1.2GHz quad core, A9 based
Nexus 10: 1.7GHz dual core, A15 based
It's probably quite difficult to find a use case on a tablet where more than two cores provides any tangible performance increase, but the much higher clock rate and newer architecture should definitely make a difference.
Sent from my HTC One X using xda app-developers app
Benchmarks show that the new dual core is much faster than the N7 quad. Also the 2GB of RAM.
I suspect it's mostly because "Plp" don't understand how 2 cores can be faster than 4 cores.
Yes, they can and yes, they are. ARM Cortex-A15 CPUs have a new and vastly superior architecture.
It's top notch. Although core counts are great for marketing, don't get too worried. After people get their hands on it, we will have a better idea of performance. I remain confident I'm about to purchase the most powerful tablet available.
Sent from my HTC One S using Tapatalk 2
Biohazard0289 said:
It's top notch. Although core counts are great for marketing, don't get too worried. After people get their hands on it, we will have a better idea of performance. I remain confident I'm about to purchase the most powerful tablet available.
Sent from my HTC One S using Tapatalk 2
Click to expand...
Click to collapse
Technically that's not true, the latest iPad, I believe, vastly outperforms this thing. That said, how is it gonna make use of all that power other than playing games, which let's face it, that's all iPad buyers do really...
Not like it'll be cracking aircrack dumps eh.
Sent from my GT-I9300 using Tapatalk 2
I saw a benchmark that placed the nexus 10 under the transformer TF300 (antutu). Would this be accurate? I would've thought that the nexus10 would've outperformed the transformer which has a quad core tegra 3 processor.
Sent from my ASUS Transformer Pad TF300T using xda app-developers app
UndisputedGuy said:
I saw a benchmark that placed the nexus 10 under the transformer TF300 (antutu). Would this be accurate? I would've thought that the nexus10 would've outperformed the transformer which has a quad core tegra 3 processor.
Sent from my ASUS Transformer Pad TF300T using xda app-developers app
Click to expand...
Click to collapse
Saw that too, as well as benchmarks in which the Note 10.1 performs better than the Nexus 10:
http://www.engadget.com/2012/11/02/nexus-10-review/
Funnily enough, other sites really do say the Nexus 10 is the fastest they have seen so far.
4z01235 said:
Nexus 7: 1.2GHz quad core, A9 based
Nexus 10: 1.7GHz dual core, A15 based
Click to expand...
Click to collapse
4 * 1.2 GHz = 4.8 GHz
2 * 1.7 GHz = 3.4 GHz
Clearly, the Nexus 7 is much faster.
That is the discussion level when people come to the conclusion stated in the topic. Even in benchmarks - which all of the reviewers agree don't mirror the level of performance they experience in real world use - the Nexus 10 comes out way higher than the Nexus 7 while pushing 4 times the pixels! The Nexus 10 runs circles around its little brother...
The differences are so vast, it is almost incomprehensible to some. I'll use my same reference as I do with desktop processors. When quad cores really started coming around and making big appearances, software and games weren't exactly capable of utilizing the abilities. So, an e8400 3.0GHz dual core would outperform its bigger quad core brother that ran at the same frequency and architecture in a lot of applications. Back in 2008, gaming rigs almost always would sport the dual core processors. Simple fact is that those processors use cache that is on the die. 6mb for two cores is better than 8mb for four cores.
So, until software developers really have a need for a quad core, the dual cores will run with the quad cores just fine. Benchmarking however, almost never show any real world performance. I'm sure this dual core is going to surprise even the more skeptical guys and gals.
Sent from the Blue Kuban on my Epic 4G Touch
Maxey said:
Saw that too, as well as benchmarks in which the Note 10.1 performs better than the Nexus 10:
http://www.engadget.com/2012/11/02/nexus-10-review/
Funnily enough, other sites really do say the Nexus 10 is the fastest they have seen so far.
Click to expand...
Click to collapse
The benchmarks are selling the N10 very, very short. There is a great review linked in this post where the author says exactly that.
Here's the post that stamps its feet and declares the absolute truth:
Cores aren't everything.
And you can think that we're just a bunch of fanboys trying to justify the use of dual core in the Nexus 10, but even non-fanboys who know how these processors work would tell you a dual core A15 is far more powerful than a quad core A9.
Now everything below this point is purely observation based (I'm not an engineer nor have I really studied up on computer parts), but I think it gives a casual user a good idea of how a CPU works.
So, before you go out screaming "wtf. why are there only 2 cores in the nexus 10. lol so noob," ask yourself this: what do extra cores even do? If you can't answer that, then don't complain because you clearly don't know what you're talking about. It's not simple multiplication of the frequency (for example, 4 x 1.2ghz = 4.8ghz). A 4.8ghz phone would be able to run Crysis without a problem (of course with a dedicated GPU, but that's besides the point). That's obviously not the case.
The major difference between the A15 and the A9 is their microarchitecture. You can think of it this way. A mouse has to go through a course. There are two courses:
[image: a complicated maze]
and
[image: a straight route].
We call the maze A9 and we call the straight route A15.
Now, we will pretend "ghz" is a measure of intelligence (higher the better). A mouse that has an intuition level of 2.0ghz has to go through maze A9. Complicated right? It'd take awhile even if the mouse was kinda smart. But now a mouse that has an intuition level of 1.7 ghz has to go through maze A15. Easy. The maze route is a straight line - who wouldn't be able to find the end? So in the end, we ask ourselves, "Who gets the job done faster?" Obviously, the mouse in the A15 does even though it's not as smart. In this way, the A15 is just more efficient. So clock speeds (which are things like 1.0 ghz, 2.0 ghz, 1.5 ghz, 1.8 ghz) are only a part of the story. You need to factor in the microarchitecture (or in this example, the way the maze is organized).
Now, we go into cores. We can think of cores as splitting the work. Now, we will consider this scenario: we want to calculate how much time it takes for the maze to be completed four times (regardless of how many mice you are using - we just want to complete this maze 4x). A quad-core CPU that contains the 2.0ghz mouse can be represented by 4 mazes; the dual-core CPU that contains the 1.7ghz mouse can be represented by 2 mazes.
With the quadcore CPU, we will finish the task in only one go. Just put a mouse in each maze, and once they're done, we've completed the maze four times. With the dualcore CPU, we will finish the task in two go's. The mouse in each maze will have to go through each maze twice to finish the maze 4x. However, let's look at the "microarchitecture," or the maze route. Even though the dual core CPU (A15) needs to finish this task in two go's, it'll still do it a lot faster, because the route is far easier to go through. This makes the A15 more powerful. You can complete tasks quickly.
So when judging the "power" of SOCs, you need to keep three things in mind: cores, clock speed, and microarchitecture.
Clock speed = frequency such as 1.5ghz
Cores = dual core, quad core
Microarchitecture = A9, A15
Sometimes the microarchitecture won't be a vast enough improvement to justify a seriously lopsided clock speed. For example, a 4.5 ghz Intel Sandy Bridge CPU will be tons faster than a 3.2 ghz Intel Ivy Bridge CPU even though the Ivy Bridge CPU has a new microarchitecture. But, an Intel Sandy Bridge CPU clocked at 4.5 ghz will be slower than an Ivy Bridge CPU clocked at 4.4 ghz because the microarchitecture is slightly better.
Anyway, I hope this clears things up. I know the information here is probably not 100% accurate, but I'm hoping this is easier to understand than pure numbers and technical talk.
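If you want the maze analogy as a rough formula: performance is roughly IPC x clock x however many cores the workload can actually keep busy. Here is a toy sketch; the IPC values and the parallel fraction are assumptions for illustration, not measured numbers for these chips:

```python
# Toy model: perf ~= IPC * clock * effective core speedup (Amdahl-style).
# The IPC figures and the 50% parallel fraction are assumptions, picked only
# to show why 2 fast cores can beat 4 slower ones.

def relative_perf(ipc, ghz, cores, parallel_fraction=0.5):
    serial = 1.0 - parallel_fraction
    core_speedup = 1.0 / (serial + parallel_fraction / cores)
    return ipc * ghz * core_speedup

nexus7 = relative_perf(ipc=1.0, ghz=1.2, cores=4)   # quad Cortex-A9 (baseline IPC assumed)
nexus10 = relative_perf(ipc=1.5, ghz=1.7, cores=2)  # dual Cortex-A15 (~1.5x IPC assumed)

print(f"Nexus 7  (4 x A9  @ 1.2 GHz): {nexus7:.2f}")
print(f"Nexus 10 (2 x A15 @ 1.7 GHz): {nexus10:.2f}")
# With only part of a real workload running in parallel, the higher-IPC,
# higher-clock dual core comes out ahead, even though 4 x 1.2 is the bigger
# number on paper.
```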
Where are the idiots that keep saying this? Honestly, I'm growing tired
Think about it, do you really think the quad core in your tablet is faster than say, a dual core core i5 in a laptop?
NO
Maybe this explains to you how a new processor architecture (A15), even at a dual core, can be faster than a quad core on the old architecture
Good explanation @404 !
What I'm worried about is whether the processor can handle that screen. I wonder if its possible to overclock it?

NVIDIA Tegra 4 vs Nexus 10 processor

They've unveiled it today
http://www.engadget.com/2013/01/06/nvidia-tegra-4-official/
and apparently it's much more powerful and faster than the Exynos on the Nexus 10, but I don't know that much about this kind of tech. I'm probably finally going to buy the Nexus 10 this week if Samsung doesn't unveil a more powerful tablet, so I was wondering if this Tegra 4 processor is worth waiting for until it's implemented in a tablet.
May TEGRA 3 Rest in Peace ...
Sent from my GT-I9100 using Tapatalk 2
Yes that thing is packing heat. Best case, the next device with a tegra 4 will come out next Christmas. Unless they've been hiding something.
cuguy said:
Yes that thing is packing heat. Best case, the next device with a tegra 4 will come out next Christmas. Unless they've been hiding something.
Click to expand...
Click to collapse
It will be out somewhere b/w June and August maybe..
It will not take that long ...
Sent from my GT-I9100 using Tapatalk 2
i think march....mark my words
Their browser test is having the Nexus 10 run Chrome while the Tegra runs AOSP. In my eyes that makes it a 100% unfair comparison.
Between bad experiences with Tegra 2 and 3 (Atrix/TF700) and their requirement that Tegra optimized games not run on other SoC vendors without any real reason other than because they can, I cant even consider a mobile Nvidia device. All they're good for is keeping the more reputable chip makers on their toes.
yes it's nice
Would be interesting to see this with both devices running the AOSP browser! From my experience it is much faster than the current Chrome version (which is still version 18 on Android, compared to 23 on desktop). Maybe the Tegra 4 would be faster as well, but not by that much.
Everything on my N10 is extremely fast and fluid, so I wouldn't wait for whenever the first Tegra 4 devices become available. Plus it's a Nexus, so you know what you are buying!
Jotokun said:
Their browser test is having the Nexus 10 run Chrome while the Tegra runs AOSP. In my eyes that makes it a 100% unfair comparison.
Between bad experiences with Tegra 2 and 3 (Atrix/TF700) and their requirement that Tegra optimized games not run on other SoC vendors without any real reason other than because they can, I cant even consider a mobile Nvidia device. All they're good for is keeping the more reputable chip makers on their toes.
Click to expand...
Click to collapse
Agreed, they're making an Apples and Pears comparison that was undoubtedly set to show the new processor in a good light. It's only to be expected, it is a sales pitch after all. It will no doubt be a faster chip though.
Sent from my Nexus 10 using XDA Premium HD app
I would much rather see a couple benchmark runs myself. A time comparison of a web browser is no way to test the power of a new chipset.
Still, I would expect Tegra 4 to be WAY WAY WAY more powerful than the Exynos 5250. Both devices use the A15 architecture, and Tegra 4 has twice as many CPU cores as we have. This alone is already a big boost in multithreaded apps. Then look at the GPU where you cant even compare the two at all except by end result numbers. They are just far too different. We have 4 GPU cores, Tegra 4 has 72 GPU cores. But those cores are designed far differently and not nearly as powerful per core. It is all about the companies definition of what a GPU "core" is. And then you have a smaller process node as well, which by itself already promises to use less power than the larger process node the Exynos 5250 uses.
I would honestly expect the Tegra 4 chipset to completely destroy our tablet in terms of performance, but a much more fair comparison would be to compare the Tegra 4 to the Exynos 5 quad. Those two are actually designed to compete with each other.
If you want to compare Exynos and Tegra 4 then wait for the Exynos 5450 (quad A15), which should come with the Galaxy S4. The number of cores makes a difference here; T4 is quad, but early GL benchmarks show that the A6X and Exynos 5250 have a better GPU.
First Tegra 4 Tablet running stock android 4.2:
http://www.theverge.com/2013/1/7/3845608/vizio-10-inch-tablet-combines-tegra-4-android-thin-body
Let's hope and pray Vizio managed to add microSD capability to it. Free of Nexus restraints, it should be doable, but since it is running stock JB, the storage would have to be mounted as USB (?). So far this Vizio 10" was the most exciting Android development out of CES. We have a few more hours of press events scheduled in Vegas and then it will be all over.
rashid11 said:
Let's hope and pray Vizio managed to add microSD capability to it. Free of Nexus restraints, it should be doable, but since it is running stock JB, the storage would have to be mounted as USB (?). So far this Vizio 10" was the most exciting Android development out of CES. We have a few more hours of press events scheduled in Vegas and then it will be all over.
Click to expand...
Click to collapse
Don't expect the Nexus advantage of up-to-date software or timely updates.
EniGmA1987 said:
I would much rather see a couple benchmark runs myself. A time comparison of a web browser is no way to test the power of a new chipset.
Still, I would expect Tegra 4 to be WAY WAY WAY more powerful than the Exynos 5250. Both devices use the A15 architecture, and Tegra 4 has twice as many CPU cores as we have. This alone is already a big boost in multithreaded apps. Then look at the GPU where you cant even compare the two at all except by end result numbers. They are just far too different. We have 4 GPU cores, Tegra 4 has 72 GPU cores. But those cores are designed far differently and not nearly as powerful per core. It is all about the companies definition of what a GPU "core" is. And then you have a smaller process node as well, which by itself already promises to use less power than the larger process node the Exynos 5250 uses.
I would honestly expect the Tegra 4 chipset to completely destroy our tablet in terms of performance, but a much more fair comparison would be to compare the Tegra 4 to the Exynos 5 quad. Those two are actually designed to compete with each other.
Click to expand...
Click to collapse
look at the new iPhone: only 2 cores (different architecture) beating the higher-clocked dual-core Galaxy S3 in some disciplines..
this presentation is scientifically SO SO irrelevant, especially because they use different software.. I LOL SO HARD at ppl thinking this is anywhere near comparable
schnip said:
look at the new iPhone: only 2 cores (different architecture) beating the higher-clocked dual-core Galaxy S3 in some disciplines..
this presentation is scientifically SO SO irrelevant, especially because they use different software.. I LOL SO HARD at ppl thinking this is anywhere near comparable
Click to expand...
Click to collapse
The new iPhone 5 doesn't use the same ARM architecture as the S3 though, it is a custom design. So those can be compared against each other fine to see which architecture is better. And if a slower clock speed CPU gets better scores then we know it is a superior design. This is the basis of all benchmarking. If we were only allowed to compare the exact same architectures together then we wouldn't learn anything.
Tegra4 uses a (probably slightly modified) A15 core, and the Exynos 5xxx uses a fairly stock A15 core. So a higher clocked A15 should beat out a lower clocked A15 in a direct comparison no matter what. Then when you throw 2 additional cores on top it should always win in multithreaded benchmarks too. Seems pretty common sense to me.
The main difference will be in the graphics side of things, where Nvidia has their own designed GPU compared to Samsung's use of the Mali GPU's.
You can still compare them together just fine, it just needs to be both of them on the same browser if there is a browser comparison being done. In this PR release, Nvidia skewed the results like all companies do. So we can't really see the difference between the two from those pictures and we need to wait for 3rd party review sites to do proper testing to see actual results. Yet we can still estimate performance plenty fine since we have a baseline of the architecture already with this tablet.
"Tegra 4 more powerful than Nexus 10"... well duh! It's a new chip just unveiled by nvidia that won't show up in any on sale devices for at least a couple of months. Tablet and smartphone tech is moving very quickly at the moment, nvidia will hold the android performance crown for a couple of months and then someone (probably samsung or qualcomm) will come along with something even more powerful. Such is the nature of the tablet/smartphone market. People that hold off on buying because there is something better on the horizon will be waiting forever because there will always be a better device just a few months down the line!
EniGmA1987 said:
The new iPhone 5 doesn't use the same ARM architecture as the S3 though, it is a custom design. So those can be compared against each other fine to see which architecture is better. And if a slower clock speed CPU gets better scores then we know it is a superior design. This is the basis of all benchmarking. If we were only allowed to compare the exact same architectures together then we wouldn't learn anything.
Tegra4 uses a (probably slightly modified) A15 core, and the Exynos 5xxx uses a fairly stock A15 core. So a higher clocked A15 should beat out a lower clocked A15 in a direct comparison no matter what. Then when you throw 2 additional cores on top it should always win in multithreaded benchmarks too. Seems pretty common sense to me.
The main difference will be in the graphics side of things, where Nvidia has their own designed GPU compared to Samsung's use of the Mali GPU's.
You can still compare them together just fine, it just needs to be both of them on the same browser if there is a browser comparison being done. In this PR release, Nvidia skewed the results like all companies do. So we can't really see the difference between the two from those pictures and we need to wait for 3rd party review sites to do proper testing to see actual results. Yet we can still estimate performance plenty fine since we have a baseline of the architecture already with this tablet.
Click to expand...
Click to collapse
That was kind of his point.
I don't think anyone is denying that the tegra will be faster. What's being disputed here is just how much faster it is. Personally, I don't think it'll be enough to notice in everyday use. Twice the cores does not automatically a faster CPU make, you need software that can properly take advantage and even then is not a huge plus in everyday tasks. Also, in the past Nvidia has made pretty crappy chips due to compromise. Good example being how the tegra 2 lacked neon support. The only concrete advantages I see are more cores and a higher clock rate.
Based on the hype : performance ratio of both Tegra 2 & 3 I wouldn't have high hopes until I see legit benchmark results.
What does seem promising though, is the fact that they are making more significant changes than from T2 to T3, such as dual channel memory (finally after 1-2 years of all other SoCs having it -.-) and the GPU cores are different too.
Still, the GPU has always been the weakest point of Tegra, so I still don't think it can beat an overclocked T-604 by much, even though this time around they will not be the first ones to debut a next-gen SoC. Given the A15 architecture they can't really screw up the CPU even if they wanted to, so that should be significantly faster than the Exynos 5 Dual.
I've also just read an article on AnandTech about power consumption, and the SoC in the Nexus 10 consumes several times as much power as other tablet chipsets, which makes me wonder how Nvidia plans to solve the battery life issue with twice as many cores and a (seemingly) beefier GPU, not even mentioning implementation in phones...
freshlysqueezed said:
First Tegra 4 Tablet running stock android 4.2:
http://www.theverge.com/2013/1/7/3845608/vizio-10-inch-tablet-combines-tegra-4-android-thin-body
Apparently this is the tablet that comes to take the Nexus 10's spot: a Vizio 10-inch tablet with Tegra 4, a 2560 x 1600 resolution, 32GB of storage, and Android 4.2. It should be coming out in Q1 2013. This one makes me want to wait and hear some more about it before I buy the Nexus 10, although to be honest the brand is a bit of a letdown for me.
Edit: for the 10-inch model, key specs (aside from the Tegra 4) include a 2,560 x 1,600 display, 32GB of on-board memory, NFC, and dual 5MP / 1.3MP cameras.
http://www.engadget.com/2013/01/07/vizio-10-inch-tegra-4-tablet-hands-on/

Should Intel Be Worried?

I know, this is one of those silly little topics that gets thrown around every time a newer, faster ARM chip comes out, but this is the first time that I personally have ever seen an ARM chip as a threat to Intel. When I saw the Galaxy S6 scoring around 4800 multi-core, I stopped and thought to myself, "hey, that looks pretty darn close to my fancy i5." Sure enough, the i5-5200U only scores around 5280 in the Geekbench 64-bit multi-core benchmark. I understand that this is only possible because the Galaxy S6 has 8 cores, but it's still very impressive what ARM and Samsung were able to achieve using a fraction of the power Intel has on hand. Of course I don't think that this chip will take over the market, but if ARM's performance continues to increase at the same rate while maintaining the same low power draw, then Intel might have some real competition in the laptop space in the near future. Heck, maybe Microsoft will bring back RT but with full app support.
I also know that I didn't account for how much power the GPU was drawing, but I feel as if that wouldn't be the only factor after seeing the issues with Core M.
I doubt they're worried. Intel CPUs are wicked fast. I have a 3-year-old i7 and it's faster than most of AMD's current-gen CPUs.
If Intel is able to apply the same methods/engineering they use on desktop CPUs to the mobile platform, I bet it will smoke anything out there, kind of like how Intel CPUs kill basically anything AMD can put out.
tcb4 said:
I know, this is one of those silly little topics that gets thrown around every time a newer, faster ARM chip comes out, but this is the first time that I personally have ever seen an ARM chip as a threat to Intel. When I saw the Galaxy S6 scoring around 4800 multi-core, I stopped and thought to myself, "hey, that looks pretty darn close to my fancy i5." Sure enough, the i5-5200U only scores around 5280 in the Geekbench 64-bit multi-core benchmark. I understand that this is only possible because the Galaxy S6 has 8 cores, but it's still very impressive what ARM and Samsung were able to achieve using a fraction of the power Intel has on hand. Of course I don't think that this chip will take over the market, but if ARM's performance continues to increase at the same rate while maintaining the same low power draw, then Intel might have some real competition in the laptop space in the near future. Heck, maybe Microsoft will bring back RT but with full app support.
I also know that I didn't account for how much power the GPU was drawing, but I feel as if that wouldn't be the only factor after seeing the issues with Core M.
It is important to remember that ultimately the same constraints and limitations apply to both Intel and ARM CPUs. After all, ARM and x86 are just instruction set architectures. There is no evidence to suggest that ARM is somehow at a significant advantage versus Intel in terms of increasing performance while keeping power low. It is generally accepted now that ISAs have a negligible impact on IPC and performance per watt. Many of these newer ARM SoCs, like the Snapdragon 810, are having overheating issues themselves, and the higher-performance Nvidia SoCs with impressive numbers are using 10+ watt TDPs too.
It is also always a bit tricky to make cross-platform and cross-ISA CPU comparisons in benchmarks like Geekbench, and for whatever reason Intel CPUs tend to do relatively poorly in Geekbench compared to other benchmarks. You can try to compare other real-world uses between the i5-5200U and the Exynos 7420, and I can assure you that the tiny Exynos will be absolutely no match for the much larger, wider, and more complex Broadwell cores. Don't get me wrong, the Exynos 7420 is very impressive for its size and power consumption, but I don't think we can take that Geekbench comparison seriously.
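One rough sanity check on that Geekbench comparison is to normalize the quoted scores per core; the sketch below uses the approximate numbers from the earlier post and the published core counts, and it ignores SMT, big.LITTLE, and scaling efficiency, so treat it as an illustration only:

```python
# Crude per-core normalization of the Geekbench multi-core scores quoted
# above. Scores are the approximate numbers from the post; core counts are
# the published ones (Exynos 7420: 8 cores, i5-5200U: 2 cores / 4 threads).

scores = {
    "Galaxy S6 (Exynos 7420)": (4800, 8),  # (multi-core score, physical cores)
    "Intel i5-5200U": (5280, 2),
}

for name, (multi_core, cores) in scores.items():
    print(f"{name}: {multi_core} total, ~{multi_core / cores:.0f} per core")
# ~600 per core for the Exynos vs ~2640 per core for the i5 -- the totals
# only look close because one chip throws eight cores at the problem.
```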
The fastest low-power core right now is without a doubt the Broadwell Core M, which is a 4.5-watt part. It is built on Intel's 14nm process, which is more advanced than Samsung's.
http://www.anandtech.com/show/9061/lenovo-yoga-3-pro-review/4
"Once again, in web use, the Core M processor is very similar to the outgoing Haswell U based Yoga 2 Pro. Just to put the numbers in a bit more context, I also ran the benchmarks on my Core i7-860 based Desktop (running Chrome, as were the Yogas) and it is pretty clear just how far we have come. The i7-860 is a four core, eight thread 45 nm processor with a 2.8 GHz base clock and 3.46 GHz boost, all in a 95 watt TDP. It was launched in late 2009. Five years later, we have higher performance in a 4.5 watt TDP for many tasks. It really is staggering."
"As a tablet, the Core M powered Yoga 3 Pro will run circles around other tablets when performing CPU tasks. The GPU is a bit behind, but it is ahead of the iPad Air already, so it is not a slouch. The CPU is miles ahead though, even when compared to the Apple A8X which is consistently the best ARM based tablet CPU.
"
---------- Post added at 04:46 AM ---------- Previous post was at 04:33 AM ----------
tft said:
I doubt they're worried. Intel CPUs are wicked fast. I have a 3-year-old i7 and it's faster than most of AMD's current-gen CPUs.
If Intel is able to apply the same methods/engineering they use on desktop CPUs to the mobile platform, I bet it will smoke anything out there, kind of like how Intel CPUs kill basically anything AMD can put out.
This.
All of the little Atom CPUs we see in mobile right now are much smaller, narrower, and simpler cores than Intel's Core chips. Once you see Intel's big cores trickle down into mobile, it will get much more interesting.
Intel will catch up... quick too, just watch. They've been working on 64-bit for over a year now, and they're already onto 14nm. Qualcomm should be worried; I don't think they're ready for this competition. They talked trash about octa-cores and 64-bit... now they're doing both, and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems.
rjayflo said:
Intel will catch up... quick too, just watch. They've been working on 64-bit for over a year now, and they're already onto 14nm. Qualcomm should be worried; I don't think they're ready for this competition. They talked trash about octa-cores and 64-bit... now they're doing both, and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems.
Technically, Intel and AMD have had 64-bit for well over a decade now with AMD64/EM64T, and many Intel mobile processors have had it for years, so the hardware has supported it for a while, but 64-bit tablets and phones haven't started shipping until very recently.
Indeed, Intel has been shipping 14nm products since last year, and their 14nm process is more advanced than Samsung's. Note that there is no real science behind naming a process node, so terms like "14nm" and "20nm" have turned into pure marketing. For example, TSMC's 16nm isn't actually any smaller than their 20nm process. Presumably Intel's 14nm also yields better and allows for higher-performance transistors than Samsung's 14nm.
It is likely that Samsung has the most advanced process outside of Intel, however. I do agree that Qualcomm is in a bit of trouble at the moment, with players like Intel really growing in the tablet space and Samsung coming out with the very formidable Exynos 7420 SoC in the smartphone space. The SD810 just isn't cutting it and has too many problems. Qualcomm should also be concerned that both Samsung and Intel have managed to come out with high-end LTE radios; this was something Qualcomm pretty much had a monopoly on for years. Intel now has the 7360 LTE radio and Samsung has the Shannon 333 LTE.
rjayflo said:
Intel will catch up... quick too, just watch. They've been working on 64-bit for over a year now, and they're already onto 14nm. Qualcomm should be worried; I don't think they're ready for this competition. They talked trash about octa-cores and 64-bit... now they're doing both, and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems.
I agree about Qualcomm; I actually mentioned that some time ago.
I think what happened to Nokia/BlackBerry will happen to Qualcomm: they got huge, stopped innovating, and ended up being left in the dust. Perhaps Qualcomm thought they had a monopoly and that Samsung and other device makers would continue to buy their chips.
In the end, I think the only thing Qualcomm will have left is a bunch of patents.
I understand that Core M is a powerful part, but I'm not sure I believe their TDP figures. I am, however, more inclined to believe Samsung, as they are achieving this performance with an SoC that lives inside a phone; in other words, they don't have the surface area to dissipate large quantities of heat. Nvidia has always skewed performance-per-watt numbers, and, as a result, they haven't been able to put an SoC in a phone for years. Now, the reason I doubt Intel's claims is the battery life tests performed by reviewers and the low battery life claims made by manufacturers. For instance, the new MacBook and Yoga 3 Pro aren't showing large improvements in battery life when compared to their 15W counterparts.
I'm not sure how I feel about the iPad comparison, though; I feel as if you just compounded the issue by showing us a benchmark that was not only cross-platform but also run in different browsers.
Also, I think I understand what you mean about how an ISA will not directly impact performance per watt, but is it not possible that Samsung and ARM could just have a better design? I mean, Intel and AMD both utilize the same instruction set, but Intel runs circles around AMD in terms of efficiency. I may be way off base here, so feel free to correct me.
I think Qualcomm is busy working on a new Krait of their own, but right now they're in hot water. They got a little lazy milking 32-bit chips, but once Apple announced their 64-bit chip they panicked and went with a stock ARM core design. We'll have to see if they can bring a 64-bit custom chip to the table, but right now Samsung's 7420 appears to be the best thing on the market.
tcb4 said:
I understand that Core M is a powerful part, but I'm not sure I believe their TDP figures. I am, however, more inclined to believe Samsung, as they are achieving this performance with an SoC that lives inside a phone; in other words, they don't have the surface area to dissipate large quantities of heat. Nvidia has always skewed performance-per-watt numbers, and, as a result, they haven't been able to put an SoC in a phone for years. Now, the reason I doubt Intel's claims is the battery life tests performed by reviewers and the low battery life claims made by manufacturers. For instance, the new MacBook and Yoga 3 Pro aren't showing large improvements in battery life when compared to their 15W counterparts.
Technically, the Core M will dissipate more than 4.5W for "bursty" workloads, but under longer steady workloads it will average out to 4.5W. The ARM tablet and phone SoCs more or less do the same thing. In terms of actual battery life results, yes, the battery life of most of these devices hasn't really changed since the last generation of Intel U-series chips, but that isn't an apples-to-apples comparison. As SoC power consumption continues to drop, it is becoming a smaller and smaller chunk of total system power consumption. Lenovo did a poor job, IMO, of implementing the first Core M device, but Apple will almost certainly do a much better job. The SoC is only one part of the system; it is the responsibility of the OEM to properly package up the device, do proper power management, provide an adequate battery, etc. Yes, the new MacBook doesn't get significantly longer battery life, but it also weighs only 2.0 lbs and has a ridiculously small battery. It also has a much higher-resolution, more power-hungry screen and yet manages to keep battery life equal to the last generation. Benchmarks have also indicated that the newer 14nm Intel CPUs are much better at sustained performance than the older 22nm Haswells; this is something that phones and tablets are typically very poor at.
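To put a toy number on the "SoC is a shrinking slice of system power" point, here is a back-of-the-envelope runtime estimate; every figure in it is an assumed round number, not a measurement of any specific device:

```python
# Rough illustration of why a lower-power SoC doesn't translate 1:1 into
# longer battery life: the SoC is only one slice of total system power.
# All numbers below are assumed round figures for illustration.

battery_wh = 35.0         # assumed battery capacity in watt-hours
rest_of_system_w = 4.0    # assumed display, radios, storage, etc.

for soc_w in (4.5, 2.5):
    total_w = soc_w + rest_of_system_w
    print(f"SoC {soc_w} W -> system {total_w} W -> {battery_wh / total_w:.1f} h")
# SoC 4.5 W -> 8.5 W system -> ~4.1 h;  SoC 2.5 W -> 6.5 W system -> ~5.4 h.
# Cutting SoC power by ~45% only buys ~30% more runtime, because the rest
# of the system still draws the same amount.
```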
tcb4 said:
I'm not sure how I feel about the iPad comparison, though; I feel as if you just compounded the issue by showing us a benchmark that was not only cross-platform but also run in different browsers.
A very fair point; browser benchmarks are especially notorious for being misleading. I think in this case Chrome was used across the board, which helps a little. My point in showing this is that we need to take those Geekbench results with a grain of salt. Outside of that benchmark, I don't think you'll find the A8X or Exynos 7420 getting anywhere near a higher-specced Core M, let alone an i5-5200U, in any real-world use or any other benchmark, browser-based or not. Even other synthetic benchmarks, like the 3DMark physics test, don't show the Intel CPUs nearly as low as Geekbench does.
tcb4 said:
Also, I think I understand what you mean about how an ISA will not directly impact performance per watt, but is it not possible that Samsung and ARM could just have a better design? I mean, Intel and AMD both utilize the same instruction set, but Intel runs circles around AMD in terms of efficiency. I may be way off base here, so feel free to correct me.
This is correct.
It is certainly possible for Samsung to have a design that is more power-efficient than Intel's when it comes to making a 2W phone SoC, but that won't be because Samsung uses the ARM ISA while Intel uses x86. At this point, the ISA is mostly incidental and isn't going to greatly impact the characteristics of your CPU; the CPU design and the ISA that the CPU implements are different things. The notion of a "better design" is also a little tricky, because a design that is best for a low-power SoC may not necessarily be best for a higher-wattage CPU. Intel absolutely rules the CPU landscape from 15W and up. Despite all of the hype around ARM-based servers, Intel has continued to dominate servers and has actually increased its lead in that space, since Intel's performance per watt is completely unmatched in higher-performance applications. Intel's big-core design is simply better for that application than any ARM-based CPU. It is important to remember that just because you have the best 2-watt SoC for performance per watt doesn't mean you can scale that design into a beastly 90-watt CPU. If it were that easy, Intel would have easily downscaled their big-core chips to dominate mobile SoCs.
You frequently find people reasoning that, at 1.2 GHz, Apple's A8 SoC is very efficient and fast, and then claiming that if they could clock that SoC at 3+ GHz it should be able to match an Intel Haswell core, but there is no guarantee that the design allows such high clocks. You have to consider that Apple may have made design choices that provide great IPC at the cost of limiting clock frequency.
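A toy dynamic-power calculation makes the same point; all of the voltage and frequency numbers below are made up for illustration and are not Apple's actual design parameters:

```python
# Toy model of why "just clock the A8 at 3 GHz" doesn't follow: dynamic
# power scales roughly with C * V^2 * f, and hitting higher clocks usually
# needs a higher supply voltage. All numbers here are assumed.

def relative_power(freq_ghz: float, volts: float,
                   base_freq: float = 1.2, base_volts: float = 0.9) -> float:
    """Dynamic power relative to the baseline operating point."""
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

# Baseline: 1.2 GHz at an assumed 0.9 V -> 1.0x power (call it ~2 W).
# Hypothetical 3.0 GHz point assumed to need ~1.2 V:
print(f"{relative_power(3.0, 1.2):.1f}x the baseline power")
# ~4.4x -- a ~2 W phone SoC budget becomes roughly 9 W, before even asking
# whether the pipeline could close timing at that clock at all.
```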
