Hi,
I'm getting interested in a comparison. We keep discussing whether the 4-core Sofia 3GR units are faster than the 8-core PX5 units. But are they? I would really like to know.
Some users say that benchmark scores can't be compared, but those users are sometimes somewhat "biased", and it is not true. You can't compare scores from different benchmark apps with each other. However, if the same app, Geekbench 4 in this case, runs the exact same (software-"rendered") graphics test on one unit and on another, or does the same Dijkstra calculation(1), you can definitely compare the units.
Of course, the overall performance and user experience depend highly on how well your unit's Android version is optimized. We have already seen that the @gtxaspec custom ROM gives a smoother experience on the Sofia 3GR units than the Joying stock ROM. So a good ROM can compensate for a slower CPU, and vice versa.
And why Geekbench 4 instead of Antutu?
The Joying/FYT/SYU software on the Sofia 3GR "knows" Antutu. When it detects that Antutu is running, it optimizes a number of things to get the highest scores. I don't want that, and I don't know how much "Antutu optimization" is taking place on the Sofia units versus the PX5 units.
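(Purely as a hypothetical illustration of what such special-casing can look like, and not Joying's actual code: a ROM service only needs the benchmark's package name. "com.antutu.ABenchMark" is Antutu's real package name and "performance"/"interactive" are standard Linux cpufreq governors; the detection logic itself is invented.)

```python
# Hypothetical sketch: special-casing a benchmark by package name.
def governor_for(running_packages):
    if "com.antutu.ABenchMark" in running_packages:
        return "performance"   # pin clocks high while the benchmark runs
    return "interactive"       # normal day-to-day governor

print(governor_for(["com.android.launcher", "com.antutu.ABenchMark"]))
# -> performance
```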
The Joying Sofia 3GR units come in two flavours: the slightly older 5009 SOMs and the newer 6021 SOMs.
The 5009 SOM runs at 1040 MHz; the 6021 runs at 1200 MHz.
I have a 6021.
CPU scores: https://browser.geekbench.com/v4/cpu/8132254
GPU scores: https://browser.geekbench.com/v4/compute/2340164
I'm very interested in both the single-core and the multi-core values. The single-core value determines how fast a single-threaded app runs on one core, and a lot of apps, including the Joying ones, are still single-threaded.
And of course also in the multi-core scores, which show how the unit should perform under heavy load, and where multi-threaded apps (the Google apps like Chrome, Google Maps, etc.) will benefit greatly. (Note that using more than 3 cores hardly improves a single multi-threaded app's performance, but on a heavily loaded system a "few cores more" might make a difference.)
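That diminishing return is just Amdahl's law in action. A minimal sketch, assuming a hypothetical app whose work is 80% parallelizable (the fraction is invented for illustration only):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
# p = parallelizable fraction of the work, n = number of cores.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 3, 4, 8):
    print(f"{n} cores: {speedup(0.8, n):.2f}x")
# prints 1.00x, 1.67x, 2.14x, 2.50x, 3.33x:
# each extra core past the third buys noticeably less.
```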
Again: I would really like to see Geekbench 4 scores for a 2GB PX5 Joying as well (and maybe also a 5009 Sofia 3GR). Not scores from other apps (like Antutu), as you really can't compare those with each other.
The results will not say anything about user experience, buggy or great apps, or overall experience. It will simply compare "raw power".
_____________________________________
(1): The Dijkstra algorithm is a "shortest path" algorithm. The A* algorithm is a parametrizable, optimized form of the Dijkstra algorithm, and it is used in 9 out of 10 navigation apps to (re)calculate routes.
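For the curious, here is a minimal sketch of Dijkstra's algorithm over a toy road graph (the graph and costs are invented for illustration); A* is essentially the same loop plus a heuristic that biases the search toward the destination:

```python
import heapq

def dijkstra(graph, start):
    """graph: node -> list of (neighbor, edge_cost). Returns shortest costs."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry: a shorter path was already found
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return dist

roads = {"A": [("B", 5), ("C", 2)], "C": [("B", 1)], "B": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2}: A->C->B beats A->B
```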
Comparing ARM vs x86... it's tough to get a good comparison, even using Geekbench. You could compare app launch times, etc.
Technically speaking, the ARM cores in the PX5 are "slower" than their x86 counterparts. I don't think you'll get any real-world results from benchmarks.
The PX5 is old too... as is the Intel. At the time, the x86 was the better contender.
Just thinking out loud.
gtxaspec said:
Comparing ARM vs x86... it's tough to get a good comparison, even using Geekbench. You could compare app launch times, etc.
Technically speaking, the ARM cores in the PX5 are "slower" than their x86 counterparts. I don't think you'll get any real-world results from benchmarks.
The PX5 is old too... as is the Intel. At the time, the x86 was the better contender.
Just thinking out loud.
Yes, we had these discussions on the old Carjoying forum as well, where especially lbdroid used these arguments. And that is exactly why I don't want to mix "real-life experience" with simple raw-power measurements.
Until now I have never seen a real comparison at the CPU/GPU level, so nobody actually knows what he/she is talking about, because nobody delivers or compares hard numbers (and I really don't mean to offend you).
And don't forget: Intel, Rockchip and AMD also use benchmarks to compare their chips. Are those really so untrustworthy then? And Pentiums, Atoms, Celerons, Xeons and Core i3/i5/i7 chips are compared with each other despite big differences in architecture, core counts, manufacturing optimizations, power/battery optimizations, clock speeds, burst speeds, L1/L2/L3 cache and so on.
And yes: in the end it also matters whether you run a lightweight Linux or a bloated Windows 10 on that lightweight hardware, but at least you know the underlying capabilities of the hardware.
So again: I really would like to see numbers, not opinions based on gut feeling and superficial spec sheets.
I have a very technical background and only trust numbers. Theories are no more than that until they are proven in real life or by numbers.
(It is the same with bigger or smaller fans to cool the unit. Some users say the bigger, the better. From my technical background I say those big fans in such small, confined spaces are useless, but there too I don't provide numbers, so actually I don't know, nobody else knows, and everybody is acting on gut feeling until somebody really does the measurements.)
surfer63 said:
Yes, we had these discussions on the old Carjoying forum as well, where especially lbdroid used these arguments. And that is exactly why I don't want to mix "real-life experience" with simple raw-power measurements.
Until now I have never seen a real comparison at the CPU/GPU level, so nobody actually knows what he/she is talking about, because nobody delivers or compares hard numbers (and I really don't mean to offend you).
And don't forget: Intel, Rockchip and AMD also use benchmarks to compare their chips. Are those really so untrustworthy then? And Pentiums, Atoms, Celerons, Xeons and Core i3/i5/i7 chips are compared with each other despite big differences in architecture, core counts, manufacturing optimizations, power/battery optimizations, clock speeds, burst speeds, L1/L2/L3 cache and so on.
And yes: in the end it also matters whether you run a lightweight Linux or a bloated Windows 10 on that lightweight hardware, but at least you know the underlying capabilities of the hardware.
So again: I really would like to see numbers, not opinions based on gut feeling and superficial spec sheets.
I have a very technical background and only trust numbers. Theories are no more than that until they are proven in real life or by numbers.
(It is the same with bigger or smaller fans to cool the unit. Some users say the bigger, the better. From my technical background I say those big fans in such small, confined spaces are useless, but there too I don't provide numbers, so actually I don't know, nobody else knows, and everybody is acting on gut feeling until somebody really does the measurements.)
That's ok, I don't ever get offended at all lol.
Isn't the "real live experience" what matters? For all we know, RK could have a bad processor design which doesn't push the cores to the intended performance specification from ARM...
So, now my question is...what is the best way to compare different processor architectures?
Aside from using "benchmarking" applications...?
I agree most companies do use benchmarks to compare, but aren't those comparisons typically within a similar or fixed computing architecture? Reminds me of PPC vs x86 back in the day.
Another issue I've seen is that some benchmarking applications don't have native x86 libraries, so on the Intel Android platform the benchmark runs through Houdini, which is a binary translation layer, and it's slow.
gtxaspec said:
Isn't the "real live experience" what matters? For all we know, RK could have a bad processor design which doesn't push the cores to the intended performance specification from ARM...
Of course that is what matters.
That is exactly why I want to know: is the Joying FYT/SYU PX5 much slower than, equal to, or faster than a Sofia 3GR? And if it is much slower, is that due to drivers, bad apps or the CPU? And if it is much faster, the same questions.
It is the same as saying one car uses much more gas than another and is much slower, until it turns out that car is pulling a caravan.
That is why I want to know what the car itself can do, even though the real-life experience might be different due to other factors.
I use Magic Earth as my navigation app. I experience less optimization on Intel compared to ARM. So would it perform much better on an ARM PX5 than on a Sofia 3GR?
After all, Android is by far most deployed on, and most optimized for, ARM.
For that reason the PX5 could even be a much better option than the Sofia 3GR, despite completely unproven arguments about the Intel being a better CPU than the "slow" A53 cores.
@surfer63 Now that you own both a Sofia 3GR and a PX5 unit, have you done the benchmarks? Which is better? And in your personal opinion, which one "feels" faster in everyday use?
R4m80 said:
@surfer63 Now that you own both a Sofia 3GR and a PX5 unit, have you done the benchmarks? Which is better? And in your personal opinion, which one "feels" faster in everyday use?
The PX5. It is definitely faster.
@ste2002 asked me to do Antutu benchmarks which I did: see here.
Related
Bell points the finger at chipset makers - "The way it's implemented right now, Android does not make as effective use of multiple cores as it could, and I think - frankly - some of this work could be done by the vendors who create the SoCs, but they just haven't bothered to do it. Right now the lack of software effort by some of the folks who have done their hardware implementation is a bigger disadvantage than anything else."
What do you think about this, guys?
He knows his stuff.
Sent from my GT-I9300
I would take it with a pinch of salt. There are not many apps that take advantage of a multi-core processor, though; let's see what Intel will say when they have their own dual-core processor out in the market.
Pretty valid arguments, for the most part.
I mostly agree, but I think Android makes good use of up to 2 cores. Anything more than that it doesn't use at all.
There is a huge chunk of the article missing too.
Sent from my GT-I9300
full article
jaytana said:
What do you think about this, guys?
I think they should all be covered in honey and then thrown into a pit full of bears and honey bees. And the bears should have knives duct-taped to their feet, and the bees' stingers should be dipped in chilli sauce.
Reckless187 said:
I think they should all be covered in honey and then thrown into a pit full of bears and honey bees. And the bears should have knives duct-taped to their feet, and the bees' stingers should be dipped in chilli sauce.
Wow, saying Android isn't ready for multi-core deserves such treatment? Or had this guy committed a more serious crime previously?
Actually it is a total fail, but I think it can be solved in Android 5.
Sent from my GT-I9300 using XDA
This was a serious problem on desktop Windows as well, back when multi-core CPUs first started coming out. I remember having to download patches for certain games and, in other cases, having to set the CPU affinity so certain games/apps ran on only one core and wouldn't freeze up. I am sure Android will move forward with multi-core support in the future.
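(For what it's worth, the same affinity trick exists on Linux, which Android runs on. A minimal sketch using Python's standard library; os.sched_setaffinity is a real but Linux-only call, and pinning to core 0 is just an example:)

```python
import os

# Pin the current process to CPU 0 only; pid 0 means "this process".
# os.sched_setaffinity is Linux-specific (Python 3.3+).
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))  # -> {0}
```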
simollie said:
Wow, saying Android isn't ready for multi-core deserves such treatment? Or had this guy committed a more serious crime previously?
It's a harsh but fair punishment IMO. They need to sort that sh*t out, as it's totally unacceptable, or they're gonna get a taste of the cat o' nine tails.
The Android kernel is based on Linux, so this is suggesting the Linux kernel is not built to support multi-core either. Not true. There is a reason the SGS3 gets 5000+ in Quadrant while the San Diego only gets 3000+. And the San Diego is running 200 MHz faster.
Just look at the blue bar here: http://www.engadget.com/2012/05/31/orange-san-diego-benchmarks/ . My SGS3 got over 2.5K on the CPU alone.
What Intel said was true: Android is multicore-aware, but the OS and apps aren't taking advantage of it. When this user disabled 2 cores on the HTC One X, it made no difference at all in anything other than benchmarks.
http://forum.xda-developers.com/showpost.php?p=26094852&postcount=3
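(For context, that experiment is easy to reproduce on a rooted device: the Linux CPU-hotplug interface exposes an on/off switch per core under /sys. A minimal sketch; the core number is an example, and cpu0 usually cannot be taken offline:)

```python
# Toggle CPU core 1 off and back on via the Linux hotplug interface (needs root).
CORE1 = "/sys/devices/system/cpu/cpu1/online"

def set_core_online(path: str, online: bool) -> None:
    with open(path, "w") as f:
        f.write("1" if online else "0")

set_core_online(CORE1, False)  # core 1 disappears from the scheduler
set_core_online(CORE1, True)   # and comes back
```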
Disabling CPU cores will do nothing to the GPU, hence you still get 60 FPS. And you say that like you expected to see a difference. Those games may not be particularly CPU-intensive; that's why they continue to run fine. They will more than likely be GPU-limited.
Android is not a difficult OS to run; that's why it can run on the G1, and why AOKP can run smooth as silk on my i9000. If it can run smooth as silk on a 2-year-old 1 GHz chip, how COULD it go faster on a next-gen chip like in the SGS3 or HOX? In terms of just using the phone, I've not experienced any lag at all.
If you're buying a phone with dual/quad CPU cores and only expect to use it as a phone (i.e. not play demanding games, benchmark, mod, or whatever else), of course you won't see any advantage, and you may feel cheated. And if you disable those extra cores and still only use it as a phone, of course you won't notice any difference.
If a pocket calculator appears to calculate 1+1 instantly, and a HOX also calculates 1+1 instantly, is the pocket calculator awesome, is the HOX not using all its cores, or is what it is being asked to do simply not taxing enough to use all the CPU power the HOX has got?
I've been hearing this for some time now, and it is one of the reasons I didn't care that we weren't getting the quad-core version of the GS3.
916x10 said:
I've been hearing this for some time now, and it is one of the reasons I didn't care that we weren't getting the quad-core version of the GS3.
Okay folks... firstly, the Linux kernel, which Android is based on, is multicore-aware (that's obvious), but most applications are not aware of it; that's true! But it is not Android that is to blame, nor the SoC makers. This is like the flame war Intel started when they claimed their single core is faster than a dual-core ARM, LOL (maybe Intel will make 1 core run 4 or 8 threads) <- impossible for now, dunno about later.
You will notice the core usage while playing HD video that requires the CPU to decode (more cores decode faster)... and I'm not sure a single-core Intel does better than a dual-core ARM... ~haha~
But for the average user the differences are not noticeable... if Intel is aiming for this market, yes, that makes sense... but Android users are above-average users... they will optimize their phones eventually, IMO.
What they have failed to disclose is which SoC they ran their test on, and their methodology. There's not much reason to doubt what he's saying, but you have to remember that Intel only has a single-core mobile SoC currently and is trying to get a foothold in the mobile device ecosystem, so part of this could be throwing salt on competing products, as it's something that should be taken care of by Google optimizing the CPU scheduling algorithms of their OS.
The problem is in the chipset. I currently attend SUNY Oswego, and a professor of mine, Doug Lea, works on many concurrent structures. He is currently working on the ARM spec sheet that is used to make chips. The benchmarks he has done show that, no matter how lucky or unlucky you get, the time it takes to do a concurrent operation is about the same, whereas on desktop chips there is a huge difference between best case and worst case. The blame falls on the people who make the chips, for now. They need to change how the chips handle concurrent operations, and if Android still can't use multi-core processors after that, then it falls on Google's shoulders.
That is my two cents on the whole situation. I just finished a concurrency course with Doug, and after many talks this is my current opinion.
Sent from my Transformer Prime TF201 using XDA
Flynny75 said:
Disabling CPU cores will do nothing to the GPU, hence you still get 60 FPS. And you say that like you expected to see a difference. Those games may not be particularly CPU-intensive; that's why they continue to run fine. They will more than likely be GPU-limited.
Android is not a difficult OS to run; that's why it can run on the G1, and why AOKP can run smooth as silk on my i9000. If it can run smooth as silk on a 2-year-old 1 GHz chip, how COULD it go faster on a next-gen chip like in the SGS3 or HOX? In terms of just using the phone, I've not experienced any lag at all.
If you're buying a phone with dual/quad CPU cores and only expect to use it as a phone (i.e. not play demanding games, benchmark, mod, or whatever else), of course you won't see any advantage, and you may feel cheated. And if you disable those extra cores and still only use it as a phone, of course you won't notice any difference.
If a pocket calculator appears to calculate 1+1 instantly, and a HOX also calculates 1+1 instantly, is the pocket calculator awesome, is the HOX not using all its cores, or is what it is being asked to do simply not taxing enough to use all the CPU power the HOX has got?
That doesn't mean daily tasks don't need CPU power. When I put my SGS3 in power-save mode, which cuts the CPU back to 800 MHz, I feel the lag instantly when scrolling around and browsing the internet. So I conclude that per-core performance is still much more important than the number of cores. There isn't any noticeable performance difference between the dual-core Sensation XE and the single-core Sensation XL running side by side, either.
The hardware needs to be out there for developers to have an incentive to make use of it. It's not like Android was built from the ground up to utilize 4 cores. That said, once it reaches enough hands, the software running on it will be made to utilize the new hardware.
Quite a simple question really, which was already mentioned in the title of the thread. What do you believe to be the best tablet? A 16 GB Nexus 7 WiFi model or a 16 GB Nexus 10 WiFi model?
Hmm...
Brad387 said:
Quite a simple question really, which was already mentioned in the title of the thread. What do you believe to be the best tablet? A 16 GB Nexus 7 WiFi model or a 16 GB Nexus 10 WiFi model?
Kind of an odd question really. Clearly the 10 has better specs, including the screen.
But I'm pretty sure many of us bought a Nexus 7 because it was 7 inches and portable. So I'm pretty confident saying that the Nexus 7 is a better 7-inch tab than the 10 is.
PMOttawa said:
Kind of an odd question really. Clearly the 10 has better specs, including the screen.
But I'm pretty sure many of us bought a Nexus 7 because it was 7 inches and portable. So I'm pretty confident saying that the Nexus 7 is a better 7-inch tab than the 10 is.
Well, it is obvious that the Nexus 7 (which is a 7" tab) is better at being a 7" tablet than the Nexus 10 (which isn't a 7" tab, but a 10" one). However, doesn't the Nexus 10 only have a dual-core processor? I know the screen resolution is quite amazing, but besides that, isn't it actually worse?
CPU: http://www.arm.com/products/processors/cortex-a/cortex-a15.php
GPU: http://www.arm.com/products/multimedia/mali-graphics-hardware/mali-t604.php
CPU core count isn't all that matters. I don't have any real-world benchmarks, but I'm pretty sure that CPU alone can execute tasks faster and better than the Tegra 3. And since the GPU and CPU aren't on the same chip (that I know of), that also comes with its share of better performance.
espionage724 said:
CPU: http://www.arm.com/products/processors/cortex-a/cortex-a15.php
GPU: http://www.arm.com/products/multimedia/mali-graphics-hardware/mali-t604.php
CPU core count isn't all that matters. I don't have any real-world benchmarks, but I'm pretty sure that CPU alone can execute tasks faster and better than the Tegra 3. And since the GPU and CPU aren't on the same chip (that I know of), that also comes with its share of better performance.
This ^.
You can't really justify which is better because of the size difference. Like the first poster said, we all bought this for the form factor, so to us the N7 is better regardless of the specs. However, spec-wise... I would go with the N10.
Two completely different form factors and uses. They are both great devices.
The CPU in the N10 is about twice as fast as the best A9 (the S4 Pro) out now. It is more than likely about 3-4 times faster than the T3.
Two different devices for different purposes; it's like comparing a motorbike to a car.
Brad387 said:
Quite a simple question really, which was already mentioned in the title of the thread. What do you believe to be the best tablet? A 16 GB Nexus 7 WiFi model or a 16 GB Nexus 10 WiFi model?
It is like asking: 'Which is better: a semi or a van?'
Those 2 tablets are just in different markets, ergo not comparable.
If you leave size out of the comparison, the Nexus 10 would win: a more efficient/faster processor, way better graphics, almost quadruple the resolution, etc.
By specs, N10 destroys the N7.
In terms of pure performance, which one is better?
The Nexus 10 is a dual-core vs the Tegra 3's quad-core.
2 GB RAM vs 1 GB RAM.
Also take into consideration Tegra Zone support, although that's not really related to performance. The Tegra 3 gets a larger list of premium games.
killer8297 said:
In terms of pure performance, which one is better?
The Nexus 10 is a dual-core vs the Tegra 3's quad-core.
2 GB RAM vs 1 GB RAM.
Also take into consideration Tegra Zone support, although that's not really related to performance. The Tegra 3 gets a larger list of premium games.
It isn't even a comparison. The N10 slaughters the N7. Pros vs Joes, if you will.
I'd still keep my 7". It performs just fine for what I need it for. 10" is too big; I'm more comfortable with my laptop at that point.
Sent from my SGH-T999 using xda app-developers app
Tegra has CPUs and GPU on a single chip, and other details
espionage724 said:
CPU core count isn't all that matters. I don't have any real-world benchmarks, but I'm pretty sure that CPU alone can execute tasks faster and better than the Tegra 3. And since the GPU and CPU aren't on the same chip (that I know of), that also comes with its share of better performance.
You are confused.
The Tegra is a system-on-chip ("SoC") that has both CPU and GPU cores on the same die. The CPU complex has four A9 ARM cores, plus a fifth low-power "ninja" companion core. The GPU has 12 cores, plus a number of special functional units. All cores access the shared RAM through a single memory controller.
The CPU complex spends most of its time running only the power-optimized "ninja" core, with the other cores powered off. The ninja core is also an A9, but it is implemented with power-optimized low-leakage transistors and capped at a lower clock, making it more power-efficient than the main cores even taking into account the extra clock cycles needed. If the workload increases, the main cores are powered up and execution is switched over, with the ninja core left idle in a low-power mode.
The GPU complex has 12 general execution units, but these aren't directly comparable to CPU cores. You can't even compare them to the "cores" in other types of GPUs. In addition, there are other special units such as video and audio decoders in the GPU complex. These operations could be done on the main CPU or, sometimes, the GPU. But they are common and power-hungry enough to get hard-wired logic.
All of this complexity makes it really difficult to benchmark and compare. Or really easy, if your goal is to make one product look faster than another.
The Tegra is carefully tuned to do HD video decode with only the ninja core and GPU turned on, thus consuming little power. There is just enough CPU time left over to supervise the cellular modem for housekeeping operations, or do other trivial tasks. But if you add in just a little application work, the main four cores are activated and power usage goes way up.
Another way to skew the test result is to pick specific micro benchmarks. The Apple A5 (which is unrelated to the ARM numbers e.g. A7 and A9) was designed for a high resolution screen, and knowing that many early apps would be iPhone apps with pixel doubling. They put extra gates to increase the pixel fill rate and smoothing performance. This resulted in a bigger chip, but better performance with modest power use for these functions.
My estimation: The Nexus 7 with Tegra 3 is faster, has the potential to be more power efficient, and will have better long-term support and improvements. The N10 has the big advantage of 2GB of memory, which may become important with future versions of Android.
becker. said:
You are confused.
The Tegra is a system-on-chip ("SoC") that has both CPU and GPU cores on the same die. The CPU complex has four A9 ARM cores, plus a fifth low-power "ninja" companion core. The GPU has 12 cores, plus a number of special functional units. All cores access the shared RAM through a single memory controller.
The CPU complex spends most of its time running only the power-optimized "ninja" core, with the other cores powered off. The ninja core is also an A9, but it is implemented with power-optimized low-leakage transistors and capped at a lower clock, making it more power-efficient than the main cores even taking into account the extra clock cycles needed. If the workload increases, the main cores are powered up and execution is switched over, with the ninja core left idle in a low-power mode.
The GPU complex has 12 general execution units, but these aren't directly comparable to CPU cores. You can't even compare them to the "cores" in other types of GPUs. In addition, there are other special units such as video and audio decoders in the GPU complex. These operations could be done on the main CPU or, sometimes, the GPU. But they are common and power-hungry enough to get hard-wired logic.
All of this complexity makes it really difficult to benchmark and compare. Or really easy, if your goal is to make one product look faster than another.
The Tegra is carefully tuned to do HD video decode with only the ninja core and GPU turned on, thus consuming little power. There is just enough CPU time left over to supervise the cellular modem for housekeeping operations, or do other trivial tasks. But if you add in just a little application work, the main four cores are activated and power usage goes way up.
Another way to skew the test result is to pick specific micro benchmarks. The Apple A5 (which is unrelated to the ARM numbers e.g. A7 and A9) was designed for a high resolution screen, and knowing that many early apps would be iPhone apps with pixel doubling. They put extra gates to increase the pixel fill rate and smoothing performance. This resulted in a bigger chip, but better performance with modest power use for these functions.
My estimation: The Nexus 7 with Tegra 3 is faster, has the potential to be more power efficient, and will have better long-term support and improvements. The N10 has the big advantage of 2GB of memory, which may become important with future versions of Android.
Best answer I've seen.
And as has been said before, in the end it surely comes down to what you want to do with it. I prefer my N7 because 10" tablets are simply too big and uncomfortable.
Sent from my Nexus 7 using xda app-developers app
Real-world experience will require the device in hand. The resolution being pushed needs a lot more backbone to provide the same smooth experience as a lower-resolution device. Just look at the iPad 2 vs 3. The iPad 2 felt like a better experience because of its lower resolution. Most people couldn't even tell the two apart or correctly identify which was which.
Resolution that high is overkill on a 10" screen. It's a waste of battery and resources.
Sent from my Galaxy Nexus using XDA Premium HD app
I say wait another 3 months before committing to buying the 10-inch. Google might upgrade its 10-inch with 3G, who knows, given what they did with the 7-inch.
player911 said:
Real-world experience will require the device in hand. The resolution being pushed needs a lot more backbone to provide the same smooth experience as a lower-resolution device. Just look at the iPad 2 vs 3. The iPad 2 felt like a better experience because of its lower resolution. Most people couldn't even tell the two apart or correctly identify which was which.
Resolution that high is overkill on a 10" screen. It's a waste of battery and resources.
Sent from my Galaxy Nexus using XDA Premium HD app
I agree. A super display is great if everything is built to look good on it, but not if it comes at too big a cost in performance. That is what happened to the iPad 3: they made a good device pretty, but slow. On a small screen most people can't tell the difference between DVD quality and full HD. Both would look good, but one would smoke the other with the same hardware doing other things. JMO.
player911 said:
The iPad 2 felt like a better experience because of its lower resolution. Most people couldn't even tell the two apart or correctly identify which was which.
Resolution that high is overkill on a 10" screen. It's a waste of battery and resources.
Keep in mind why the iPad has such a pointlessly high resolution. It wasn't that Apple wanted to provide an exceptional experience; it was that the underlying software wasn't designed for different screen sizes and proportions. They had a choice between redesigning the API and converting apps, or making the screen exactly double the number of pixels in each direction. Apple's big market advantage was the higher app count, and many apps would never be converted to a new interface ("walking dead" apps that will never be updated). So they went with a hardware solution and marketed the "retina display" as a plus rather than as a work-around for a primitive API. (A replay of the Mac ROM holding back OS improvements.)
Of course specs-wise the N10 wins... but the N10 lacks some features: it is WiFi-only, no 3G/2G! That will be tough in my country.
I know this is one of those silly little topics that gets thrown around every time a newer, faster ARM chip comes out, but this is the first time that I personally have ever seen an ARM chip as a threat to Intel. When I saw the Galaxy S6 scoring around 4800 multi-core, I stopped and thought to myself, "hey, that looks pretty darn close to my fancy i5." Sure enough, the i5-5200U only scores around 5280 in the Geekbench 64-bit multi-core benchmark. I understand that this is only possible because the Galaxy S6 has 8 cores, but it's still very impressive what ARM and Samsung were able to achieve using a fraction of the power Intel has on hand. Of course I don't think that this chip will take over the market, but if ARM's performance continues to increase at the same rate while maintaining the same low power draw, then Intel might have some real competition in the laptop space in the near future. Heck, maybe Microsoft will bring back RT, but with full app support.
I also know that I didn't account for how much power the GPU was drawing, but I feel as if that wouldn't be the only factor, after seeing the issues with Core M.
I doubt they're worried. Intel CPUs are wicked fast. I have a 3-year-old i7 and it's faster than most of AMD's current-gen CPUs.
If Intel is able to apply the same methods/engineering they use on desktop CPUs to the mobile platform, I bet it will smoke anything out there, kind of like how Intel CPUs beat basically anything AMD can put out.
tcb4 said:
I know this is one of those silly little topics that gets thrown around every time a newer, faster ARM chip comes out, but this is the first time that I personally have ever seen an ARM chip as a threat to Intel. When I saw the Galaxy S6 scoring around 4800 multi-core, I stopped and thought to myself, "hey, that looks pretty darn close to my fancy i5." Sure enough, the i5-5200U only scores around 5280 in the Geekbench 64-bit multi-core benchmark. I understand that this is only possible because the Galaxy S6 has 8 cores, but it's still very impressive what ARM and Samsung were able to achieve using a fraction of the power Intel has on hand. Of course I don't think that this chip will take over the market, but if ARM's performance continues to increase at the same rate while maintaining the same low power draw, then Intel might have some real competition in the laptop space in the near future. Heck, maybe Microsoft will bring back RT, but with full app support.
I also know that I didn't account for how much power the GPU was drawing, but I feel as if that wouldn't be the only factor, after seeing the issues with Core M.
It is important to remember that ultimately the same constraints and limitations apply to both Intel and ARM CPUs. After all, ARM and x86 are just instruction set architectures. There is no evidence to suggest that ARM is somehow at a significant advantage over Intel in terms of increasing performance while keeping power low. It is generally accepted now that ISAs have a negligible impact on IPC and performance per watt. Many of the newer ARM SoCs like the 810 are having overheating issues themselves, and the higher-performance Nvidia SoCs are using 10+ watt TDPs too.
Also, it is always a bit tricky to make cross-platform and cross-ISA CPU comparisons in benchmarks like Geekbench, and for whatever reason Intel CPUs tend to do relatively poorly in Geekbench compared to other benchmarks. You can compare other real-world uses between the i5-5200U and the Exynos 7420, and I can assure you that the tiny Exynos will be absolutely no match for the much larger, wider and more complex Broadwell cores. Don't get me wrong, the Exynos 7420 is very impressive for its size and power consumption, but I don't think we can take that Geekbench comparison seriously.
The fastest low-power core right now is without a doubt the Broadwell Core M, which is a 4.5-watt part. It is built on Intel's 14nm process, which is more advanced than Samsung's.
http://www.anandtech.com/show/9061/lenovo-yoga-3-pro-review/4
"Once again, in web use, the Core M processor is very similar to the outgoing Haswell U based Yoga 2 Pro. Just to put the numbers in a bit more context, I also ran the benchmarks on my Core i7-860 based Desktop (running Chrome, as were the Yogas) and it is pretty clear just how far we have come. The i7-860 is a four core, eight thread 45 nm processor with a 2.8 GHz base clock and 3.46 GHz boost, all in a 95 watt TDP. It was launched in late 2009. Five years later, we have higher performance in a 4.5 watt TDP for many tasks. It really is staggering."
"As a tablet, the Core M powered Yoga 3 Pro will run circles around other tablets when performing CPU tasks. The GPU is a bit behind, but it is ahead of the iPad Air already, so it is not a slouch. The CPU is miles ahead though, even when compared to the Apple A8X which is consistently the best ARM based tablet CPU.
"
tft said:
I doubt they're worried. Intel CPUs are wicked fast. I have a 3-year-old i7 and it's faster than most of AMD's current-gen CPUs.
If Intel is able to apply the same methods/engineering they use on desktop CPUs to the mobile platform, I bet it will smoke anything out there, kind of like how Intel CPUs beat basically anything AMD can put out.
This.
All of the little Atom CPUs we see in mobile right now are much smaller, narrower and simpler cores than Intel's Core chips. Once you see Intel's big cores trickle down into mobile, it will get much more interesting.
Intel will catch up... quickly too, just watch. They've been working on 64-bit for over a year now... and they're already onto 14nm. Qualcomm should be worried; I don't think they're ready for this competition. They talked trash about octa-cores and 64-bit... now they're doing both, and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems.
Sent from my SM-G920T using XDA Free mobile app
rjayflo said:
Intel will catch up... quickly too, just watch. They've been working on 64-bit for over a year now... and they're already onto 14nm. Qualcomm should be worried; I don't think they're ready for this competition. They talked trash about octa-cores and 64-bit... now they're doing both, and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems.
Sent from my SM-G920T using XDA Free mobile app
Technically Intel and AMD have had 64-bit for well over a decade now with AMD64/EM64T, and many Intel mobile processors have had it for years, so the hardware has supported it for a while; 64-bit enabled tablets/phones just didn't start shipping until very recently.
Indeed, Intel has been shipping 14nm products since last year, and their 14nm process is more advanced than Samsung's. Note that there is no real science behind naming a process node, so terms like "14nm" and "20nm" have turned into pure marketing. For example, TSMC's 16nm isn't actually any smaller than their 20nm process. Presumably Intel's 14nm also yields higher and allows for higher-performance transistors than Samsung's 14nm.
It is likely that Samsung has the most advanced process outside of Intel, however. I do agree that Qualcomm is in a bit of trouble at the moment, with players like Intel really growing in the tablet space and Samsung coming out with the very formidable Exynos 7420 SoC in the smartphone space. The SD810 just isn't cutting it and has too many problems. Qualcomm should also be concerned that both Samsung and Intel have managed to come out with high-end LTE radios; that was something Qualcomm pretty much had a monopoly on for years. Intel now has the 7360 LTE radio and Samsung has the Shannon 333 LTE.
rjayflo said:
Intel will catch up... quickly too, just watch. They've been working on 64-bit for over a year now... and they're already onto 14nm. Qualcomm should be worried; I don't think they're ready for this competition. They talked trash about octa-cores and 64-bit... now they're doing both, and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems.
I agree about Qualcomm; I actually mentioned that some time ago.
I think what happened to Nokia/BlackBerry will happen to Qualcomm: they got huge and stopped innovating, and ended up being left in the dust. Perhaps Qualcomm thought they had a monopoly and that Samsung and other device makers would continue to buy their chips...
In the end, I think the only thing Qualcomm will have left is a bunch of patents...
I understand that Core M is a powerful part, but I'm not sure I believe the TDP figures. I am, however, more inclined to believe Samsung, as they are achieving this performance with an SoC inside a phone; in other words, they don't have the surface area to dissipate large quantities of heat. Nvidia has always skewed performance-per-watt numbers, and as a result they haven't been able to put an SoC in a phone for years. Now, the reason I doubt Intel's claims is the battery life tests performed by reviewers and the low battery life claims made by manufacturers. For instance, the new MacBook and the Yoga 3 Pro aren't showing large improvements in battery life compared to their 15W counterparts.
I'm not sure how I feel about the iPad comparison, though; I feel as if you just compounded the issue by showing us a benchmark that was not only cross-platform, but also run in different browsers.
Also, I think I understand what you mean about how an ISA will not directly impact performance per watt, but is it not possible that Samsung and ARM simply have a better design? I mean, Intel and AMD both utilize the same instruction set, but Intel runs circles around AMD in terms of efficiency. I may be way off base here, so feel free to correct me.
I think Qualcomm is busy working on a new Krait of their own, but right now they're in hot water. They got a little lazy milking 32-bit chips, but once Apple announced their 64-bit chip they panicked and went with a stock ARM design. We'll have to see if they can bring a 64-bit Krait successor to the table, but right now Samsung's 7420 appears to be the best thing on the market.
tcb4 said:
I understand that Core M is a powerful part, but I'm not sure I believe the TDP figures. I am, however, more inclined to believe Samsung, as they are achieving this performance with an SoC inside a phone; in other words, they don't have the surface area to dissipate large quantities of heat. Nvidia has always skewed performance-per-watt numbers, and as a result they haven't been able to put an SoC in a phone for years. Now, the reason I doubt Intel's claims is the battery life tests performed by reviewers and the low battery life claims made by manufacturers. For instance, the new MacBook and the Yoga 3 Pro aren't showing large improvements in battery life compared to their 15W counterparts.
Technically the Core M will dissipate more than 4.5W for "bursty" workloads, but under longer steady workloads it will average 4.5W. The ARM tablet and phone SoCs more or less do the same thing. In terms of actual battery life results, yes, the battery life of most of these devices hasn't really changed since the last generation of Intel U-series chips, but that isn't a real apples-to-apples comparison. As SoC power consumption continues to drop, it becomes a smaller and smaller chunk of total system power consumption. Lenovo did a poor job, IMO, of implementing the first Core M device, but Apple will almost certainly do a much better job. The SoC is only one part of the system; it is the responsibility of the OEM to properly package up the device, do proper power management, provide an adequate battery, etc. Yes, the new MacBook doesn't get significantly longer battery life, but it also weighs only 2.0 lbs and has a ridiculously small battery. It also has a much higher-resolution, more power-hungry screen and yet manages to keep battery life equal to the last generation. Benchmarks have also indicated that the newer 14nm Intel CPUs are much better at sustained performance than the older 22nm Haswells. This is something that phones and tablets are typically very poor at.
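(A toy illustration of that duty-cycle averaging; all the numbers here are invented:)

```python
# A chip may burst well above its TDP as long as the long-run average
# stays at the rated figure. (watts, seconds) per phase, numbers invented.
phases = [(9.0, 2.0),   # short 9 W burst
          (3.0, 6.0)]   # longer 3 W steady phase
energy = sum(w * t for w, t in phases)          # 36 J
average = energy / sum(t for _, t in phases)    # 36 J / 8 s = 4.5 W
print(f"average power: {average:.1f} W")
```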
tcb4 said:
I'm not sure how I feel about the iPad comparison, though; I feel as if you just compounded the issue by showing us a benchmark that was not only cross-platform, but also run in different browsers.
A very fair point; browser benchmarks are especially notorious for being misleading. I think in this case Chrome was used on all devices, which helps a little. My point in showing this is that we need to take those Geekbench results with a grain of salt. Outside of that benchmark, I don't think you'll find the A8X or Exynos 7420 getting anywhere near a higher-specced Core M, let alone an i5-5200U, in any real-world use or any other benchmark, browser-based or not. Even other synthetic benchmarks like 3DMark Physics don't show the Intel CPUs nearly as low as Geekbench does.
tcb4 said:
Also, I think I understand what you mean about how an ISA will not directly impact performance per watt, but is it not possible that Samsung and ARM simply have a better design? I mean, Intel and AMD both utilize the same instruction set, but Intel runs circles around AMD in terms of efficiency. I may be way off base here, so feel free to correct me.
This is correct.
It is certainly possible for Samsung to have a design that is more power-efficient than Intel's when it comes to making a 2W phone SoC, but that won't be because Samsung uses the ARM ISA while Intel uses x86. At this point, ISA is mostly coincidental and isn't going to greatly impact the characteristics of your CPU. The CPU design and the ISA the CPU implements are different things. The notion of a "better design" is also a little tricky, because a design that is best for a low-power SoC may not be best for a higher-wattage CPU. Intel absolutely rules the CPU landscape from 15W and up. Despite all of the hype around ARM-based servers, Intel has continued to dominate servers and has actually increased its lead in that space, since Intel's performance per watt is completely unmatched in higher-performance applications. Intel's big-core design is simply better for that application than any ARM-based CPU. It is important to remember that just because you have the best-performance-per-watt 2-watt SoC doesn't mean you can scale that design into a beastly 90-watt CPU. If it were that easy, Intel would have easily downscaled their big-core chips to dominate mobile SoCs.
You frequently find people reasoning that, because Apple's A8 SoC is very efficient and fast at 1.2 GHz, it should match an Intel Haswell core if clocked at 3+ GHz, but there is no guarantee that the design allows such high clocks. You have to consider that Apple may have made design choices that provide great IPC at the cost of limiting clock frequency.
I am looking for a new HU and, after a few days of reading around here, I've made a summary of my conclusions, which is a kind of snapshot of the Chinese head unit market in 2017.
Some of the information here is probably wrong (I am not an expert at all); if someone has something to fix or add, feel free to post here and I will try to keep the thread updated. Right now there is no official post with updated information about head units, which would be of great value for newcomers.
Chipsets
Most popular chipsets:
• Rockchip PX3 – RK3188, comes with 1GB RAM/16GB ROM.
  • Great support from the community.
• Intel Sofia 3GR, comes with 2GB RAM/32GB ROM.
  • Has a problem with heat; a fan can be installed.
gustden said:
I can only comment on the Joying unit. I wouldn't say it has a "heat problem", but it does benefit from an additional heat sink and/or fan in hot conditions or when running benchmark tests. From my observations, even when it is throttled back to 900 MHz it still provides good performance.
  • Not so much support from the community yet.
Both chipsets have similar performance, but since the Sofia has 2GB RAM it has better overall performance.
You can solder a 2GB module onto a PX3, which is not expensive (30 USD) but requires some soldering expertise.
gustden said:
As far as performance goes, it is difficult to compare CPUs with different instruction sets (x86 vs ARM) using phone benchmark tests. Chips with the x86 architecture typically score poorly on tests like Antutu, but generally perform better in real-world activities than the scores would suggest.
Other chipsets:
• Rockchip PX5 – RK3368 (octa-core), comes with 2GB RAM/32GB ROM.
• MT3562 (octa-core), ARM Cortex-A53, 1.5-1.8 GHz, comes with 2GB RAM/16GB ROM.
Standby mode
Since most devices take 20-30 seconds to cold start, it is important that they support standby mode, which is a kind of low-consumption hibernation mode. From standby, the device wakes up instantly. You have to watch battery consumption while in standby so it does not drain your car battery. Currently both the Intel Sofia and the RK3368 support it.
Manufacturers
When it comes to the Chinese market, it is hard to know who is the real manufacturer of something and who is just a reseller.
• Joying: www.carjoying.com
  • Most used and most robust.
  • Has Sofia and PX3 models.
  • Joying publishes discounts periodically at xda-developers.
  • Joying has announced the release of units with the PX5 for March 2017.
• Dasaita (hotaudio):
  • Has a unit with the Rockchip PX5 and Android 6.0.1.
• Pumpkin: www.autopumpkin.com
  • Has units with Sofia and PX3.
  • They usually put 3G in their models.
• Klyde: www.szklyde.com
  • Has a unit with the PX5.
• Ownice: www.ownice.com
  • The C500 model has the MT3562 chipset with 4G LTE and runs Android 6.0. https://youtu.be/Aj5G0QUaGLA
• Xtrons: xtrons.co.uk
  • Is the TB706APL a PX5?
Motherboards/platforms
Not sure if MTC refers to the motherboard or something else.
• There are MTCB/C/D models; they are all different, and different Android versions require different MTC versions.
• Sofia units are MTCD or "009".
• 3188 units are MTCB/C/D.
• Lollipop requires MTCB/C.
• Android 4.4.4 requires MTCB.
corpcd said:
• Intel Sofia 3GR, comes with 2GB RAM/32GB ROM.
  • Has a problem with heat; a fan can be installed.
  • Not so much support from the community yet.
Both chipsets have similar performance, but since the Sofia has 2GB RAM it has better overall performance.
You can solder a 2GB module onto a PX3, which is not expensive (30 USD) but requires some soldering expertise.
I can only comment on the Joying unit. I wouldn't say it has a "heat problem", but it does benefit from an additional heat sink and/or fan in hot conditions or when running benchmark tests. From my observations, even when it is throttled back to 900 MHz it still provides good performance.
As far as performance goes, it is difficult to compare CPUs with different instruction sets (x86 vs ARM) using phone benchmark tests. Chips with the x86 architecture typically score poorly on tests like Antutu, but generally perform better in real-world activities than the scores would suggest.
gustden said:
I can only comment on the Joying unit. I wouldn't say it has a "heat problem", but it does benefit from an additional heat sink and/or fan in hot conditions or when running benchmark tests. From my observations, even when it is throttled back to 900 MHz it still provides good performance.
As far as performance goes, it is difficult to compare CPUs with different instruction sets (x86 vs ARM) using phone benchmark tests. Chips with the x86 architecture typically score poorly on tests like Antutu, but generally perform better in real-world activities than the scores would suggest.
Thank you for your comments; I've added them to the first post.
Any other comments are welcome.
New user here.
The new Joying head unit with Snapdragon caught my attention, as it runs a Snapdragon 625 chip instead of the Unisoc UIS7862 and commands a higher price (£55 difference). So I did a little more research on their respective performance.
Against my expectations, almost every metric and benchmark puts the UIS7862 ahead of the Snapdragon 625 by a decent margin; it is almost 4 years newer and on a more advanced 12nm node (instead of 14nm).
More to my surprise, the Snapdragon 625 only supports up to 3GB of RAM, so the 4GB installed in the head unit would not be fully utilized.
Am I missing something blatantly obvious? Should I look at more than just the spec sheet and benchmarks? I am very confused as to which version of the head unit would perform better in the long run. Which one would you choose?
Processor Comparison - Head 2 Head
UNISOC T610 vs Qualcomm Snapdragon 625 - Benchmarks, Tests and Comparisons
www.notebookcheck.net
Qualcomm MSM8953 Snapdragon 625 vs Unisoc UIS7862 Benchmarks, Specs, Performance Comparison and Differences - GadgetVersus
Comparison between Qualcomm MSM8953 Snapdragon 625 and Unisoc UIS7862 with the specifications of the processors, the number of cores, threads, cache memory, also the performance in benchmark platforms such as Geekbench, Passmark, Cinebench or AnTuTu.
gadgetversus.com
marcowong_7 said:
New user here.
The new Joying head unit with Snapdragon caught my attention, as it runs a Snapdragon 625 chip instead of the Unisoc UIS7862 and commands a higher price (£55 difference). So I did a little more research on their respective performance.
Against my expectations, almost every metric and benchmark puts the UIS7862 ahead of the Snapdragon 625 by a decent margin; it is almost 4 years newer and on a more advanced 12nm node (instead of 14nm).
More to my surprise, the Snapdragon 625 only supports up to 3GB of RAM, so the 4GB installed in the head unit would not be fully utilized.
Am I missing something blatantly obvious? Should I look at more than just the spec sheet and benchmarks? I am very confused as to which version of the head unit would perform better in the long run. Which one would you choose?
Processor Comparison - Head 2 Head
UNISOC T610 vs Qualcomm Snapdragon 625 - Benchmarks, Tests and Comparisons
www.notebookcheck.net
Qualcomm MSM8953 Snapdragon 625 vs Unisoc UIS7862 Benchmarks, Specs, Performance Comparison and Differences - GadgetVersus
Comparison between Qualcomm MSM8953 Snapdragon 625 and Unisoc UIS7862 with the specifications of the processors, the number of cores, threads, cache memory, also the performance in benchmark platforms such as Geekbench, Passmark, Cinebench or AnTuTu.
gadgetversus.com
Hi there.
I am currently going down the same rabbit hole, trying to get a new head unit for my 2011 Ford Escape. I even found the same comparison pages that you listed in your post.
I asked Joying support some of the same questions last week, and they didn't offer any useful details; their one-sentence emails about this topic are very cagey.
Mainly: why is the new Teyes-like UI that they have on the Snapdragon not available on the clearly superior 7862 units, but only on the slower Snapdragon 625? They didn't answer that. If the new UI were available on the 7862, I would not even be doing this research, really. That's the only thing that has me going back and forth. That, and the horrible USA LTE band support on the 7862.
The fact that the slower, lower-RAM 8xA53 unit is more expensive than the 2xA75/6xA55 unit is puzzling.
Saab Unleashed has excellent reviews of these units:
Snapdragon
7862
He reviews the 1920x1200 version, which is yet another dilemma for us.
It seems that he likes the Snapdragon version mainly for the UI, and in the comments he mentions that the performance is about the same as the 7862 version. Weird.
Since you wrote your post in November 2021: have you decided on one of these? If so, what was your experience like?
I have sold many 7862 units without problems. I recommend the 7862.
klaymen2 said:
Hi there.
I am currently going down the same rabbit hole, trying to get a new head unit for my 2011 Ford Escape. I even found the same comparison pages that you listed in your post.
I asked Joying support some of the same questions last week, and they didn't offer any useful details; their one-sentence emails about this topic are very cagey.
Mainly: why is the new Teyes-like UI that they have on the Snapdragon not available on the clearly superior 7862 units, but only on the slower Snapdragon 625? They didn't answer that. If the new UI were available on the 7862, I would not even be doing this research, really. That's the only thing that has me going back and forth. That, and the horrible USA LTE band support on the 7862.
The fact that the slower, lower-RAM 8xA53 unit is more expensive than the 2xA75/6xA55 unit is puzzling.
Saab Unleashed has excellent reviews of these units:
Snapdragon
7862
He reviews the 1920x1200 version, which is yet another dilemma for us.
It seems that he likes the Snapdragon version mainly for the UI, and in the comments he mentions that the performance is about the same as the 7862 version. Weird.
Since you wrote your post in November 2021: have you decided on one of these? If so, what was your experience like?
I bought the UIS7862 version; it is running flawlessly with the AGAMA launcher. My decision was simple: first, the Teyes launcher seems more buggy judging from other reviews, compared to the AGAMA launcher, which is tried and tested. Second, I am not paying 20% more for 25% less performance and 25% less usable RAM. It just does not sit right with me.
I would personally avoid the 1200p version, as text and icons look tiny on a higher-res screen, which makes it very hard to use while driving.
marcowong_7 said:
I bought the UIS7862 version; it is running flawlessly with the AGAMA launcher. My decision was simple: first, the Teyes launcher seems more buggy judging from other reviews, compared to the AGAMA launcher, which is tried and tested. Second, I am not paying 20% more for 25% less performance and 25% less usable RAM. It just does not sit right with me.
I would personally avoid the 1200p version, as text and icons look tiny on a higher-res screen, which makes it very hard to use while driving.
That all makes perfect sense.
Thanks for the note about the AGAMA launcher vs Teyes. I ruled out the Teyes unit itself, but because I liked the look of that launcher I was still considering the Snapdragon Joying unit with the Teyes-like launcher. But the specs and price of that unit compared to the 7862 version just don't make sense.
I will be ordering the 7862 version as well.
Thanks.