Hi,
I was wondering: are the 2 CPUs working together simultaneously, or is it just 1? I'm using FLEXREAPER X10 ICS 4.0.3. Sometimes I get screen glitches when my tab is trying to sleep and I touch the screen. Also, when I try a benchmark it only reports CPU1's processing speed, etc. And when I'm browsing the Play Store the screen animation lags a bit. Can someone enlighten me? Or is there an app that can force the 2 CPUs to work together all the time?
Yes, both cores are enabled at all times. But no, you cannot make an application use both cores unless the application was designed to do so.
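If you want to check this on your own tablet, the kernel exposes per-core hot-plug status through the standard Linux sysfs nodes; here's a minimal check from an adb shell (paths assumed from mainline Linux, and cpu0 usually has no online file because it can't be taken offline):
# cd /sys/devices/system/cpu
# for c in cpu[0-9]* ; do
> echo "$c: $(cat $c/online 2>/dev/null || echo always-on)"
> done
A 1 means the core is online; a 0 means the kernel has hot-unplugged it, which some kernels do to save power at idle.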
FLEXREAPER X10 ICS 4.0.3 is based on a leaked ICS ROM, not a stable one, so it has some problems.
Your benchmark is correct.
There are NOT 2 CPUs. There is only one CPU, with 2 cores. It doesn't process two applications at once, but it CAN process two threads of the same application at the same time. Think of it like this: two CPUs would be two people writing on different pieces of paper. A single CPU with two cores is one person writing with both hands at the same time. He can only write on the same piece of paper, but it's faster than it would be if he were writing with only one hand.
Note: this is not related to multitasking. Multitasking works by processing a little bit of each app at a time, so although it may seem that both are running at the same time, they are not.
Most apps are not designed to work with threads, though, so there's your (actually, our) problem. But this is not an A500 problem; it applies to any multi-core-processor-based device out there (including desktops).
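A quick way to see both cores being used is to start two busy loops from an adb shell and watch the load in top (a rough sketch; these are two single-threaded processes, but the Linux scheduler places threads and processes onto cores the same way, and a busybox-style top is assumed):
# sh -c 'while : ; do : ; done' &
# sh -c 'while : ; do : ; done' &
# top
# kill %1 %2
One loop alone can never push the system past one core's worth of CPU time, which is exactly the position a non-threaded app is in.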
danc135 said:
There are NOT 2 CPUs. There is only one CPU, with 2 cores
Essentially true, but...
It doesn't process two applications at once
False. Two cores is just two CPUs on the same die.
Thanks for the response, guys... I'm getting a bit confused by this "multi-core processor" thing. I was expecting it to be fast, with no lag while browsing apps in my library, switching applications, or even browsing the Play Store. So is it correct to say that a multi-core processor is a bit of a waste if an app can't use all of its cores' potential? And likewise if the UI of an OS can't use all cores at the same time?
Dual Core, Dual CPU....
Not entirely, because if the kernel is capable of multi-threading, it can use one core to run services while another runs the main application. The UI is just another application running on top of the kernel...
The only difference between a dual-core Intel CPU and a dual-core Tegra 2 is the instruction set and basic capabilities; otherwise they can be thought of as essentially the same animal. The kernel, which is the core of the OS, handles the multitasking, but Android has limited multitasking capabilities for applications. Even so, services that run in the background are less of a hindrance on a dual-core CPU than on a single-core one, and more and more applications are being written to take advantage of multiple cores.
Just have a bunch of widgets running on your UI and you are looking at multitasking and multi-threading, both of which are better on multi-core processors.
A multi-core CPU is nothing more than multiple processors stacked on one die. They thread and load-balance through software, and applications MUST BE AWARE of multi-core CPUs to take advantage of the dual cores.
A multi-processor computer has a third processor chip on the main board; this chip balances the load in hardware, which does not add overhead on the processors, whereas a dual multi-core chip has a much higher load overhead.
So many people confuse the two. This is due to how the companies market multi-core CPU devices.
So an application that cannot thread itself will run on just one of the cores of a multi-core chip. A threaded app can... well, guess?
A dual-processor computer can run a non-multi-thread-aware app or program on two cores.
It's quite simply complicated...
You can throw all the hardware you want at a system. In the end, if the software sucks (not multi-threaded, poorly optimized, bad at resource management, etc.), it's still going to perform badly.
Dual core doesn't mean it can run one application at twice the speed; it means it can run two applications at full speed, given that they're not threaded. Android is largely meant to run one application in the foreground, and since they can't magically make every application multi-threaded, you won't see the benefits of multiple cores as much as you would on a more traditional platform.
Also, a dual-core Tegra 2 is good, but only in comparison to other ARM processors (and even then, it's starting to show its age). It's going to perform poorly compared to a full x86 computer, even an older one.
netham45 said:
You can throw all the hardware you want at a system. In the end, if the software sucks (not multi-threaded, poorly optimized, bad at resource management, etc.), it's still going to perform badly.
Dual core doesn't mean it can run one application at twice the speed; it means it can run two applications at full speed, given that they're not threaded. Android is largely meant to run one application in the foreground, and since they can't magically make every application multi-threaded, you won't see the benefits of multiple cores as much as you would on a more traditional platform.
Also, a dual-core Tegra 2 is good, but only in comparison to other ARM processors (and even then, it's starting to show its age). It's going to perform poorly compared to a full x86 computer, even an older one.
This is so true, with the exception of a TRUE server dual- or quad-processor computer: there is a special on-board chip that threads application calls to balance the load for non-threaded programs and games. My first dual-processor computer was an AMD MP3000, back when dual-CPU computers started to come within user price range. Most applications/programs did not multi-thread.
And yes, as far as computer speed and performance go, you will not gain any from this; you will only feel less lag when running several programs at once. A 2.8 GHz dual-processor computer still runs at 2.8 GHz, not double that.
erica_renee said:
With the exception of a TRUE server dual- or quad-processor computer: there is a special on-board chip that threads application calls to balance the load for non-threaded programs and games.
Actually, this is incorrect. All such decisions are left to the OS's own scheduler, for multiple reasons: the CPU cannot know what kinds of tasks it is to run, what should be given priority under which conditions, and so on. On a desktop PC, for example, interactive, user-oriented, in-focus applications and tasks are usually given more priority than background tasks, whereas on a server one either gives all tasks similar priority or handles task priorities based on task grouping. Not to mention realtime operating systems, which have entirely different requirements altogether.
If it were left to the CPU, the performance gains would be terribly limited and could not be adjusted for different kinds of tasks, or even different operating systems.
(Not that anyone cares, I just thought to pop in and rant a little...)
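A tiny illustration that it's the OS, not some CPU-side balancer, making these calls: on Linux (and Android) the scheduler honors per-task niceness, which you can set from a shell (busybox nice assumed):
# nice -n 19 sh -c 'while : ; do : ; done' &
# sh -c 'while : ; do : ; done' &
When the two loops compete for the same core, the un-niced one gets nearly all of that core's time; the niced one runs only when nothing more important wants the core. No special hardware chip is involved at any point.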
Self correction
I said a multi-core processor only runs threads from the same process. That is wrong (thanks to my Computer Architecture professor for misleading me). It can run multiple threads from different processes, which constitutes true parallel processing. It's just better to stick with same-process threads because of shared memory within the processor: every core has its own level 1 cache, plus a shared, on-die level 2 cache.
It all depends on the OS scheduler, really.
With ICS (and future Android versions), I hope the scheduler will improve to get the best of multi-core.
In the end, though, it won't matter if applications aren't multi-threaded (which is much harder to code). What I mean is, performance will be better, but not as good as it could be if developers used a lot of multi-threading.
To answer hatyrei's question, yes, it is a waste, in the sense that it has too much untapped potential.
Do you?
Do you keep it overclocked for a longer period, permanently, or just when you need it? By how much? (Exact frequencies would be cool.) I'm thinking of OCing mine (both CPU and GPU), since some games like NOVA 3 lag on occasion, but I'm not sure how safe/advisable it is.
I don't think it's needed. I've heard that OC won't help much with gaming, but you can definitely try
I don't yet - I might later. My N7 is still less than a month old.
The device manufacturers (e.g. Asus in this case) have motivations to "not leave anything on the table" when it comes to performance. So, you have to ask yourself - why would they purposely configure things to go slowly?
After all, they need to compete with other handset/tablet manufacturers, who are each in turn free to go out and buy the exact same Tegra SoC (processor) from Nvidia.
At the same time, they know that they will manufacture millions of units, and they want to hold down their product outgoing defect levels and in-the-field product reliability problems to an acceptable level. If they don't keep malfunctions and product infant mortality down to a fraction of a percent, they will suffer huge brand name erosion problems. And that will affect not only sales of the current product, but future products too.
That means that they have to choose a conservative set of operating points which will work for 99+% of all customer units manufactured, across all temperature, voltage, and clock speed ranges. (BTW, note that Asus didn't write the kernel EDP & thermal protection code - Nvidia did; that suggests that all the device manufacturers take their operating envelope from Nvidia, and they really don't even want to know where Nvidia got their numbers.)
Some folks take this to mean that the vast majority of units sold can operate safely at higher speeds, higher temperatures, or lower voltages, given that the "as shipped" configuration will allow "weak" or "slow" units to operate correctly.
But look, it's not as if amateurs - hacking kernels in their spare time - have better informed opinions or data about what will work or won't work well across all units. Simply put, they don't know what the statistical test properties of processors coming from the foundry are - and certainly can't tell you what the results will be for an individual unit. They are usually smart folks - but operating completely in the dark in regards to those matters.
About the only thing which can be said in a general way is that as you progressively increase the clock speed, or progressively weaken the thermal regulation, or progressively decrease the cpu core voltage stepping, your chances of having a problem with any given unit (yours) increase. A "problem" might be (1) logic errors which lead to immediate system crashes or hangs, (2) logic errors (in data paths) that lead to data corruption without a crash or (3) permanent hardware failure (usually because of thermal excursions).
Is that "safe"?
Depends on your definition of "safe". If you only use the device for entertainment purposes, "safe" might mean "the hardware won't burn up in the next 2-3 years". Look over in any of the kernel threads - you'll see folks who are not too alarmed about their device freezing or spontaneously rebooting. (They don't like it, but it doesn't stop them from flashing dev kernels).
If you are using the device for work or professional purposes - for instance generating or editing work product - then "safe" might mean "my files on the device or files transiting to and from the cloud won't get corrupted", or "I don't want a spontaneous kernel crash of the device to cascade into a bricked device and unrecoverable files". For this person, the risks are quite a bit higher.
No doubt some tool will come in here and say "I've been overclocking to X Ghz for months now without a problem!" - as if that were somehow a proof of how somebody else's device will behave. It may well be completely true - but a demonstration on a single device says absolutely nothing about how someone else's device will behave. Even Nvidia can't do that.
There's a lot of pretty wild stuff going on in some of the dev kernels. The data that exists as a form of positive validation for these kernels is a handful of people saying "my device didn't crash". That's pretty far removed from the rigorous testing performed by Nvidia (98+% fault path coverage on statistically significant samples of devices over temperature, voltage, and frequency on multi-million dollar test equipment.)
good luck!
PS My phone has had its Fmax OC'ed by 40% from the factory value for more than 2 years. That's not proof of anything, really - just to point out that I'm not anti-OC'ing. Just trying to say: nobody can provide you any assurances that things will go swimmingly on your device at a given operating point. It's up to you to decide whether you should regard it as "risky".
Wow, thanks for your educational response; I learned something. Great post! I will see whether I'll overclock it or not, since I can play with no problems at all; it's just that it hiccups when there is too much stuff going on. Thanks again!
With the proper kernel it's really not needed. Haven't really seen any difference, aside from benchmark scores (which can be achieved without OC'ing).
Yes, I run mine at 1.6 peak.
I've come to the Android world from the iOS world - the world of the iPhone, the iPad, etc.
One thing they're all brilliant at is responsive UI. The UI, when you tap it, responds. Android, prior to 4.1, didn't.
Android, with 4.1 and 4.2, does. Mostly.
You can still do better. I'm running an undervolted, overclocked M-Kernel, with TouchDemand governor, pushing to 2 G-cores on touch events.
It's nice and buttery, and renders complex PDF files far faster than stock when the cores peak at 1.6.
I can't run sustained at 1.6 under full load - it thermal throttles with 4 cores at 100% load. But I can get the peak performance for burst demands like page rendering, and I'm still quite efficient on battery.
There's no downside to running at higher frequencies as long as you're below stock voltages. Less heat, more performance.
If you start pushing the voltages past spec, yeah, you're likely into "shortening the lifespan." But if you can clock it up, and keep the voltages less than the stock kernel, there's really not much downside. And the upside is improved page rendering, improved PDF rendering, etc.
Gaming performance isn't boosted that much as most games aren't CPU bound. That said, I don't game. So... *shrug*.
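For anyone curious what their own unit is actually clocking, the current and maximum CPU frequencies can be read from the standard cpufreq sysfs nodes in an adb shell (values are in kHz, so 1600000 = 1.6 GHz; exact nodes vary by kernel):
# cd /sys/devices/system/cpu/cpu0/cpufreq
# cat scaling_cur_freq
# cat scaling_max_freq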
Bitweasil said:
I can't run sustained at 1.6 under full load - it thermal throttles with 4 cores at 100% load.
Click to expand...
Click to collapse
@Bitweasil
Kinda curious about something (OP, allow me a slight thread-jack!).
In an adb shell, run this loop:
# cd /sys/kernel/debug/tegra_thermal
# while [ 1 ] ; do
> sleep 1
> cat temp_tj
> done
and then run your "full load".
What temperature rise and peak temperature do you see? Are you really hitting the 95C throttle, or are you using a kernel where that is altered?
I can generate (with a multi-threaded native proggy, 6 threads running tight integer loops) only about a 25C rise, and since the "Tj" in mine idles around 40C, I get nowhere near the default throttle temp. But I am using a stock kernel, so it immediately backs off to 1.2 GHz when multicore comes online.
Same sort of thing with the Antutu or OpenGL benchmark suites (the latter of which runs for 12 minutes) - I barely crack 60C with the stock kernel.
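For anyone who wants to reproduce that sort of load without writing native code, a crude stand-in for the 6-thread integer blaster is six shell busy-loops run alongside the temperature loop above (a shell loop executes far fewer instructions per clock than a tight native loop, so treat the resulting rise as a lower bound):
# for i in 1 2 3 4 5 6 ; do
> sh -c 'while : ; do : ; done' &
> done
# while [ 1 ] ; do sleep 1 ; cat /sys/kernel/debug/tegra_thermal/temp_tj ; done
(Kill the background loops with kill %1 %2 %3 %4 %5 %6 when you're done.)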
?
bftb0
The kernel I'm using throttles around 70C.
I can't hit that at 1200 or 1300 - just above that I can exceed the temps.
I certainly haven't seen 95C.
M-Kernel throttles down to 1400 above 70C, which will occasionally get above 70C at 1400, but not by much.
Bitweasil said:
The kernel I'm using throttles around 70C.
I can't hit that at 1200 or 1300 - just above that I can exceed the temps.
I certainly haven't seen 95C.
M-Kernel throttles down to 1400 above 70C, which will occasionally get above 70C at 1400, but not by much.
Click to expand...
Click to collapse
Thanks. Any particular workload that does this, or is the throttle pretty easy to hit with arbitrary long-running loads?
Re: Do you overclock your N7?
I'll never OC a quadcore phone/tablet, I'm not stupid. This is enough for me.
I've overclocked my phone, but not my N7. I've got a Galaxy Ace with a single-core 800MHz processor OC'd to 900+. The N7 with its quad-core 1.3GHz is more than enough for doing what I need it to do. Using franco.Kernel, everything is smooth and lag-free. No need for me to overclock.
Impossible to do so; I can't even get root, but I did manage to unlock the bootloader.
CuttyCZ said:
I don't think it's needed. I've heard that OC won't help much with gaming, but you can definitely try
I'm not a big OC'er, but I do see a difference in some games when I OC the GPU. It really depends on the game and what the performance bottleneck is. If the app is not CPU-bound, then an OC won't make much difference. Most games are I/O- and GPU-bound.
I've overclocked all of my devices since my first HTC Hero. I really don't see a big deal with hardware life.
I know that this N7 runs games better at 1.6 GHz than at 1.3 GHz.
The first thing I do when I get a new device is swap the recovery and install AOKP with the latest and greatest development kernel. Isn't that why all this great development exists? For us to make our devices better and faster? I think so. I'd recommend AOKP and M-Kernel to every Nexus 7 owner. I wish more people would try non-stock.
scottx . said:
I've overclocked all of my devices since my first HTC Hero. I really don't see a big deal with hardware life.
I know that this N7 runs games better at 1.6 GHz than at 1.3 GHz.
The first thing I do when I get a new device is swap the recovery and install AOKP with the latest and greatest development kernel. Isn't that why all this great development exists? For us to make our devices better and faster? I think so. I'd recommend AOKP and M-Kernel to every Nexus 7 owner. I wish more people would try non-stock.
Do you mean the pub builds of AOKP, or Dirty AOKP?
Ty
bftb0 said:
Thanks. Any particular workload that does this, or is the throttle pretty easy to hit with arbitrary long-running loads?
Stability Test will do it reliably. Other workloads don't tend to run long enough to trigger it that I've seen.
And why is a quadcore magically "not to be overclocked"? Single threaded performance is still a major bottleneck.
Bitweasil said:
Stability Test will do it reliably. Other workloads don't tend to run long enough to trigger it that I've seen.
And why is a quadcore magically "not to be overclocked"? Single threaded performance is still a major bottleneck.
Hi Bitweasil,
I fooled around a little more with my horrid little threaded CPU-blaster code. Combined simultaneously with something GPU-intensive, such as the OpenGL ES benchmark (which runs for 10-12 minutes), I observed peak temps (Tj) of about 83C with the stock kernel. That's a ridiculous load, though. I can go back and repeat the test, but from 40C it probably takes several minutes to get there. No complaints about anything in the kernel logs other than the EDP down-clocking, but that happens as soon as the second CPU comes online, irrespective of temperature. With either the CPU-only or GPU-only stressors alone, the highest I saw was a little over 70C. (But I don't live in the tropics!)
To your question - I don't think there is much risk of immediate hardware damage, so long as bugs don't creep into the throttling code, or kernel bugs don't prevent the throttling or down-clocking code from being serviced while the device is running in a "performance" condition. And long-term reliability problems will be no worse if the cumulative temperature excursions of the device are not higher than they would be using stock configurations.
The reason that core voltages are stepped up at higher clock rates (& more cores online) is to preserve both logic and timing-closure margins across *all possible paths* in the processor. More cores running means that the power rails inside the SoC package are noisier, so logic levels are a bit more uncertain, and faster clocking means there is less time available per clock for logic levels to stabilize before data gets latched.
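To put a number on that last point: at the stock 1.3 GHz peak, one clock period is 1/1.3 GHz, about 770 picoseconds; at a 1.6 GHz overclock it shrinks to 625 ps. Every one of those millions of logic paths then has roughly 19% less time to settle before its result gets latched, and the stepped-up core voltage is what buys that margin back.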
Well, Nvidia has reasons for setting their envelope the way they do - not because of device-damage considerations, but because they expect to have a pretty small fraction of devices that will experience timing faults *anywhere along millions of logic paths* under all reasonable operating conditions. Reducing the margin, whether by undervolting at high frequencies, increasing max frequencies, or allowing more cores to run at peak frequencies, will certainly increase the fraction of devices that experience logic failures along at least one path (out of millions!). Whether or not OC'ing will work correctly on an individual device cannot be predicted in advance; the only thing Nvidia can estimate is a statistical quantity - about what percentage of devices will experience logic faults under a given operating condition.
Different users will have different tolerances for faults. A gamer might have a very high tolerance for random reboots, lockups, file-system corruption, et cetera. It's a different story if you are composing a long email to your boss under deadline and your unit suddenly turns upside down.
No doubt there (theoretically) exists an overclocking implementation where 50% of all devices would have a logic failure within (say) 1 day of operation. That kind of situation would be readily detected in a small number of forum reports. But what if it were a 95%/5% situation? One out of twenty dudes reports a problem, and it is dismissed with some crazy recommendation such as "have you tried re-flashing your ROM?". And fault probability accumulates with time, especially when the testing loads have very poor path coverage: 5% failure over one day will be higher over a 30-day period - potentially much higher.
That's the crux of the matter. Processor companies spend as much as 50% of their per-device engineering budgets on test development. In some cases they actually design & build a second companion processor (that rivals the complexity of the first!) whose only function is to act as a test engine for the processor that will be shipped. Achieving decent test coverage is a non-trivial problem, and it is generally attacked with extremely disciplined testing over temperature/voltage/frequency with statistically significant numbers of devices - using test-vector sets (& internal test generators) that are known to provide a high level of path coverage. The data that comes from random ad-hoc reports on forums from dudes running random applications in an undisciplined way on their OC'ed units is simply not comparable. (Even "stressor" apps have very poor path coverage).
But, as I said, different folks have different tolerance for risk. Random data corruption is acceptable if the unit in question has nothing on it of value.
I poked my head into the M-kernel thread the other day; I thought I saw a reference to "two units fried" (possibly even one belonging to the dev?). I assume you are following that thread... did I misinterpret that?
cheers
I don't disagree.
But I'd argue that the stock speeds/voltages/etc. are designed for the "120% case" - they're supposed to work for about 120% of shipped chips. In other words, regardless of conditions, the stock clocks/voltages need to be reliable, with a nice margin on top.
Statistically, most of the chips will be much better than this, and that's the headroom overclocking plays in.
I totally agree that you eventually will get some logic errors, somewhere, at some point. But there's a lot of headroom in most devices/chips before you get to that point.
My use cases are heavily bursty. I'll do complex PDF rendering on the CPU for a second or two, then it goes back to sleep while I read the page. For this type of use, I'm quite comfortable with having pushed clocks hard. For sustained gaming, I'd run it lower, though I don't really game.
Hi, I was going through my kernel settings and saw that it has a maximum of only two cores enabled at any time. I was wondering whether, outside of battery life, there are any other disadvantages or risks to having more cores enabled. I'm sure this is a stupid question, but I have to learn from somewhere.
You neglected to mention which kernel you're using, but I'll hazard a guess that it might be M-Kernel, because that's the only one I know of (currently) that by default is set to use 2 cores - Metallice has a post in the M-Kernel thread where he states why he's chosen the 2-core default over 4.
Basically, it comes down to the understanding that Android was designed for 2 cores, and most apps out there - the overwhelming majority, mind you - will use no more than 2 cores by default; only a very few (fewer than 10 or so) apps will seriously push more than 2 cores to any significant degree.
2 cores = less power usage = longer battery life overall = not much difference in regular day-to-day use of any given quad-core Android-powered device.
All 4 cores kick in for a variety of reasons, but when they do it's usually just temporary: a momentary spike in CPU power to handle something faster, and then it's right back to idle/offline status. In general, the only thing you might notice with 2 cores is a tiny bit of lag when starting apps, or in other momentary situations that could use more CPU processing power before the extra cores shut off again.
In the long run, having 4 cores is more of a luxury than an absolute necessity. There's a video on YouTube of a guy using a Samsung Galaxy Note II and running 4 videos at the same time on the device; that's about the only way he could get it to require all 4 cores, and even then they were far from maxed out unless he had some other stuff going on at the same time.
Your battery will thank you for keeping the kernel's default of 2 cores - if you really, really need all 4 cores, you'll know it, and you can easily enable the other cores with something like Trickster Mod, which is what Metallice recommends for M-Kernel tweaking anyway (the only app he recommends, actually).
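If you'd rather flip cores on by hand than install an app, the same Linux hot-plug interface that apps like Trickster Mod manage is writable directly from a rooted adb shell (standard sysfs paths; note that many kernels' own hot-plug logic will override your setting moments later):
# echo 1 > /sys/devices/system/cpu/cpu2/online
# echo 1 > /sys/devices/system/cpu/cpu3/online
# cat /sys/devices/system/cpu/cpu*/online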
You really should read, or at the bare minimum skim, the thread about the kernel you're using, even if the thread is very long - it's more useful to read and learn than to create new threads for such info, which is generally frowned upon around here.
Self-research is the best course of action, aka finding things out for yourself from the volumes of info this forum has.
Thank you, and I'm sorry I forgot to mention the kernel; yes, it is M-Kernel. I thought I'd said it, but it seems I didn't.
With the availability of rooting the SM-N900V along with flashing Custom ROMs and Kernels, has anyone attempted to assign specific hardware tasks to the CPU cores?
The Snapdragon processor uses asynchronous core processing to allow tasks to be run on any or all cores, which is great for general use. However, after seeing how the battery-saving and performance modes control ramping, my view is that having a core focus on a specific function (i.e. computation for the modem or GPU) instead of anything and everything possible would increase RAM efficiency as well as stability. The cores, in theory, would ramp independently of each other, instead of a single core being maxed out before it triggers others to come online.
I know that Linux systems have the isolcpus boot parameter to assign processes this way, but as involved as Android is, even as a POSIX system, it may be ridiculous to try to assign each and every process this way - perhaps even more so with user apps being added and removed frequently.
I would love to hear insight as to whether this idea would have any benefit if implemented, as well as reasons why it might be worse than the methods currently in place.
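For anyone wanting to experiment with the idea on a rooted device, the usual Linux building blocks are the isolcpus= kernel boot parameter (which keeps the scheduler from placing ordinary tasks on the listed cores) and taskset, which pins a task to a CPU mask (busybox builds commonly include it). A hypothetical sketch - the program name and PID here are placeholders:
# taskset 0x8 my_modem_helper &
# taskset -p 0x8 1234
The first line starts a (made-up) program restricted to core 3 (mask 0x8); the second moves an already-running task with PID 1234 onto that same core.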
I know the question contains a little ignorance, but I don't know much about Windows kernels or how the OS works in general. Is it possible for an Android phone (with, I don't know, for example a Snapdragon processor on an ARM architecture) to be used as extra CPU processing power for a computer? I'm just proposing it theoretically.
Also, by the way, could someone explain to me what the cores of a CPU are, and whether they have anything to do with the question? Thank you.
No. It will not work. Cores of a CPU are like brains in humans: more cores = more processing power. Android uses the Linux kernel and Windows... the Windows kernel. Two different beasts. It would be like cats and dogs agreeing on the best place to go poo... it won't happen.
A CPU, or Central Processing Unit, is the part of the computer that does the actual work - performing operations. Modern CPUs have multiple cores, where each core is able to work on a different part of the operation. In a mobile context, multiple cores are also used to provide a balance between performance and power saving; depending on the CPU, there are generally 2 or more "little" cores that prioritize efficiency over performance; 2 or more "mid" cores that provide more processing power when the "little" cores aren't up to the task; and 1 or 2 "big" cores that provide the best performance but use the most power. When someone talks about "throttling" in a kernel, they're talking about the runtime mechanism that decides what cores a CPU will use under given load conditions.
There are multiple different CPU architectures, and as far as I know, it's not possible to parallel them - you can't use an ARM64 CPU in parallel with an Intel x64, even though they're both 64 bit. The reason for this is different architectures use different basic instructions and scheduling, so the amount of code that would need to go into a kernel to make different types work together would slow the system down and make the whole endeavor pointless, unless you're working with a really large scale operation.
If you look at multi-CPU systems, you'll see that everything from Xeon servers to supercomputers all use the same types of CPU to simplify interconnects, as well as the ability to use one kernel.
It's worth mentioning that there are some projects that do make use of different platforms - for example, SETI @ Home uses a network of Internet connected computers to create a sort of supercomputer. Botnets do the same sort of thing. The difference here is that these systems aren't paralleled, and they work at the application level, so they can only use a certain amount of the client system's resources.
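One easy way to see the little/mid/big split described above is that each cluster reports a different maximum clock. From an adb shell (standard cpufreq sysfs paths assumed):
# for c in /sys/devices/system/cpu/cpu[0-9]*/cpufreq ; do
> cat $c/cpuinfo_max_freq
> done
On a typical big.LITTLE SoC this prints two or three distinct values (in kHz), one per cluster - the lower numbers belong to the efficiency cores, the higher ones to the performance cores.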
V0latyle said:
A CPU, or Central Processing Unit, is the part of the computer that does the actual work - performing operations. Modern CPUs have multiple cores, where each core is able to work on a different part of the operation. In a mobile context, multiple cores are also used to provide a balance between performance and power saving; depending on the CPU, there are generally 2 or more "little" cores that prioritize efficiency over performance; 2 or more "mid" cores that provide more processing power when the "little" cores aren't up to the task; and 1 or 2 "big" cores that provide the best performance but use the most power. When someone talks about "throttling" in a kernel, they're talking about the runtime mechanism that decides what cores a CPU will use under given load conditions.
There are multiple different CPU architectures, and as far as I know, it's not possible to parallel them - you can't use an ARM64 CPU in parallel with an Intel x64, even though they're both 64 bit. The reason for this is different architectures use different basic instructions and scheduling, so the amount of code that would need to go into a kernel to make different types work together would slow the system down and make the whole endeavor pointless, unless you're working with a really large scale operation.
If you look at multi-CPU systems, you'll see that everything from Xeon servers to supercomputers all use the same types of CPU to simplify interconnects, as well as the ability to use one kernel.
It's worth mentioning that there are some projects that do make use of different platforms - for example, SETI @ Home uses a network of Internet connected computers to create a sort of supercomputer. Botnets do the same sort of thing. The difference here is that these systems aren't paralleled, and they work at the application level, so they can only use a certain amount of the client system's resources.
Whoa, OK! Cool, thanks for your explanation and time. I understood most of the reply, so thanks for answering my question!
Have a good day
7zLT said:
Whoa, OK! Cool, thanks for your explanation and time. I understood most of the reply, so thanks for answering my question!
Have a good day
No problem. Here is a Wiki article that may provide a more concise explanation. Turns out I was wrong about instruction sets, at least concerning AMD APUs.
The bottom line is... yes, it's absolutely possible to use multiple different systems to provide more processing power than just one. But unless those systems are specifically designed to work in parallel with other systems, it would be a bit more complicated to get everything working together, and the end result wouldn't necessarily be faster. If you're enterprising enough, you could set up an application on your computer and on your phone that uses your phone's CPU to perform operations, but it wouldn't be easy.
Oh!
OK, thanks for the references.
The northbridge chipset has limited bandwidth and is optimized to work with specific CPUs. Integrating at this level would be ineffective at best, even if you could get it to work, because of the northbridge bandwidth limitations.
A dual-processor board is the one you want; originally used mostly for servers, they are also found in high-end workstations. Most games are designed to run on 4 cores, so it may not yield much. Some 3D rendering software and the like is designed to take advantage of dual-processor mobos. Again, they're designed to work with a specific processor family, like the Xeon series, i.e. matched processors.