Related
Hello guys! Browsing around the net I found some diagrams comparing the benchmarks of the stock Tegra 2 and the Tegra 3 against other processors. I've always wondered whether, now that the processor can be overclocked to 1.7GHz (about 70% more clock speed), the gap to the upcoming quad-core has narrowed (which, we recall, Nvidia says is 5 times faster than the current SoC).
Could some developers make this comparison? It would be interesting to see the results.
"when looking at the compiler versions and settings used to compile the CoreMark benchmark, the Core 2 Duo numbers were produced via GCC 3.4 and only the standard set of optimizations (-O2), while the Tegra 3 numbers were run on a more recent GCC 4.4 with aggressive optimizations (-O3). Il Sistemista website took a Core 2 Duo T7200 and re-ran the benchmark compiled with GCC 4.4 and the same optimization settings. The results were no longer in favor of NVIDIA, as the Core 2 chip scored about 15,200 points, compared to the Tegra's 11,352."
CoreMark benchmark comparing nVidia Tegra 3 @1GHz clock to various real and hypothetical products
The CoreMark/MHz index shows how many CoreMarks a particular chip can extract per MHz of its clock frequency.
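To make the second chart concrete, CoreMark/MHz is just the absolute score divided by the clock in MHz; using the figures quoted in this thread (and the T7200's stock 2.0GHz clock), the math is a one-liner:

```python
# CoreMark/MHz = absolute CoreMark score / clock frequency in MHz
tegra3_score, tegra3_mhz = 11352, 1000        # Tegra 3 @ 1 GHz, figure quoted above
core2duo_score, core2duo_mhz = 15200, 2000    # Core 2 Duo T7200 (2.0 GHz), GCC 4.4 -O3

print(tegra3_score / tegra3_mhz)      # ~11.4 CoreMarks per MHz
print(core2duo_score / core2duo_mhz)  # ~7.6 CoreMarks per MHz
```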
Wow... more powerful than a Core 2 Duo O_O!!!
Now THAT'S something to get excited about!!!
Ahmed_PDA2K said:
Wow... more powerful than a Core 2 Duo O_O!!!
Now THAT'S something to get excited about!!!
If you read the bolded part of the quote: once they fixed the compiler optimizations, the Tegra did not hold up.
At this point it would be really interesting to know how much the gap closes with the Tegra 2 super-overclocked to 1.7GHz (is that really the maximum achievable?). I don't know how to run CoreMark 1.0 myself, so I'm asking someone in the community to check whether the values are underestimated or substantially better than those offered by Nvidia, and above all to see whether Tegra 3 is really worth it at that point.
I recently read that Nvidia is pushing a lot of its projects and has already started development of Tegra 4, with specifications that include manufacturing the chip at 28nm. IMHO it would make more sense to wait for that before replacing the Tegra 2 hardware (so these benchmarks could be a way to understand things in this context).
I highly doubt many people can get their tegra 2 up to 1.7, I can only get mine stable at 1.4, probably 1.5 and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7ghz produces a lot of heat, like almost in the danger zone but I could be exaggerating it a little.
Sent from my ADR6300 using Tapatalk
In any case it would be nice to have a comparison across all frequencies against the values offered by Nvidia, to give owners an idea of the hardware's potential. But I think it's also a matter of development and optimization, as we are seeing here on the forum these days... the tablet is slowly improving on all fronts.
I'm sure that once we have access to the OS code the tf will run like a beast! I had an OG Droid and the difference between stock and modded was mind blowing.
Sent from my ADR6300 using Tapatalk
brando56894 said:
I highly doubt many people can get their tegra 2 up to 1.7, I can only get mine stable at 1.4, probably 1.5 and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7ghz produces a lot of heat, like almost in the danger zone but I could be exaggerating it a little.
Sent from my ADR6300 using Tapatalk
I'm one of the few with one that does 1.7GHz no problem.
It's not only the CPU speed of Tegra 3 that will pwn the Transformer 1: a 12-core GPU, support for dynamic lighting, High Profile HD decoding and much more. No way a Tegra 2 will outperform the Tegra 3.
Just face it: the Transformer 1 will be pwned big time by the Transformer 2, launching Oct/Nov 2011...
Not to mention the new TI CPUs coming Q1 2012... they will pwn even more than Tegra 3...
Sent from my Desire HD using XDA App
Tempie007 said:
It's not only the CPU speed of Tegra 3 that will pwn the Transformer 1: a 12-core GPU, support for dynamic lighting, High Profile HD decoding and much more. No way a Tegra 2 will outperform the Tegra 3.
Just face it: the Transformer 1 will be pwned big time by the Transformer 2, launching Oct/Nov 2011...
Not to mention the new TI CPUs coming Q1 2012... they will pwn even more than Tegra 3...
Sent from my Desire HD using XDA App
Yeah, it's true that it is stronger, but my thesis is that Tegra 3 is only a technology shift. Although it has 12 computing cores capable of dynamically processing lighting, this CPU produces the values above, and it would be interesting to relate them to the values of an overclocked Tegra 2 to see whether Tegra 3 can really be a worthwhile replacement. We are faced with two different SoCs, the first a dual-core, the second a quad-core, but common sense tells me that, if well exploited, this device could give the next model a long, hard time (remember the story of the HTC HD2, released back in 2009 and today one of the longest-running phones thanks to constant software rework). I'd argue these devices don't need raw power so much as efficient computation, since batteries aren't keeping pace and simply adding capacity probably won't make devices more streamlined in the near future.
Does anyone know how to use CoreMark to update these values?
Of course the Tegra 3 will beat the crap out of the Tegra 2, that's like comparing a Core 2 Duo to a Core i7 lol
chatch15117 said:
I'm one of the few with one that does 1.7GHz no problem.
Lucky bastard! :-D how many mV are you running it at?
Never tried 1.7GHz but have done 1.5-1.6 on 3 tabs without messing with voltages. Actually, right now I'm running 1.2GHz and messing with LOWERING the voltages.
Coremark download
If anyone can compile CoreMark to run under Android, here is the software to download:
Download the Coremark Software readme file
Download the CoreMark Software documentation
This documentation answers all questions about porting, running and score reporting
Download the Coremark Software
Download the CoreMark Software MD5 Checksum file to verify download
Use this to verify the downloads: >md5sum -c coremark_<version>.md5
Download the CoreMark Platform Ports
These CoreMark ports are intended to be used as examples for your porting efforts. They have not been tested by EEMBC, nor do we guarantee that they will work without modifications.
Here is the results table:
http://www.coremark.org/benchmark/index.php
If a dev could compile it and run some tests to compare the results at the new frequencies, that would be great!
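Not being able to test this myself, here is roughly the shape a dev's build-and-run script could take: cross-compile the unpacked CoreMark sources with an ARM toolchain and run the binary on the tablet over adb. The toolchain name, flags and device path below are placeholder assumptions to adapt, not a tested recipe.

```python
#!/usr/bin/env python3
"""Sketch: cross-compile CoreMark for ARM and run it on the tablet via adb.
Toolchain, flags and device directory are placeholders, not a tested recipe."""
import pathlib
import re
import subprocess

COREMARK_DIR = pathlib.Path("coremark_v1.0")      # unpacked sources from the links above
CC = "arm-linux-androideabi-gcc"                  # placeholder cross-compiler
DEVICE_DIR = "/data/local/tmp"

# Build a static binary with the same aggressive flags Nvidia quoted (-O3),
# so the result is comparable to their published numbers. Note: the default
# make target may also try to run the binary on the host; use the Makefile's
# build-only target if so.
subprocess.run(["make", "PORT_DIR=linux", f"CC={CC}",
                "XCFLAGS=-O3 -static -mcpu=cortex-a9 -mfpu=neon"],
               cwd=COREMARK_DIR, check=True)

# Push the binary to the device and run it through adb.
subprocess.run(["adb", "push", str(COREMARK_DIR / "coremark.exe"), DEVICE_DIR], check=True)
out = subprocess.run(["adb", "shell", f"{DEVICE_DIR}/coremark.exe"],
                     capture_output=True, text=True, check=True).stdout
match = re.search(r"CoreMark 1\.0 : ([\d.]+)", out)
print("CoreMark score:", match.group(1) if match else out)
```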
brando56894 said:
I highly doubt many people can get their tegra 2 up to 1.7, I can only get mine stable at 1.4, probably 1.5 and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7ghz produces a lot of heat, like almost in the danger zone but I could be exaggerating it a little.
Sent from my ADR6300 using Tapatalk
There is no danger zone.. at least not on my kernel. I haven't posted the one for 3.1 (only 3.01), but there is a built-in downclock once the chip reaches temps beyond 47degC... This only happens if the tablet (in laptop form) is left in a bag with the screen on (by accident, sometimes the hinge says it's open when it's closed).. So if you can run 1.7GHz stable, there's no reason not to run it.. as you'll be protected from heat by the built-in thermal throttling.
Default thermal throttling had some seriously bizarre ranges.. 80-120degC..
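Just to illustrate what that built-in protection amounts to, here is a toy model of a throttle-then-restore loop around the 47degC trip point mentioned above; the hysteresis, fallback clock and example temperatures are invented for the sketch and are not the actual kernel values.

```python
# Toy model of the throttle-then-restore behaviour described above.
# The 47degC trip point comes from the post; everything else is invented.
TRIP_C = 47.0          # downclock once we get hotter than this
CLEAR_C = 42.0         # restore full speed once cooled below this (hypothetical hysteresis)
MAX_KHZ = 1_700_000    # the 1.7 GHz overclock
SAFE_KHZ = 1_000_000   # hypothetical fallback clock while hot

def next_clock(temp_c: float, current_khz: int) -> int:
    if temp_c >= TRIP_C:
        return SAFE_KHZ          # too hot: throttle
    if temp_c <= CLEAR_C:
        return MAX_KHZ           # cooled down: restore the overclock
    return current_khz           # in between: keep whatever we were doing

clock = MAX_KHZ
for temp in (40, 45, 48, 50, 46, 43, 41):   # e.g. warming up in a closed bag, then cooling
    clock = next_clock(temp, clock)
    print(f"{temp} degC -> {clock // 1000} MHz")
```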
I wish nvidia's boards were produced like this:
http://hardkernel.com/renewal_2011/main.php
http://www.hardkernel.com/renewal_2011/products/prdt_info.php?g_code=G129705564426
Something modular would be a nice change in the mobile tech segment.. Swapping SoCs would require some serious low-level work tho.. Unless writing to the bootloader and boot partitions were made easy via external interface.. something like that.
Blades said:
There is no danger zone.. at least not on my kernel. I haven't posted the one for 3.1 (only 3.01), but there is a built-in downclock once the chip reaches temps beyond 47degC... This only happens if the tablet (in laptop form) is left in a bag with the screen on (by accident, sometimes the hinge says it's open when it's closed).. So if you can run 1.7GHz stable, there's no reason not to run it.. as you'll be protected from heat by the built-in thermal throttling.
Default thermal throttling had some seriously bizarre ranges.. 80-120degC..
I wish nvidia's boards were produced like this:
http://hardkernel.com/renewal_2011/main.php
http://www.hardkernel.com/renewal_2011/products/prdt_info.php?g_code=G129705564426
Something modular would be a nice change in the mobile tech segment.. Swapping SoCs would require some serious low-level work tho.. Unless writing to the bootloader and boot partitions were made easy via external interface.. something like that.
And so? Are you able to use CoreMark to run some benchmarks on your device and compare them with the Tegra 3 results? Because that's what I'd like to test here.
devilpera64 said:
"when looking at the compiler versions and settings used to compile the CoreMark benchmark, the Core 2 Duo numbers were produced via GCC 3.4 and only the standard set of optimizations (-O2), while the Tegra 3 numbers were run on a more recent GCC 4.4 with aggressive optimizations (-O3).
That statement right there shows why benchmarks are extremely useless and misleading.
I'm one of the few that can reach 1.7GHz no problem too. Never ran it long enough to get hot, though; maybe 30 mins just to run benchmarks. Never had FCs or had my system crash.
Low-power processing muscle for phones, netbooks, laptops & tablets.
It seems ARM and Nvidia have big plans for the future.
Newer small chipsets like these will sport some serious multi core action.
Here's the basic road map.
Tegra 3 (Kal-El) series
Processor: Quad-core ARM Cortex-A9 MPCore, up to 1.5 GHz
12-Core Nvidia GPU with support for 3D stereo
Ultra Low Power GPU mode
40 nm process by TSMC
Video output up to 2560×1600
NEON vector instruction set
1080p MPEG-4 AVC/h.264 High Profile video decode
The Kal-El chip (CPU and GPU) is to be about 5 times faster than Tegra 2
Estimated release date is now to be Q4 2011 for tablets and Q1 2012 for smartphones, after being set back from Nvidia's prior estimated release dates of Q2 2011, then August 2011, then October 2011
The Tegra 3 is functionally a quad-core processor, but includes a fifth "companion" core. All cores are Cortex-A9's, but the companion core is manufactured with a special low power silicon process. This means it uses less power at low clock rates, but more at higher rates; hence it is limited to 500 MHz. There is also special logic to allow running state to be quickly transferred between the companion core and one of the normal cores. The goal is for a mobile phone or tablet to be able to power down all the normal cores and run on only the companion core — using comparatively little power — during "standby" mode or when otherwise using little CPU. According to Nvidia, this includes playing music or even video content.
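A toy model of that hand-off might look like the sketch below; the load thresholds and inputs are invented for illustration, since the real decision is made inside NVIDIA's governor/firmware and is invisible to the OS, as described above.

```python
# Illustrative vSMP-style cluster selection: light loads stay on the 500 MHz
# companion core, heavier loads migrate to the main Cortex-A9 cores.
# Thresholds and the load metric are made up for the sketch; the real
# policy lives in NVIDIA's firmware/governor and is invisible to the OS.
COMPANION_MAX_MHZ = 500
MAIN_MAX_MHZ = 1500

def select_cluster(load_pct: float, runnable_threads: int) -> str:
    if load_pct < 25 and runnable_threads <= 1:
        # standby, background sync, music or video playback
        return f"companion core @ {COMPANION_MAX_MHZ} MHz"
    # state is handed over to the main cluster and the companion core is power-gated
    cores = min(4, max(1, runnable_threads))
    return f"{cores} main core(s) @ up to {MAIN_MAX_MHZ} MHz"

for load, threads in [(5, 1), (20, 1), (60, 2), (95, 6)]:
    print(f"load={load}% threads={threads} -> {select_cluster(load, threads)}")
```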
Tegra (Wayne) series
Processor: Quad-core/Octa-core ARM Cortex-A15 MPCore (octa core already)
Improved GPU with 24 cores (for the quad version) and 32 to 64 cores (for the octa-core version), with support for DirectX 11+, OpenGL 4.x, OpenCL 1.x, and PhysX
28 nm
About 10 times faster than Tegra 2
To be released in 2012
Tegra (Grey) series
Processor: ARM Cortex MPCore
28 nm
Integrated Icera 3G/4G baseband
To be released in 2012
Tegra (Logan) series
Processor: ARM ?
Improved GPU core
28 nm
About 50 times faster than Tegra 2
To be released in 2013
Tegra (Stark) series
Processor: ARM ?
Improved GPU core
About 75 times faster than Tegra 2
To be released in 2014
THE TEGRA 3'S SECRET CORE: what does it do?
There's a not-so-dirty little secret about NVIDIA's upcoming Tegra 3 platform (which will soon find a home in plenty of mobile devices): the quad-core processor contained within has a fifth core for less intensive tasks.
In a paper published by NVIDIA, they provide in-depth details about their Variable Symmetric Multiprocessing (vSMP). Simply put, vSMP as implemented in Kal-El not only optimizes CPU multi-threading and multi-tasking for maximum performance and power efficiency at a moment's notice, but it also offloads background tasks and less intensive CPU activities, such as background syncing/updating, music playback, and video playback, to the fifth core, which runs at a considerably slower 500 MHz and therefore consumes considerably less power. Bottom line: battery life!
All five CPU cores are identical ARM Cortex-A9 CPUs, and are individually enabled and disabled (via aggressive power gating) based on the workload. The "Companion" core is OS-transparent, unlike current asynchronous SMP architectures, meaning the OS and applications are not aware of this core but automatically take advantage of it. This strategy saves significant software effort and avoids new coding requirements.
The Tegra 3 logic controller also has the power to dynamically enable and disable cores depending on the workload at any given time, making sure not to waste any power from unused cores. So what's the quantitative payoff?
NVIDIA ran the CoreMark benchmark on the Tegra 3 and pitted it against current industry chipsets such as the TI OMAP4 and the Qualcomm 8x60 (take these with a grain of salt, obviously). They found that when handling the same workload, Tegra 3 consumed 2-3x less power than the competition. In max-performance tests, the Tegra 3 was twice as fast while still using less power.
Compared to the Tegra 2 chipset, the Tegra 3 CPU uses 61% less power while video playback is happening, and 34% less power during gaming activities. While most of the work is done by the GPUs in mobile devices, previous chipsets lacked the ability to ramp down the energy output of unused cores like Tegra 3 is purportedly able to do.
What I'm trying to say is that you should be excited for this mobile quad-core processor to arrive and not scared for your battery life.
5 cores already?! Here I am sitting with an aluminum-foil ghetto heatsink between my X10 and its battery to dissipate the heat from 1.2GHz of increasingly mediocre, temperature-bound performance, wondering when to jump ship to dual-core. And now a 5-core is just around the corner...
Nonetheless... thanks for the info omegaRED
This is going to be huge...
I bet you're going to see a lot of uneducated posts in this thread about more cores and battery usage, etc...
Lol
*waits for the **** storm*
scoobysnacks said:
This is going to be huge...
I bet you're going to see a lot of uneducated posts in this thread about more cores and battery usage, etc...
Lol
*waits for the **** storm*
here comes scooby again.lol/come back for everything
josephnero said:
here comes scooby again.lol/come back for everything
This is contributing to the thread how?
Let me explain this thing in layman's terms.
It has 5 cores.. 1 runs at 500MHz at all times and handles most background processes:
Video
Media
etc...
Due to its 500MHz speed, its battery usage is very low.
The other 4 CPUs running at 1.5GHz can handle all the good stuff:
games
PhysX
media processing
and can be put into an off-like deep sleep when not required for the task.
The 500MHz CPU and ultra-low-power Nvidia GPU keep everything going when all 4 cores go offline.
Add to that the 28nm silicon wafer technology.
The power of this design can compete with any console on the market while keeping your device going for much, much longer than current chipsets.
OmegaRED^ said:
Let me explain this thing in layman's terms.
It has 5 cores.. 1 runs at 500MHz at all times and handles most background processes:
Video
Media
etc...
Due to its 500MHz speed, its battery usage is very low.
The other 4 CPUs running at 1.5GHz can handle all the good stuff:
games
PhysX
media processing
and can be put into an off-like deep sleep when not required for the task.
The 500MHz CPU and ultra-low-power Nvidia GPU keep everything going when all 4 cores go offline.
Add to that the 28nm silicon wafer technology.
The power of this design can compete with any console on the market while keeping your device going for much, much longer than current chipsets.
Ooh I agree completely and already understand this stuff.
I'm in the industry..
this will definitely help those who argue that more cores equals more battery drain, and who don't understand power distribution.
Compared to a Core 2 Duo
A hypothetical Tegra 3 @ 1.5GHz would have scored 17,028 points, again beating the Core 2 Duo using the same compiler settings. If we extend the projections to a hypothetical 2.5GHz Cortex-A9 chip, we arrive at 28,380 CoreMarks, which is the very least we should expect from Qualcomm's recently announced Cortex-A15 based chip at 2.5GHz.
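Those projections are just linear clock scaling of the 1GHz Tegra 3 score quoted earlier in the thread, which is easy to verify:

```python
# Linear clock scaling of the Tegra 3 CoreMark score quoted earlier (11,352 @ 1 GHz)
tegra3_at_1ghz = 11352

print(tegra3_at_1ghz * 1.5)   # hypothetical Tegra 3 @ 1.5 GHz -> 17028
print(tegra3_at_1ghz * 2.5)   # hypothetical 2.5 GHz Cortex-A9  -> 28380
# For reference, the Core 2 Duo T7200 rebuilt with GCC 4.4 -O3 scored about 15,200.
```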
There's always a bigger fish.
2.5Ghz.. O_O
The quad core is just a quad core.. but usage is 25% less.
Anyone that takes this over the Tegra3..
Enjoy the 30 second battery life.
It's a massive stride forward.!!
But it's still too hungry.
I wish phone developers would shove their battery usage predictions and add 1 to 2 Amps to the projected figure.
sweet, I'll definitely look into new SE products, great hardware
Love the name of the processor, Kal-El
The future of smartphones is bright. Looking forward to purchasing a new phone in, oh... 2 years. When that time comes I'll make sure to investigate what's best on the market rather than going in blind.
Knowing Samsung's reputation for producing the best Android smartphones (and probably the best smartphones, period, at the time of their release), and how adamant they are about staying above the competition (note: HTC One X), do you think we'll see one?
With LG producing the first-ever smartphone with 2 GB of RAM (double the average competitor's), thereby dethroning Samsung as the de facto best Android phone maker, and with newer hardware being used by Samsung's biggest rival Apple in their latest iPhone, do you foreseeably see a new rendition of the Samsung flagship phone before the year is up?
megagodx said:
Knowing Samsung's reputation for producing the best Android smartphones (and probably the best smartphones, period, at the time of their release), and how adamant they are about staying above the competition (note: HTC One X), do you think we'll see one?
With LG producing the first-ever smartphone with 2 GB of RAM (double the average competitor's), thereby dethroning Samsung as the de facto best Android phone maker, and with newer hardware being used by Samsung's biggest rival Apple in their latest iPhone, do you foreseeably see a new rendition of the Samsung flagship phone before the year is up?
There is no doubt about that. If you remember the Galaxy Note marketing strategy, the Galaxy Note 2 (expected to be announced very soon) will be their new flagship. The Note 10.1 tablet benchmarks clearly show a better GPU, the Mali-T604 (5 times as powerful as the Mali-400 used in the Galaxy S III); just imagine, 5 times. There is no doubt that the Note 2 will launch with the same hardware; the Note 2 is their next flagship (expected to be announced shortly). As for LG, they are ready with an A15 Krait quad-core with Adreno 320, officially the most powerful smartphone out there, enough to put all these A9 quad-cores to shame (remember what a dual-core A15 did to the quad-core A9 Tegra). Now it's up to you which flagship you want; in Android, unfortunately, every week brings a new flagship, with specs and sizes going higher and higher.
Again, Krait is no A15, but QCOM's own design, with pipeline lengths and out-of-order execution somewhere in between the A9 and A15 (the stock ARM designs). Then again, Exynos is not a stock A9 design either, but optimized by Samsung.
So, we don't know till we taste the pudding.
Also, the timeframe for Note 2 (or whatever it will be called) announcement is somewhere in October and shipping probably in late November or early December.
That's a long ways away
vasra said:
Again, Krait is no A15, but QCOM's own design, with pipeline lengths and out-of-order execution somewhere in between the A9 and A15 (the stock ARM designs). Then again, Exynos is not a stock A9 design either, but optimized by Samsung.
So, we don't know till we taste the pudding.
Also, the timeframe for Note 2 (or whatever it will be called) announcement is somewhere in October and shipping probably in late November or early December.
That's a long ways away
How is that a long way? A flagship in July replaced by another flagship in October... that's like 4 months of being the flagship.
megagodx said:
Knowing Samsung's reputation for producing the best Android smartphones (and probably the best smartphones, period, at the time of their release), and how adamant they are about staying above the competition (note: HTC One X), do you think we'll see one?
With LG producing the first-ever smartphone with 2 GB of RAM (double the average competitor's), thereby dethroning Samsung as the de facto best Android phone maker, and with newer hardware being used by Samsung's biggest rival Apple in their latest iPhone, do you foreseeably see a new rendition of the Samsung flagship phone before the year is up?
Competition is getting bigger, and the market keeps changing. 2 GB of RAM doesn't dethrone anything; even though 2 GB sounds nice, practically it's of little use. Every flagship stays at the top for at least a year on its specs, and you can expect the same for the Galaxy S3, which is the best around now. If you need a bigger screen, go for the Note or wait for the Note 2 (assuming the Note 2 won't have any significant changes).
Yes, there is always a next generation (Exynos 5 series, quad-core Krait) waiting to demolish the current lineup (S3, HOX, etc.), but it won't happen for at least a year. It will be the same in the future, just with a reduced time gap.
The SGS3 in Japan actually has 2GB RAM.
http://www.samsung.com/jp/galaxys3/index.html?pid=jp_home_thelatest_main_galaxys3_20120515
It uses S4 instead of Exynos.
It's like a totally different phone with the name of SGS3 (in terms of hardware).
BTW guys, recent "leak" by Nordichardware suggests that Note 10.1 will have a Exynos Quad using Mali-T604.
http://www.nordichardware.com/news/...th-mali-604t-gpu-crushes-the-competitors.html
rd_nest said:
The SGS3 in Japan actually has 2GB RAM.
http://www.samsung.com/jp/galaxys3/index.html?pid=jp_home_thelatest_main_galaxys3_20120515
It uses S4 instead of Exynos.
It's like a totally different phone with the name of SGS3 (in terms of hardware).
BTW guys, recent "leak" by Nordichardware suggests that Note 10.1 will have a Exynos Quad using Mali-T604.
http://www.nordichardware.com/news/...th-mali-604t-gpu-crushes-the-competitors.html
Mali-T604 packed with the Exynos 4412 A9 design?? I don't think it's possible......
bala_gamer said:
Mali-T604 packed with the Exynos 4412 A9 design?? I don't think it's possible......
Check this:
@ rd_nest
it shows a dual-core Mali-400 MP for 2012-2013, but we already have a quad-core Mali-400...? The graph is interesting though
android_master said:
There is no doubt about that. If you remember the Galaxy Note marketing strategy, the Galaxy Note 2 (expected to be announced very soon) will be their new flagship. The Note 10.1 tablet benchmarks clearly show a better GPU, the Mali-T604 (5 times as powerful as the Mali-400 used in the Galaxy S III); just imagine, 5 times. There is no doubt that the Note 2 will launch with the same hardware; the Note 2 is their next flagship (expected to be announced shortly). As for LG, they are ready with an A15 Krait quad-core with Adreno 320, officially the most powerful smartphone out there, enough to put all these A9 quad-cores to shame (remember what a dual-core A15 did to the quad-core A9 Tegra). Now it's up to you which flagship you want; in Android, unfortunately, every week brings a new flagship, with specs and sizes going higher and higher.
Did you ever take into consideration that that might be a rumor? Or the over-hype factor, like what happened with the SGS3?
-1080P Screen
-1.5 ghz quad core
-12 mp camera
-2gb ram
Remember all that? I mean it's believable because LG is late to the party, but let's not forget that the S4 Pro hasn't even been found in a device yet, and you think they're going to skip over that?
If it's true, I just want to believe it when I see clear and concrete evidence.
bala_gamer said:
@ rd_nest
it shows a dual-core Mali-400 MP for 2012-2013, but we already have a quad-core Mali-400...? The graph is interesting though
Yea, this graph is representative of the ARM portfolio. That dual-core Mali (year 2012-2013) was projected for a device in the entry-mid range level. If some company releases a phone in the entry level with Mali400 MP2, I think it's quite acceptable in 2012.
Hello,
I backed this interesting project on Kickstarter : Parallella: A Supercomputer For Everyone. Basically it's a 3.4'' x 2.1'' board.
It has open access (e.g. no NDAs), it is based on free and open source development tools and libraries, and it's very affordable (the project targets to make boards available for $100 a piece).
This project only has 25 hours left to reach its funding goal. You can help it reach its funding goal by spreading the word, or even better, by becoming a backer as well!
Looking forward to your comments!
Cheers,
-- Freddy
PS : as a XDA forum noob I'm not able to add any URLs to this post, but searching for "Parallella: A Supercomputer For Everyone" will do the trick!
Visionscaper said:
Hello,
I backed this interesting project on Kickstarter : Parallella: A Supercomputer For Everyone. Basically it's a 3.4'' x 2.1'' board.
It has open access (e.g. no NDAs), it is based on free and open source development tools and libraries, and it's very affordable (the project targets to make boards available for $100 a piece).
This project only has 25 hours left to reach its funding goal. You can help it reach its funding goal by spreading the word, or even better, by becoming a backer as well!
Looking forward to your comments!
Cheers,
-- Freddy
PS : as a XDA forum noob I'm not able to add any URLs to this post, but searching for "Parallella: A Supercomputer For Everyone" will do the trick!
Interesting proposal, this FPGA+SOC is the same as on the Zedboard which costs $299.
It's a good deal even without the parallel co-processor if you want ARM with the programmable FPGA setup.
It's useful for a whole host of embedded and robotic applications if you need the FPGA.
I guess the co-processor is a nice addition too.
Of course there is a risk that they won't deliver, I guess there is no guarantee that people who pledge get the board at the end.
I pledged for 2 boards, hopefully they will come through.
And the link.
http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone
I've backed it for 2 boards, and will be checking anxiously to see if it passes the funding goal tomorrow!!! Fingers crossed.
Very Cool!!
Some More Info:
How many cores do the initial $99 Epiphany-III based Parallella boards have?
2 ARM-A9 cores and 16 Epiphany cores.
When will the 64-core Parallella boards be available?
We will be offering the Epiphany-IV based 66 (2+64) core version Parallella boards as soon as we reach our stretch funding goal of $3M. The reward will be available for those who pledge more than $199. The estimated delivery time for the 64 core boards would be May, 2013.
Why do you call the Parallella a supercomputer?
The Parallella project is not a board; it's intended to be a long-term computing project and community dedicated to advancing parallel computing. The current $99 board isn't considered a supercomputer by 2012 standards, but a cluster of 10 Parallella boards would have been considered a supercomputer 10 years ago. Our goal is to put a bona fide supercomputer in the hands of everyone as soon as possible, but the first Parallella board is just the first step. Once we have a strong community in place, work will begin on PCIe boards containing multiple 1024-core chips with 2048 GFLOPS of double-precision performance per chip. At that point, there should be no question that the Parallella would qualify as a true supercomputing platform.
Where can I learn more about the Epiphany processors?
Introduction:
http://www.adapteva.com/introduction
Microprocessor Report:
http://www.adapteva.com/news/adapteva-more-flops-less-watts/
Epiphany Datasheets:
http://www.adapteva.com/products/silicon-devices/e16g301
http://www.adapteva.com/products/silicon-devices/e64g401
Why is there only 1GB of RAM?
The current board configuration only supports up to 1GB of SDRAM. This is a limit of our current host ARM CPU. If the Parallella project gets funded, there will be more boards coming that have significantly more RAM.
Will Parallella run Windows?
The Parallella board uses a dual core 32 bit A9 ARM CPU that currently supports Ubuntu 11.10 and bare bone Linux. Our plan is to move to Ubuntu LTS 12.04 as soon as possible. It may be possible to support Windows through Wine or a virtual machine going forward, but we haven't checked those options yet. It may not be practical due to the memory limitation of the board.
Will you open source the Epiphany chips?
Not initially, but it may be considered in the future.
Why is the performance so much lower than a leading edge GPU or CPU?
The Epiphany chips are much smaller than high-end CPUs and GPUs. The 64-core Epiphany chip only occupies 10mm^2, about 1/30th the size of modern GPUs and CPUs. If we scaled our chips up to the same die size, the Epiphany chips would win in terms of raw performance. Still, that's not really the point. For a 5 Watt power envelope, it's energy efficiency that matters.
Why should I buy this board instead of Raspberry Pi?
We think you should buy both! The Raspberry Pi has a much bigger ecosystem at the moment and is a great starting point. Still, the Parallella board has some distinct advantages:
1.) 10-50x more performance than the Raspberry Pi
2.) An accelerator that can be programmed in OpenCL/ C/ C++
3.) Open specs/documents
4.) Gigabit ethernet
5.) More flexible and powerful GPIO
Why do you say the Parallella is a 45GHz computer?
We have received a lot of negative feedback regarding this number so we want to explain the meaning and motivation. A single number can never characterize the performance of an architecture. The only thing that really matters is how many seconds and how many joules YOUR application consumes on a specific platform.
Still, we think multiplying the core frequency(700MHz) times the number of cores (64) is as good a metric as any. As a comparison point, the theoretical peak GFLOPS number often quoted for GPUs is really only reachable if you have an application with significant data parallelism and limited branching. Other numbers used in the past by processors include: peak GFLOPS, MIPS, Dhrystone scores, CoreMark scores, SPEC scores, Linpack scores, etc. Taken by themselves, datasheet specs mean very little. We have published all of our data and manuals and we hope it's clear what our architecture can do. If not, let us know how we can convince you.
Does Parallella come with an Operating System?
Yes, the Parallella prototypes have been extensively tested with Ubuntu 12.04. The Ubuntu O/S runs on the dual-core ARM A9 CPU on the board.
I see nothing very new in this. Why? http://www.nvidia.co.uk/object/cuda_home_new_uk.html
An Nvidia card for $100 can have ~96 computing units, and these can communicate over a bus much faster than 1.4GB/s.
Impressive!! Good! Interesting!
Rebellos said:
I see nothing very new in this. Why? http://www.nvidia.co.uk/object/cuda_home_new_uk.html
An Nvidia card for $100 can have ~96 computing units, and these can communicate over a bus much faster than 1.4GB/s.
Doesn't nvidia produce graphics cards?
This is a full computer, with CPU(s) and complete motherboard
Sent from my HTC One S using XDA app
hiu115 said:
Doesn't nvidia produce graphics cards?
This is a full computer, with CPU(s) and complete motherboard
Sent from my HTC One S using XDA app
I think they are both very interesting development platforms, but in the end the question is: what would you do with it?
There are plenty of interesting dev boards and highly parallel computing platforms out there. Another is Tilera (which is obviously more focused on networking: L4-L7 filtering, UTM applications).
I know that CUDA runs on a GPU designed for graphics, but it is still a processor and you can use it to, say, offload computational or compilation work, so you could build a highly parallel build server. There are many more applications you could find for a processor like that. It would require a host computer with PCIe and a large, high-output power supply.
The interesting note with parallella is it's power usage:
Once completed, the 64-core version of the Parallella computer would deliver over 90 GFLOPS of performance and would have horsepower comparable to a theoretical 45 GHz CPU [64 CPU cores * 700MHz] on a board the size of a credit card, while consuming only 5 Watts under typical workloads. For certain applications, this would provide more raw performance than a high-end server costing thousands of dollars and consuming 400W.
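The headline numbers in that quote are easy to sanity-check: the "45 GHz" figure is simply cores times clock, and the GFLOPS figure implies roughly two floating-point operations per core per cycle (my own back-of-the-envelope reading, not an official spec):

```python
# Back-of-the-envelope check of the quoted Parallella figures
cores = 64
clock_ghz = 0.7                 # 700 MHz per Epiphany core

aggregate_ghz = cores * clock_ghz
print(aggregate_ghz)            # 44.8 -> the "45 GHz" marketing number

gflops = 90                     # quoted peak for the 64-core board
print(gflops / aggregate_ghz)   # ~2 floating-point ops per core per cycle
print(gflops / 5)               # ~18 GFLOPS per watt at the quoted 5 W
```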
But again, what would you do with a board like this?
I can think of a few applications, but I don't think they would be android based.
Why the OnePlus 6 is one of XDA's favorite gaming phones of 2018
The OnePlus 6 is here and we at XDA have been testing it and sharing our thoughts with you on the portal. Our research has shown that this phone is the best gaming phone of 2018 so far. OnePlus tagged the OP6 with the slogan "The Speed You Need" and this proves to be true.
Powered by the Snapdragon 845 and options for 6 or 8GB of RAM, this is the fastest phone on the market right now. The Adreno 630 GPU makes the most recent games like PUBG run effortlessly at 30FPS, which is the max framerate for the game. There is also a noticeable improvement in game load-times. We tested Asphalt 8, and PUBG load times against the OnePlus 5T and here are the results:
Asphalt app launch time:
OnePlus 5T: 5.18 seconds
OnePlus 6: 4.95 seconds
PUBG app launch time:
OnePlus 5T: 19.79 seconds
OnePlus 6: 17.98 seconds
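For what it's worth, those launch-time figures work out to roughly a 4% and a 9% improvement:

```python
# Percentage improvement in the quoted launch times (OnePlus 5T -> OnePlus 6)
timings = {"Asphalt 8": (5.18, 4.95), "PUBG": (19.79, 17.98)}

for app, (op5t, op6) in timings.items():
    print(f"{app}: {100 * (op5t - op6) / op5t:.1f}% faster")
# Asphalt 8: ~4.4% faster, PUBG: ~9.1% faster
```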
This performance extends to the UI as well. OnePlus always goes with a near-stock Android experience. They are constantly optimizing their software to ensure the smoothest UX possible. Navigating your phone results in very few dropped frames and animations that are quick and lag-free.
"The Adreno 630 featured in the OnePlus 6’s Snapdragon 845 is actually one of the beefiest specification upgrades this new flagship brings. This GPU features a revamped architecture, with Qualcomm claiming a 30% boost to graphics performance and 30% in power reduction (at the same level of performance as last 2017’s Snapdragon 835), something which we were able to verify in our Snapdragon 845 hands-on earlier this year." - Mario Serrafero
With up to 8GB of RAM, the OnePlus 6 has the same great RAM management that we have seen in previous phones. We were able to load four mobile games and have them remain in memory (in particular, Asphalt 8, Lineage II: Revolution, PUBG and Modern Combat 5), so gamers should appreciate the additional RAM even if the number of apps that can be held at any given time is limited.
"The OnePlus 6 certainly feels extremely fast, and when you do try and measure its speed for improvements you can also find some small steps forward. I get to use multiple phones every year, and often carry two phones at any given time — I get to notice the speed advantage every day, even if it’s not always that significant. For example, while I’ve grown fond of my Galaxy Note 8 (which brought its fair share of improvements) the difference between that and the OnePlus 6 is clear and immediate the moment I switch phones. I’m not just talking app launch speeds here, either, it’s an advantage that permeates the user experience. This is an area where OnePlus has been consistently surpassing competitors, and that’s almost become popular knowledge with reviews, YouTube speed tests and user feedback all agreeing on the matter."- Mario Serrafero
Check out more of our coverage on the OnePlus 6 here:
OnePlus 6 Speed, Smoothness & Gaming XDA Review: Living up to the Slogan
OnePlus 6 Hands-On: Redefined Speed and a Premium Design that Reflects 2018’s Smartphone Trends
Check out the OnePlus 6 forums to see what users think about this phone.
OnePlus 6 Forums
We thank OnePlus for sponsoring this post. Our sponsors help us pay for the many costs associated with running XDA, including server costs, full time developers, news writers, and much more. While you might see sponsored content (which will always be labeled as such) alongside Portal content, the Portal team is in no way responsible for these posts. Sponsored content, advertising and XDA Depot are managed by a separate team entirely. XDA will never compromise its journalistic integrity by accepting money to write favorably about a company, or alter our opinions or views in any way. Our opinion cannot be bought.
What is this shameless spam post doing in our OP5T threads?????????????????.............take your "notch", and get out!! lol
Isn't it against forum rules ?
Rule 11 and 13 at least.
Aklo01 said:
Isn't it against forum rules ?
Rule 11 and 13 at least.
It says: "sponsored by OnePlus"............so that means, money talks!!
Aklo01 said:
Isn't it against forum rules ?
Rule 11 and 13 at least.
It is.
XDARoni said:
Why the OnePlus 6 is one of XDA's favorite gaming phones of 2018
The OnePlus 6 is here and we at XDA have been testing it and sharing our thoughts with you on the portal. Our research has shown that this phone is the best gaming phone of 2018 so far. OnePlus tagged the OP6 with the slogan "The Speed You Need" and this proves to be true.
Powered by the Snapdragon 845 and options for 6 or 8GB of RAM, this is the fastest phone on the market right now. The Adreno 630 GPU makes the most recent games like PUBG run effortlessly at 30FPS, which is the max framerate for the game. There is also a noticeable improvement in game load-times. We tested Asphalt 8, and PUBG load times against the OnePlus 5T and here are the results:
Asphalt app launch time:
OnePlus 5T: 5.18 seconds
OnePlus 6: 4.95 seconds
PUBG app launch time:
OnePlus 5T: 19.79 seconds
OnePlus 6: 17.98 seconds
This performance extends to the UI as well. OnePlus always goes with a near-stock Android experience. They are constantly optimizing their software to ensure the smoothest UX possible. Navigating your phone results in very few dropped frames and animations that are quick and lag-free.
"The Adreno 630 featured in the OnePlus 6’s Snapdragon 845 is actually one of the beefiest specification upgrades this new flagship brings. This GPU features a revamped architecture, with Qualcomm claiming a 30% boost to graphics performance and 30% in power reduction (at the same level of performance as last 2017’s Snapdragon 835), something which we were able to verify in our Snapdragon 845 hands-on earlier this year." - Mario Serrafero
With up to 8GB of RAM, the OnePlus 6 has the same great RAM management that we have seen in previous phones. We were able to load four mobile games and have them remain in memory (in particular, Asphalt 8, Lineage II: Revolution, PUBG and Modern Combat 5), so gamers should appreciate the additional RAM even if the number of apps that can be held at any given time is limited.
"The OnePlus 6 certainly feels extremely fast, and when you do try and measure its speed for improvements you can also find some small steps forward. I get to use multiple phones every year, and often carry two phones at any given time — I get to notice the speed advantage every day, even if it’s not always that significant. For example, while I’ve grown fond of my Galaxy Note 8 (which brought its fair share of improvements) the difference between that and the OnePlus 6 is clear and immediate the moment I switch phones. I’m not just talking app launch speeds here, either, it’s an advantage that permeates the user experience. This is an area where OnePlus has been consistently surpassing competitors, and that’s almost become popular knowledge with reviews, YouTube speed tests and user feedback all agreeing on the matter."- Mario Serrafero
Check out more of our coverage on the OnePlus 6 here:
OnePlus 6 Speed, Smoothness & Gaming XDA Review: Living up to the Slogan
OnePlus 6 Hands-On: Redefined Speed and a Premium Design that Reflects 2018’s Smartphone Trends
Check out the OnePlus 6 forums to see what users think about this phone.
OnePlus 6 Forums
We thank OnePlus for sponsoring this post. Our sponsors help us pay for the many costs associated with running XDA, including server costs, full time developers, news writers, and much more. While you might see sponsored content (which will always be labeled as such) alongside Portal content, the Portal team is in no way responsible for these posts. Sponsored content, advertising and XDA Depot are managed by a separate team entirely. XDA will never compromise its journalistic integrity by accepting money to write favorably about a company, or alter our opinions or views in any way. Our opinion cannot be bought.
Don't have to take pictures with a sub-par camera!