Hardware. Pure and simple. The Nexus One hardware was great at the time, but there are a few things in the Nexus One's hardware that needed to be upgraded, or that they wanted to support in their new dev phone:
1) Proper Multi-touch screen.
The Nexus One's screen isn't multi-touch, and it's hardly even dual-touch. It's a single-touch screen that offers some limited dual-touch support which only really works for pinch-to-zoom. The two-finger rotate gesture in the new version of Maps isn't supported on the Nexus One.
2) Front facing camera.
The iPhone has one, and made it somewhat popular. Google needed it in their dev phone to keep up.
3) PowerVR SGX540 GPU.
The PowerVR SGX540 is *the* most powerful mobile GPU on the market. It's significantly better than the Adreno 200 found in the N1, and has roughly double the power of the PowerVR SGX535 in the iPhone 4 and iPad. The Galaxy S maxes out most of the commonly used benchmarks, and comes close to maxing out NenaMark1 too.
4) Wolfson sound chip is brilliant
The Galaxy S phones have *the* best sound chip on the market, and the Nexus S has the same chip.
Check out the "perfect audio quality" part in GSMArena's review of the Galaxy S.
Oh, and there's also the NFC chip, Super AMOLED screen, three-axis Gyroscope, and larger battery.
This is a great analysis, Rawat.
I'll be really interested to see how quickly my Nexus One gets Gingerbread. If it takes weeks after the 16th, or until after the new year, then I'd have to agree.
I think the trend will be that newer versions of Android will be developed on the Nexus S, and as such it will be the first to receive them, and the N1 will get the updates around a month or so later, as long as the device meets the minimum spec.
Google said they do their OS development on one device. I think it was Andy Rubin when he was showing parts of the Moto tablet, and it was maybe in one of the Nexus S / Gingerbread phone videos.
The Nexus One actually has an Adreno 200; the 205s are much improved, as seen in the G2, Desire HD and myTouch 4G. Also, the new Snapdragons are believed to be on par with, if not better than, the Hummingbird CPUs.
Some comparison: androidevolutions.com/2010/10/13/gpu-showdown-adreno-205-msm7230-in-htc-g2-vs-powervr-sgx540-hummingbird-in-samsung-galaxy-s/
Indeed you're correct. 1st gen chips had adreno 200, 2nd gen had 205s.
I don't think the GPU and CPU are the reason, more so the screen, along with Samsung's ability to produce said screens.
adox said:
The nexus one actually has an adreno 200 the 205's are much more improved as seen in the g2,desire hd,and my touch 4g. Also the new snapdragons are believed to be on par if not better than hummingbird cpu's
Some comparison androidevolutions . com /2010/10/13/gpu-showdown-adreno-205-msm7230-in-htc-g2-vs-powervr-sgx540-hummingbird-in-samsung-galaxy-s/
The CPUs may be on par. However, the CPU isn't what needs improving on the Snapdragons.
This is correct. The SGX540 does perform about 2x as fast as the SGX530 (found in the Droid X, Droid 2, iPhone 3GS, and a variation of it in the iPhone 4). Unfortunately, Samsung's Galaxy S has been using the same GPU for many months now, so TI is playing catch-up with Samsung's SoC. To be fair, other manufacturers aren't exactly doing any better. Qualcomm's second-generation GPU, the Adreno 205, also performs significantly worse than the SGX540, and the (soon to be released) Tegra 2's GPU is also expected to be outperformed by the SGX540. With Samsung claiming Orion will improve GPU performance by another 3-4x over the SGX540, that must sound scary to other manufacturers!
SGX540 = Hummingbird's GPU.
GPU means a ton when it comes to what you're actually going to see in action on the screen.
In the link I posted that doesn't seem so; the GPU actually fared well against the Hummingbird in the Epic.
adox said:
I don't think the gpu and CPU are the reason more so the screen along with samsungs ability to prodce said screens.
Google said they added more features for better game programming. That's one of the major improvements in 2.3, so why would they pick the screen over the GPU? Galaxy S phones are considered some of the best devices for Android gaming, so it makes a lot of sense to have Samsung make a phone. The screen is icing on the cake. I bet Samsung is going to use Super AMOLED screens a lot more on the big phones they manufacture.
so true cant wait!
adox said:
In the link I posted that doesn't seem so, the gpu actually faired well against the humming bird in the epic
On one benchmark. I wouldn't read into those results too much
http://www.anandtech.com/show/4059/nexus-s-and-android-23-review-gingerbread-for-the-holidays
AnandTech review
Goddammit!!! I can't wait till Thursday!!!
Rofl I can't wait till there are tons of threads started such as "Goddammit I LOVE this phone!!!"
Haha goddammit i can't wait to post in those threads.. I'm so excited... New phone, new network... PUMPED
Hmm.. sounds awesome..
But hey, does anyone know if we can open the battery cover to replace the battery? I'm too used to carrying two batteries.. I need it for long weekends with heavy usage of the phone.. >.<
I didn't find anything about this :3
yah... duh haha
Sure you can
Here's a view of the phone with the cover off:
Hello guys! Wandering around the net I found some charts comparing benchmarks of the stock Tegra 2 vs. Tegra 3 relative to other processors. I've always wondered whether, now that the processor is overclocked to 1.7 GHz (70% more clock), the gap to the future quad-core has shrunk (which, remember, Nvidia has said is 5 times faster than the current SoC).
Can some developers make this comparison? It would be interesting to see the benefits.
"when looking at the compiler versions and settings used to compile the CoreMark benchmark, the Core 2 Duo numbers were produced via GCC 3.4 and only the standard set of optimizations (-O2), while the Tegra 3 numbers were run on a more recent GCC 4.4 with aggressive optimizations (-O3). Il Sistemista website took a Core 2 Duo T7200 and re-ran the benchmark compiled with GCC 4.4 and the same optimization settings. The results were no longer in favor of NVIDIA, as the Core 2 chip scored about 15,200 points, compared to the Tegra's 11,352."
CoreMark benchmark comparing nVidia Tegra 3 @1GHz clock to various real and hypothetical products
The CoreMark/MHz index shows how many CoreMarks a particular chip can extract given its clock frequency.
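To make the CoreMark/MHz idea concrete, here is a quick Python sketch that normalizes the scores quoted in the article above by clock. The 1 GHz figure for Tegra 3 comes from the chart caption; the 2.0 GHz clock for the Core 2 Duo T7200 is an assumption on my part, so treat the output as illustrative only.

Code:
# Quick sketch: normalize CoreMark scores by clock frequency (CoreMark/MHz).
# Scores are the ones quoted above (both compiled with GCC 4.4, -O3);
# clocks are assumed: Tegra 3 at 1.0 GHz, Core 2 Duo T7200 at 2.0 GHz.
scores_mhz = {
    "Tegra 3 (quad Cortex-A9)": (11352, 1000),  # (score, clock in MHz)
    "Core 2 Duo T7200":         (15200, 2000),
}

for chip, (score, mhz) in scores_mhz.items():
    print(f"{chip}: {score / mhz:.2f} CoreMark/MHz")

On those numbers the Tegra 3 extracts more CoreMarks per MHz, but the absolute score still favours the Core 2 Duo once both are compiled the same way.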
WoW.... More powerful than a Core 2 Duo O_O!!!
Now THAT'S something to get excited about!!!
If you read the bolded part, they fixed the compiler optimizations and the Tegra did not hold up.
At this point it would be really interesting to know how much the gap has been reduced with the super-overclock of the Tegra to 1.7 GHz (is that really the maximum achievable?). I don't know how to operate CoreMark 1.0, so I'm asking someone in the community to check whether the values are underestimated or substantially improved compared to those offered by Nvidia, and especially to see whether at this point Tegra 3 is really worth it.
I recently read that Nvidia is pushing a lot of their projects and has already started development of Tegra 4. The specifications include manufacturing the Tegra 4 chip at 28nm. IMHO it would make more sense to wait for that before updating the Tegra 2 hardware (so these benchmarks could be a way to understand this in that context).
I highly doubt many people can get their Tegra 2 up to 1.7; I can only get mine stable at 1.4, probably 1.5 and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7 GHz produces a lot of heat, like almost in the danger zone, but I could be exaggerating a little.
Sent from my ADR6300 using Tapatalk
In any case it would be nice to have a comparison across all frequencies against the values offered by Nvidia, to give owners of the device an idea of the hardware's potential. But I think it's a matter of development and optimization, as we are seeing here on the forum these days... the tablet is slowly improving on all fronts.
I'm sure that once we have access to the OS code the tf will run like a beast! I had an OG Droid and the difference between stock and modded was mind blowing.
Sent from my ADR6300 using Tapatalk
brando56894 said:
I highly doubt many people can get their tegra 2 up to 1.7, I can only get mine stable at 1.4, probably 1.5 and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7ghz produces a lot of heat, like almost in the danger zone but I could be exaggerating it a little.
Sent from my ADR6300 using Tapatalk
I'm one of the few with one that does 1.7GHz no problem.
It's not only the CPU speed of the Tegra 3 that will pwn the Transformer 1: a 12-core GPU, support for dynamic lighting, high-profile HD decoding and much more. No way a Tegra 2 will outperform the Tegra 3.
Just face it: the Transformer 1 will be pwned big time by the Transformer 2, launching Oct/Nov 2011....
Not to mention the new TI CPUs coming Q1 2012.... they will pwn even more than the Tegra 3...
Sent from my Desire HD using XDA App
Yeah, it is true that it's stronger, but my thesis is that Tegra 3 is only a technology shift. Although it has 12 computing cores capable of dynamically processing lighting, this CPU produces these values, and it would be interesting to relate them to the values of an overclocked Tegra 2 and see whether Tegra 3 can really be a replacement for Tegra 2. We are faced with two different SoCs, the first a dual-core, the second a quad-core, but common sense tells me that if well exploited, this device could give the next model a long, hard time (I remember the story of the HTC HD2, released back in 2009 and today one of the longest-running phones thanks to constant software rework). I argue that these devices don't need raw power so much as efficient computation, since batteries are not keeping pace, and adding more capacity probably can't make devices sleeker in the near future.
Does someone know how to use CoreMark to update these values?
Of course the Tegra 2 will beat the crap out of the Tegra 3, that's like comparing a Core 2 Duo to a Core i7 lol
chatch15117 said:
I'm one of the few with one that does 1.7GHz no problem.
Lucky bastard! :-D How many mV are you running it at?
Never tried 1.7 GHz but have done 1.5-1.6 on 3 tabs without messing with voltages. Actually, right now I'm running 1.2 GHz and messing with LOWERING the voltages.
Coremark download
If anyone can compile CoreMark to run under Android, here is the software to download:
Download the Coremark Software readme file
Download the CoreMark Software documentation
This documentation answers all questions about porting, running and score reporting
Download the Coremark Software
Download the CoreMark Software MD5 Checksum file to verify download
Use this to verify the downloads: >md5sum -c coremark_<version>.md5
Download the CoreMark Platform Ports
These CoreMark ports are intended to be used as examples for your porting efforts. They have not been tested by EEMBC, nor do we guarantee that they will work without modifications.
Here is the results table:
http://www.coremark.org/benchmark/index.php
If a dev can compile it and run some tests to compare the results at the new frequencies, that would be great!
brando56894 said:
I highly doubt many people can get their tegra 2 up to 1.7, I can only get mine stable at 1.4, probably 1.5 and maybe 1.6 if I spent a lot of time messing with the voltages. I think I read somewhere that 1.7ghz produces a lot of heat, like almost in the danger zone but I could be exaggerating it a little.
Sent from my ADR6300 using Tapatalk
There is no danger zone.. at least not on my kernel. I haven't posted the one for 3.1 (only 3.01), but there is a built-in downclock once the chip reaches temps beyond 47°C... This only happens if the tablet (in laptop form) is left in a bag with the screen on (by accident, sometimes the hinge says it's open when it's closed).. So if you can run 1.7 GHz stable, there's no reason not to run it.. as you'll be protected from heat by the built-in thermal throttling.
Default thermal throttling had some seriously bizarre ranges.. 80-120°C..
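To illustrate the kind of throttle policy being described, here is a minimal Python sketch, not the actual kernel code: the 47°C trip point is the one mentioned above, while the frequency steps and hysteresis value are made up for the example.

Code:
# Illustrative sketch of a simple thermal throttle policy. The 47 C trip point
# matches the post above; frequency steps and hysteresis are assumptions.
FREQ_STEPS_KHZ = [1700000, 1400000, 1200000, 1000000]  # assumed OC table
TRIP_C = 47
HYSTERESIS_C = 3

def next_freq(temp_c: float, current_khz: int) -> int:
    """Clock down one step above the trip point, clock back up after cooling."""
    idx = FREQ_STEPS_KHZ.index(current_khz)
    if temp_c > TRIP_C and idx < len(FREQ_STEPS_KHZ) - 1:
        return FREQ_STEPS_KHZ[idx + 1]   # too hot: drop one frequency step
    if temp_c < TRIP_C - HYSTERESIS_C and idx > 0:
        return FREQ_STEPS_KHZ[idx - 1]   # cooled off: restore one step
    return current_khz

print(next_freq(49.0, 1700000))  # -> 1400000, throttled
print(next_freq(42.0, 1400000))  # -> 1700000, restored

The point is simply that with a sane trip point like this, a stable 1.7 GHz overclock protects itself; the stock 80-120°C ranges never kick in early enough to matter.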
I wish nvidia's boards were produced like this:
http://hardkernel.com/renewal_2011/main.php
http://www.hardkernel.com/renewal_2011/products/prdt_info.php?g_code=G129705564426
Something modular would be a nice change in the mobile tech segment.. Swapping SoCs would require some serious low-level work tho.. Unless writing to the bootloader and boot partitions were made easy via external interface.. something like that.
And so? Are you able to use CoreMark to run some benchmarks on your device and compare them with the Tegra 3 results? Because that's what I would test here.
devilpera64 said:
"when looking at the compiler versions and settings used to compile the CoreMark benchmark, the Core 2 Duo numbers were produced via GCC 3.4 and only the standard set of optimizations (-O2), while the Tegra 3 numbers were run on a more recent GCC 4.4 with aggressive optimizations (-O3).
That statement there tells why benchmarks are extremely useless and misleading.
I'm one of the few that can reach 1.7 GHz no problem too. Never ran it long enough to get hot though, maybe 30 mins just to run benchmarks. Never had FCs or had my system crash.
The HTC One's GLBenchmark only scores 34 FPS at 1080p offscreen; this is much lower than the Samsung SHV-E300S, which scores 41.3 FPS in the same test, both using the Snapdragon 600. IIRC the HTC One is using LPDDR2 RAM, so are we seeing a lack of bandwidth compared to the Samsung, which may use LPDDR3 (which is supported by the S600)?
http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro25&D=HTC+One
How do you know the HTC One uses LPDDR2 memory?
http://www.htc.com/uk/smartphones/htc-one/#specs
http://www.anandtech.com/show/6754/hands-on-with-the-htc-one-formerly-m7/2
Turbotab said:
HTC ONE GLBenchmark only scores 34 FPS at 1080P offscreen, this is much lower than the Samsung SHV-E300s which scores 41.3 FPS, both using Snapdragon 600, in the same test. IRC HTC One is using LPDDR2 RAM, so are we seeing a lack of bandwidth compared to the Samsung which may use LPDDR3, which is supported by the S600.
http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro25&D=HTC+One
My first question would be is how they even got a benchmark of the SHV-E300?
How do any results appear on GLbenchmark?
I believe with GLBenchmark, that if you don't register / login before running the test, it automatically uploads to their server for public viewing, so maybe it was done intentionally, or somebody forgot to login?
fp581 said:
he is spamming all around the htc one just look at his posts plz ban him from posting in any htc forum ever again.
he probably works in sony nokia or samsung
Who are you talking about?
Sorry, wrong person, I'll delete that last one.
But I would love pics of that benchmark for proof.
Dude I was going to go atomic, I admit it I have a terrible temper
I believe the benchmark was run by a German Android site, called Android Next, there is a video on Youtube, the GLBenchmark starts at 2.22
http://www.youtube.com/watch?v=Wl1dmNhhcXs&list=UUan0vBtcwISsThTNo2uZxSQ&index=1
Thanks turbo for advancing my knowledge... what a shame they didn't choose LPDDR3, but I think it's not a big issue these days.
Just to temper this news, we must remember that the HTC One is running at 1.7 GHz, while the Samsung device is running at 1.9.
Although 200 MHz does not seem like much, it could possibly account for the 7 fps difference when you factor in the difference in UI.
If in fact the Samsung device really has DDR3 RAM, and the difference (after accounting for clock speed) is 2-3 fps, I can understand why HTC opted not to include it. It was not worth the extra cost, most likely.
GLBenchmark is a test of GPU performance, and isn't really changed by CPU clock speed, but it is affected by bandwidth.
As a test, I downclocked my Nexus 7 from an overclocked 1.6 GHz to just 1.15 GHz, ran GLBench and got 10 FPS. I then ran the test again with the CPU at 1.6 GHz; the result, 10 FPS again.
I've benched the N7 with both CPU & GPU overclocked to the same level as the Transformer Infinity, which gets 13 FPS, but I always get 10 FPS. The reason is that my N7 has lower memory bandwidth than the Transformer Infinity, because it uses slower RAM. That is a difference of 30% in FPS, just because of lower bandwidth.
I read that LPDDR3 starts at 800 MHz, or 12.8 GB/s in a dual-channel configuration, whereas LPDDR2 maxes out at 533 MHz, or 8.5 GB/s max bandwidth in a dual-channel configuration.
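For reference, those two figures follow directly from the interface math. A rough Python sketch, assuming 32-bit channels, double-data-rate signalling and a dual-channel setup (the usual phone configuration; actual implementations can differ):

Code:
# Rough peak-bandwidth math for LPDDR2 vs LPDDR3, assuming 32-bit channels,
# double data rate, and a dual-channel configuration as in the post above.
def peak_bandwidth_gb_s(clock_mhz: float, bus_bits: int = 32, channels: int = 2) -> float:
    transfers_per_s = clock_mhz * 1e6 * 2        # DDR: two transfers per clock
    bytes_per_transfer = bus_bits / 8 * channels
    return transfers_per_s * bytes_per_transfer / 1e9

print(f"LPDDR2-1066: {peak_bandwidth_gb_s(533):.1f} GB/s")   # ~8.5 GB/s
print(f"LPDDR3-1600: {peak_bandwidth_gb_s(800):.1f} GB/s")   # ~12.8 GB/s

So on paper LPDDR3 gives roughly 50% more peak bandwidth, which is in the same ballpark as the FPS gaps being discussed here.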
In that case the results are quite disappointing.
All these fantastic new phones, and so much disappointment.
Sent from my GT-I9300 using xda premium
Tomatoes8 said:
They could have used faster memory for the same price if they didn't cut off Samsung as a supplier. Makes you wonder where their priorities lie. Making the best products possible or just going with the motions.
No one is going to take anything you say here seriously, as you've managed to have 2 threads closed in the last 30 mins. One of those inane posts you made involved you saying that HTC is going to be paying, according to your genius calculation, 20% of their profits to Apple (I forget what insanely unintelligent reason you gave). Yeah, because being able to completely migrate data from 1 completely different phone to another is such a bad idea for a company that wants to push their product.
So, what is the per unit cost of what HTC is paying for RAM now vs. what they could have gotten from Samsung? Exactly, you have no idea. I also didn't hear anything about HTC "cutting off" Samsung as a supplier, but maybe I missed it, so I google'd "htc cut off samsung supplier" and found 2 links...
http://tech2.in.com/news/smartphones/following-apple-htc-cuts-component-orders-from-samsung/505402
http://www.digitimes.com/news/a20121009PD213.html
I'm not sure if you have the capability of reading or not, but I'll spoon feed you this information, ok hunny? I've taken the info from the 1st link, since there is more there.
After Apple Inc slashed its orders for memory chips for its new iPhone from major supplier and competitor, Samsung Electronics Co Ltd, HTC too has reportedly cut down on its smartphone component orders from the South Korean company.
So, Apple cut down on memory orders. You know, they are the one's who make the iPhone? Have a logo of an Apple on their products? Steve Jobs was the CEO before he died. Anyway, I'll continue...
According to a report by DigiTimes, HTC has reduced its orders from Samsung, and instead opted to order CMOS image sensors from OmniVision and Sony. The company has also chosen to move part of its AMOLED panel orders to AU Optronics, DigiTimes reported citing ‘sources’.
Notice it said that HTC reduced its orders from Samsung, specifically on the image sensors (that's for the camera, if you didn't know) and the screen. You know, the thing on the front of your phone that you touch to make it do things? You know what I mean, right? I encourage you to read this link (or possibly have someone read it to you)...
http://dictionary.reference.com/browse/reduce
The point is that reduce isn't the same as cut off. Cutting off would require HTC not ordering ANYTHING from Samsung. Guess what? The One doesn't use an OmniVision CMOS sensor (don't forget, that's what the camera uses) or an AMOLED screen (the bright part of your phone that shows you info).
Also, this is a far better designed phone, especially in regards to hardware, than anything Samsung has ever produced. I went back to my EVO 4G LTE, mainly because I couldn't stand the terrible build quality of the Note 2. It just feels like a cheap toy. And, IMO, Sense is far better than TW. Samsung may have the market right now because of the Galaxy line of products, but that doesn't mean that HTC is out of the game by any means.
Seriously, attempt to use just a bit of intelligence before opening your mouth and spewing diarrhea throughout the One forums. As the saying goes: "it's better to keep your mouth shut and have people think you're an idiot, then to open your mouth and prove it". Unfortunately for you, it's too late.
I really think Turbo was too hasty to open a new thread for this as we've been discussing this in the mega thread
http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro25&D=HTC+One
It scores 34 fps in Egypt HD 1080p offscreen, while the leaked Samsung S600 device scores 41 fps, which is perfectly in line with Qualcomm's promised speed (3x Adreno 225).
Here is a video of what I suspect is the source of the benchmark, because we had no benchmark before it:
http://www.youtube.com/watch?v=Wl1dmNhhcXs
Notice how the battery is almost at its end (the HTC bar at this stage means it's in the last 25%); also notice the activity in the notification area.
More importantly, the poster ran more than a few full benchmarks, like Quadrant, before running GLBenchmark; this alone is enough to lower the score, especially since the Adreno 320 was known to throttle in the Nexus 4.
I think benchmark scores should not be relied on in events like this, especially with hundreds of hands messing with the device. We learned from the One X launch, where videos popped up showing horrible performance, that those units turned out to be very far from the final device in your hands.
Finally, the One X and Nexus 7 at the same GPU clock, the first with DDR2 and the second with DDR3, score the same in GLBenchmark.
In other words it's worrying, but it's best to wait for proper testers like Anand.
Thread cleaned
...from some serious trolling. There should be no trace from him for some time .
but remember:
But...
I just wonder why a Samsung phone uses high-end parts from Qualcomm instead of Samsung's own processors. But I'm not into Samsung devices so far, so I won't judge this.
Gz
Eddi
Here's a second video also showing Egypt off screen bench at 34FPS.
https://www.youtube.com/watch?v=wijp79uCwFg
Skip to 3:30
Maedhros said:
Just to temper this news, we must remeber that the HTC One is running at 1.7ghz, while the Samsung device is running at 1.9.
Although 200mhz does not seem like much, it could possibly account for the 7 fps difference when u factor in the difference in UI.
If in fact the Samsung device really has DDR3 ram, and the difference (after accounting for clock speed) is 2-3 fps, I can understand why HTC opted not to include it. Was not worth the extra cost most likely.
So you're saying that 200 MHz of CPU can account for 7 fps in a GPU test?
Following what you said, the Nexus 4 should have scored 27 fps, since it has 200 MHz less...
But no, it scored 33.7... only 0.3 fps less than the One!
And you know why? First, both use the same GPU (and that's what counts in a graphics test), and second, HTC phones are always slower due to Sense!
So stop *****ing and realize that the One is no god phone.
Samsung device is running 4.2.1
I know, this is one of those silly little topics that gets thrown around every time a newer faster arm chip comes out, but this is the first time that I personally have ever seen an Arm chip as a threat to intel. When I saw the Galaxy s6 scoring around a 4800 multi-core I stopped and thought to myself, "hey, that looks pretty darn close to my fancy i5." Sure enough, the I5 5200u only scores around a 5280 in the Geekbench 64 bit multi-core benchmark. I understand that this is only possible because the Galaxy S6 has 8 cores, but it's still very impressive what Arm and Samsung were able to achieve using a fraction of the power intel has on hand. Of course I don't think that this chip will take over the market, but if Arm's performance continues increase at the same rate while maintaining the same low power draw, then intel might have some real competition in the laptop space within the near future. Heck, maybe Microsoft will bring back RT but with full app support.
I also know that I didn't account for how much power the GPU was drawing, but I feel as if that wouldn't be the only factor after seeing the issues with Core M.
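Just to put that Geekbench comparison into per-core terms, here is a very rough Python sketch. The scores are the ones mentioned above; the core counts are the nominal 8 for the Exynos 7420 and 2 for the i5-5200U, ignoring big.LITTLE and Hyper-Threading, so it is only a back-of-the-envelope illustration.

Code:
# Very rough per-core view of the Geekbench multi-core scores quoted above.
# Ignores big.LITTLE (4xA57 + 4xA53) and Hyper-Threading on the i5.
chips = {
    "Exynos 7420 (Galaxy S6)": (4800, 8),  # (multi-core score, core count)
    "Core i5-5200U":           (5280, 2),
}

for name, (score, cores) in chips.items():
    print(f"{name}: {score} total, ~{score / cores:.0f} per core")

Seen that way, the similar multi-core totals hide a large per-core gap, which is part of why the comparison is impressive for ARM yet not an immediate threat to Intel's big cores.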
I doubt they're worried. Intel CPUs are wicked fast. I have a 3-year-old i7 and it's faster than most of AMD's current-gen CPUs.
If Intel is able to apply the same methods/engineering they use on desktop CPUs to the mobile platform, I bet it will smoke anything out there, kind of like how Intel CPUs kill basically anything AMD can put out.
tcb4 said:
I know, this is one of those silly little topics that gets thrown around every time a newer faster arm chip comes out, but this is the first time that I personally have ever seen an Arm chip as a threat to intel. When I saw the Galaxy s6 scoring around a 4800 multi-core I stopped and thought to myself, "hey, that looks pretty darn close to my fancy i5." Sure enough, the I5 5200u only scores around a 5280 in the Geekbench 64 bit multi-core benchmark. I understand that this is only possible because the Galaxy S6 has 8 cores, but it's still very impressive what Arm and Samsung were able to achieve using a fraction of the power intel has on hand. Of course I don't think that this chip will take over the market, but if Arm's performance continues increase at the same rate while maintaining the same low power draw, then intel might have some real competition in the laptop space within the near future. Heck, maybe Microsoft will bring back RT but with full app support.
I also know that I didn't account for how much power the GPU was drawing, but I feel as if that wouldn't be the only factor after seeing the issues with Core M.
It is important to remember that ultimately the same constraints and limitations will apply to both Intel and ARM CPUs. After all ARM and x86 are just instruction set architectures. There is no evidence to suggest that somehow ARM is at a significant advantage vs Intel in terms of increasing performance while keeping power low. It has been generally accepted now that ISA's have a negligible impact on IPC and performance per watt. Many of these newer ARM socs like the 810 are having overheating issues themselves. The higher performance Nvidia SOCs that have impressive performance are using 10+ watts TDPs too.
Also it is always a bit tricky to make cross platform and cross ISA CPUs comparisons in benchmarks like GeekBench and for whatever reason Intel cpus tend to do relatively poorly in GeekBench compared to other benchmarks. You can try to compare other real world uses between the i5-5200U and the Exynos 7420 and I can assure you that the tiny Exynos will be absolutely no match to the much larger, wider and more complex Broadwell cores. Don't get me wrong, the Exynos 7420 is very impressive for its size and power consumption, but I don't think we can take that GeekBench comparison seriously.
The fastest low power core right now is without a doubt the Broadwell Core M which is a 4.5 watt part. This is built on Intel's 14nm process which is more advanced than Samsungs.
http://www.anandtech.com/show/9061/lenovo-yoga-3-pro-review/4
"Once again, in web use, the Core M processor is very similar to the outgoing Haswell U based Yoga 2 Pro. Just to put the numbers in a bit more context, I also ran the benchmarks on my Core i7-860 based Desktop (running Chrome, as were the Yogas) and it is pretty clear just how far we have come. The i7-860 is a four core, eight thread 45 nm processor with a 2.8 GHz base clock and 3.46 GHz boost, all in a 95 watt TDP. It was launched in late 2009. Five years later, we have higher performance in a 4.5 watt TDP for many tasks. It really is staggering."
"As a tablet, the Core M powered Yoga 3 Pro will run circles around other tablets when performing CPU tasks. The GPU is a bit behind, but it is ahead of the iPad Air already, so it is not a slouch. The CPU is miles ahead though, even when compared to the Apple A8X which is consistently the best ARM based tablet CPU.
"
---------- Post added at 04:46 AM ---------- Previous post was at 04:33 AM ----------
tft said:
I doubt they're worried. intel CPUs are wicked fast. i have a 3 year old i7 and it's faster than most of AMDs current gen CPUs.
if Intel is able to apply the same method/engineering they use on CPUs to the mobile platform, i bet it will smoke anything out there. kind of like how intel CPUs kill basically anything AMDs can put out.
This.
All of the little atom CPUs we see in mobile right now are much smaller, narrower and simpler cores than Intel Core chips. Once you see Intel big cores trickle down into mobile, it will get much more interesting.
Intel will catch up... quick too, just watch. They've been working on 64-bit for over a year now... and they're already onto 14nm. Qualcomm should be worried; I don't think they're ready for this competition. They talked trash about octa-cores and 64-bit... now they're doing both, and it seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems.
Sent from my SM-G920T using XDA Free mobile app
Technically Intel and AMD have had 64 bit for well over a decade now with AMD64/EM64T and many Intel mobile processors have had it for years, so the HW has supported it for a while but 64 bit enabled tablets/phones haven't started shipping until very recently.
Indeed Intel has been shipping 14nm products since last year and their 14nm process is more advanced than Samsung's. Note that there is no real science behind naming a process node so terms like "14nm" and "20nm" have turned into purely marketing material. For example, TSMC 16nm isn't actually any smaller than their 20nm process. Presumably Intel 14nm also yields higher and allows for higher performance transistors than the Samsung 14nm.
It is likely that Samsung has the most advanced process outside of Intel, however. I do agree that Qualcomm is in a bit of trouble at the moment, with players like Intel really growing in the tablet space and Samsung coming out with the very formidable Exynos 7420 SoC in the smartphone space. The SD810 just isn't cutting it and has too many problems. Qualcomm should also be concerned that both Samsung and Intel have managed to come out with high-end LTE radios; this was something Qualcomm pretty much had a monopoly on for years. Intel now has the 7360 LTE radio and Samsung has the Shannon 333 LTE.
rjayflo said:
Intel will catch up...quick too just watch. They've been working on 64-bit for over a year now...and they're already onto 14nm. Qualcomm should be worried, I don't think their ready for this competition. They talked trash about octa cores and 64-bits...now their doing both and seems their product is still in beta status, not ready for the real world. Intel and Samsung are gonna give them problems
I agree about Qualcomm, I actually mentioned that some time ago.
I think what happened to Nokia/BlackBerry will happen to Qualcomm: they got huge and stopped innovating, and ended up being left in the dust. Perhaps Qualcomm thought they had a monopoly and that Samsung and other device makers would continue to buy their chips..
In the end, I think the only thing Qualcomm will have left is a bunch of patents..
I understand that Core M is a powerful part, but I'm not sure I believer their TDP figures. I am, however, more inclined to believe Samsung as they achieving this performance with an soc that is within a phone; in other words, they don't have the surface area to displace large quantities of heat. Nvidia has always skewed performance per watt numbers, and, as a result, they haven't been able to put an soc in a phone for years. Now, the reason I doubt intel's claims is because of battery life tests performed by reviewers and because of the low battery life claims made by manufacturers. For instance, the New Macbook and Yoga Pro 3 aren't showing large improvements in battery life when compared to their 15w counterparts.
I'm not sure how I feel about the iPad comparison though; I feel as if you just compounded the issue by showing us a benchmark that was not only cross platform, but also within different browsers.
Also, I think I understand what you mean about how an ISA will not directly impact performance per watt, but is it not possible that Samsung and Arm could just have a better design? I mean intel and AMD both utilize the same instruction set, but Intel will run circles around AMD in terms of efficiency. I may be way off base here, so feel free to correct me.
I think that Qualcomm is busy working on a new Krait of their own, but right now they're in hot water. They got a little lazy milking 32 bit chips, but once Apple announced their 64 bit chip they panicked and went with an ARM design. We'll have to see if they can bring a 64 bit Krait chip to the table, but right now Samsung's 7420 appears to be the best thing on the market.
tcb4 said:
I understand that Core M is a powerful part, but I'm not sure I believer their TDP figures. I am, however, more inclined to believe Samsung as they achieving this performance with an soc that is within a phone; in other words, they don't have the surface area to displace large quantities of heat. Nvidia has always skewed performance per watt numbers, and, as a result, they haven't been able to put an soc in a phone for years. Now, the reason I doubt intel's claims is because of battery life tests performed by reviewers and because of the low battery life claims made by manufacturers. For instance, the New Macbook and Yoga Pro 3 aren't showing large improvements in battery life when compared to their 15w counterparts.
Technically the Core M will dissipate more than 4.5w for "bursty" workloads but under longer steady workloads it will average to 4.5w. The ARM tablet and phone SOCs more or less do the same thing. In terms of actual battery life test results, yes the battery life of most of these devices hasn't really changed since the last generation Intel U series chips but that isn't a real apples to apples comparison. As SOC power consumption continues to drop, it is becoming a smaller and smaller chunk of total system power consumption. Lenovo did a poor job IMO in originally implementing the first Core M device but Apple will almost certainly do a much better job. The SOC is only one part of the system, it is the responsibility of the OEM to properly package up the device and do proper power management, provide an adequate battery etc. Yes the new Macbook doesn't get significantly longer battery life but it also weighs only 2.0 lbs and has a ridiculously small battery. It also has a much higher resolution and more power hungry screen and yet manages to keep battery life equal with the last generation. Benchmarks have also indicated that the newer 14nm Intel CPUs are much better at sustained performance compared to the older 22nm Haswells. This is something that phone and tablets typically are very poor at.
tcb4 said:
I'm not sure how I feel about the iPad comparison though; I feel as if you just compounded the issue by showing us a benchmark that was not only cross platform, but also within different browsers.
A very fair point, browser benchmarks are especially notorious in being very misleading. I think in this case Chrome was used in all cases which helps a little. My point in showing this is that we need to take those GeekBench results with a little grain of salt. Outside of that benchmark, I don't think you'll find the A8X or Exynos 7420 getting anywhere near a higher speced Core M let alone a i5-5200U at any real world use or any other benchmark, browser based or not. Even other synthetic benchmarks like 3dmark Physics, etc don't show the Intel CPUs nearly as low as GeekBench does.
tcb4 said:
Also, I think I understand what you mean about how an ISA will not directly impact performance per watt, but is it not possible that Samsung and Arm could just have a better design? I mean intel and AMD both utilize the same instruction set, but Intel will run circles around AMD in terms of efficiency. I may be way off base here, so feel free to correct me.
This is correct.
It is certainly possible for Samsung to have a design that is more power efficient than Intel when it comes to making a 2W phone SOC, but that won't be because Samsung uses ARM ISA while Intel uses x86. At this point, ISA is mostly just coincidental and isn't going to greatly impact the characteristics of your CPU. The CPU design and the ISA that the CPU uses are different things. The notion of "better design" is also a little tricky because a design that may be best for a low power SOC may not necessarily be the best for a higher wattage CPU. Intel absolutely rules the CPU landscape from 15w and up. Despite all of the hype around ARM based servers, Intel has continued to dominate servers and has actually continued to increase its lead in that space since Intel's performance per watt is completely unmatched in higher performance applications. Intel's big core design is just better for that application than any ARM based CPU's. It is important to remember that just because you have the best performance per watt 2 watt SOC, doesn't mean you can just scale that design into a beastly 90 watt CPU. If it were that easy, Intel would have probably easily downscaled their big core chips to dominate mobile SOCs.
You frequently find some people trying to reason that at 1.2 Ghz Apple's A8 SOC is very efficient and fast and then they claim that if they could clock that SOC at 3+ Ghz then it should be able to match an Intel Haswell core, but there is no guarantee that the design will allow such high clocks. You have to consider that maybe Apple made design choices to provide great IPC but that IPC came at the cost of limiting clock frequencies.
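A toy model of that argument: treat single-thread throughput as roughly IPC x clock, and it becomes clear why "just clock the A8 at 3+ GHz" is not a free lunch, since nothing guarantees the design can reach those clocks within power and timing limits. All numbers in this Python sketch are made up for illustration, not measurements of any real chip.

Code:
# Toy model: single-thread throughput ~ IPC * clock.
# All numbers are illustrative, not measured values for any real CPU.
def throughput(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz  # "billions of instructions per second" in this toy model

wide_low_clock = throughput(ipc=4.0, clock_ghz=1.4)    # wide, mobile-style core
narrow_high_clock = throughput(ipc=2.5, clock_ghz=3.5) # narrower, high-clock core

print(wide_low_clock, narrow_high_clock)  # 5.6 vs 8.75
# Scaling the wide core to 3.5 GHz would win on paper, but the design may
# simply not close timing or stay within its power budget at that clock.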
All HTC M9+ owners!
We're only a handful on XDA yet, but getting more and more as the device rolls out to more locations, and as people who look for an elegant, high-end device with premium build quality and extras like the fingerprint scanner, 2K display and duo camera settle on this fantastic device. It's not perfect for everything, unfortunately, and not the best gaming phone out there, but in my experience it's a very well performing device in terms of call quality, reception, WiFi strength, multimedia, battery, display panel and overall UX feel and smoothness. High-end games from 2015 might stutter a bit here and there or lack some important shaders to compensate, but it's nonetheless good for older titles (3D games from 2014 and before, and all kinds of 2D games). Let's gather the experience and benchmarks of this unique device, which was maybe the first MTK device in an alu body.
Let's discuss performance-related experience, real user feel and benchmarks free of whining, facing the truth that in some respects it's not a top-notch device, and let the curious ones who are considering this device know what to expect if they choose this elegant business-class phone.
I'll start with some of my benchmarks result screenshots in separate posts.
UPDATE:
Here's my short game testing video on M9+
https://www.youtube.com/watch?v=QmLGCoI4NLw
Antutu on stock Europe base 1.61
Vellano tests base 1.61
Futuremark test base 1.61
Sent from my HTC One M9PLUS using XDA Free mobile app
My real-life experience is that in everyday chores the phone is very snappy, with some small lags when loading new apps into memory. Task switching is quite good.
There's unfortunately a memory leak issue with Android 5.0.x which kicks in after a while (as on most 5.0.x devices), but overall the UX smoothness is quite good. Occasionally some apps bring up popup windows a bit stuttery; the Facebook comments animation, for example, tends to stutter.
The Sense Home/BlinkFeed experience is just perfect. In normal operation, when no big application updates are happening in the background, I never faced any lags on the Sense Home UI.
As for games, titles from 2014 and earlier run perfectly. Newer ones might get some shaders removed or polygon counts reduced, so don't expect a jaw-dropping 3D experience. If you are OK with last years' (2014 and before) 3D game quality, the M9+ is quite good, but the latest games will most probably run in a dumbed-down mode to suit the PowerVR GPU in the M9+'s MTK chipset. The phone mostly keeps a good temperature, except when charging the battery while playing 3D-heavy games (but that's expected from most devices, especially with an alu body).
The screen quality is quite good; I got a perfect panel with the first unit, and the 2K display refreshes at 60 Hz, smooth, with lovely DPI and brightness.
The benchmarks show generally good performance in CPU-bound operations. Where the MTK chip comes up short is the GPU. All tests show 3D performance that is far below 2014's flagships. The PowerVR G6200 is from the same PowerVR Series6 family as the iPhone 5s's GPU, which, let's face it, is mediocre for a 2K display. If you face this fact and accept that gaming won't be with the highest-quality textures and shaders, you probably won't be disappointed.
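One way to see why a GPU of that generation struggles here is the pixel count it has to feed. A quick Python sketch comparing render resolutions (the M9+ panel is 2560x1440; the iPhone 5s, which used a related PowerVR Series6 part, renders at 1136x640):

Code:
# Pixel-count comparison: similar GPU family, very different workloads.
resolutions = {
    "HTC One M9+ (2K)":  (2560, 1440),
    "iPhone 5s":         (1136, 640),
    "1080p (for scale)": (1920, 1080),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
base = pixels["iPhone 5s"]

for name, px in pixels.items():
    print(f"{name}: {px / 1e6:.2f} MP, {px / base:.1f}x the iPhone 5s load")

Roughly five times the pixels of the phone that GPU generation was designed around, which matches what the 3D benchmark scores below suggest.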
I'm curious what others think...
Played with it for a few hours. First impression is that it's clearly slower than the M9. The fingerprint sensor is almost perfect, with a few hits and misses. I'll be back after a few days of using it with more adequate feedback.
Sent from my HTC_M9pw using Tapatalk
That's why I don't trust benchmarks at all: @tbalden's benchmarks show it should be on par with the M9, and I got those scores on mine; but the 3D department is awful to say the least. Godfire is light years better on the M9.
Yeah, though the AnTuTu 3D score is less than half: 3D is 9k on the M9+ and 21k on the M9, so after all it does tell you something true. The CPU of the M9+ is quite good, while the GPU is rather old, from the iPhone 5s era when display resolutions were much smaller. That says it all.
So everyday chores are quite snappy on the phone, and gaming is mediocre in high-end games.
I see, I didn't notice that. I don't understand MediaTek though: why did they put such an outdated GPU on their 'flagship' processor?
Indeed, it should have been a better one, although it's a question worth exploring whether the SoC could handle higher bandwidth between system memory and GPU memory. Maybe they simply don't have a better compatible GPU, or there are technical difficulties with incorporating one.
Either way, the CPU itself is a promising high-end one, and unfortunately we can't help with the GPU part. Maybe there will be some possibility to tweak it at the kernel level. That remains to be seen, but I'm not holding my breath; no wonders can be done.
Ya I agree, it's a sad thing. CPU-wise MediaTek is doing a very good job of entering the high-end tier of smartphones, but it looks like they need to improve a lot GPU-wise for the MediaTek Helio X20 to actually be considered a valid option for flagship devices.
I second you.
The X10 is MTK's first high-end tier SoC. They should have equipped it with a high-end GPU to make a name for it, not one from two years ago, from the same era as the iPhone 5s. Most casual gaming is OK, but that's not what you expect from "flagship" devices.
It would be excellent if there were some possibility to tweak it at the kernel level. But the MediaTek Helio X20 really should do better and better on the GPU part.
I'm not a heavy gamer, and the UX is pretty good for me. (The camera is another topic.)
Even when I measured the CPU speed with a chess game, the result reached the top tier.
tbalden said:
Antutu on stock Europe base 1.61
Vellano tests base 1.61
Futuremark test base 1.61
Sent from my HTC One M9PLUS using XDA Free mobile app
FYI.
Mine : stock , Taiwanese base 1.08,rooted device
Sigh.... the result was maxed out for my device on Ice Storm "Extreme".
Very similar result as mine was except ice storm unlimited. I think it might be related to temperature throttling, I'll test it again later.
Yeah, second run on base 1.61
Hello guys,
I live in France and I was really looking forward to the HTC M9+; I think it is what the M9 should have been, but I don't understand HTC's sh*tty 2015 choices on everything.
What I would like to know about is battery life; can you guys tell me what it's like?
I just bought a Galaxy S6 Edge a few days ago and it's creepy. I have no battery left by the end of the afternoon. I don't play games, just a few photos and browsing, and Messenger sometimes. And anyway, I miss BoomSound and the metal body.
I'm so disappointed. I should never have sold my old One M8 or Xperia Z3.
My only hope now is the Lenovo Vibe X3, but there has been no news since March...
Thanks guys!
After installing more and more software that is synchronized, like dropbox and a few other my battery life got lower a notch, from 23+ hours to 22+ and average screen on around 4 hrs. All in all, not a miracle, but that's what I usually got with my previous phone, OPO with its larger battery and lower screen resolution. A bit less though, admittedly. So it gets me through the day nicely, but I'm not gaming.
It's average I think. Compared to the amazing Sony z3 compact it's pathetic, my friend's z3c gets 3+ days and 5 hours of screen on time. And I think it's done with some great stock kernel enchanted by Sony engineers and Sony stock rom... wish other phones could do that
Not sure why, I could not choose no lock screen and lock screen without security mode under security settings. Both options are greyed out and it says it is disabled by administrator,encryption policies or other apps.
I only have prey and lockout security installed which could affect this but this was totally fine on my M7 device.
Anyone has any idea? Thank you.
They really should've gone with the 801, YES THE 801, for this phone.
At the same resolution the Adreno 330 has better performance AND the price wouldn't change; really shameful of HTC to pick an MTK SoC (**** or crap).
I am very disappointed.
1.61 base, stock, rooted.
Lowered the resolution to 1080p, just for fun.
The GPU does hold back the result, right?
Yes, and it's fairly accurate.
I got 11871 at 2K; I couldn't screenshot it as it doesn't know how to (it doesn't work above 1080p without dedicated software).
M8 1080p.
Also, now that I look at it, your CPU integer score is out of proportion due to all the cores the MTK SoC has lol; too bad that has no real-life use, and as you can see, single-core performance is much better on the 801 :/.
hmmm......
The efficiency of software decoding shows up vividly with multiple cores in real life.
There are many comparisons between the MT6795 (not the "T") and the SD801 on this site,
covering browser speed, RAR compression, real gaming, power consumption........
http://tieba.baidu.com/p/3825199511?see_lz=1#69839534914l
yvtc75 said:
1.61 base,stock,rooted
lower the resolution to 1080P,just for fun.
GPU do tie down the result,right?
Are those results after reducing the resolution to 1080p, did I read that correctly? Because if that's the case, it's sad to confirm the limits of the 3D graphics core, as others stated before. If you look at the M8 above (which I find surprisingly higher than what I experienced on my own M8 anyway), you'll notice that even last year's M8 has a better 3D score, not to mention the regular M9, which produces almost double that score. Multicore-wise the Helio X10 is a beast, but graphics-wise, it looks like it's not so much.