83 Comments
glockjs - Monday, April 1, 2013 - link
I want to see this same write-up this time next year. Intel is coming up with some scary tech...also here's to hoping AMD wanders back into the picture. This is just apples to oranges, BUT it does show that there is a gap worth comparing....this is a good thing :D
R0H1T - Monday, April 1, 2013 - link
Seeing as how the top-of-the-line ARM build (iPad 4) is anywhere between 1.4~3x slower than Intel's HD 4000 whilst greatly saving on power, I'd wager that MS' decision to go along with AMD for their next iteration of Surface Pro was a sound one, not to mention the fallacy/myth that ARM can't scale up fast enough is soon gonna melt away! Now I've pretty much always bashed how folks at AT favor Intel over anything & everything else, however this graph should be proof enough that ARM/AMD are squeezing them much harder than most would like to believe. Their monopolistic position in the traditional x86 market is what is keeping them afloat, but with the near collapse of the desktop market up ahead they'll be scrambling for cover, not unlike AMD |-:
tuxRoller - Monday, April 1, 2013 - link
Intel's graphics certainly pull more power than IMG's, but they also deliver much better performance (especially in the fill/triangle/fragment/vertex tests). Their efficiency looks to be roughly of a kind. That's impressive considering Intel doesn't have to be as concerned with a constrained environment as IMG.
lmcd - Tuesday, April 2, 2013 - link
I disagree; IMG tech has pushed past the chips implemented here already, and by a lot. I see IMG tech really pushing the bar, particularly if they get their CPU side going.
It'd be interesting if IMG tech sped up the process and put together their own SoC using their graphics and an ARM Cortex-A15.
Speedfriend - Tuesday, April 2, 2013 - link
From what I remember reading, the first version of IMG's new Series 6 will be as fast as the iPad 4 (but with more features), with the most advanced versions in the future being 2-3x that, so about what the Surface Pro delivers now. By that time, Intel will be delivering 2-3x its current graphics performance. While there is no doubt that IMG is the leader in the smartphone space, it does look like it will be lagging Intel in the tablet space.
R0H1T - Tuesday, April 2, 2013 - link
Where did you get that info, Haswell GT3 anyone?
Intel's IGPs have been delivering only incremental performance gains, unlike AMD/Nvidia on the dedicated GPU front, and the old adage of moar cores has been their only saving grace thus far! They've been constantly adding EUs on the CPU die without adding much to the actual CPU transistor count, one of the reasons CPU gains have flattened since SNB. So no, they'll not deliver ~3x their current IGP performance anytime soon, unless of course they double/treble the GPU's on-die area & drastically cripple the CPU side of things!
tuxRoller - Tuesday, April 2, 2013 - link
I'm not sure what you mean by "pushed past", since, to my knowledge, Rogue hasn't been released yet so we don't know how it will really perform, and what they have released is not as fast as what Intel has (though it is more efficient).
So, on average, Intel is about two and a quarter times faster than the 543MP4 while using up to three times the power, so Intel is less efficient, but not by as much as I'd anticipated.
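To put that efficiency comparison in concrete terms, here's a quick back-of-envelope calc. The 2.25x and 3x figures are the rough estimates from the comment above, not measured numbers:

```python
# Rough perf/W comparison using the estimates quoted above (assumptions, not measurements).
perf_ratio = 2.25   # HD 4000 roughly 2.25x the 543MP4's performance on average
power_ratio = 3.0   # HD 4000 drawing up to ~3x the power

relative_efficiency = perf_ratio / power_ratio
print(f"HD 4000 perf/W relative to the 543MP4: {relative_efficiency:.2f}x")
# -> 0.75x: about 25% less efficient at the worst-case power draw
```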
damianrobertjones - Tuesday, April 2, 2013 - link
All I see is ARM devices sitting there behind the HD 4000 etc. I see nothing positive for ARM, as the Atom-based devices will soon catch up and the HD 4600 will move further away.
joos2000 - Tuesday, April 2, 2013 - link
Interesting indeed. However, ARM didn't develop the GPU cores in the iPad; PowerVR did. PowerVR is delivering the most powerful graphics core in the mobile space at the moment, but everyone else is closing in. It'll be interesting to see if we can get Xbox 360 GPU performance in the current power band within the next couple of years.
rexian96 - Monday, April 1, 2013 - link
I'd like to see the same benchmarks in another 3-6 months when we have Tegra 4 based tablets (Android/RT) in the market. If the rumor of Tegra 4 being much faster than all things ARM-based today is true, it'll probably be the time for me to seriously consider a tablet (thin but powerful enough to work).
Milliamp - Monday, April 1, 2013 - link
I think this is interesting for sure. Another interesting thing to throw in would be a 5-7 year old Dell PC with a cheapish dedicated GPU.
I'm also curious how something like the Razer Edge compares to the Xbox 360 or PS3, but it would be harder to measure. Something like that serves no purpose other than to satisfy my curiosity about how far behind mobile graphics are from other platforms, but I am sure other people wonder the same thing.
designerfx - Monday, April 1, 2013 - link
how is this "behind"? The battery life of the Razer edge is defined by the power cable aka nonexistent. Performance wise it's more telling that the Razer performs better than the surface.I think things are pretty much exactly where they should be. I agree that if you have a unified benchmark it would be a good time to test performance across the board.
extide - Monday, April 1, 2013 - link
Razer Edge is faster than the Xbox 360/PS3 in both CPU and GPU.
amb9800 - Monday, April 1, 2013 - link
Quick correction: Surface Pro specs should read "i5-3317U" instead of "i3-3317U."
IntelUser2000 - Monday, April 1, 2013 - link
It's still not a good test, because the Razer Edge's 640M LE should outperform the HD 4000 by 50-100% or more, but is behind the iGPU in this one. Even worse than the 3DMark benches. At least 3DMark can be a rough representation of upcoming titles.
Kevin G - Monday, April 1, 2013 - link
It wasn't a surprise to see the Surface Pro and the Razer Edge leading the pack in these tests. The question was how much of a lead they'd carry over more mobile-centric ARM parts. These numbers don't bode well for Intel. ARM SoCs are looking to further improve CPU and GPU performance this year while staying in the same thermal envelope, whereas Intel will still be at a higher power consumption. By the time Broadwell and/or Skylake reach consumers, the battle for control of the mobile market may have already been won by ARM. Part of winning the battle is showing up for the fight.
MadMan007 - Monday, April 1, 2013 - link
You forget Baytrail Atom SoCs - many CPU architectural improvements, and Intel HD-based graphics. Those will more than 'show up for the fight'.
Kevin G - Monday, April 1, 2013 - link
It'll show up at the cost of performance. It is unclear how that new architecture will stack up against ARM's offerings when it arrives. That is still 8 months away, with a new generation of ARM and PowerVR chips due out in the same time frame.
DanNeely - Monday, April 1, 2013 - link
Baytrail is rumored to have a 4-core IGP; if it's the same core as in IVB, it'll be ~25% as fast as what the Surface Pro is showing, give or take variations in clocking/etc. That's between the Nexus 10 and iPad 4 in performance. While fast enough to be competitive, it's almost certain to be eclipsed by the GPUs in this year's new ARM SoCs.
londiste - Monday, April 1, 2013 - link
Why not bode well? 10W Haswell will be a pretty good improvement over the current 17W Ivy, given that the performance will be the same or better (which so far it seems it will be). New-generation Atoms are also coming in autumn (rumored to have a significant GPU overhaul, probably Intel's own HD).
On the ARM side, things are going well, but they are also starting to run into the same TDP problem on both the CPU and GPU front - with substantial improvements in performance, TDP will rise.
And at the same time, both the performance and TDP gaps are still significant. Interesting times.
Krysto - Monday, April 1, 2013 - link
How many times do people have to say it? The 10W version will most likely have POORER performance than the 17W IVB. They will do it the same way they tried with IVB this year, and underclock it to 800 MHz or something. The "tablet" Haswell will be nowhere near as fast as the laptop/hybrid 17W IVB that we have today.
meacupla - Monday, April 1, 2013 - link
If Microsoft packed 8GB of RAM into their Surface Pro and allocated 512MB to the HD 4000, instead of only 116MB, I wonder how much closer the scores would have been to the GT 640M LE?
IntelUser2000 - Monday, April 1, 2013 - link
meacupla: It won't matter, because the bandwidth is still shared with the rest of the platform.
meacupla - Monday, April 1, 2013 - link
ah, gotcha. That's good to know.I'm slightly less disappointed that I can't set the memory size on my surface pro myself.
xTRICKYxx - Monday, April 1, 2013 - link
I wouldn't worry about it. Even if X amount of memory is allocated to the HD 4000, it will dynamically scale up to a lot higher than what it says.mayankleoboy1 - Monday, April 1, 2013 - link
All i see in these benchmarks is how much the PowerVR GPU's kill the competition.And if you look at perf/watt, Intels iGPU's are pathetic.
thedemandful - Monday, April 1, 2013 - link
So what you're saying is that the PowerVR is so impressive over the other arm devices for using a much larger die and power envelope, but the Intel is crap for doing the same versus the PowerVR?melgross - Wednesday, April 3, 2013 - link
It's the difference in power use that matters. Imagine if you scaled Apple's A7 SoC up to 17 watts, or Intel's i5 down to 5 watts. Which do you think would be on top then, and by how much? I'm willi g to bet that the A7 would have a bigger lead than the i5 has now.What will be interesting is Imagination,s 6000 series, when it arrives this year. How much faster will those be?
extide - Monday, April 1, 2013 - link
Kinda odd to say that because it could be easily said that the HD4000 in the surface pro probably uses about 15w of the 17w, or so, and the A6X is somewhere round 5W with the majority again being used on GPU, it appears that GPU to GPU you are looking at 3x the perf for 3x the power. In other words, pretty much the same perf/w, especially in shader constrained situations.kyuu - Monday, April 1, 2013 - link
Hope to see this benchmark used in all tablet reviews from now on, even x86 ones. I'd be especially interested in seeing how the AMD's Z-60 does. Heck, you should throw a Trinity APU on there as well just for curiosity's sake.MonkeyPaw - Monday, April 1, 2013 - link
Yeah, I have a Nexus 7 right now, and I'm on hold for AMD's offerings. The current Atom SOC has such a bad GPU that it really gets noticeable in even simple games. I like AMD's reference design and the turbo dock concept, and I think they will offer more low-end value than Intel.That said, I'm air disappointed with the XPS 10. I almost bought one last week, and I'm quite surprised that the theoretical advantages it has over T3 do not show up. Drivers maybe? Or is it because it only has 2 cores?
UpSpin - Monday, April 1, 2013 - link
Because the difference between the Razer Edge and the Surface Pro in this benchmark is minimal and on some charts even wrong, whereas in real life the nVidia GPU is magnitudes faster, this benchmark seems to be flawed and wrong. (either Intel paid them a lot, or the benchmakr is crap, and I expect from a techsite like Anandtech to validate a benchmakr before posting results): http://www.anandtech.com/show/6858/the-razer-edge-...Or does it make sense for you, that in T-Rex HD Onscreen the Surface Pro displaying 1080p outperforms the Razer Edge displaying on just 720p, yet, in Offscreen the Razer Edge is faster suddenly. That makes absolutely no sense.
Additionally is this article flooded with useless and meaningless synthetic benchmarks.
Bases on this flaws, this article and the whole benchmakr became uselesss.
ChronoReverse - Monday, April 1, 2013 - link
GLBenchmark has always had these sorts of problems, and many people have posted complaints about that in the comments for the mobile reviews, but nothing has ever been done about it.
Posting all the synthetics is even sillier. What does it matter if GPU X can do Y amount of triangles if you'll never even touch that theoretical limit in even a benchmark (that actually draws something)?
It may be academically interesting, but putting it up front and center inflates its significance.
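To put a number on that point, here's a toy calculation of how little of a synthetic triangle-throughput peak a real scene actually touches (all figures invented purely for illustration):

```python
# Toy utilization calc: fraction of a synthetic triangle peak a real scene uses.
# All figures below are invented for illustration.
peak_triangles_per_s = 60e6      # synthetic triangle-throughput score
scene_triangles = 200_000        # triangles in a plausible mobile game frame
fps = 30                         # target frame rate

used_per_s = scene_triangles * fps
print(f"Utilization of synthetic peak: {used_per_s / peak_triangles_per_s:.1%}")
# -> 10.0%: the theoretical limit is barely touched by anything drawing a real scene
```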
Bast - Monday, April 1, 2013 - link
The whole thing smells fishy, I agree.
Take a look at the results of the Surface Pro onscreen vs. offscreen. The only difference between these is the v-sync (resolution is 1080p in both). The results vary wildly, between twice as fast when v-sync is off and slightly slower. That calls into question what the effect of v-sync is on the various parameters in the benchmark, and until that can be answered, I would be very hesitant to draw any conclusions. Is it possible to run the offscreen test with v-sync on as well?
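As a sketch of why v-sync muddies the onscreen numbers: with double-buffered v-sync, a frame can only be presented on a display refresh, so the measured rate collapses to an integer divisor of the refresh rate. A minimal model (assuming a 60 Hz panel and uniform frame times):

```python
# Minimal model of double-buffered v-sync on a 60 Hz panel (uniform frame times assumed).
def vsynced_fps(raw_fps, refresh=60.0):
    """Each frame waits for the next refresh, so the rate drops to refresh/n."""
    if raw_fps >= refresh:
        return refresh
    refreshes_per_frame = -(-refresh // raw_fps)  # ceil(refresh / raw_fps)
    return refresh / refreshes_per_frame

for raw in (130, 75, 59, 40, 25):
    print(f"raw {raw:3d} fps -> v-synced {vsynced_fps(raw):.1f} fps")
# A GPU capable of 130 fps and one capable of 75 fps both report 60 fps onscreen,
# which is exactly why the offscreen (v-sync off) runs separate them.
```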
extide - Monday, April 1, 2013 - link
That makes absolutely no sense. You can't do an offscreen render with v-sync, as there is no screen to sync to. It just draws as fast as it can.
whyso - Monday, April 1, 2013 - link
I agree that this test is not indicative of performance (or else it seems to be very heavily optimized for mobile SoCs). The 640M LE is generally about twice as powerful as the ULV HD 4000, and I'm not seeing that.
Anand Lal Shimpi - Monday, April 1, 2013 - link
Check the updated numbers :)
UpSpin - Monday, April 1, 2013 - link
Thank you, makes much more sense now.
Suggestions:
- In onscreen benchmarks, V-Sync is often the limiting factor. Maybe you can make this visible in the chart with a vertical line indicating the 60 FPS V-Sync cap, so people with less knowledge about the technology better understand the chart. Additionally, because you always run both on- and offscreen, yet only offscreen makes a comparison between GPUs possible, you could combine both charts and use two bars for each GPU in a single chart (a small bar for onscreen, the prominent one for offscreen).
- The synthetic benchmarks are meaningless. The HD 4000 seems to be almost as fast as the nVidia card, yet in game benchmarks it is less than half as fast. So those synthetic benchmarks don't show reality. Quite the contrary, Intel seems to be cheating. The Fill Test offscreen performs two times better than the Fill Test onscreen, yet the resolution (1080p) should remain unchanged. (Or your results are flawed, again ^^)
Well, it's always easy to fill an article with charts, but at least remove the onscreen synthetic benchmarks. Even better, put all synthetic offscreen benchmarks in a single chart, so one GPU/product has several bars, one bar for each test, 4 bars in total in different colors.
That way people take a closer look at the 'real world benchmarks' and don't waste their time comparing synthetic benchmarks.
At least I hope that you change something. At the moment the benchmarks are a 'mess' and hard to decipher.
extide - Monday, April 1, 2013 - link
The reason the Surface Pro performs better offscreen is because it is hitting the fps cap onscreen. Not sure why that is confusing.
UpSpin - Monday, April 1, 2013 - link
I'm talking about the synthetic benchmarks, which get measured in MTexels/s or MTriangles/s and should be independent of V-Sync. How else can you explain that the Razer is able to output more MTexels/s at a smaller resolution onscreen? You can't. Not sure how this isn't confusing!
Whatever; synthetic benchmarks are meaningless, and they especially don't deserve to be tested both on- and offscreen and to take up more space in such an article than the 'real world demos'.
Just take a look at recent discrete AMD/NVidia GPU reviews: 80% games, 20% benchmark demos, but not a single synthetic benchmark. There's a reason for this.
Th-z - Monday, April 1, 2013 - link
Hmm, by definition, since this is a benchmark program and not an actual game or application, everything about it is synthetic, including the real-time rendering part -- the "benchmark demo" you referred to. I think what you mean are the individual theoretical tests. IMO they are still useful for research and comparison purposes, if they are accurate.
In normal AMD/NVIDIA GPU reviews we can exclude synthetic benchmark programs because we have so many actual games and applications that use the same platform, but we don't have that luxury in the mobile space. If a benchmark program reflects the real world well, it's still a good data point to consider. In the server space people use a lot of synthetic benchmark programs.
shompa - Monday, April 1, 2013 - link
It's amazing how fast the A6X is with a 3+ year old GPU. Imagine how fast PowerVR Rogue will be. With a bit of luck we will see that in the A7. A chance to beat the Nvidia 640M.
Also remember that the Intel/Nvidia parts are 22/28 nm parts while the A6X is 32 nm. Just doing ARM on 28nm with SOI makes it possible to clock a Cortex-A9 at 3GHz.
I therefore disagree with the article. Intel can't compete on power. 1) The x86 stuff makes the dies always 1/3 bigger. 2) CISC has always been slower than RISC. 3) Intel uses 64-bit extensions. Remember that x64 is slower than x32 by 3%! On RISC, 64-bit has always been much faster than 32-bit. Just look at history: since 2006 Intel has increased desktop speeds by about 130%. In the same time ARM has gotten 1700% faster.
And people should be happy about it. No more 1000-dollar Intel CPUs. Instead, ARM-licensed SoCs where ARM gets 6-10 cents per core. Intel can therefore never compete on price. An ARM SoC costs 15 dollars to produce. Intel can't sustain its leading edge in fabs on that revenue.
hobagman - Monday, April 1, 2013 - link
Erm, yeah, when an ARM chip beats Intel on performance, then Intel will have a serious problem. But at this point, performance is so closely tied to transistor improvements that Intel is the one with the advantage. They're 2-4 years ahead in process technology and they are already developing the 10 nm node. And the reason why ARM's performance looks like it's come up faster is just because it was so much slower to begin with. They're hitting the power wall now, just like Intel, and that's why you see all the smartphones with 2-4 cores.
cjb110 - Monday, April 1, 2013 - link
Urm, what? Assuming you're right about the x86 die size, Intel has a 2-year lead in fab tech, and has been at least one process node ahead of pretty much everyone else.
As for the history, well, that's just a daft metric, given that in 2006 Intel had just introduced the Core 2, which can equal current ARM for performance...all your percentage shows is that ARM was damn slow in 2006, and is still pretty slow now.
As AnandTech have been saying for a while, in some senses it's a race: ARM to increase performance 4x-6x, and Intel to reduce power 2x-3x (without sacrificing performance).
However, one issue is that Intel might not decide to go all the way down to properly compete in the phone space...if they stop at tablets, then the power budget is much greater. Also, ARM can't ignore the phone space, so its race for performance can't increase its power either; otherwise it stands to lose its biggest market.
In my mind, given ARM's reliance on third parties, Intel has the upper hand: it has more control over the platform and could hit the power budget by reducing the whole system...i.e. the Intel CPU would consume more power than the ARM, but the total system power could be equal.
frozentundra123456 - Monday, April 1, 2013 - link
The key to me is x86 compatibility. I have an Android tablet, and I will never consider another one. Never, ever. I have never used a more frustrating device in my life.
aTonyAtlaw - Monday, April 1, 2013 - link
Really? Why? I've been considering getting one, but have held off because I'm just not sure it will satisfy me.
What have you found frustrates you so much?
Death666Angel - Monday, April 1, 2013 - link
I have an Android 10.1" tablet, an 11.6" laptop and a powerful desktop. When I'm at home, there is no reason to use the Android tablet for browsing the web or reading emails, as many people seem to do, because I have my desktop PC, a nice 27" monitor and a comfortable chair at my desk. When I'm on the go, I like having the versatility of the laptop (running all my media, having 100+GB space, running full office, Steam and gog.com games) and I am never gone long enough for the battery life advantage of the Android tablet to kick in. Also, I hate having every program full screen in Android. In my Windows devices, I always use the 50/50 split with one window usually being the browser and the other being a video, music player, office document, another browser... Can't do that in Android. I've pretty much only used the Android tablet for some games that took advantage of the larger display compared to my Galaxy Nexus. But those are so few and far between (and many hit Steam these days anyway), that the thing has been gathering dust for months now. ARM Tablet are great for people who don't use a PC for anything resembling productivity and have very limited demands in other areas.aTonyAtlaw - Tuesday, April 2, 2013 - link
That seems fair. I was really, really looking forward to the Microsoft Surface Pro release, but was sad to see that they basically bungled it. I'm basically waiting for a proper Windows 8 tablet to be truly worth buying.
dcollins - Monday, April 1, 2013 - link
You're off by an order of magnitude: x86 decoding takes up about 3% of the CPU die. For an SoC where the CPUs encompass < 50% of the total die, the cost shrinks to 1.5%. The CISC versus RISC debate is pointless these days: the tiny power cost isn't a significant factor compared to architectural differences.
Look at the drastic jump in power consumption between the A9 and the A15. The new Exynos is downclocked to 1.2GHz to keep power at an acceptable level. You can expect similar increases in the next generation as ARM approaches the holy grail of "good enough not to be noticed" Core 2 Duo-level performance. At the same time, Haswell will be targeting high-end tablets, preparing Intel for a concerted push into the ~5W SoC market with its successor.
Intel has plenty of work left to compete directly with ARM, particularly since they will need to forego the high margins they've grown accustomed to, but you would be a fool to write them off entirely at this point in the game.
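The die-share arithmetic behind that order-of-magnitude point, spelled out (the 3% and <50% figures are the commenter's estimates, not measurements):

```python
# Back-of-envelope share of SoC die area spent on x86 decode, per the figures above.
decoder_share_of_cpu = 0.03  # ~3% of the CPU cores' area (commenter's estimate)
cpu_share_of_soc = 0.50      # CPU cores < 50% of the total SoC die (commenter's estimate)

decoder_share_of_soc = decoder_share_of_cpu * cpu_share_of_soc
print(f"x86 decode as a share of the whole SoC: {decoder_share_of_soc:.1%}")  # -> 1.5%
```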
Death666Angel - Monday, April 1, 2013 - link
"Look at the drastic jump in power consumption between the A9 and the A15. The new Exynos is downclocked to 1.2Ghz to keep power to an acceptable level."The A15 in the Exynos 5 Octa is clocked at 1.6 GHz. It's the A7 that are at 1.2 GHz.
Kabij2289 - Monday, April 1, 2013 - link
An Edge would wreck the PS3 and Xbox. *cough* Seven-year-old hardware *cough*
Mugur - Monday, April 1, 2013 - link
I would also like to see an AMD C-50, C-60 or C-70 at 1 GHz for comparison...
Also, I found it strange that on some benchmarks the results for offscreen are lower than the onscreen ones, and on some it's the other way around. Of course, offscreen probably means 1080p and onscreen means whatever resolution the tablet has, which makes the onscreen/offscreen comparisons less meaningful. It would have been better to include more than one resolution for the "offscreen" test.
A nice comparison would also have been with a top-of-the-line 680 or 7970 system... :-)
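One crude way to make onscreen and offscreen numbers comparable across resolutions, as suggested above, is to fold the pixel count into the score. This sketch assumes a purely fill-bound workload (real workloads aren't), and the fps values are hypothetical:

```python
# Crude resolution normalization: convert fps to pixel throughput.
# Assumes a purely fill-bound workload; the fps values below are hypothetical.
def mpixels_per_second(fps, width, height):
    return fps * width * height / 1e6

onscreen_native = mpixels_per_second(15, 2048, 1536)   # e.g. a tablet at its native res
offscreen_1080p = mpixels_per_second(48, 1920, 1080)   # same GPU at the 1080p offscreen res
print(f"Onscreen (native): {onscreen_native:8.1f} MPix/s")
print(f"Offscreen (1080p): {offscreen_1080p:8.1f} MPix/s")
```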
Becherovka05 - Monday, April 1, 2013 - link
Not to be picky, but shouldn't the heading be about GPU, not CPU?
Becherovka05 - Monday, April 1, 2013 - link
Learn to read lol... my mistake
arthur449 - Monday, April 1, 2013 - link
I see you're missing a Nexus 4. Well, I am too. Here are the results from an LG Optimus G (4.1.2) I have laying around (onscreen/offscreen):
Fill rate: 842M/870M
Triangle throughput: 61.2M/49.9M
Triangle throughput (vertex lit): 49.1M/41.9M
T-Rex HD: 19/14
Egypt HD: 45/34
AP27 - Monday, April 1, 2013 - link
So the Adreno 320 gets 14fps on T-Rex offscreen, 2fps behind the iPad 4. That's pretty impressive.
colonelclaw - Monday, April 1, 2013 - link
I must say I'm seriously impressed with the improvements ARM GPUs have made in the last few years. These tests show they are now only 2 to 3 times slower than a respected desktop/laptop solution. When you consider how tiny they are and how little power they draw, it's impressive stuff.
It's always nice to see genuine advances made in computer hardware.
Krysto - Monday, April 1, 2013 - link
I said a while ago that Intel's high-end chips for laptops are only ~3x faster than the fastest ARM chips, while using much more power. But nobody seemed to believe me. Many still seemed to think Intel's chips are tens of times faster or something.
Anand Lal Shimpi - Monday, April 1, 2013 - link
Sorry guys - there was a problem with the Razer Edge numbers; they have since been corrected. Vivek still has the Edge and is in Korea right now, so debugging funny results had to cross a few timezones :)
Take care,
Anand
Krysto - Monday, April 1, 2013 - link
I wish you did a better job of explaining why the ARM devices were much weaker in the geometry tests. OpenGL ES 2.0, and even OpenGL ES 3.0, doesn't support shaders. So in the end the test is not very comparable, because there are MAJOR differences between the APIs - the DirectX devices will get a considerably larger score because they have extra API features.
It might've been different if we had compared devices with DirectX 11.1 vs OpenGL 4.3, but ideally you really just want to keep it DirectX vs DirectX devices, and OpenGL ES vs OpenGL ES devices. Otherwise the numbers will be very skewed because of the APIs, and NOT because of the difference in hardware.
Krysto - Monday, April 1, 2013 - link
doesn't support geometry shaders*.
Krysto - Monday, April 1, 2013 - link
Why are you comparing a Razer and a Surface Pro? Are 2h and 5h battery life, with $1,000-2,000 price tags, actually considered "competition" for the current ARM tablets?
They are not, and will never be. Compare only the Atom ones, and the RT tablets if you want. Otherwise you might as well compare ARM chips with Xeons and Teslas and say ARM "lost".
meacupla - Monday, April 1, 2013 - link
Umm, yes?
I switched over from an Asus Transformer to a Surface Pro, instead of a newer ARM-based tablet.
UpSpin - Monday, April 1, 2013 - link
Please read this article: http://anandtech.com/show/6871/a-comment-on-pc-gam...
The iPad also only lasts 4h if you're gaming on it. The Razer should last 8h+ if you use it like an iPad. Only if you play the much more demanding games on it is its battery dead in 2h-3h.
Also, those benchmarks show that the gap between the x86 laptop/desktop and ARM smartphone/tablet sectors is closing.
Anandtech will surely post power consumption benchmarks between those systems (just as they did in the past), with the result that Intel and ARM are coming closer and closer.
thunng8 - Monday, April 1, 2013 - link
Where did you get 4hrs for iPad gaming from? The article you linked says 6 hours for the iPad 4 compared to a bit over 2hrs for the Razer Edge. And where did you get 8hrs+ for the Razer Edge used as an iPad from? Anandtech's review says 5.6hrs for web browsing and 4.9hrs for movie playback, compared to 9.5hrs and 13.45hrs for the iPad 4.
kyuu - Monday, April 1, 2013 - link
Because, uh, the specifications of the devices outside of their CPU/GPU are irrelevant for the purposes of this article? This isn't about "should you get a Surface Pro or a Razer Edge?".
Azzras - Tuesday, April 2, 2013 - link
ARM lost.
tipoo - Monday, April 1, 2013 - link
I'm quite surprised. I knew the performance of SoC GPUs was going up very quickly and that the SGX line can be scaled to very high performance, but I never would have thought it would be close to even the HD 4000, or even as close as it was to the 640M. I've been underestimating these things on the GPU side.
AP27 - Monday, April 1, 2013 - link
Is there a reason that some smartphone SoCs aren't included in here? Running offscreen benchmarks would give a good indication of their performance regardless of screen res, right? It would be interesting to see how the Adreno 320/330 and Mali 400 stack up here. The only Adreno GPU used was a 225, and that too in a Windows RT device, which may or may not have the same optimization issues as Tegra 3 on RT.
Anand Lal Shimpi - Monday, April 1, 2013 - link
That's coming next :)
UpSpin - Monday, April 1, 2013 - link
The problem is that a smartphone might use the same SoC a tablet uses, but clocked much lower and throttled to match the smaller battery capacity and worse heat dissipation.
A Tegra 3 in a smartphone scores worse than the same Tegra 3 in a tablet. The Exynos Octa core in the Galaxy S4 performs worse than in a tablet.
araczynski - Monday, April 1, 2013 - link
that's all well and good, but it's not like i have a choice to play iOS games on a Surface Pro, or Android games on the Pro, or Pro games on iOS/Android, or any mix of that (short of the IAP shovelware that gets 'diarrhead' onto every conceivable platform). so while the results are somewhat interesting, they're still largely irrelevant in terms of practical applications.
Death666Angel - Monday, April 1, 2013 - link
There is BlueStacks for Windows, which emulates Android apps (rather well nowadays, I hear). And you can run x86 Android on some Windows laptops/tablets.
But you are mostly missing the point here. This is about showing the performance differences of the platforms, regardless of which games you want to play.
Azzras - Tuesday, April 2, 2013 - link
The Surface Pro can play most PC games, whereas iOS and Android cannot.
The Surface Pro can also run Steam.
tuxRoller - Monday, April 1, 2013 - link
Hugely impressed with the performance of the HD 4000 compared to all the rest (including the Razer Edge).
It looks to be roughly as efficient at graphical tasks as the best IMG can manage, and better than Nvidia.
Looking forward to Haswell, and even more so to Broadwell.
kyuu - Monday, April 1, 2013 - link
Because, uh, the specifications of the devices outside of their CPU/GPU are irrelevant for the purposes of this article? This isn't about "should you get a Surface Pro or a Razer Edge".kyuu - Monday, April 1, 2013 - link
Argh, the new comment system strikes again. This was meant to be in response to another comment, please ignore.NerdT - Monday, April 1, 2013 - link
You should consider the fact that each of these devices have a different power limit and are designed based on that. You need to compare the performance per watt each device achieves, rather than purely comparing the performance numbers you get by just simply running these benchmarks on each device and call yourself a contributor in this community. A good article/argument requires you do lot more research than just comparing the simple scores. otherwise, it could as miss leading as your article...Da W - Monday, April 1, 2013 - link
I'm amazed by the amout of people flocking to bash the article in an attempt To défend their beloved ARM socs. It's like seeing middle age priests burning scientists who would dare Say the earth is round.Truth #1: AMR, X86, i don't care. I'll buy what's good for me.
Truth #2: ARM, Intel, Amd, Nvidia, they all pushed their architecture to the most efficient state possible and, if all made on Intel 22nm process for a given 10W thermal enveloppe, they would bé simular.
Truth #3: autocorrect on my surface pro SUCKS!!!
duploxxx - Tuesday, April 2, 2013 - link
To me the only compare that should be made is of devices within the same range, in this case you can look at the ARM and the x86 based intel devices, what is shown here is that the atom really sucks in graphics, which was known all the time, it can't even do a decent flash player..... OEM should drop that device instantly and kick Intel in the butt, but no Typycal OEM behaviour go with the flow, notebook all over again,Oh btw for those who believe HD4000 in this 17W is powerfull here, they should add a 17W trinity to compare, but offcourse you don't find these often due to the same OEM screwup and foolished consumer by store sales and marketing.
shady28 - Tuesday, April 2, 2013 - link
Really, comparing Surface Pro to a bunch of ARM tablets? Surface pro has less than HALF the battery life of say a Nexus 10, or iPad 4 while weighing 50% more and costing hundreds more to boot.Yet again, Anandtech pits mismatched parts meant for different markets at each other and draws erroneous conclusions. The market, however, does not care what Anandtech thinks. The Surface Pro is a flop at this point in time, for reasons which Anand apparently cannot fathom.
The Surface RT was the appropriate competitor to place against this the lineup of Android and iOS devices.
What next Anand? Give us a comparison of a Macbook Pro vs an Eee PC?
Kidster3001 - Tuesday, April 2, 2013 - link
I think you have the wrong Atom CPU for the VivoTab Smart.Atom Z2560: Clovertrail+ @1.6 GHz, SGX 544MP2, this chip is intended for Android devices.
Atom Z2760: Clovertrail (no + )@1.8 GHz, SGX 545, this chip is intended for Windows devices.
I believe the VivoTab Smart uses the Z2760, especially since it is Windows.
Qellion - Friday, April 12, 2013 - link
Not really sure it's fair to compare full-OS tablets/software to iOS and Android, but interesting nonetheless.