driscoll42 - Saturday, September 17, 2011 - link
On the last page, in the second paragraph, there's "Should the fab engineers at Intel do their job well, Ivy Bridge could deliver much better power characteristics than Ivy." Should that be Sandy at the end?
Anand Lal Shimpi - Saturday, September 17, 2011 - link
Thank you! Fixed :)
Take care,
Anand
Beenthere - Saturday, September 17, 2011 - link
It's good that Intel has decided that their GPU sucks and that to be competitive they need to catch up to AMD. This will take some time as AMD has a two year head start on APUs, but it's all good for consumers.
In spite of the marketing hype from Intel it looks like they've conceded that AMD has the better system approach with APUs for mainstream consumers and laptops. CPU performance alone is no longer a valid yardstick for PC performance thanks to AMD's advanced thinking and Llano.
TypeS - Saturday, September 17, 2011 - link
I realize Intel is behind (has been for as long as anyone can remember) in the graphics arena compared to AMD/ATI and NVIDIA, but is AMD "two years ahead" in terms of an APU? While Clarkdale can't really be considered a true all-in-one package (hey, remember when Kentsfield wasn't considered a true quad core?), it was still an all-in-one package, and with Sandy Bridge, Intel brought it all together onto one die. Intel isn't calling it an APU, but if you compare SNB and Llano/Bulldozer, they share some similarities.
AMD's edge is on graphics, and Intel is closing the gap.
I might be missing something though in why you say they are ahead in terms of an "APU", but from my knowledge, Intel was first to release a retail/commercial APU.
Guspaz - Monday, September 19, 2011 - link
Intel might be catching up in terms of physical performance, but Intel's drivers still, quite frankly, suck. It's the one thing really holding the platform back, in terms of both compatibility and performance. Even simple things like flash acceleration can sometimes be wonky with Intel's drivers.
Furthermore, developers like Carmack have been pestering Intel to give them lower-level access to the hardware, with potentially enormous performance benefits; they'd like to treat PCs more like a console, and stripping away much of the driver/graphics overhead (particularly in terms of memory management) could see some pretty big performance gains. There's a reason why modern integrated GPUs like in Ivy Bridge have theoretical performance on-par with a 360 or PS3, but in practice, are nowhere near as performant.
Of course, the same is true for AMD and nVidia; high-end developers like Carmack and Sweeney seem to want lower-level access to hardware. From Carmack's latest QuakeCon keynote, it does seem like the hardware manufacturers are listening.
fic2 - Monday, September 19, 2011 - link
Yeah, Intel seems to want to catch up and made a pretty decent effort of not sucking with the HD3000. BUT then they go and do a dumb@ss thing like put the crappy HD2000 on 90% of the Sandies they sell. I think if marketing would get the he11 out of the way they wouldn't be too sucky.
Oh, yeah, except the drivers.
bigboxes - Saturday, September 17, 2011 - link
I love AMD as much as the next guy (have three running AMD boxes), but are you going to continue to be "that guy" that posts this nonsense in every Intel/AMD thread? We get it. You love AMD and want to help them win the CPU wars. Yay for fanbois everywhere!
Beenthere - Saturday, September 17, 2011 - link
Intel hasn't released an APU. They have released a CPU and GPU on the same slab of silicon. That's not an integrated APU.
No nonsense, just facts. I like facts. Some folks can't handle facts but that's life. I like choice and scrupulous businesses. That's what AMD is, unlike Intel.
ltcommanderdata - Saturday, September 17, 2011 - link
For someone arguing against marketing hype and looking for facts, you seem overly preoccupied by AMD's APU term. If you are looking for which product currently on the market has tighter CPU/IGP integration, then that product is Sandy Bridge, not Llano. For instance, Sandy Bridge allows bidirectional communication/sharing of instructions and data between the CPUs and IGP via a shared on-die L3 cache instead of through a crossbar and off-die system memory as in Llano. Sandy Bridge also has more advanced power and thermal monitoring, allowing efficient sharing of TDP room between the CPU and IGP and allowing each to be overclocked as needed, something Llano doesn't do.
Yes, Llano has the faster GPU, but that's not the critical concern if what you are interested in is integration. Intel's CPU and GPU on a slab of silicon was Arrandale. Sandy Bridge has moved well beyond that. Llano's CPU/GPU integration looks to be somewhere in between Arrandale and Sandy Bridge. Seeing Llano is AMD's 1st generation Fusion product along with Brazos, that's fine. But just because AMD calls their product an APU doesn't mean it's the pinnacle of CPU/GPU integration.
gramboh - Sunday, September 18, 2011 - link
Boom. Beenthere just got roasted, and of course disappears rather than admitting he was wrong.
shiznit - Saturday, September 17, 2011 - link
Intel's APU is more integrated than AMD's.
TypeS - Monday, September 19, 2011 - link
Guess you're one of those fanboys that just couldn't come back down from the high of AMD's time in the spotlight with the Athlon 64?
For someone who speaks of facts, you need to go check the architecture of both the SNB and Llano/Brazos cores before you say AMD has the more integrated approach.
AMD is just using marketing nonsense with calling their new CPU an "APU", just like when they called the Phenom X4s "true quad cores".
JonnyDough - Monday, September 19, 2011 - link
Marketing fluff is Intel's bag, right? Maybe you forget the whole "clock speed" fiasco. Selling P4's claiming they are faster than the competition, although they are not...
At least consumers eventually caught on and OEMs began looking at AMD processors as well. :)
You all sound like fanboys though, who really cares who's right? We should just be excited about the TECH!
Kaihekoa - Saturday, September 17, 2011 - link
Lol, with Intel's capital and recruiting of experienced GPU engineers, that "two year lead" will evaporate faster than boiling water. I don't know where you're getting your delusions of the mainstream market lapping up AMD's CPU/GPU combination marketing and products, but the average computer user doesn't use or need anything more than Intel's current generation of graphics. And as others have mentioned, Intel's design is more integrated than AMD's on an engineering/design level.
Yes, they have the more powerful GPU, but you have to be an idiot to think it's more integrated than Ivy Bridge. CPU performance and graphics good enough to power 2D and 3D accelerated media are the yardstick for PC performance for the vast majority of users. You're truly deluding yourself if you think the average computer user is playing The Witcher 2 and Deus Ex on their PCs with cards more powerful than IVB's. Even now with AMD's two year advantage, guess who owns the market for systems with a combined CPU/GPU? For integrated graphics? Wintel.
Am I an Intel fanboy? No, the last desktop system I built had an AMD CPU and discrete GPU, but you can't logically deny how well their business is doing now, and you'd be a fool to think they would overlook the mainstream demand for a high-end APU. In the future, when the market needs/wants it, Intel will have something equivalent to or better than AMD/ATI.
Zoomer - Saturday, September 17, 2011 - link
Let's not forget drivers and game support, not to mention IQ. Last I checked, Intel graphics drivers were still pretty horrible.
iwodo - Sunday, September 18, 2011 - link
Exactly. Designing hardware is easy. Throw money and engineers at it and you could be there in no time, especially with the expertise from Intel.
Software, on the other hand, takes time, no matter how many engineers you put on it. Drivers are the problem Intel has to overcome.
JonnyDough - Monday, September 19, 2011 - link
I agree. Software is key. Intel is good at parts of it, AMD is better when it comes to keeping up with game developers. However, business markets make enthusiast markets look minuscule. Still, both are great competitors and we consumers just keep winning. :)
iwodo - Sunday, September 18, 2011 - link
I forgot to add, there is a reason why Nvidia has more software engineers than hardware engineers.
medi01 - Sunday, September 18, 2011 - link
It's actually the other way round. Pretty much any CPU starting from about 2008 is "more than good enough" for most users.
BSMonitor - Sunday, September 18, 2011 - link
"In spite of the marketing hype from Intel it looks like they've conceded that AMD has the better system approach with APUs for mainstream consumers and laptops. CPU performance alone is no longer a valid yardstick for PC performance thanks to AMD's advanced thinking and Llano."
This is utter nonsense. All AMD has done is transfer 400 of its shader units onto the CPU core. What you have with AMD is a 4-5 year old GPU combined with a 3 year old CPU.
Both sides of the coin yield a huge YAWN from anyone looking for real performance.
JonnyDough - Monday, September 19, 2011 - link
4-5 year old GPU? Heh, bud... most hardware takes years to develop. And the HD3000 series may be a bit dated but it makes even the Xbox 360 look weak in comparison. Hardly dismal.
moozoo - Saturday, September 17, 2011 - link
Does its GPU support double precision under OpenCL? i.e. cl_khr_fp64
Does Trinity?
Ryan Smith - Saturday, September 17, 2011 - link
We don't have solid details on either one, but don't count on it. The reasons we don't see full FP64 support on non-halo GPUs are still in play for CPUs.
Galcobar - Saturday, September 17, 2011 - link
Perhaps I'm missing something in the acronyms, but the table and text seem to disagree on the availability of SSD caching.
The text states "All of the 7-series consumer chipsets will support Intel's Rapid Storage Technology (RST, aka SSD caching)."
The table, however, puts No under the Z75 column for Intel SRT (SSD caching).
As I understand things, you need RST (software) to support SRT (bound to the motherboard), but without SRT you don't get SSD caching.
Anand Lal Shimpi - Saturday, September 17, 2011 - link
Fixed :) SRT is only on the Z77/H77, not the Z75.
Take care,
Anand
mlkmade - Saturday, September 17, 2011 - link
I know it's really early to be talking about this cause Ivy won't be out for a while, but what about what amounts to be "IvyB-E"? I'm sure details are very scarce, but will it follow the desktop path (both s1155) and be socket compatible? In this case s2011? If IvyB-E is socket compatible with SB-E, that'd be great, but by then all the chipset problems would be fleshed out, huh? Buy a new mobo anyway.
Anand Lal Shimpi - Saturday, September 17, 2011 - link
I would hope so, but as of now there is no IVB-E on the roadmaps so anything I'd say here would be uninformed and speculative at this point :-/
Take care,
Anand
ltcommanderdata - Saturday, September 17, 2011 - link
Does Ivy Bridge finally allow the IGP and QuickSync engine to be available even with a discrete GPU plugged in, for both mobile and desktop, without resorting to specific chipsets (i.e. limited to the high-end chipset) or third-party software (relying on motherboard makers and OEMs to deal with Lucid)? With the IGP being OpenCL and DirectCompute capable, even if you have the latest Quad SLI/Crossfire setup it would be useful to have the IGP help out in GPGPU tasks.
And it's interesting that with AMD introducing a beefier form of SMT with two full integer cores, Intel decided not to similarly increase hardware resource duplication to expand Hyperthreading. Instead Intel is focusing on improving single threaded performance by making sure a single thread can use all the resources if Hyperthreading is not needed. Seeing most software isn't making use of 8 simultaneous threads, focusing on making 4 threads (1 per core) work as fast as possible does make sense.
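On the OpenCL/DirectCompute point above, a minimal sketch of how an application could see both the IGP and a discrete card as compute devices, assuming each vendor's driver exposes an OpenCL platform (this is an illustration using pyopencl, not anything Intel has confirmed for Ivy Bridge):

    import pyopencl as cl

    # List every GPU-class OpenCL device the runtime can see; if the IGP stays
    # active alongside a discrete card, both should show up here and either one
    # can be handed GPGPU work.
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            if device.type & cl.device_type.GPU:
                print(platform.name, "->", device.name)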
Meegulthwarp - Saturday, September 17, 2011 - link
"As we've already seen, introducing a 35W quad-core part could enable Apple to ship a quad-core IVB in a 13-inch MacBook Pro." Here is to hoping that someone other than apple will also ship a decent 13-inch with a quad.Other than that great insight, I really hope the GPU on IVB will be half way useable. I think we've hit a point where CPU performance is more than adequate for 95% of consumers. Now just need to up the GPU performance and get power down so we can use our laptops on battery all day. I'm more than happy with my 2 year old C2D CPU performance but want battery life, hugely tempted with AMD's A6-3400M. But with Bulldozer looming I think I may hold back for 6 months.
Anand Lal Shimpi - Saturday, September 17, 2011 - link
I hope so too, I simply used Apple as an example because it has migrated to quad-core in every member of its MBP family with the exception of the 13-inch. I've updated the statement to be a bit more broad :)
Take care,
Anand
Meegulthwarp - Saturday, September 17, 2011 - link
Thanks man, you're a star. You really should just ignore whiny comments like mine as you provide some of the best (if not the best) tech articles online, and it's free of charge! Every time you push an article like this my life comes to a standstill so I can read it. Keep up the good work!
zshift - Saturday, September 17, 2011 - link
I agree with this 100%. I love reading the articles here on AnandTech. The articles are well written, and provide plenty of charts/data/photos to provide as much of a complete understanding as possible of the product in question.
I also like the fairly recent upsurge in articles, you have a great team here.
PS: Bench rocks!
lowenz - Saturday, September 17, 2011 - link
From power page: "Voltage changes have a cubic affect on power"
Cubic?
P ~ C * v^2 * freq * switching activity
know of fence - Saturday, September 17, 2011 - link
Cubic as in "to the third power". I remember a slide from one of the Intel presentations saying that, but I'd like to know how it comes about.
Vcore^3 ~ power
Here somebody posted some data of Vcore vs Power. If you were to plot power consumption in relation to Vcore^3 then one ought to get a linear graph.
http://www.awardfabrik.de/forum/showthread.php?t=6...
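A minimal worked sketch of where the cubic relation would come from, assuming dynamic switching power dominates and that frequency scales roughly linearly with core voltage over the usable range (the point KalTorak makes below):

    P_dyn ~ a * C * V^2 * f      (a = switching activity, C = switched capacitance)
    f ~ k * V                    (the V-f curve is roughly linear over the operating range)
    => P_dyn ~ a * C * k * V^3   (so power goes roughly as Vcore^3)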
KalTorak - Saturday, September 17, 2011 - link
Cubic. Because f, for a big chunk of the V-f curve, tends to be linear in V.
gevorg - Saturday, September 17, 2011 - link
Will IVB have 8-core unlocked CPUs like 2500K/2600K SNB?
Sabresiberian - Saturday, September 17, 2011 - link
"Ivy Bridge won't get rid of the need for a discrete GPU but, like Sandy Bridge, it is a step in the right direction."
I'm not so sure I'd agree getting rid of the need for a discrete GPU is a good thing. In terms of furthering technological possibilities, yes, I get that; in terms of me building the computer I want to build and tailoring the results to my purposes, I really don't want these things to be tied together in an inflexible way.
;)
platedslicer - Sunday, September 18, 2011 - link
Standardization seems to be the current trend... next thing you know, the computer industry has gone the way of car manufacturers.
JonnyDough - Monday, September 19, 2011 - link
Standards are not all bad. In the case of car manufacturers, we now have things like sealed bearings (so you don't have to regularly grease the bearings in your wheels, and they actually last longer and cost less), safety systems like ABS, seat belts, airbags, etc.
With computers, we need standards as well for compatibility. It lowers cost, ensures that hardware works fluidly between platforms, etc. If we didn't have standards we would have things like Rambus, which would only cost us a fortune and slow technological progression.
JonnyDough - Monday, September 19, 2011 - link
I think the author means that you won't NEED a discrete CHIP (GPU other than the one on-die) to run a system. Discrete here seems to imply an IGP (integrated onto the motherboard) OR on a separate graphics card. That isn't to say one won't still be required for graphics intensive applications. Ideally, the on-die GPU will be able to work in tandem when a graphics card is installed.
piroroadkill - Saturday, September 17, 2011 - link
I'd have liked a little more on this.. What's the source?
I searched anyway, and found it is using thermal sampling. Presumably it's also seeded. Anyway, I thought it was of interest.
Jamahl - Saturday, September 17, 2011 - link
Don't you get tired of saying "intel is finally taking gpu performance seriously" every year? I do.
JonnyDough - Monday, September 19, 2011 - link
I'd just like to say...=) Yes sir, I do.
imaheadcase - Saturday, September 17, 2011 - link
I heard when Sandy Bridge came out they were considering a GPU-less version for enthusiasts who don't need it. Is that something they will do eventually?
I suspect it's tied to the core, so not going to happen because of high costs. But wouldn't that save even more power/heat problems with that removed?
It just seems like it's a mobile oriented CPU vs a consumer one. :D
DanNeely - Saturday, September 17, 2011 - link
With power gating, if you're not using the IGP it doesn't consume any power; so the only thing they'd save on is die area by removing it.
imaheadcase - Saturday, September 17, 2011 - link
Ah, did not know that part. Thanks.
fic2 - Monday, September 19, 2011 - link
"I heard when Sandy Bridge came out they were considering a GPU-less version for enthusiasts who don't need it..."
Interesting, since Intel did the exact opposite - put the only GPU with half decent performance into the enthusiast 'K' series.
JonnyDough - Monday, September 19, 2011 - link
The only people who would actually consider that are businesses and home users who don't play "real" games. :P
jjj - Saturday, September 17, 2011 - link
"I believe that x86 CPU performance is what sells CPUs today"
That's not all that true anymore. There was a time when apps used by everybody required a fast CPU, but that's not the case nowadays. Just a few years ago playing HD content was a challenge on older systems, but now, if you look at usage patterns and what kind of perf is needed, the picture has changed. This is one of the reasons PC sales are not doing so great; there is no need to upgrade your system every 1-2 years. Even Windows is not driving system requirements up anymore.
In the consumer space, GPU and battery life matter more now. Intel is trying to fight all this with lower power consumption and ultrabooks, but that's far from enough. If they want to survive the ARM "tsunami" (think about the financial part too here, not just perf), they've got to push the software to be more demanding too, and maybe the easiest way is to do it on the GPU side - not in games.
MadMan007 - Saturday, September 17, 2011 - link
Intel's quarterly results say there is less to worry about than hyperbolic ARM domination headlines would lead one to think. One IDF slide showed large growth in emerging markets where the analysts aren't as able to get reliable data. Yes, PC upgrade cycles are longer, but that doesn't mean there is not net worldwide growth.
There is room for growth in both areas, it's not a zero-sum game, and some things like mobile video consumption actually go hand-in-hand with faster beefy CPUs.
medi01 - Sunday, September 18, 2011 - link
It's been a while since most users really needed faster CPUs or GPUs.
In a couple of years, why on earth would anyone but gamers need a PC? Emails, browsing, and video would be covered by tablets and the like.
dealcorn - Friday, November 4, 2011 - link
My Suzuki Alto has about the horsepower of a team of mules and it's fine, really. However, if for about the same money and fuel economy I could get 600 hp, darn right I need 600 hp. That's about where we are at with CPU power. I believe your issue is more properly stated as "there is a need to re-engineer humanity because it is not doing what I want."
Billy_Boy - Saturday, September 17, 2011 - link
"In Sandy Bridge, many of those structures are statically partitioned. If you have a buffer that can hold 20 entries, each thread gets up to 10 entries in the buffer. In the event of a single threaded workload, half of the buffer goes unused."
If you turn off HT, does this go away in Sandy Bridge?
BioTurboNick - Saturday, September 17, 2011 - link
They are talking about a hardware implementation, so it wouldn't go away by disabling hyper-threading.
Zoomer - Saturday, September 17, 2011 - link
It depends if the designers thought it would be important enough to implement. Losing 1/2 of the many resources (though probably not execution resources) is huge, and on a non-HT chip it's almost like castrating it.
BioTurboNick - Saturday, September 17, 2011 - link
Right. That's probably why Ivy Bridge is moving to completely single-thread-capable resources.
danjw - Saturday, September 17, 2011 - link
I am wondering if Ivy Bridge will be faster for gaming than Sandy Bridge-E. A lot of the improvements seem to be with threading, but more games are starting to implement threading. Sandy Bridge-E will have PCI-Express 3.0 and more memory channels, but Ivy Bridge will have faster memory.
Hrel - Saturday, September 17, 2011 - link
Will Intel FINALLY be turning on Hyperthreading on every CPU? Cause if not, that's the final straw that breaks the camel's back; I'm going AMD. It took them years to finally get a decent quad core down to 200 bucks, but then if you wanted HT it cost another 100 bucks. Ridiculous. I want to be able to buy a K series quad core with HT for under 200 bucks. Also, WHY are there USB 2.0 ports on this AT ALL?
If AMD has all USB 3.0 ports and the CPU performance is comparable I'm def switching camps.
Do you guys know if AMD has any plans on releasing SSD caching on their motherboards too? Cause that really is a "killer app" so to speak. Large SSD's are too expensive to make any sense unless you're filthy rich but 64GB with two 2TB HDD's in RAID sounds pretty great.
philosofool - Saturday, September 17, 2011 - link
When AMD releases serious competitors at the relevant price points. I hope Bulldozer kicks ass, because a solid quad core will be two hundred bucks until there is real competition.
medi01 - Sunday, September 18, 2011 - link
If you count motherboard price in, AMD is already more than competitive.
Hrel - Thursday, September 22, 2011 - link
1: I said comparable, not competitive.
2: I don't care about price. I make enough that it doesn't matter. I just care about performance. At the same time, I don't waste money, so I don't buy Extreme Editions either. I buy whatever CPU has the best performance around 200 bucks.
Point: At this point if AMD is even close (within 15%) I'm switching.
mino - Monday, September 26, 2011 - link
If price does not matter, then you shouldn't bother with desktop stuff and should go directly for 2P workstations with ECC.
Just a thought.
JKflipflop98 - Monday, October 24, 2011 - link
Hindsight is 20/20 now.
Zoomer - Saturday, September 17, 2011 - link
That stuff can, and imo should, be implemented in the filesystem.
Cr0nJ0b - Saturday, September 17, 2011 - link
I'm wondering why they wouldn't just go with all USB 3.0 ports since they are backward compatible with other USB forms. Maybe a licensing cost issue?
Zoomer - Saturday, September 17, 2011 - link
Intel's platform is really a mess and a hodgepodge nowadays. Pity.
ggathagan - Saturday, September 17, 2011 - link
There aren't enough PCIe lanes to allow for that kind of bandwidth.
DanNeely - Sunday, September 18, 2011 - link
Along with the fact that USB3 controllers are larger and need more pins on the chip to connect. They're the same reasons that AMD only has 4 USB3 ports on its most recent southbridges.
marcusj0015 - Saturday, September 17, 2011 - link
Intel invented USB... so no, there are no licensing costs that I can think of.
Aone - Sunday, September 18, 2011 - link
Is Ivy's Quick Sync in the same power-gated domain as the IGP, as it is in SB, or can Quick Sync and the IGP be switched on/off independently?
Arnulf - Sunday, September 18, 2011 - link
"Voltage changes have a cubic affect on power, so even a small reduction here can have a tangible impact."
P = V^2/R
Quadratic relationship, rather than cubic?
damianrobertjones - Sunday, September 18, 2011 - link
" As we've already seen, introducing a 35W quad-core part could enable Apple (and other OEMs) to ship a quad-core IVB in a 13-inch system."Is Apple the only company that can release a 13" system?
medi01 - Monday, September 19, 2011 - link
No. But it's the only one that absolutely needs to be commented on in an orgasmic tone in the US press (and a big chunk of the EU press too).
JonnyDough - Monday, September 19, 2011 - link
They're the only ones who will market it with a flashy Apple logo light on a pretty aluminum case. Everyone knows that lightweight pretty aluminum cases are a great investment on a system that is outdated after just a few years. I wish Apple would make cars instead of PCs so we could bring the DeLorean back. Something about that stainless steel body just gets me so hot. Sure, it would get horrible gas mileage and be less safe in an accident. But it's just so pretty! Plus, although it would use a standard engine made by Ford or GM under the hood, its drivers would SWEAR that Apple builds its own superior hardware!
cldudley - Sunday, September 18, 2011 - link
Am I the only one who thinks Intel is really wasting a lot of time and money on improvements to their on-die GPU? They keep adding features and improvements to the onboard video, right up to including DirectX 11 support, but isn't this really all an exercise in futility?
Ultimately a GPU integrated with the CPU is going to be bottlenecked by the simple fact that it does not have access to any local memory of its own. Every time it rasterizes a triangle or performs a texture operation, it is doing it through the same memory bus the CPU is using to fetch instructions, read and write data, etc.
I read that the GPU is taking a larger proportion of the die space in Ivy Bridge, and all I see is a tragic waste of space that would have been better put into another (pair of?) core or more L1/L2 cache.
I can see the purpose of integrated graphics in the lowest-end SKUs for budget builds, and there are certainly power and TDP advantages, and things like Quick-Sync are a great idea, but why stuff a GPU in a high-end processor that will be blown away by a comparatively middle-of-the-road discrete GPU?
Death666Angel - Sunday, September 18, 2011 - link
I disagree. AMD has shown that on-die GPUs can already compete with middle-of-the-road discrete graphics in notebooks. Trinity will probably take on middle-of-the-road in the current desktop space.
Your memory bandwidth argument doesn't seem to be correct, either. Except for some AMD mainboard graphics with dedicated sideport memory, all IGPs use the RAM, but a lot of them are doing fine. It is also nice to finally see higher clocked RAM be taken advantage of (see Llano 1666MHz vs 1800MHz). DDR4 will add bandwidth as well.
Once the bandwidth becomes a bottleneck, you can address that, but at the moment Intel doesn't seem to be there, yet, so they keep addressing their other GPU issues. What is wrong with that?
Also, how many people who buy high-end CPUs end up gaming 90% of the time on them? A lot of people need high-end CPUs for work related stuff, coding, CAD etc. Why should they have to buy a discrete graphics card?
Overall, you are doing a lot of generalization and you don't take into account quite a few things. :-)
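To put rough numbers on the memory-bandwidth point above, a back-of-the-envelope sketch (assuming standard 64-bit DDR3 channels; real sustained figures will be lower and platform-dependent):

    # Peak theoretical bandwidth for dual-channel DDR3 at common speed grades.
    channels = 2
    bytes_per_transfer = 8            # each channel is 64 bits wide
    for mt_s in (1333, 1600, 1866):   # mega-transfers per second
        gb_s = channels * bytes_per_transfer * mt_s / 1000.0
        print("DDR3-%d: ~%.1f GB/s peak" % (mt_s, gb_s))

That shared pool is what both the CPU cores and the IGP draw from, which is why higher-clocked memory shows up so clearly in integrated-graphics results.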
cldudley - Sunday, September 18, 2011 - link
Ironically I spend lots of time in AutoCAD, and a discrete graphics board makes a tremendous difference. Gamer-grade stuff is usually not the best thing in that arena though; it needs to be the special "workstation" cards, which have very different drivers. Quadro or FireGL.
I agree with you on the work usage, and gaming workloads not being 90% of the time, but on the other hand, workstations tend to have Xeons in them, with discrete graphics cards.
platedslicer - Sunday, September 18, 2011 - link
As a fraction of the computer market, buyers who want power over everything else have plunged. Mobility is so important for OEMs now that fitting already-existent performance levels into smaller, cheaper devices becomes more important than pushing the envelope. I still remember a time when hardly anybody gave a rat's ass about how much power a CPU consumed as long as it didn't melt down. Today, power consumption is a crucial factor due to battery life and heat.
Personally these developments make me rather sad, partly because I like ever-shinier games, and (more importantly) because seeing the unwashed masses talk about computers as if they were clothing brands makes me want to rip out their throats. That's how the world works, though. Hopefully the chip makers will realize that there's still a market for power over fluff.
Looking at it on the bright side, CPU power stagnation might make game designers pay more attention to content. Hey, you have to look on the bright side of life.
KPOM - Monday, September 19, 2011 - link
I think that's largely because for the average consumer, PCs have reached the point where CPU capabilities are no longer the bottleneck. Look at the success of the 2010 MacBook Air, which had a slow C2D but a speedy SSD, and sold well enough to last into mid-2011. Games are the next major hurdle, but that's the GPU rather than the CPU, and hence the reason it receives a bigger focus in Ivy Bridge (as it also did in Sandy Bridge compared to Westmere).
The emphasis now is having the power we have last longer and be available in smaller, more portable devices.
JonnyDough - Monday, September 19, 2011 - link
You're missing the point. They aren't trying to beef up the power of the CPU. CPUs are already quite powerful for most tasks. They are trying to lower energy usage and sell en masse to businesses that use thousands of computers.
AstroGuardian - Monday, September 19, 2011 - link
"Intel implied that upward scalability was a key goal of the Ivy Bridge GPU design, perhaps we will see that happen in 2013."
No we won't. The world ends in 2012, remember?
JonnyDough - Monday, September 19, 2011 - link
It ended in the year 2000. Hello! Y2K ring any bells? Come on, keep up with current events, would ya?
TheRyuu - Monday, September 19, 2011 - link
"I've complained in the past about the lack of free transcoding applications (e.g. Handbrake, x264) that support Quick Sync. I suspect things will be better upon Ivy Bridge's arrival."
As long as Intel doesn't expose the Quick Sync API, there is no way for such applications to make use of it, not to mention the technical limitations.
There are hints on doom9 that they know a bit about the lower level details but that it's all NDA'ed. Even with that knowledge he says that it's probably not possible or probable to do so.
You can find various rambling/rage here:
http://forum.doom9.org/showthread.php?t=156761 (Dark_Shikari and pengvado are the x264 devs).
tl;dr: http://forum.doom9.org/showthread.php?p=1511469#po... (to the end of the thread)
fic2 - Monday, September 19, 2011 - link
I would also wonder who (software wise) would be willing to put a lot of resources into supporting something that isn't really available on most SB platforms - or at least not available without jumping through hoops (correct mb, correct chip, 3rd party software, etc).
fic2 - Monday, September 19, 2011 - link
"By the time Ivy Bridge arrives however, AMD will have already taken another step forward with Trinity."
I wonder how realistic this is considering that AMD can't even get Bulldozer out the door.
My money is on Ivy Bridge showing up before Trinity.
Beenthere - Monday, September 19, 2011 - link
Considering Trinity was shown at IDF up and running, and the fact that Trinity and other AMD next gen products were developed concurrently with Zambezi and Opteron Bulldozer chips - which have been shipping by the tens of thousands already - I'd say Trinity will be here in Q1 '12.
fic2 - Monday, September 19, 2011 - link
"Opteron Bulldozer chips - which have been shipping by the tens of thousands already"
And, yet, nobody can benchmark them.
I hope that I am wrong, but given AMD's continual delays shipping the desktop BD I am not holding my breath.
Whichever comes first gets my money - assuming that BD is actually competitive with SB performance.
thebeastie - Tuesday, September 20, 2011 - link
You talk about support for Handbrake, but to put it harshly, your mind is stuck in the past-gen device era.
I simply grab a full DVD and run MakeMKV on it to just store it unmodified in a single file and copy it to my iPad 2 directly.
Plays perfectly fine under avplayerhd.
I consider it that you would have to be insane (as in you think you're an onion) to bother Handbraking your videos if you've got a device like the iPad 2 that can just play them straight.
If you're the hoarder type that insists on watching Rambo 4 etc. every week and needs to pack 100+ full movies onto your single device at the same time, you're a freak, so pipe your niche lifestyle comments to /dev/null.
I don't understand why you have time to bother shrinking/converting your movies all the time instead of just getting sick of some of them and putting new stuff on from time to time.
TheRyuu - Tuesday, September 20, 2011 - link
8.5GB for a movie seems a bit impractical for an iPad.
thebeastie - Wednesday, September 21, 2011 - link
A full 8GB is big, but it still copies over amazingly quickly to a 64GB iPad 2, and a lot of DVDs don't reach that full size anyway.
If you bought a Honeycomb tablet and put SD-slot storage on it, I am sure it would be an extremely painfully slow copying experience using SD over built-in flash; maybe this is why Apple avoided SD slots in the first place. Built-in flash is lightning fast and less of a draw on the battery.
Having the full file on the PC and just copying it over in 2 minutes vs bothering to convert, I choose the full copy every time.
Once I have watched something, it takes at least a year before I consider watching the same thing again.
NeBlackCat - Wednesday, September 21, 2011 - link
The GPU part would be streets ahead, the drivers would be good, Tegra 3 (4..5...) on the 22nm tri-gate process is an absolutely mouth-watering proposition, and who knows what else could have been accomplished with the engineering effort saved on Intel GPUs and the (so far) fruitless efforts to push x86 into smart consumer devices.
On the downside, there'd be no AMD.
mrpatel - Wednesday, September 21, 2011 - link
The iMac 2011 27" model ships with the Z68 chipset.
So the question is whether or not it would support IVY BRIDGE CPUs (given that all other things like TDP etc. requirements match up).
I wonder if IVY BRIDGE CPUs would require a full EFI or kernel module upgrade to be supported? (I mean I really don't care if the USB 3.0 works, but I do care about the new design, GPU performance and lower power to performance ratio compared to Sandy Bridge!)
caggregate - Friday, September 23, 2011 - link
So being that this is a current/future platform, what's the big deal about support for DDR3L (which as a standard was ratified in July 2010)? I realize the specs of DDR3U ("Ultra low voltage" 1.25V) are not "final" yet, but you'd think it would be implemented given that DDR3U has been available to engineers (according to Hynix/Google) since June 2010.
fb39ca4 - Sunday, September 25, 2011 - link
No OpenGL 4 support? Seriously?
OCguy - Tuesday, September 27, 2011 - link
Are they even trying anymore?
Olbi - Tuesday, October 18, 2011 - link
I wonder why Intel added DX11 but no OpenGL 4? Both are needed by developers of apps, and DX11 isn't needed by almost any app. OpenGL 4 is needed by Linux desktops like KDE 4, GNOME, Xfce and others. So why does Intel still not support it?
tkafafi - Tuesday, March 20, 2012 - link
Why do the new Intel chipsets (series 7) still contain so many (10) USB2 ports? Would any PC/laptop manufacturer choose to use a USB2 port instead of an available USB3 port from the chipset? E.g. would they use 2 USB2 + 2 USB3 instead of 4 USB3 from the chipset?
I know PC manufacturers are using this configuration (2 USB2 + 2 USB3) because now they need to support USB3 through an external controller, so they are saving cost by using a 2-port controller. But once series 7 chipsets arrive with native USB3 support, there would be no cost advantage to doing this. Is this to de-risk any interoperability issues with older USB2 devices (i.e. if for some reason USB3 ports don't work well with some existing USB2 devices)?
Thanks