A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner, and myself. The goal was to talk about the past, present and future of virtualization. In preparation for the webcast we solicited questions from all of you; unfortunately, we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware and our own Johan de Gelas all agreed to answer some of your questions in a 6-part series we're calling Ask the Experts. Each week we'll showcase three questions you guys asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.

If you'd like to see your question answered here leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.

Question #1 by Andrew D.

How would you compare the product offerings of VMware to those of its key competitors? What kind of performance hit can I expect running Windows within a virtualized environment? Are there any advantages/disadvantages to leveraging an Intel platform as opposed to an AMD one for a VMware solution?

Answer #1 by Johan de Gelas, AnandTech Senior IT Editor

The performance hit depends of course on your application and your hardware. I am going to assume your server is a recent one, with support for hardware-assisted paging and hardware virtualization. You can get an idea of the performance hit by looking at perfmon and the Task Manager in Windows. In the Performance tab of Task Manager you can enable "Show kernel times". The more time your application spends in the kernel, the higher the performance hit. The performance hit also depends on the amount of I/O that you have going on.
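The kernel-time share that Task Manager shows can also be sampled programmatically. A minimal sketch using only the Python standard library (the `io_heavy` workload is a made-up example, not anything from the article):

```python
import os

def kernel_time_share(workload):
    """Run a workload and return the fraction of its CPU time spent in the kernel.

    A high kernel share suggests higher virtualization overhead, since
    privileged kernel work is what the hypervisor has to intercept.
    """
    before = os.times()
    workload()
    after = os.times()
    user = after.user - before.user
    system = after.system - before.system
    total = user + system
    return system / total if total else 0.0

# Example workload: repeated small writes force time into the kernel.
def io_heavy():
    with open(os.devnull, "wb") as f:
        for _ in range(20000):
            f.write(b"x" * 4096)

share = kernel_time_share(io_heavy)
print(f"kernel share: {share:.0%}")
```

An I/O-heavy workload like this will typically report a much higher kernel share than a pure number-crunching loop, mirroring Johan's point about which applications suffer most in a VM.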

If your app spends a lot of time in the kernel and has high amounts of I/O going on, the performance hit may be high (15-30%). But that does not mean your application will have to suffer this performance hit. If you spend more time on optimizing (database buffering, jumbo frames) and if you use paravirtualized drivers (VMXnet, PVSCSI), the performance hit will get a lot smaller (5-10%). In short, the performance hit can be high if you just throw your native application in a VM, but modern hypervisors are able to keep the performance hit very small if you make the right choices and take some time to tune the app and the VM.

If your application is not I/O intensive, but mostly CPU intensive, the performance hit can be unnoticeable (1-2%).

AMD versus Intel: we have numerous articles on that at AnandTech. There are two areas where Intel has an objective advantage. The first one is licensing. The twelve-core AMD Opteron 6100 and six-core Xeon 5600 perform more or less the same. However, if you want to buy VMware vSphere Essentials (which is an interesting option if you can run your services on 3 servers), you get a license for 3 servers, 2 CPUs per server, and 6 cores per CPU. You have to buy additional licenses if you have more cores per CPU.
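The licensing arithmetic can be made concrete. A hedged sketch using only the limits Johan quotes (3 hosts, 2 sockets per host, 6 cores per socket under the base vSphere Essentials license); the function name is ours, not VMware's:

```python
# vSphere Essentials limits as quoted in the answer above.
ESSENTIALS_HOSTS = 3
ESSENTIALS_SOCKETS_PER_HOST = 2
ESSENTIALS_CORES_PER_SOCKET = 6

def fits_base_license(hosts, sockets_per_host, cores_per_socket):
    """True if the cluster is covered without buying additional licenses."""
    return (hosts <= ESSENTIALS_HOSTS
            and sockets_per_host <= ESSENTIALS_SOCKETS_PER_HOST
            and cores_per_socket <= ESSENTIALS_CORES_PER_SOCKET)

# Six-core Xeon 5600: covered by the base license.
print(fits_base_license(3, 2, 6))
# Twelve-core Opteron 6100: needs additional per-core licenses.
print(fits_base_license(3, 2, 12))
```

This is why two CPUs with comparable performance can have very different licensing costs under a per-core cap.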

If your IT strategy involves buying servers with the best RAS capabilities out there, Intel also has an advantage. Servers based on the Xeon 7500 series have the best RAS features available in the x86 space and can also address the most memory. These servers need more power than typical x86 servers, but you can consolidate more VMs on them.

For all other cases, and that is probably 80-90% of the market, only one suggestion: read our comparisons in the IT section of AnandTech :-). The situation can change quickly.

Question #2 by Colin R.

How is the performance of virtualization of high throughput devices like networking and storage developing?

Answer #2 by Rich Uhlig, Intel Fellow

One trend is that new standards are being developed to make I/O devices more “virtualization friendly”. For example, the PCI-SIG has developed a specification for PCI-Express devices to make their resources more easily shareable among VMs. The specification – called “Single Root I/O Virtualization” (or SR-IOV for short) – defines a way for devices to expose multiple “virtual functions” (VFs) that can be independently and directly assigned to guest OSes running within VMs, removing some of the overheads of virtualization in the process. As an example, Intel supports SR-IOV in our recent network adaptors. A big challenge with direct assignment of I/O devices is that it can complicate other important virtualization capabilities like VM migration, since exposing a physical I/O resource directly to a guest OS can make it harder to detach from the resource when moving VM state to another physical platform. We’ve been working with VMM vendors to tackle these issues so that we can get the performance benefits of direct I/O assignment through SR-IOV, while preserving the ability to do VM migration.
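On Linux, a device's SR-IOV virtual functions are visible as `virtfnN` symlinks under the physical function's PCI sysfs node. A minimal sketch of enumerating them (the PCI address in the example is a placeholder; on a machine without such a device the function simply returns an empty list):

```python
import glob
import os

def list_virtual_functions(pci_addr):
    """Return the PCI addresses of a device's SR-IOV virtual functions.

    Each enabled VF appears as a 'virtfnN' symlink under the physical
    function's sysfs node; an empty list means no VFs are enabled
    (or no such device exists).
    """
    links = glob.glob(f"/sys/bus/pci/devices/{pci_addr}/virtfn*")
    return sorted(os.path.basename(os.readlink(link)) for link in links)

# Hypothetical PCI address of an SR-IOV capable NIC's physical function.
print(list_virtual_functions("0000:03:00.0"))
```

Each VF address returned here is what a hypervisor would hand to a guest for direct assignment, which is exactly the migration complication Rich describes: the guest now holds a reference to a physical resource.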

Question #3 by Bill L.

Are the days of bare metal OS installs numbered? If so, when should we expect to see ALL NEW servers ship with a hypervisor? Will hypervisors have virtual switches in them in the future, or will network and storage traffic bypass the hypervisor altogether using technologies such as SR-IOV, MR-IOV, VMDirectPath, etc.?

Answer #3 by Rich Brunner, VMware Chief Platform Architect

I do expect that at some point, bare metal hypervisor installs will reach a plateau in the enterprise and service provider environments, but I do not expect that embedded hypervisors will be the only alternative. There has been some industry buzz about PXE boot of hypervisors (this is much more than PXE boot of an installer) and a move toward a truly stateless model. I expect to see more of this; stay tuned. SMB may still want a turn-key solution which either has an installed hypervisor from the Server Manufacturer or an embedded hypervisor.

I do not expect that the network and storage control traffic will ever "bypass" the hypervisor; the hypervisor will always be involved in ensuring QoS, ACLs, and routing for this traffic. Even for SR-IOV, there is a fair amount of control required by the hypervisor to make this work. I can see that the actual data traffic can bypass the hypervisor to reduce CPU overhead provided that the hypervisor has sufficient audit control of this data. VMware and others are working to ensure that in the future for SR-IOV devices.

MR-IOV can be transparent to the hypervisor on a single system instance, but the load balancing is a perfect target for control by a centralized management agent across the multiple system images that share the resource (e.g., blades in a chassis sharing a high-performance NIC that is load-balanced by the management agent across the blades).

Comments

  • Zibi - Wednesday, July 28, 2010 - link

    Storage is kinda crude - mirror on 2 internal HDDs.
    After the tests we will connect FC cards to our SAN.
    However the disk config should not affect the test much:
    1. It's the same mirror config as the native setup
    2. The DB is smaller than available RAM
    3. I've made some tests on our older machines with RDM drives over FC.
    There were tremendous differences in the SQLIO results (around 40,000 IOPS vs. 1,300 in 8KB random reads), but the maximum CS/s was around 25,000 and the whole test was much slower.
  • docbones - Thursday, July 22, 2010 - link

    The price difference between Workstation and Fusion is huge. I really would like to see VMware re-offer a home-use license for Workstation.
  • dgz - Friday, July 23, 2010 - link

    Can you please elaborate on your choice of Workstation in a home environment? It is by no means "light" compared to VirtualBox (which offers pretty much the same features), and if you really need some extra stuff there's always ESXi to play with.
  • docbones - Friday, July 23, 2010 - link

    I do a lot of beta testing at work and at home. Home testing is more personal-product based than work, and I like the feature set of Workstation (which I use at work). VMware used to offer a home license, which was very nice.

    I am looking at trying out VMware Server running on Windows Home Server to see if that will work instead. The one area of concern there is how well it handles games and 3D engines.

    As far as the MS version goes, I haven't really tried it; since I also do testing with non-MS OSes, I haven't been very interested in it. (Plus I would miss the VMware library.)
  • GeorgeH - Thursday, July 22, 2010 - link

    Is GPU virtualization reasonably suitable for gaming going to be possible anytime soon?

    As an add-on question, does OnLive (the cloud gaming service) somehow virtualize GPU resources, or do they have a discrete physical GPU for every client? If it's a trade secret, some educated speculation would still be interesting.
  • dgz - Friday, July 23, 2010 - link

    GPU virtualization is quite possible with Quadro and FireGL cards, combined with an IOMMU-capable CPU + motherboard, which is exactly what OnLive is doing. The tech is not ready for home deployment at this time, though. I know, as I've been looking into this for quite some time now. Short answer: not going to happen (hassle-free and without sinking money/time) any time soon.
  • justaviking - Thursday, July 22, 2010 - link

    My QUESTION is this: Is there any value or benefit of virtualization for the average consumer? Even if they don't know what it is, will it soon find its way into laptops or desktops sold to the general population in places like Best Buy? If so, why?
  • redisnidma - Thursday, July 22, 2010 - link

    I'd love to see a second part with some AMD experts as well, to get their point of view on this topic. Since the Santa Rosa Opterons, AMD has been taking virtualization performance very seriously.

    Please guys, if you're going to cover a topic about virtualization, try to have different POVs from different vendors.
  • spddemon - Friday, July 23, 2010 - link

    Intel vs. AMD really isn't a valid argument in the virtualization arena. Both companies do the same thing, and at any point in time either one may be the top dog. Intel's developers appear to work more closely with VMware on some of the new offerings than AMD's have. That could also be related to profits, though: Intel has had great profits, AMD not so much.

    Both companies will be at VMworld, so you could get AMD's take on AMD vs. Intel in a virtualized environment if you like... AMD will say they win because they have a 12-core CPU. Intel will say they win because they have an 8-core CPU capable of 2 threads per core...

    For most companies, it comes down to cost vs. power vs. performance.
  • marraco - Thursday, July 22, 2010 - link

    Since it is easy today to plug several monitors, mice, and keyboards into a common desktop PC, why do virtualization products not allow multiple users to operate different virtual computers on a single PC?

    Is the market not important enough?

    Entire families would be able to buy a single powerful computer instead of many weaker ones. Public places that offer computer services would be able to save money on hardware. Why is that still not appealing to virtualization companies?

    Are there plans to add these features?

    It seems like virtualization products are already very near that goal; they just need to allow assigning certain hardware resources to specific virtual computers. Is that true, or is it really (much) more complex to provide that feature?
