If you read our last article, it is clear that once your applications are virtualized, you have many more options for building your server infrastructure. Let us know how you would build your "dynamic datacenter" and why!

48 Comments

  • joekraska - Sunday, October 11, 2009 - link

    Dell R710s with 72GB of RAM. Dual 10GbE, aggregating to Force10 10GbE top-of-rack switches. 10GbE Cisco line cards to the core.

    Dell EqualLogic tiered storage cluster for the VMDK files in Tier 1. In Tier 2, a NetApp NFS volume with deduplication turned on.

    Joe.

  • Czar - Saturday, October 10, 2009 - link

    We have a six-host ESX setup on IBM LS21 blades, each with 2x dual-core AMD processors. They are a bit old, but we have no problems with regard to CPU performance. Since these are IBM blades without the memory expansion, we only have two 1Gb NICs per host. That has not been much of a problem, though we have had to link a few VMs together so they always run on the same host.

    But yes, with VMs, CPU is not the limiting factor and neither is memory. Network and disk I/O are the limiting factors, but those are both hardware related and have nothing to do with virtualization.
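
    (A rough back-of-the-envelope Python sketch of why two 1Gb NICs become the ceiling long before CPU or memory do; the per-VM traffic figure is an assumption for illustration, not a measurement.)

    # Back-of-the-envelope NIC sizing; all numbers are illustrative assumptions.
    NICS_PER_HOST = 2
    NIC_MBPS = 1000                  # 1Gb uplinks on the LS21 blades
    USABLE_FRACTION = 0.8            # leave headroom for bursts and management traffic
    AVG_VM_MBPS = 50                 # assumed average per-VM network load

    usable_mbps = NICS_PER_HOST * NIC_MBPS * USABLE_FRACTION
    vms_before_saturation = usable_mbps // AVG_VM_MBPS

    print(f"Usable bandwidth per host: {usable_mbps:.0f} Mbit/s")
    print(f"VMs per host before the NICs saturate: {vms_before_saturation:.0f}")
    # With these assumptions the uplinks top out around 32 average VMs,
    # well before two dual-core Opterons run out of CPU headroom.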

    If I had my way and were to design my work setup again, I would go with HP blades, same size, dual socket, just more cores now :) and more memory, and go with 4 NICs.

    Then expand this setup as needed.

    Oh, and I would seriously think about going with iSCSI after seeing the Citrix test setup (I think it was Citrix; it was on brianmadden.com).
  • Quarkhoernchen - Saturday, October 10, 2009 - link

    If I could start from scratch:

    Servers:

    Supermicro SuperServers (Twin 2U Series with 1200 Watt redundant PS)
    http://www.supermicro.com/products/system/2U/6026/...">http://www.supermicro.com/products/system/2U/6026/...

    Each node with the following configuration:
    2x Xeon X5550 / X5560 / X5570
    6x 8 GB Dual-Rank RDIMM DDR3-1066 (48 GB) (upgradable to 96 GB)
    1x Additional Intel PRO 1000 ET quad-port server adapter (a total of 6 NICs per node)

    I would definitely take a look at network adapters with the new Intel 82576 chipset (16 TX/RX queues, VMDq support). Next year Intel is coming out with the new 82599 chipset (128 TX/RX queues) for 10Gb Ethernet.

    Storage:

    EqualLogic PS6000XV / PS6000E

    Network:

    Cisco Catalyst 3750G for user traffic
    Cisco Catalyst 3750-E for storage traffic
    Cisco Catalyst 3750G for VMotion / FTI traffic

    Virtual Network Configuration:

    2x 1 Gbit for user traffic (per node)
    2x 1 Gbit for storage traffic (per node)
    1x 1 Gbit for VMotion (per node)
    1x 1 Gbit for FTI (per node)

    If more bandwidth per node is required, I would consider buying a normal 1U/2U server with support for two or more additional network cards.
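
    (A quick Python sanity check of the NIC layout above; the traffic-class names are just labels, and the totals simply have to match the six ports per node: two onboard plus the quad-port PRO/1000 ET.)

    # Per-node NIC budget for the layout described above.
    nic_plan = {
        "user traffic":    2,   # teamed, to the Catalyst 3750G
        "storage traffic": 2,   # teamed, to the Catalyst 3750-E (iSCSI to the EqualLogic)
        "VMotion":         1,   # to the Catalyst 3750G
        "FTI":             1,   # to the Catalyst 3750G
    }

    physical_nics = 2 + 4       # 2 onboard ports + 1 quad-port Intel PRO/1000 ET

    assigned = sum(nic_plan.values())
    assert assigned == physical_nics, (
        f"plan uses {assigned} NICs but each node only has {physical_nics}")
    print(f"{assigned} of {physical_nics} NICs assigned - the plan fits the hardware")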

    regards, Simon

  • LeChuck - Friday, October 9, 2009 - link

    I vote for dual-socket rack servers. They offer great performance and are probably the best bang for the buck. Start with at least two to form an ESX/vSphere resource cluster, preferably 3 or more, depending on your needs. Ideally you have them in separate buildings/rooms so you are prepared for an outage of one of them if there is a major problem in the housing room. You have to have the licenses for vMotion and vCenter Server and all that, that goes without saying. And I'm only talking VMware here... obviously. ;)
  • LoneWolf15 - Friday, October 9, 2009 - link

    Either dual-socket blade servers or dual-socket rack servers connected via fiber to attached storage devices in RAID. Saving rack space and power would be a priority, along with redundancy to protect against storage failure.
    The heaviest stuff (I'd call us small enterprise) we run would scream on a dual quad-core with 32GB of RAM; a dual hex-core would be overkill for some time to come.
  • LoneWolf15 - Friday, October 9, 2009 - link

    Actually, since we're talking virtualization, I guess bumping it to 64GB for expansion headroom and clustering some dual-socket quad-cores or hex-cores would keep us ahead of the curve for a while.

    Of course, that assumes I had the budget.
  • Brovane - Friday, October 9, 2009 - link

    We use 5 Dell R900s for our VMware ESX cluster. We just use 2 CPUs in each box and leave the other 2 sockets empty. We find we run out of memory before we run out of CPU. If we were buying new hardware now, we would probably buy R710 servers.
  • VooDooAddict - Friday, October 9, 2009 - link

    I'd go with "large" quad-socket rack servers for PRODUCTION virtual hosts for the following reasons, in order of importance:

    1. Memory cost. Quad-socket servers typically have more memory slots, enabling more RAM at lower DIMM densities and therefore lower cost.
    2. VMware licensing price per virtual host.
    3. Fewer, larger virtual hosts reduce the overhead and the number of VMotions needed for load balancing compared with many blade hosts.

    I'd still like to keep the systems as dense as possible; 1U (rare for 4-socket) or 2U servers would be ideal. Massive 4U and 6U quad-socket systems with room for internal disk arrays are unneeded, as all the storage (besides the virtual host OS mirror) would be on a SAN.


    That said... DEV and TEST virtual hosts are better off on dual-socket 1U hosts, given the added flexibility needed and the lower costs.
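
    (A quick Python sketch of the memory-cost argument in point 1 above; the DIMM prices are hypothetical placeholders, the point is only that more slots let you reach the same capacity with cheaper, lower-density modules.)

    # Memory-cost comparison; DIMM prices are hypothetical, slot counts are typical
    # of the boards mentioned in this thread (R710-class vs R900-class).
    TARGET_GB = 96
    price_per_dimm = {4: 100, 8: 280}   # hypothetical: 8GB RDIMMs carry a premium

    def cheapest_fill(slots, target_gb):
        """Cheapest way to reach target_gb within the available slots."""
        options = []
        for size, price in price_per_dimm.items():
            dimms_needed = -(-target_gb // size)     # ceiling division
            if dimms_needed <= slots:
                options.append((dimms_needed * price, size))
        return min(options)

    for name, slots in [("dual-socket (18 slots)", 18),
                        ("quad-socket (32 slots)", 32)]:
        cost, size = cheapest_fill(slots, TARGET_GB)
        print(f"{name}: {TARGET_GB} GB using {size} GB DIMMs = ${cost}")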
  • Ninevah - Friday, October 16, 2009 - link

    Licensing for VMware is based on the number of processor sockets in the host, so you would pay the same price for 2 servers with 2 sockets as you would for 1 server with 4 sockets.
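
    (The same arithmetic as a tiny Python sketch; the per-socket price is a made-up placeholder, not a real quote.)

    # Per-socket licensing: the number of hosts doesn't matter, total sockets do.
    LICENSE_PER_SOCKET = 3000                             # hypothetical list price
    two_dual_socket_hosts = 2 * 2 * LICENSE_PER_SOCKET
    one_quad_socket_host = 1 * 4 * LICENSE_PER_SOCKET
    print(two_dual_socket_hosts == one_quad_socket_host)  # True: same license bill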
  • andrewaggb - Friday, October 9, 2009 - link

    The company I work for was just acquired by a larger company, and we just took over a 5,400-square-foot datacenter. I see an opportunity to recommend getting some real hardware :-).

    I'm curious what people who successfully run larger VM clusters would use for servers, licensing, storage, backups, switching, etc. for, say, 40-60 virtual machines (most lightly used), and what they think it would cost. Most of our VMs are running VoIP applications of some kind or low-traffic web servers.
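
    (Not an answer on pricing, but a rough Python host-count sketch for that kind of load; every per-VM and per-host figure below is an assumption for lightly used VoIP/web guests, not a measurement.)

    import math

    # Rough host sizing for ~40-60 lightly used VMs; all figures are assumptions.
    VM_COUNT      = 60    # plan for the top of the range
    RAM_PER_VM_GB = 4     # assumed average for light VoIP / web guests
    VCPUS_PER_VM  = 1     # lightly used guests often need a single vCPU

    HOST_RAM_GB   = 96    # e.g. a dual-socket Nehalem box, generously populated
    HOST_CORES    = 8     # 2x quad-core
    VCPU_PER_CORE = 4     # conservative consolidation ratio for light loads
    SPARE_HOSTS   = 1     # N+1 so one host can fail or be patched

    hosts_for_ram = math.ceil(VM_COUNT * RAM_PER_VM_GB / HOST_RAM_GB)
    hosts_for_cpu = math.ceil(VM_COUNT * VCPUS_PER_VM / (HOST_CORES * VCPU_PER_CORE))

    hosts_needed = max(hosts_for_ram, hosts_for_cpu) + SPARE_HOSTS
    print(f"RAM wants {hosts_for_ram} hosts, CPU wants {hosts_for_cpu}; "
          f"buy {hosts_needed} with N+{SPARE_HOSTS} redundancy")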
