If you have read our last article, it is clear that once your applications are virtualized, you have many more options for building your server infrastructure. Let us know how you would build your "dynamic datacenter" and why!


  • mlambert - Saturday, October 10, 2009 - link

    There are multiple whitepapers on the benefits of NFS over VMFS-provisioned LUNs, but some key points are:

    Thin VM provisioning by default
    Backup/restore of individual VMs rather than entire datastores

    Plus, when you tie that into NetApp storage, you get flexible volumes for your NFS datastores (meaning on-the-fly shrink and grow of any or all datastores), and WAFL gets to do the data de-duplication (meaning it understands the blocks and can do better single-instance storage). Add in SnapMirror for all your NFS datastores and you have an easy-to-maintain, replicated, portable, de-duplicated solution for all your business continuity requirements.

    I kinda sound like an NTAP salesman here, but it really is the best for ESX.

    Also, for any of the FC purists: remember that the Oracle Austin datacenter runs almost all of its databases on NFS. Only a select few remain on FC, and those are all on DMX.
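
    To illustrate the per-VM backup/restore point: on an NFS datastore each VM is just a directory of .vmx/.vmdk files on the share, so you can copy a single VM out without touching the rest of the datastore. Here is a rough Python sketch (the mount point and backup path are made up, and you would want a VM or volume snapshot first for a consistent copy):

    ```python
    import shutil
    from pathlib import Path

    # Hypothetical paths: an NFS datastore mounted on a backup host,
    # and a local directory to copy individual VMs into.
    DATASTORE_MOUNT = Path("/mnt/nfs_datastore1")
    BACKUP_ROOT = Path("/backups/vms")

    def backup_vm(vm_name: str) -> Path:
        """Copy one VM's directory out of the NFS datastore."""
        src = DATASTORE_MOUNT / vm_name
        if not src.is_dir():
            raise FileNotFoundError(f"No directory for VM {vm_name!r} in {DATASTORE_MOUNT}")
        BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
        # Copies only this VM's files (.vmx, .vmdk, .nvram, ...); take a VM or
        # volume snapshot first if you need a crash-consistent copy.
        return shutil.copytree(src, BACKUP_ROOT / vm_name)

    if __name__ == "__main__":
        print(backup_vm("example-vm"))
    ```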
  • tomt4535 - Wednesday, October 7, 2009 - link

    Dual-socket blades for sure. We use HP C7000s here with BL460c servers. With the amount of RAM you can put in them these days and the ease of use with Virtual Connect, you can't go wrong.
  • ltfields - Wednesday, October 7, 2009 - link

    We use C7000s as well with BL465c G6 boxes, and they scream on performance. The sweet spot for us right now is still 32GB of RAM, but I wouldn't be surprised if it's 64GB next year or possibly 128GB. We don't use Virtual Connect yet, but we're considering it, probably when we make the jump to 10GbE switching...
  • HappyCracker - Wednesday, October 7, 2009 - link

    Our setup is quite similar. We use BL465c blades for most hosts, and have peppered in a few BL685c servers for applications with more CPU or memory requirements. Overall, management of blades is just better integrated than that of traditional racked servers. I think the modularity of the smaller blades allows a bit more flexibility across a virtualized server environment; a host outage doesn't affect as many guests.

    Instead of Virtual Connect, we jumped to Cisco 10GbE for the network side and the integrated 9124 switches for the FC traffic. It's knocked the FC port count way down, gotten rid of the rat's nest of cables, and made management easier overall.
  • Casper42 - Wednesday, October 7, 2009 - link

    @HappyCracker: How many Cisco 3120X switches do you have in each chassis? Right now Cisco only offers 1Gb to each blade and then 10Gb uplinks.

    So I would imagine you have at least 4 (2 layers x left+right modules) so you can separate your VM traffic from your console/vMotion/HA/etc. traffic. Or are you stuffing all of that into a single pair of 1Gb NICs?

    What's your average VMware density on the 465? 10:1, 15:1?
  • HappyCracker - Thursday, October 8, 2009 - link

    It ranges from about 9:1 to 15:1; the 685c is run much tighter with 6:1 at its most loaded. DRS is used and guests are shuffled around by the hosts as necessary.

    As for the network aggregators, there are six in each chassis, then the two FC modules rounding out the config.
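
    For a rough sense of where those ratios come from (illustrative numbers only, not from this environment), memory is usually the first limit on a 32GB dual-socket blade:

    ```python
    # Back-of-the-envelope consolidation estimate; all numbers are assumptions.
    host_ram_gb = 32            # e.g. a 32GB BL465c, as discussed above
    hypervisor_overhead_gb = 2  # assumed reservation for the hypervisor itself
    avg_vm_ram_gb = 2.0         # assumed average guest memory allocation

    vms_per_host = int((host_ram_gb - hypervisor_overhead_gb) // avg_vm_ram_gb)
    print(f"~{vms_per_host}:1 before any memory overcommit")  # ~15:1, in line with the figures above
    ```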
  • rcr - Wednesday, October 7, 2009 - link

    Is there a way to show the results of the poll without voting? I'm not an IT expert or anything like that, but it would be pretty interesting to see what the experts prefer.
  • Voo - Thursday, October 8, 2009 - link

    I second that and just clicked "Something Else" which shouldn't distort the outcome too much.
