8 Comments

  • bakerzdosen - Thursday, May 22, 2008 - link

    OK, I forgot to comment on that (since you asked in the article).

    We've found the x4450s to be reliable... except we've got one bug that's killing us. It could be a Veritas Volume Manager bug, but at this point I really couldn't say what it is with 100% certainty. Each system has crashed (hard) once in the past six months. Other than that, they've been fine. Our current state is not acceptable, but they are still much more reliable than any Windows box we've got out there doing the same job... (Our apps tend to push hardware a LOT - not like your typical Apache/PHP sort of apps, FWIW.)

    All in all, I REALLY like them.
  • bakerzdosen - Thursday, May 22, 2008 - link

    Well, I personally LOVE the x4450s. It simply amazes me how fast they are. They are overkill for most of our customers, but when you compare them to something similar in the SPARC world (say, oh, I dunno, a V890), they are about 15-20% of the price.

    Admittedly it's apples (note lower case) and oranges, but it's just a fast machine...

    http://browse.geekbench.ca/geekbench2/top
  • MGSsancho - Saturday, May 3, 2008 - link

    Looking back at the other comments, I would imagine all the issues you asked about are important to different people. How about looking into mixed blade offerings? Sun's blades apparently support Intel, AMD, and Niagara procs in the same chassis - not cheap, but nothing to scoff at either. Either way, keep up the IT blogging =)
  • tjoynt - Tuesday, April 29, 2008 - link

    In my experience with companies running their own datacenter, space can be an issue, but power and cooling are far more limiting. Indeed, greater density from blade or 1/2U 2/4-way multicore systems tends to exacerbate the power and cooling issues. Thus efficiency becomes critical, not because of the cost of the electricity, but the limits the datacenter infrastructure places on expansion.
  • JohanAnandtech - Wednesday, April 30, 2008 - link

    That is indeed my experience too. It seems that in many cases (there are exceptions: the ones the two posters give as examples) power density is very important.

    The question is whether there is any merit to a high-performance blade... I feel in most cases it is best to go with a lower-power blade instead. If you really need that kind of top performance, 1U servers now deliver densities that rival high-performance blades, but they are easier to cool. That brings me to my other question: how important is the reduced cabling with blades, in your opinion?
  • jnusbaum - Monday, April 28, 2008 - link

    In my industry (finance), many data centers are always full. By the time we get things built, we already need/have way more machines than we planned for. Remote sites and colo are options but can't be used for a lot of things because of security and bandwidth (into colo and remote sites) considerations. So yes, density matters, and it is always good to use less space rather than more, all other things being equal (which they aren't).
  • Marquis - Monday, April 28, 2008 - link

    While I certainly wouldn't call this "typical" in any sense of the word, I recently did some work where density was a *huge* deal.

    Essentially, we needed to pack in about 13K systems (not a typo) into as little space as possible.

    Unfortunately, vendor choice was somewhat limited, so we ended up specifying HP blade servers. But that certainly ended up saving quite a bit of rack space.
  • MGSsancho - Monday, April 28, 2008 - link

    If one or two people can service a box, that matters, and I would imagine cost of parts does too. Commodity parts are awesome. But I work for companies that are cheap cheap cheap, so my opinion prolly doesn't matter =P
