If you are just trying to get your application live on the Internet, you might be under the impression that "green IT" and "Power Usage Effectiveness" are things the infrastructure people should worry about. As a hardware enthusiast working for an internet company, you probably worry a lot more about keeping everything up and running than about the power budget of your servers. Still, it will have escaped no one's attention that power is a big issue.

 

But instead of reiterating the clichés, let us take a simple example. Say you need a humble setup of four servers to run your services on the internet, each consuming 250W when running at high load (CPU at 60% or so). If you are based in the US, that means you need about 10-11 amps: 1000W divided by 110V, plus a safety margin. In Europe, you need about 5 amps (voltage = 230V). In Europe you pay up to $100 per month per amp; US prices typically vary between $15 and $30 per amp per month. So depending on where you live, it is not uncommon to pay something like $300 to $500 per month just to feed electricity to your servers. In the worst case, you are paying up to $6000 per year ($100 x 5 amps x 12 months) to keep a very basic setup up and running. That is $24,000 over four years. If you buy new servers every four years, you probably end up spending more on power than on the hardware!
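As a back-of-the-envelope check of that arithmetic, here is a minimal sketch in Python. The per-amp prices and the 15% safety margin are simply the assumptions stated above; actual colocation quotes obviously vary.

# Rough colocation power-cost estimate based on the figures above.
# The per-amp prices are the article's assumptions, not universal rates.

def monthly_power_cost(servers, watts_each, voltage, price_per_amp, margin=1.15):
    """Estimate the monthly fee when the datacenter bills per reserved amp."""
    total_watts = servers * watts_each            # e.g. 4 x 250W = 1000W
    amps_needed = total_watts / voltage * margin  # add a safety margin
    return amps_needed * price_per_amp

# Europe: 230V, up to $100 per amp per month
eu = monthly_power_cost(4, 250, 230, 100)
# US: 110V, $15 to $30 per amp per month
us_low = monthly_power_cost(4, 250, 110, 15)
us_high = monthly_power_cost(4, 250, 110, 30)

print(f"Europe: ~${eu:.0f}/month, ~${eu * 48:.0f} over 4 years")
print(f"US: ~${us_low:.0f} to ${us_high:.0f}/month")

With those assumptions the script lands on roughly $500 per month (about $24,000 over four years) in Europe and $150 to $315 per month in the US, in line with the figures above.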

Keeping an eye on power when choosing hardware and software components is thus much more than naively following the "green IT" hype; it is simply the smart thing to do. In this article we take another shot at understanding how choosing your server components wisely can give you a cost advantage, focusing on low power Xeons in a consolidated Hyper-V/Windows 2008 virtualization scenario. Do low power Xeons save energy and costs? We designed a new and improved methodology to find out.

The new methodology
Comments

  • cserwin - Thursday, July 15, 2010 - link

    Some props for Johan, too, maybe... nice article.
  • JohanAnandtech - Thursday, July 15, 2010 - link

    Thanks! We have more data on "low power choices", but we decided to split it across several articles to keep things readable.
  • DavC - Thursday, July 15, 2010 - link

    Not sure what's going on with your electricity cost calcs on your first page. Firstly, you're unnecessarily converting from watts to amps (which means you're unnecessarily splitting into US and Europe figures).

    Basically, here in the UK, 1kW (which is what the 4 PCs in your example consume) costs roughly 10p per hour. Working on an average of 720 hours in a month, that gives a grand total of £72 a month to run those 4 PCs 24/7.

    £72 to you US guys is around $110. And I can't imagine your electricity is priced any dearer than ours.

    That gives a 4-year life cycle cost of $5280.

    Have I missed something obvious here, or are you just out with the maths?
  • JohanAnandtech - Thursday, July 15, 2010 - link

    You are calculating from the POV of a datacenter. I take the POV of a datacenter client, who has to pay per amp that he/she "reserves". AFAIK, datacenters almost always count in amps, not watts.

    (also, 10p per kWh seems low)
  • MrSpadge - Thursday, July 15, 2010 - link

    With P = V*I, at constant voltage power and amps are really just different names for the same thing, i.e. equivalent. Personally I prefer W, because this is what matters in the end: it's what I pay for and what heats my room. Amps by themselves don't mean much (as long as you're not melting the wires), as voltages can easily be converted.
    Maybe the datacenter guys just like to juggle around smaller numbers? Maybe they should switch over to hectowatts instead? ;)

    MrS
  • JohanAnandtech - Thursday, July 15, 2010 - link

    I am surprised the electrical engineers have not jumped in yet :-). As you indicate yourself, the circuits/wires are made for a certain amount of amps, not watts. That is probably the reason datacenters specify the amount of power you get in watt.
  • JohanAnandtech - Thursday, July 15, 2010 - link

    I meant amps in that last sentence of course.
  • knedle - Thursday, July 15, 2010 - link

    Watts are universal; it doesn't matter if you're in the UK or the US - 220W is still 220W, but with amps it's different. Since voltage in Europe is higher than in the USA (EU = 220V, US = 110V), and P = U*I, you get twice as much power per 1A, which means that in the USA your server will draw 2A while the same server in the UK will draw only 1A...
  • has407 - Friday, July 16, 2010 - link

    No, not all Watts are the same.

    Watts in a decent datacenter come with power distribution, cooling, UPS, etc. Those typically add 3-4x to the power your server actually consumes. Add to that the amortized cost of the infrastructure and you're looking at 6-10x the cost of the power your server consumes directly.

    Such is the fallacy of simplistic power/cost comparisons (and Johan, you should know better). Can we now dispense with the idiotic cost/kWh calculations?
  • Penti - Saturday, July 17, 2010 - link

    A high-performance server probably can't be run on 1A/230V, which is the cheapest option in some datacenters. However, something like half a rack or a quarter rack would probably come with 10A/230V, more than enough for a small collection of 4 moderate servers. The big cost is cooling: a normal rack might only handle about 4kW of heat/power (up to 6kW; above that it's high density), then you need more expensive stuff. A cheap rack won't handle 40 250W servers in other respects either. 6kW of power/cooling and 2x16A/230V shouldn't be that expensive. Either way you also pay for cooling (and UPS). Even cheap solutions normally charge per used kW here though. Four 2U servers is about 1/4 of a rack anyway, and something like 15 amps would be needed in the States.
