In our last article about server CPUs, I wrote: 
 
"the challenge for AMD and Intel is to convince the rest of the market - that is 95% or so - that the new platforms provide a compelling ROI (Return On Investment). The most productive or intensively used servers in general get replaced every 3 to 5 years. Based on Intel's own inquiries, Intel estimates that the current installed base consists of 40% dual-core CPU servers and 40% servers with single-core CPUs."
 
At the end of his presentation, Pat Gelsinger (Intel) makes the point that replacing nine servers based on the old single-core Xeons with one Xeon X5570 based server will result in a quick payback. According to Intel, the lower energy bill alone will pay back your investment in eight months.
 
Why these calculations are quite optimistic is beyond the scope of this blog post, but suffice it to say that SPECjbb is a pretty bad benchmark for ROI calculations (it can be "inflated" too easily) and that Intel did not consider the amount of work it takes to install and configure those servers. However, Intel does have a point: replacing the old power-hungry Xeons (irony...) will deliver a good return on investment.
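
To give a feel for how such a payback claim works, here is a minimal sketch in Python. All the figures in it (server price, power draw, electricity rate, PUE) are my own rough assumptions for illustration, not Intel's numbers.

# A minimal payback-period sketch for a 9-to-1 server consolidation.
# Prices, wattages and rates below are illustrative assumptions.

def payback_months(new_server_cost, old_servers, old_watts_each,
                   new_watts, price_per_kwh=0.10, pue=2.0):
    """Months until the energy savings cover the price of the new server."""
    hours_per_month = 24 * 30
    old_kwh = old_servers * old_watts_each * hours_per_month / 1000
    new_kwh = new_watts * hours_per_month / 1000
    # PUE roughly accounts for cooling and power distribution overhead.
    monthly_savings = (old_kwh - new_kwh) * pue * price_per_kwh
    return new_server_cost / monthly_savings

# Example: nine old single-core Xeon boxes at ~400 W each replaced by one
# Xeon X5570 based server at ~300 W, costing ~$6,000 (assumed figures).
print(f"Payback: {payback_months(6000, 9, 400, 300):.1f} months")

With these made-up numbers the payback comes out to roughly a year; Intel's eight-month figure obviously depends on the actual prices and power draws you plug in.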
 
In contrast, John Fruehe (AMD) points out that you could upgrade dual-core Opteron based servers (the ones with four digits in their model numbers and DDR-2) with hex-core AMD "Istanbul" CPUs. I must say that I have encountered few companies who would actually bother upgrading CPUs, but his argument makes some sense, as the new CPU still uses the same kind of memory: DDR-2. As long as your motherboard supports it, you might just as well upgrade the BIOS, pull out your server, replace the 1 GB DIMMs with 4 GB DIMMs and swap the dual-cores for hex-cores instead of replacing everything. It seems more cost effective than redoing the cabling, reconfiguring a new server and so on...
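
To make the upgrade-versus-replace trade-off concrete, here is a rough back-of-the-envelope sketch. Every price and labour figure in it is a placeholder assumption of mine, purely for illustration.

# A rough cost comparison of upgrading in place versus buying a new server.
# All prices below are placeholder assumptions, not quotes.

istanbul_cpus  = 2 * 1000   # two hex-core "Istanbul" Opterons (assumed price)
ddr2_dimms     = 16 * 100   # sixteen 4 GB DDR-2 DIMMs (assumed price)
upgrade_labour = 2 * 75     # a couple of hours of admin time
upgrade_total  = istanbul_cpus + ddr2_dimms + upgrade_labour

new_server     = 7000       # a comparable new dual-socket server (assumed)
install_labour = 8 * 75     # racking, cabling, OS install and reconfiguration
replace_total  = new_server + install_labour

print(f"Upgrade: ${upgrade_total}, Replace: ${replace_total}")

Even if the exact numbers are off, the point stands: the in-place upgrade avoids most of the installation and reconfiguration work.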
 
There were two reasons why few professional IT people bothered with CPU upgrades:
  1. You could only upgrade to a slightly faster CPU. Swapping a CPU for a higher clocked but otherwise similar one rarely gave a performance increase worth the time. For example, the Opteron launched at 1.8 GHz, and most servers you could buy at the end of 2003 were not upgradeable beyond 2.4 GHz.
  2. You could not make use of more CPU performance. With the exception of the HPC people, extra CPU performance rarely delivered anything more than an even lower CPU usage percentage. So why bother?
AMD also has a point that both things have changed. The first reason may no longer be valid if hex-cores do indeed work in a dual-core motherboard. The second reason is no longer valid because virtualization allows you to use the extra CPU horsepower to consolidate more virtual servers on one physical machine. That is, of course, on the condition that the older server allows you to replace those old 1 GB DIMMs with a lot of 4 GB ones. I checked the HP DL585 G2, for example, and it supports up to 128 GB of DDR-2.
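
Here is a minimal sketch of that consolidation argument, assuming a typical VM size of 1 vCPU and 2 GB of RAM, a 4:1 vCPU overcommit and 32 DIMM slots; all of these numbers are assumptions for illustration, not measurements.

# How many VMs fit before and after the upgrade, limited by whichever of
# CPU or RAM runs out first. VM sizing and overcommit ratio are assumptions.

def max_vms(cores, ram_gb, vcpu_per_vm=1, gb_per_vm=2,
            cpu_overcommit=4, host_reserved_gb=4):
    """Estimate VM capacity as the minimum of the CPU and RAM limits."""
    by_cpu = (cores * cpu_overcommit) // vcpu_per_vm
    by_ram = (ram_gb - host_reserved_gb) // gb_per_vm
    return min(by_cpu, by_ram)

# A quad-socket box like the DL585 G2, before and after the upgrade:
before = max_vms(cores=4 * 2, ram_gb=32)    # dual-core Opterons, 32 x 1 GB DIMMs
after  = max_vms(cores=4 * 6, ram_gb=128)   # hex-core "Istanbul", 32 x 4 GB DIMMs
print(before, after)                        # 14 vs. 62 VMs with these assumptions

In other words, the memory upgrade matters at least as much as the extra cores: without the 4 GB DIMMs the host runs out of RAM long before it runs out of CPU.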
 
So what is your opinion? Will replacing CPUs and adding memory to extend the lifetime of servers become more common? Or should we stick to replacing servers anyway?
 
Comments

  • Rigan - Wednesday, April 8, 2009 - link

    Very true, we do things in lots of 300+. Nothing could make me replace/upgrade the memory/cpu in 300 servers. Hell, in one project we've got 3 boxes at each of 153 sites (some of which are not overly easy to get to) with a project life span of 8 years. For this sort of project we just buy 15% extra hardware and provide our own warranty.

    It's very hard to make upgrades cost effective in that sort of environment. Not to mention the trouble you'll get in if something is down for longer than the predefined limit and you have to admit you cannot blame it on Dell.

    But I can see how a small business with a competent guy might get away with doing in-house upgrades. I'd still be very nervous about that guy leaving. A truck number of 1 is bad.
  • Rolphus - Tuesday, April 7, 2009 - link

    In my former role I was the (very hands-on) IT manager for a company with approx $30m turnover, around 90 employees (4 of whom were in IT, including me), and 21 servers.

    As our servers tended to be task-specific, we didn't generally upgrade them unless we had a need to. We took the view that over-specifying hardware was the way to go, so we didn't generally rebuild kit unless we were looking for something new. That said, we replaced a number of aging boxes during the 4 years I was there, and upgraded 6 of the machines due to performance issues with a VoIP phone system and a SQL Server DB. Those were simple single-to-dual CPU upgrades and RAM bumps in the first instance, and a simple RAM bump in the second.

    Hope that helps...
  • Casper42 - Tuesday, April 7, 2009 - link

    Aside from maybe the AMD Opteron 2xxx/8xxx series as mentioned, the platforms themselves change too quickly, so you cannot get the bang for your buck that would make upgrading CPUs worthwhile.

    Note: This next section excludes Virtualization servers:
    Part of this, I think, comes from the fact that servers often get overspec'd to make sure there is headroom, and then a few years later are not yet so taxed that they even need upgrading. By the time you're ready to upgrade the CPU in a machine, you can no longer get them or, as already pointed out, you can only upgrade from a 2.5 to a 3.0.

    I think the last 2 generations of Xeons were a pretty good example of that. Now if Intel really wants to see people upgrade, they should continue releasing NEW CPUs for older platforms after new platforms have arrived.
    For instance, now that the Xeon 55xx is out, we're barely going to see any further development on the 54xx series. But if Intel put some of their new knowledge and design into that old platform, you could see either faster chips in the same thermal envelope or similarly spec'd chips with much reduced thermals.

    Memory IS, in my opinion, the single most upgraded component in a server. Memory is dirt cheap right now for DDR2, and DDR3 isn't far behind. The caveat to that is bleeding-edge memory like 8GB FBDIMMs: an 8GB DDR2 FBDIMM in the HP server world costs 4x the price of a 4GB one rather than only 2x.

    Disks can be upgraded in a server, but I'd say the ROI there is even worse than for CPU upgrades unless you're making the jump to SSDs. Disk speeds increase at a snail's pace compared to other technologies in the server.
    Disk Expansion is a completely different animal and happens quite frequently for File/DB servers.
