
The Power Crunch in the Data Center

Back when I was at Excite, I would often pat myself on the back for co-founding a company with a low environmental impact that “simply pushed electrons around a network.” Granted, a company whose only “product” is HTML-based web pages is less resource-intensive than, say, an aluminum smelter or a strip-mining operation, but I wasn’t thinking critically enough about just how much energy a data center with thousands of servers can suck down.

And for years, computer manufacturers and CPU makers paid no attention to the profligate energy consumption of each successive generation of ever more powerful machines. Arguably, a driving factor in Apple’s switch to the Intel platform was the fact that IBM’s PowerPC chips were big power hogs that ran hot. Frustrated with IBM’s inability to deliver a version of the G5 chip that wouldn’t melt a laptop, Apple moved to Intel, whose chips crank out more MIPS per watt than IBM’s, though Intel still lags AMD in terms of compute mileage. And the increasing popularity of dense server configurations like blades has only made the problem worse as it has become possible to pack hundreds of CPUs into a single standard data center rack.

Fortunately, the technology world is waking up to this issue. Not because of altruism but because it impacts the bottom line of anyone who uses computers. And I’m glad we don’t have to rely on altruism alone, since market forces do a much better job of spurring action. Energy costs (for both the compute power and the HVAC required to cool down the computers) are the single biggest expense in any data center operation.

Google, owner of the biggest server farm on the planet with hundreds of thousands of servers, feels this problem acutely and may have the biggest electric bill on the planet. It has gone to great lengths to reduce power consumption in its data centers and is pushing for more electrical efficiency in PCs. Meanwhile, Sun’s Jonathan Schwartz has been blogging regularly about power issues in the data center and Sun’s (very smart) focus on the energy efficiency of its servers. Certainly the performance relative to power consumption of the Niagara servers is quite compelling and can really cut down on the power density in a data center, provided your application isn’t heavy with floating-point operations, in which case Niagara might not be the box for you. In fact, nearly a year ago, Google predicted that the lifetime cost of providing electricity to a server would eclipse the capital cost of the server itself.
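
To see why that prediction is plausible, here is a back-of-the-envelope sketch in Python. Every figure in it (server price, average draw, cooling overhead, electricity rate) is an illustrative assumption of mine, not a number from Google:

```python
# Back-of-the-envelope: when does lifetime electricity cost eclipse
# the capital cost of a server? All figures are illustrative assumptions.

server_capex = 3000.0    # purchase price of the server, USD (assumed)
avg_draw_watts = 400.0   # average draw at the plug, watts (assumed)
cooling_overhead = 1.0   # ~1 watt of HVAC per watt of compute (assumed)
price_per_kwh = 0.10     # electricity rate, USD per kWh (assumed)

hours_per_year = 24 * 365
kwh_per_year = avg_draw_watts * (1 + cooling_overhead) * hours_per_year / 1000
cost_per_year = kwh_per_year * price_per_kwh

print(f"Electricity (incl. cooling): ${cost_per_year:,.0f}/year")
print(f"Years to eclipse capex: {server_capex / cost_per_year:.1f}")
```

Under these assumptions the crossover comes in roughly four years, well within the typical service life of a server, which is exactly Google’s point.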

Still, there’s nothing like firsthand experience to really drive this issue home. I’m on the board of Technorati, and they recently finished the painful task of transitioning their entire server farm to a new data center. Technorati’s old data center was oversubscribed and out of power. As Technorati planned to expand and occupy more rack space, their old provider could only offer half the power per new rack that they had been offering previously. Ouch. So the only option was to move to a new data center. But moving is only a stop-gap measure, so Technorati is also busy evaluating more power-efficient servers (Opteron-based boxes and the Niagara, for instance) in an effort to continually increase MIPS/watt and, therefore, the compute density per rack in their data centers. Another anecdote from my portfolio: when Postini established data centers in Europe a few years ago, they were surprised to discover that most European data centers offered substantially less power density per rack than was available in the US.
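
To make the rack-density arithmetic concrete, here is a small sketch with hypothetical numbers (no relation to Technorati’s actual hardware or hosting contracts). Once power, rather than physical space, becomes the binding constraint, every watt shaved per server converts directly into more compute per rack:

```python
# Servers per rack under a power budget -- hypothetical figures only.

def servers_per_rack(rack_budget_watts, watts_per_server,
                     rack_units=42, units_per_server=1):
    """A rack holds the lesser of its space limit and its power limit."""
    by_space = rack_units // units_per_server
    by_power = int(rack_budget_watts // watts_per_server)
    return min(by_space, by_power)

# A 1U server drawing 350W against a 5kW per-rack budget:
print(servers_per_rack(5000, 350))  # 14 servers; 28 slots sit empty
# Halve the budget, as Technorati's old provider did:
print(servers_per_rack(2500, 350))  # 7 servers per rack
# Or keep the budget and buy a more efficient 250W box:
print(servers_per_rack(5000, 250))  # 20 servers, ~40% more compute
```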

Finally, a friend of mine who builds and configures data centers for a living observed that the fundamental issue for many data centers is not so much bringing enough power to each rack, but keeping the building cool enough with all the equipment throwing off so much heat. Many of the buildings that house data centers are converted warehouses or were simply not designed with thermal management in mind, and small design changes in buildings can have a huge impact on their power needs. So while the chipmakers and server makers have plenty of work to do, the folks who design data centers also need to do their homework to optimize the building for the application. The high (and growing) cost of electricity also makes me think that hosting providers like SolarHost might be on to something: investing in on-site power generation, be it photovoltaic, stationary fuel cells or whatever, could give a hosting facility a permanent operational cost advantage.
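
A quick way to see why cooling so often binds before power delivery does: essentially every watt a rack draws becomes heat the building has to reject, and HVAC capacity is sized in refrigeration tons (one ton is 12,000 BTU/hr, about 3.517 kW). The rack counts and draws below are hypothetical:

```python
# Heat rejection needed for a room of racks -- hypothetical figures.

KW_PER_REFRIGERATION_TON = 3.517  # 12,000 BTU/hr expressed in kW

def cooling_tons(rack_draw_kw, racks):
    """Refrigeration tonnage required to remove the racks' heat output."""
    return rack_draw_kw * racks / KW_PER_REFRIGERATION_TON

print(f"{cooling_tons(5, 100):.0f} tons")   # 100 racks at 5kW: ~142 tons
print(f"{cooling_tons(10, 100):.0f} tons")  # same room at 10kW: ~284 tons
```

Double the draw per rack and the cooling plant has to double too, which is exactly the retrofit problem a converted warehouse was never built to handle.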

If you’re involved with a SaaS company or a web 2.0 play (which is really just a SaaS company for consumers), make sure the engineering and operations teams are thinking hard about these issues. Many such companies have probably already dealt with hosting cost increases, power rationing in their data centers, or downtime due to overheated machines. If it hasn’t happened yet, it certainly will soon, since the problem is only going to get worse over time. Increasing compute cycles per rack (while holding power consumption per rack steady or reducing it) has to be top of mind for the software and network operations teams of any such company.


September 27th, 2006     Categories: Uncategorized    
  • http://www.private-intellectual.de Pi.

    It is undoubtedly a good thing that Technorati have moved – or are in the process of moving, since you claim this to be a stop-gap thing – to more energy-efficient servers.
    What does this mean for the customer base, though? You haven’t addressed the problems this has raised for the customer out there in the ether, who has only discovered that anything is happening at all through the rare, lax and ego-boosting entries in the Technorati weblog. What I have seen is what Liz Dunn aptly described as spotty service, which, to all intents and purposes, is still the case.
    The information flow is simply not there. By information flow I mean not simply that Technorati should be informing people of what is happening, to ensure some level of understanding and acceptance, but the physical information flow too. WordPress software, for example, has embedded links to Technorati which show exactly which weblogs, and which posts within those weblogs, are linking to a site. These links have been fluctuating in accuracy for several weeks, and now, in her latest weblog entry claiming that the interruptions are over, Liz Dunn appears to gloss over what is still happening.
    On my weblog a whole series of links has simply vanished from the WordPress Dashboard because Technorati is not passing the information on. My ranking, while of lesser importance to me, fluctuates by more than one hundred thousand positions at times because the number of links is not being accurately recorded. The links are still there, but the record has vanished at Technorati.
    A fine example of this is Liz Dunn’s recent weblog entry on Technorati commenting on how others were giving them a pat on the back. For nearly a week my comment linking to this entry was there; now it has vanished completely, although it is, naturally, still on my weblog and fully linked.
    There are further problems with the tags – a bug which I sent in to Technorati without receiving the courtesy of a reply. Technorati reads – at least for WordPress – the categories and ignores the embedded tags completely. The categories are general, the tags specific; the information Technorati gives out on my weblog entries is, therefore, inaccurate.
    This is what the customer base sees, because Technorati is not using its weblog efficiently and is not communicating with the customer base. Perhaps that is another energy-saving effort? Perhaps, much better, it should be thought through again, and the customer base – the people who see and click on paying adverts – should be brought back into the fold instead of being alienated.

  • http://profile.typekey.com/valeski/ jvaleski

    It’s too often the case that buildings don’t take optimal energy usage and handling into account. It’s incredible what can be done when simple things like landscaping (shade trees) are given the attention they need. Keeping the basics in mind (heat rises, cold sinks) can go a long way. These guys have got something along these lines: http://www.airius.us/

  • http://sparkplug9.com/bizhack/ John Koetsier

    If you’re on the board, are you able to affect the situation at all?
    Technorati’s service has always had great PROMISE, but always been greatly “spotty,” as Liz Dunn quaintly referred to it.
    First of all: some major effort to solve the problem with the million$ that Technorati has recently raised should be a number one priority. And secondly, some honesty and openness about the situation would be greatly appreciated.
    I mean … the arbiter of the blogosphere has what, 2 posts on its blog all of last month – a horrible month, when it’s been up and down like a yo-yo? This is ridiculous: be upfront, be honest, be real.
    (This is something that Dave Sifry should take to heart too … as I’ve tried and tried to bring a few issues to his attention via email, links to his blog, and comments on his blog … all without the least sign of success.)
    The issue is not only being up or down, or the notorious, infamous “Technorati is experiencing a high volume of searches right now and could not complete your request,” which I’ve seen on the HOME PAGE.
    It’s also data integrity: the service appears to swing wildly between mutually inconsistent datasets. Links appear and disappear with disconcerting frequency.
    Some kind of information about what the company plans to do about it would be nice. How about telling bloggers the plans for ensuring that it won’t happen again? A little PR wouldn’t be out of place.
    As a member of the blogosphere, I suggest that it could start on your blog.

  • tom o

    Don’t forget that the data centers being used today were built a minimum of 7 years ago. At that time Equinix was building their sites to support a draw of 1.75kW per cabinet, just shy of a full 20-amp circuit. Today large customers (eBay, PayPal, Salesforce, YouTube, Amazon, Google, etc.) are installing cabs that draw 10kW. That is going from roughly 80-100 watts per square foot to 500 watts per square foot. Guess what: there are no data centers designed to support that density across the board, so they’re left with “dead space,” and they are all wise to that now and charge the customer for that dead space. Hence the Tier1 Research statement that demand for DC space is 4X supply. Hardware has gotten 5X more dense, and the market (a) doesn’t have the inventory to support that demand even if hardware density had stayed the same and (b) sure as hell can’t catch up to the density requirement on a large scale. At 500 watts a foot it would likely cost $3k per square foot to build, so a 100k sq ft site is gonna cost $300M. That makes the 80/20 initiative (I forget if that is the name, but it is something like that) seem logical, as does the Google initiative on efficiency in power use. This is especially relevant to companies like GOOG, MSFT, AMZN, MySpace, etc.: as their power consumption grows, a 20% loss means millions of dollars wasted each year due to inefficiencies. No doubt capitalism will prevail, and if virtualization isn’t the answer something else will be. The question is when, and what happens in the meantime, because it will cost billions and billions to scale data centers to support the future powering and cooling of hardware.
    Ryan, do you know of any startups addressing this? I know Xsigo, Nuova, and a couple of stealth companies are, to some degree, addressing this, but for the most part people seem to have only recently gotten a clue about this dilemma. One can argue the hardware manufacturers are to blame because they know data centers can’t support this, but they keep pumping out blades, Rackables, etc. Rackable has a 25kW single cabinet! There is no logic in robbing Peter (dead space) to pay Paul (cooling and power) when data centers are charging for the robbing of Peter.

  • http://www.aiso.net Steven Craig

    I saw this posting and that you mentioned SolarHost. SolarHost is going out of business, and all of their clients have switched to us. We are featured in Inc. Magazine’s Top 50 Green Companies (http://www.inc.com/magazine/20061101/green50_integrators_pagen_4.html), The Wall Street Journal, Wired Magazine and Entrepreneur magazine (Nov. 05 issue). We also have a web cam on our site that shows the solar panels running our data center, and we don’t use energy credits. You can have your web site hosted for as low as $9.95 a month, so it’s pretty affordable! Check out our web site at http://www.aiso.net
    A little more about us…
    Affordable Internet Services Online, Inc. (AISO.net) is a reliable and responsible green-energy web site hosting company. We have made a strong commitment to help fight pollution and preserve our natural resources, along with providing unsurpassed toll-free and e-mail support. 120+ solar panels run our data center and office. Solar tubes bring in natural light from outside, providing light during the day. AMD Opteron-powered servers use sixty percent less energy and generate fifty percent less heat. These are just some of the ways AISO.net is becoming the world’s most responsible green-energy hosting company.
