Pizza Boxes Are Power Hogs

22 thoughts on “Pizza Boxes Are Power Hogs”

  1. Nah, we’re not headed toward big iron; we’re headed toward more efficient blades. HP’s current C-Class blade servers already draw much less power than the previous generation, and the Lights Out data center concept the company demonstrated at the World Design Congress in October really shows a way to drive down the energy consumption and costs of modular servers without sacrificing the flexibility that has made them so appealing so far. Big iron isn’t coming back any time soon.

    C-Class BladeSystem: http://www.hp.com/hpinfo/newsroom/press/2007/071112a.html

    Lights Out Data Center:
    http://www.eweek.com/article2/0,1895,1995884,00.asp

  2. No way. BitTorrent, Amazon Web Services, and newer forms of virtualization. The future is the grid. We will all just use computers as a utility, paying for what we consume from the cloud. You know that, Om.
    But the post will work well to draw comments like this.

  3. I don’t get it …

    “Even tiny startups are beginning to buy 1,000 of these boxes to just stay in business.”

    Isn’t that some heavyweight exaggeration there? C’mon, this is a startup that is “building a new open global search engine” and building a cluster to document the entire web.

    Kind of a silly statement

  4. As recently as two years ago (the last time I undertook a detailed analysis), blade servers weren’t much more efficient than individual 1U boxes – in either power or space. And they were significantly more expensive.

    It’s similar to going with DC power supplies: they save about 20% in power usage depending on the vendor, but at the three data centers I talked to that could actually provide it, DC power was 50-100% more expensive on a watt-for-watt basis than AC.

    If power is a growth constraint, AMD’s high-efficiency processors help, but the higher up-front costs almost outweigh the power savings over the typical life of a server.

    Also, it’s worth keeping in mind that an idle server uses significantly fewer watts than one at full utilization, especially in components like CPUs and hard drives (the big power hogs). Software power management is actually pretty good these days if correctly configured.
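    That up-front-premium-versus-power-savings trade-off is easy to sketch as a back-of-the-envelope calculation. All the numbers below (wattages, premium, electricity rate, cooling overhead) are illustrative assumptions, not vendor figures:

```python
# Lifetime electricity cost of a server, with cooling overhead folded
# in via a PUE (power usage effectiveness) multiplier. Every input
# here is an assumed figure for illustration only.

def power_cost(avg_watts, years, dollars_per_kwh=0.10, pue=2.0):
    """Electricity cost over the server's life, including cooling."""
    hours = years * 365 * 24
    return avg_watts * pue * hours / 1000 * dollars_per_kwh

standard = power_cost(avg_watts=250, years=3)   # typical 1U box
efficient = power_cost(avg_watts=200, years=3)  # high-efficiency part
premium = 300  # assumed extra up-front cost of the efficient server

savings = standard - efficient
print(f"3-year power savings: ${savings:.0f} vs. a ${premium} premium")
```

    With these assumed numbers the savings land just under the premium, which matches the commenter’s experience that the up-front cost “almost outweighs” the power savings.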

  5. Phil Windley has a great write-up as he posted on ZDnet.com today. Here’s a link to his blog about DC power in the datacenter.

    http://www.windley.com/archives/2007/12/dc_power_in_datacenters.shtml

    I’ll say what I said there, though: I agree with Phil that there is not enough demand for DC in the datacenter. Companies just aren’t demanding it right now, whereas they should be. It saves trees and dollars.

    Two ways to address the issue that I’ve seen here at work are higher-quality VRMs (voltage regulator modules) and more efficient power supplies. Most vendors’ power supplies (PWS) fall between 70-80% efficiency. There are motherboards and servers out there that combine higher-quality VRMs with highly efficient PWSs. Here is an AnandTech review of one that achieves over 90% efficiency and cuts the pizza-box footprint in half:

    http://www.anandtech.com/showdoc.aspx?i=2997

    Blade servers are also being addressed by many companies, as pointed out earlier, and reports that I have show that a 93%-efficient PWS in a chassis with 10 blades can save 1,051 kWh per year and over $4,700 over three years, per chassis. That’s hard money a company can take to the bank if it switches to a row of blade servers in the datacenter with that kind of savings.
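    The efficiency arithmetic behind numbers like these is easy to sanity-check: wall draw is simply DC load divided by PSU efficiency. The 2,500 W chassis load below is an assumed figure for illustration, not a measured one:

```python
# Sanity check on PSU-efficiency savings: AC power drawn at the wall
# is the DC load divided by the supply's efficiency. The chassis load
# is an assumption for illustration.

def wall_watts(dc_load_watts, efficiency):
    """AC power drawn at the wall for a given DC load."""
    return dc_load_watts / efficiency

load = 2500  # assumed steady DC load of a 10-blade chassis, in watts

saved = wall_watts(load, 0.75) - wall_watts(load, 0.93)
kwh_per_year = saved * 24 * 365 / 1000
print(f"~{saved:.0f} W saved at the wall, ~{kwh_per_year:.0f} kWh/year")
```

    Exact savings depend on the actual load and the baseline efficiency you compare against, which is why vendor-reported figures vary so widely.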

  6. @Ward: Actually, this is becoming commonplace for most web companies, especially ones that want to offer services to millions of people. What I wanted to point out with this post is that computing is running into issues like power consumption.

    I think this is going to become more of an issue going forward as we move stuff to the cloud.

  7. Widespread installation of blades and pizza boxes in most outsourced data centers is not doable, because you can’t cool them with air (assuming a facility with 1,000 cabinets of capacity, each one filled with blades or pizza boxes and consuming 30 kW per rack/cab).

    Air cooling hits diminishing returns at about 250 watts per square foot. Anything past that is going to require an alternative cooling method: it may be chilled water, similar to the mainframes of the past, or some other liquid-based design. Of course there are scenarios where these high-density cabinets run fine in certain data centers, but they are very isolated and not the standard configuration for every cabinet in the facility.

    When companies like Amazon and Google use outsourced facilities and install cabinets that consume 10 kW each, they are typically buying about 5x more space than they need, because that is the only way to get that much power and the associated cooling capacity in a fixed-resource environment like a data center.

    The problem doesn’t go away with a grid infrastructure; it just shifts from being the responsibility of the customer (the one purchasing the grid service) to the operator of the grid platform or their data center vendor. So while the problem many companies are challenged with today may be solved by closing their own data centers and outsourcing their computing requirements to a grid provider, it doesn’t make the problem disappear; it just compounds it for someone else. Unless I’m mistaken, you can’t change the fundamentals of physics, or, more specifically in this case, you can’t force air to cool more than it is physically capable of, so you are forced to use alternative techniques and strategies. Hence the talk of bringing chilled water back to the data center floor, which is unthinkable to most internet people but was common practice in the past.
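    The “buying 5x the space” point can be put in numbers. The density ceiling and per-rack footprint below are rough assumptions used only to illustrate the shape of the constraint:

```python
# If air cooling tops out around 250 W per square foot, a dense
# cabinet's power budget dictates its effective floor space, not its
# physical size. Both constants are rough assumptions.

AIR_COOLING_LIMIT = 250  # watts per square foot, approximate ceiling
RACK_FOOTPRINT = 10      # sq ft per cabinet incl. aisle share (assumed)

def floor_space_needed(rack_watts):
    """Square feet a rack effectively consumes under the air limit."""
    return rack_watts / AIR_COOLING_LIMIT

space = floor_space_needed(10_000)  # a 10 kW cabinet
print(f"10 kW rack -> {space:.0f} sq ft, "
      f"~{space / RACK_FOOTPRINT:.0f}x its physical footprint")
```

    With these assumptions a 10 kW cabinet effectively occupies several times its own footprint, which is roughly the over-buying the commenter describes.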

  8. Om,

    In your original post, you say “50.5 billion kilowatts per hour” and you mean “50.5 billion kilowatt hours”. (The source paper by Koomey is correct.) I see an awful lot of discussion of energy consumption where the numbers make no sense. My pet peeve is that people are sloppy with the math (and especially the units). In debates like this, let’s at least get that part of it right.

  9. Not Big Iron or pizza boxes. The future is a new class of servers focused on this market. Optimized for scale, but still cheap, with tons of storage.

    Verari and Rackable are leading the way, but the solutions are just beginning to evolve.

  10. Statistics like 20% utilization are kind of misleading, as that’s an average over an entire day (right?). That 20% average gives you 80% headroom for spikes of usage, which might come in the evenings or even just 9-5, whatever it is. If instead of 5 pizza boxes at 20% you set up 1 pizza box with 5x the apps/OSes loaded, so that it was pegged at 100% utilization, well, gee, you’re screwed as soon as any more demand comes in.

    It certainly is a good idea to think about how to save power, but not at the expense of not being able to actually serve up customer load.
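    The headroom arithmetic above can be made explicit: average utilization caps the size of the traffic spike the boxes can absorb. Figures are illustrative:

```python
# Consolidation trade-off: average utilization determines how large a
# traffic spike a box can absorb before saturating. Illustrative
# figures only.

def max_spike_factor(avg_util):
    """How many times the average load fits before hitting 100%."""
    return 1.0 / avg_util

# 5 boxes averaging 20% can ride out a 5x spike; one box consolidating
# the same work at 100% has no headroom at all.
for label, util in [("5 boxes at 20%", 0.20), ("1 box at 100%", 1.00)]:
    print(f"{label}: absorbs up to a {max_spike_factor(util):.0f}x spike")
```

    Any consolidation plan therefore has to budget for peak demand, not the daily average, before counting the power savings.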
