It looks like after Amazon, a mere book retailer, showed them the way, all the technology powerhouses have fallen in love with cloud computing. Hewlett-Packard, Intel and Yahoo earlier this week said they’ve teamed up with three universities to create a cloud computing testbed, and Michael Dell talked about his company’s cloud computing plans with me in a recent interview as well.
Perhaps that’s why it came as no surprise when Big Blue sent over a press release outlining its plans to build a data center in North Carolina that will be the underpinning of its continuing cloud computing efforts. IBM will construct a $360 million, state-of-the-art data center at its facility in Research Triangle Park, N.C., and use it to sell cloud-computing services to its clients – mostly large corporations. The first phase of the new North Carolina facility will be 60,000 square feet and will use:
High-density computing systems utilizing virtualization technology, which reduces energy costs by running multiple software applications on the same servers. This technology, along with IBM’s Cool Blue portfolio of energy-efficient technologies and a modular data center design, will allow the RTP facility to support 2.5 to three times the client demand in the square footage of an industry-average site. The data center’s mechanical system design is 50 percent more efficient than the industry average, equating to a reduction of approximately 31,799 tons of carbon dioxide emissions a year.
Today, in addition to North Carolina, IBM also announced that it’s unveiling its newest cloud-computing facility, this time in Tokyo. IBM says it has cloud computing efforts already under way in Dublin, Johannesburg, the Netherlands, as well as in the Chinese cities of Beijing and Wuxi.
Now I am a little hard-pressed to buy into this whole “cloud computing” message from Big Blue. The way I see it, the only IBM effort that qualifies as a “cloud computing initiative” is its partnership with Google, which involves the two companies spending $100 million to offer computing resources to the academic community.
Don’t get me wrong: If there is one company that can chant the cloud mantra, it is the original proponent of time-sharing, IBM. Except that in the days leading up to our Structure 08 conference, Dr. Jay Subrahmonia, director of advanced customer solutions at IBM, told Stacey that the company wasn’t in the business of operating clouds. Sure, they’ve hosted them for one or two customers, and they’re happy to build them, but she said IBM was more interested in selling hardware to companies or organizations that will run them themselves.
Yet suddenly we have multiple clouds? I wondered if IBM was repackaging data centers as “cloud computing” and further bastardizing the term. When I asked them if this was the case, a company spokesperson emailed me back with the following response:
We’ve tried to only talk about centers and clients that are legitimate cloud services and environments. In Tokyo tomorrow clients can run and test in an operational cloud environment. And, specifically for North Carolina, this is being built from the ground up based on cloud principles being developed by the 200 full time researchers we’ve dedicated to cloud since the initial IBM-Google academic initiative last year. This is bearing fruit; the center in NC will be able to support 100,000 processing cores.
He also explained that the billing of clients is very different from that of traditional data center hosting. These contracts are short; they are not multiyear deals. “But we don’t bill ‘by the minute’ as others do,” he said. “Specific pricing varies heavily based on what we are doing with clients.” He went on to explain how IBM defines clouds, which is in line with our own definition of cloud computing.
The centers we are announcing operate in a full multitenancy model. They virtualize server, storage, and network resources and give end users the ability to reserve secure, virtual units of these resources via a self-service Web 2.0 portal. We can “dispense” pre-built virtual servers based on virtual machine images of anything ranging from bare-bones operating systems (IaaS) to empty middleware containers to full J2EE applications on WebSphere and DB2.
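The model the spokesperson describes – tenants reserving isolated slices of a shared, virtualized pool and receiving pre-built images ranging from a bare OS to a full application stack – can be sketched in a few lines. This is purely an illustrative toy, assuming nothing about IBM's actual portal or API; every class, tenant, and image name here is hypothetical:

```python
# Hypothetical sketch of the self-service, multitenant model described
# above: tenants reserve virtual units of pooled capacity and are
# "dispensed" a pre-built image. Names are illustrative, not IBM's API.

from dataclasses import dataclass, field

# Catalog of pre-built virtual machine images, from bare OS up the stack
IMAGE_CATALOG = {
    "bare-os": "bare-bones operating system (IaaS)",
    "middleware": "empty middleware container",
    "j2ee-app": "full J2EE application on WebSphere and DB2",
}

@dataclass
class CloudCenter:
    """A multitenant pool of virtualized server units."""
    total_units: int
    reservations: dict = field(default_factory=dict)

    def available_units(self) -> int:
        return self.total_units - sum(self.reservations.values())

    def reserve(self, tenant: str, units: int, image: str) -> str:
        # Self-service path: validate the image against the catalog,
        # check shared capacity, then record the tenant's slice.
        if image not in IMAGE_CATALOG:
            raise ValueError(f"unknown image: {image}")
        if units > self.available_units():
            raise RuntimeError("insufficient capacity in the pool")
        self.reservations[tenant] = self.reservations.get(tenant, 0) + units
        return f"{tenant}: {units} unit(s) of '{IMAGE_CATALOG[image]}'"

center = CloudCenter(total_units=8)
print(center.reserve("acme-corp", 3, "j2ee-app"))
print(center.reserve("globex", 2, "bare-os"))
print(center.available_units())  # capacity remaining in the shared pool
```

The point of the toy is the contrast with traditional hosting: capacity is a shared pool carved up on demand rather than machines dedicated per contract, which is what makes the short-contract billing described above workable.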
To sum it up, IBM’s decision to move computing into the “clouds” reflects the fact that the business of selling infrastructure hardware is changing fast, and it isn’t impossible to imagine a day when a substantial portion of hardware infrastructure is sold as a service.