Thursday, July 30, 2009

Does the Cloud need “Servers”?

A cloud is something soft and fluffy. When the word is used to describe shared compute services, it means, among other things, that the borders between the various compute services and the users of those services are blurred. Why shouldn’t the devices providing these services also be a bit blurry? Servers as we’ve come to know them (I’m mostly talking about x86 servers here) are starting to reach their architectural limits when providing cloud compute services. It’s time for something new and different.

Why couldn’t the hardware of which servers are now composed be treated differently? Rather than clusters of servers, or pools of virtual machines running on clusters of servers, I think it might make more sense to have pools of processors, pools of memory, pools of I/O, and pools of storage. These could be provisioned in whatever combination of components is needed to deliver the functionality required by the applications in question.
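
To make the idea a little more concrete, here is a minimal sketch of what provisioning against disaggregated pools might look like. The names here (ResourcePool, LogicalServer, provision) and the capacities are purely illustrative assumptions of mine, not a description of any existing product or API.

```python
# Illustrative sketch only: "servers" composed on demand from shared hardware pools.
from dataclasses import dataclass


@dataclass
class ResourcePool:
    """A shared pool of one resource type (e.g. CPU cores, GB of RAM)."""
    name: str
    capacity: int
    allocated: int = 0

    def reserve(self, amount: int) -> None:
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += amount

    def release(self, amount: int) -> None:
        self.allocated = max(0, self.allocated - amount)


@dataclass
class LogicalServer:
    """A 'server' carved out of independent hardware pools, not a fixed box."""
    cpu_cores: int
    memory_gb: int
    io_channels: int
    storage_tb: int


def provision(pools: dict[str, ResourcePool], spec: LogicalServer) -> LogicalServer:
    """Reserve just the mix of resources an application asks for."""
    requests = {
        "cpu": spec.cpu_cores,
        "memory": spec.memory_gb,
        "io": spec.io_channels,
        "storage": spec.storage_tb,
    }
    for name, amount in requests.items():
        pools[name].reserve(amount)
    return spec


if __name__ == "__main__":
    pools = {
        "cpu": ResourcePool("cpu", capacity=4096),        # cores in the processor racks
        "memory": ResourcePool("memory", capacity=65536), # GB in the memory racks
        "io": ResourcePool("io", capacity=1024),          # I/O channels
        "storage": ResourcePool("storage", capacity=2000) # TB in the storage racks
    }
    # No fixed server shape: the application requests exactly the combination it needs.
    provision(pools, LogicalServer(cpu_cores=32, memory_gb=256, io_channels=4, storage_tb=10))
```

The point of the sketch is simply that the unit of allocation becomes the resource type, not the server chassis.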

Instead of racks and racks of servers filling acres of air-conditioned space, why not have racks of processors, racks of memory, racks of storage, and so forth? Each of these modules achieves a different density, consumes a different amount of power, and produces a different heat profile. That means processors and memory, which produce the most heat, can be cooled separately from components that produce less heat, such as storage, which works fine at higher temperatures.

I’ll try and explore this idea more at a later time.

Thursday, July 16, 2009

We don’t need no stinkin’ chillers!

Google’s Chiller-less Data Center

Google (GOOG) has begun operating a data center in Belgium that has no chillers to support its cooling systems, a strategy that will improve its energy efficiency while making local weather forecasting a larger factor in its data center management.

Kudos to Data Center Knowledge for bringing this to my attention. The story speaks for itself, so I’ll limit my comments to highlighting a few things that I found very interesting.

Google maintains its data centers at temperatures above 80 degrees.

Most data centers are kept below 80 degrees. Co-Lo facilities speak of keeping temperatures below 70 degrees. Google has an advantage over other large companies: among other things, their equipment is much more uniform, so airflow and other temperature-management issues are less complex to manage.

Co-Lo facilities probably have to keep things cooler than corporate data centers because their tenants often don’t follow best practices for airflow provisioning and tend to run highly heterogeneous hardware.

At last month’s Structure 09 conference, Google’s Vijay Gill hinted that the company has developed automated tools to manage data center heat loads and quickly redistribute workloads during thermal events (a topic covered by The Register).

Google has had a head start here, and this is an area where the major vendors are playing catch-up. Most current hardware management tools include power and temperature monitoring as standard features, and we’re starting to see performance tuning for power and heat appearing in the administrator console as well. This trend will undoubtedly continue as components become more manageable for energy use.
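
To give a feel for the kind of automation Gill was alluding to, here is a minimal sketch of thermal-aware workload redistribution. The 80-degree ceiling comes from the articles above; the rack model, workload names, and rebalancing logic are my own illustrative assumptions, not Google’s or any vendor’s actual tooling.

```python
# Illustrative sketch only: move work off racks that run above a temperature limit.
from dataclasses import dataclass, field

TEMP_LIMIT_F = 80.0  # assumed operating ceiling, per the reporting above


@dataclass
class Rack:
    name: str
    temperature_f: float
    workloads: list[str] = field(default_factory=list)


def rebalance(racks: list[Rack]) -> None:
    """During a thermal event, shift workloads from hot racks onto the coolest rack."""
    for hot in [r for r in racks if r.temperature_f > TEMP_LIMIT_F]:
        while hot.workloads:
            coolest = min(racks, key=lambda r: r.temperature_f)
            if coolest is hot:
                break  # nowhere cooler to go; operators (or the weather) take over
            coolest.workloads.append(hot.workloads.pop())


if __name__ == "__main__":
    racks = [
        Rack("rack-a", 84.0, ["web-1", "batch-7"]),
        Rack("rack-b", 72.0, ["web-2"]),
        Rack("rack-c", 76.0, []),
    ]
    rebalance(racks)
    for r in racks:
        print(r.name, r.temperature_f, r.workloads)
```

A real system would of course feed live sensor data and weather forecasts into decisions like these, but the basic loop, watch the heat, move the work, is the interesting part.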