Monday, March 30, 2009

How many computers does the world need?

There’s been a lot of buzz lately about this question, and about Nicholas Carr’s corollary assertion, “The coming of the mega computer”.

The original quote, from the FT Techblog:

According to Microsoft research chief Rick Rashid, around 20 per cent of all the servers sold around the world each year are now being bought by a small handful of internet companies - he named Microsoft, Google, Yahoo and Amazon. That is an amazing statistic, and certainly not one I’d heard before. And this is before cloud computing has really caught on in a big way.

Having recently been working with one of the vendors of the high-density servers designed for this market, I’ve been reviewing the articles and comments with great interest. Our work is almost always on contract for a major IT vendor, so we tend to see the universe through the distorting glass of our client’s needs. In this case, the outer realities reflect the inner space fairly well.

The market for high-density servers is very different from the traditional x86 server business. Servers have usually been designed to provide generalized compute capabilities within a standard form factor: towers, rack units of various sizes, and blades. The new high-density servers are much less generalized. Not only do they fit more discrete physical server units into the space required by other form factors – the high-density part – but they’re also focused on providing mission-specific computing capabilities through mass customization. Even, or perhaps especially, vendor giants such as IBM are producing server product lines for which the concept of a server model might be an oxymoron. Yes, there are SKUs, but in reality the systems these vendors produce are custom built for the customer.

The economies of scale of mass production fit here because typical orders are for thousands of servers, all integrated into racks redesigned to hold more server units and incorporating power-efficiency and cooling schemes that not only allow more servers per data center square foot but also require less electricity to run the servers and keep them cool. (Cooling electricity costs are generally equal to the electricity costs of running the servers themselves.)
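To put that parenthetical in perspective, here is a back-of-the-envelope sketch in Python. The per-server draw, electricity rate, and cooling ratios are illustrative assumptions of mine, not figures from the article or any vendor:

```python
# Back-of-the-envelope data center electricity cost model.
# All figures below are illustrative assumptions, not vendor data.

SERVER_DRAW_KW = 0.3   # assumed average draw per server, in kilowatts
SERVERS = 10_000       # servers in the facility
COST_PER_KWH = 0.10    # assumed utility rate, USD per kilowatt-hour
HOURS_PER_YEAR = 24 * 365

def annual_cost(cooling_ratio: float) -> float:
    """Annual electricity bill, where cooling_ratio is cooling kW per IT kW."""
    it_load_kw = SERVER_DRAW_KW * SERVERS
    total_kw = it_load_kw * (1 + cooling_ratio)
    return total_kw * HOURS_PER_YEAR * COST_PER_KWH

# Typical facility: cooling roughly equals the IT load (ratio ~1.0).
# Suppose a high-density design cuts the cooling overhead to 0.5.
baseline = annual_cost(1.0)
improved = annual_cost(0.5)
print(f"baseline: ${baseline:,.0f}/yr")
print(f"improved: ${improved:,.0f}/yr")
print(f"savings:  ${baseline - improved:,.0f}/yr")
```

On these made-up numbers the bill drops from about $5.3M to $3.9M a year for a 10,000-server facility, which is why the cooling schemes matter as much as the rack density.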

The scale of this business is also very different. Where Dell, HP, IBM, and the other server vendors compete to sell hundreds of servers each to tens of thousands of companies in the traditional markets, in this space there are only a few customers – perhaps a few hundred worldwide. But the math still works: 10,000 servers a month to one account is a lot of servers. And as anyone who’s sold computers knows, closing five deals for 50,000 units is much less expensive than closing 10,000 deals for the same number of units.
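That last claim is just arithmetic, but it’s worth making concrete. Here’s a toy model in Python, with entirely made-up per-deal selling costs, showing how the cost of sales per unit collapses when the same volume moves through five big deals instead of ten thousand small ones:

```python
# Toy model of sales cost per server under two go-to-market shapes.
# The per-deal cost figures are made-up assumptions for illustration.

def cost_per_unit(deals: int, units_per_deal: int, cost_per_deal: float) -> float:
    """Total selling cost divided by total units moved."""
    return (deals * cost_per_deal) / (deals * units_per_deal)

# Hyperscale: 5 deals of 10,000 units; a big-ticket deal is costly to win.
hyperscale = cost_per_unit(deals=5, units_per_deal=10_000, cost_per_deal=250_000)

# Traditional: 10,000 deals of 5 units; each small deal still costs real money.
traditional = cost_per_unit(deals=10_000, units_per_deal=5, cost_per_deal=2_000)

print(f"hyperscale sales cost:  ${hyperscale:,.2f} per server")
print(f"traditional sales cost: ${traditional:,.2f} per server")
```

Even with a quarter-million dollars of sales effort per hyperscale deal, the selling cost works out to $25 per server versus $400 per server through the traditional channel on these assumed numbers.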

If the future is being able to consume compute as a service from devices anywhere on the planet at any time, then Nicholas Carr’s label of the mega computer makes sense. Or, as Scott McNealy said so long ago, “The network is the computer.” Now they’re building it.

The question is whether the vendors can wait while the economy sorts itself out. Perhaps the economies of scale work here as well. That same 5 × 10,000 metric applies here too: selling 50,000 servers can pay a lot of bills. Even for IBM.