- What ads would appear?
- Will I make more than a dollar a month?
- Would it annoy my few readers?
- Would it generate traffic?
Whatever the reasons, it will be interesting and if I make a buck or two, I'll be grateful.
A blog discussing the effects of information technology on our lives and how our lives affect information technology. I'll discuss personal computing, enterprise computing, personal electronics and convergence.
"The most significant aspect of the announcement is the management, says Phil Hochmuth, an analyst with the Yankee Group. 'Enterprises are really consolidating their management roles,' he says. 'More and more enterprise IT and enterprise security teams are sharing the same hat, the teams are extremely integrated. The more they are looking at the same screens, the better.'"This is something I've been discussing and writing about for many years. Too many different management consoles leads to confusion, error and frequently extreme segregation of operational functionality limiting the ability of administrators to manage increasingly complex systems. Systems that are comprised a mixture of servers, storage, networks, security systems and more and all being virtualized as well.
Among the announcements is a Virtual Storage Console, a plug-in module for VMware vCenter Server that lets storage administrators manage and monitor NetApp gear from within vSphere 4 environments.
NetApp unveils new virtualized storage software | NetworkWorld.com Community
This specific item and the other announcements from NetApp quoted in this article show their continued commitment to marketing themselves as the VMware storage solution. Competitors will need to move quickly to implement similar capabilities in the area of two-way management integration.
This is important because even if servers, storage, applications, I/O and everything else is virtualized, the devices are still individual components with unique characteristics that need their own management tools. In a virtualized world, though, changes to the hardware have effects on the virtual space and vice versa; so being able to manage the virtual from a physical device and the physical from a virtual device will become ever more critical.
It’s going to become a requirement that admins be able to manage any storage system from vCenter and vCenter from any storage management console. These capabilities are going to be a special challenge for IBM and HP, as their focus has been on integrating within their own equipment stacks rather than on bi-directional integration with management platforms from vendors such as VMware. Stories about a Cisco/EMC joint venture might, at least partially, be about this need for management capabilities across the whole hardware stack.
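To make the idea of two-way integration a bit more concrete, here is a minimal Python sketch of the two directions: the storage array surfacing operations inside the hypervisor console, and the storage console looking back into the virtual layer. All of the class names, methods and the in-memory data are invented for illustration; this is not any vendor's actual plug-in API.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of two-way management integration --
# not any vendor's real API.

@dataclass
class Volume:
    name: str
    capacity_gb: int
    vms: list = field(default_factory=list)  # VMs whose disks live here


class StorageArray:
    """The physical side: the device the storage admin normally manages."""

    def __init__(self):
        self.volumes = {}

    def provision(self, name, capacity_gb):
        self.volumes[name] = Volume(name, capacity_gb)
        return self.volumes[name]


class VCenterPlugin:
    """Direction 1: surface array operations inside the hypervisor console,
    so a VM admin can provision storage without leaving vCenter."""

    def __init__(self, array: StorageArray):
        self.array = array

    def create_datastore(self, name, capacity_gb):
        vol = self.array.provision(name, capacity_gb)
        return f"datastore backed by volume {vol.name} ({vol.capacity_gb} GB)"


class StorageConsoleAdapter:
    """Direction 2: let the storage console see the virtual layer,
    so a storage admin knows which VMs a volume change will touch."""

    def __init__(self, array: StorageArray):
        self.array = array

    def impacted_vms(self, volume_name):
        return list(self.array.volumes[volume_name].vms)


if __name__ == "__main__":
    array = StorageArray()
    plugin = VCenterPlugin(array)
    print(plugin.create_datastore("ds01", 500))
    array.volumes["ds01"].vms.append("web-vm-01")
    print(StorageConsoleAdapter(array).impacted_vms("ds01"))
```

The point of the sketch is simply that both consoles end up looking at the same underlying objects, which is exactly the "same screens" consolidation the analyst quote describes.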
A cloud is something soft and fluffy. When used to describe shared compute services, it means, among other things, that the borders between the various compute services provided and the users of those services are blurred. Why shouldn’t the devices providing these services also be a bit blurry? Servers as we’ve come to know them (I’m mostly talking about x86 servers here) have designs that are really starting to reach their architectural limits when providing cloud compute services. It’s time for something new and different.
Why couldn’t the hardware of which servers are now comprised be treated differently? Rather than clusters of servers, or pools of virtual machines running on clusters of servers, I think it might make more sense for there to be pools of processors, pools of memory, pools of I/O and pools of storage. These could be provisioned in any combination of components needed to deliver the functionality required by the applications in question.
Instead of racks and racks of servers filling acres of air-conditioned space, why not have racks of processors, racks of memory, racks of storage and so forth? Each of these modules achieves different densities, consumes different amounts of power and produces a different heat profile. This means that memory and processors, which produce the most heat, can be cooled separately from components that produce less heat, or from storage that works fine at higher temperatures than processors and memory do.
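As a rough sketch of what provisioning from component pools might look like, here is a toy Python model. The pool sizes, the resource categories and the compose() call are all made up for illustration; a real disaggregated fabric would obviously be far more involved.

```python
from dataclasses import dataclass

# A toy model of composing "servers" from independent component pools.
# All numbers and the compose() interface are invented for illustration.

@dataclass
class Pools:
    cpus: int        # unallocated processor cores
    memory_gb: int   # unallocated memory
    io_gbps: int     # unallocated I/O bandwidth
    storage_tb: int  # unallocated storage

    def compose(self, cpus, memory_gb, io_gbps, storage_tb):
        """Carve a logical 'server' out of the pools, if everything fits."""
        request = dict(cpus=cpus, memory_gb=memory_gb,
                       io_gbps=io_gbps, storage_tb=storage_tb)
        for attr, amount in request.items():
            if getattr(self, attr) < amount:
                raise RuntimeError(f"pool exhausted: {attr}")
        for attr, amount in request.items():
            setattr(self, attr, getattr(self, attr) - amount)
        return request  # the composed instance's resource grant


if __name__ == "__main__":
    dc = Pools(cpus=4096, memory_gb=65536, io_gbps=2000, storage_tb=500)
    db_node = dc.compose(cpus=32, memory_gb=512, io_gbps=20, storage_tb=10)
    web_node = dc.compose(cpus=8, memory_gb=32, io_gbps=5, storage_tb=1)
    print(db_node, web_node, dc)
```

The attraction is that a memory-heavy workload and a CPU-heavy workload no longer have to be shoehorned into the same fixed server shape.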
I’ll try and explore this idea more at a later time.
Google (GOOG) has begun operating a data center in Belgium that has no chillers to support its cooling systems, a strategy that will improve its energy efficiency while making local weather forecasting a larger factor in its data center management.
Kudos to Data Center Knowledge for bringing this to my attention. The story speaks for itself, so I’ll limit my comments to highlighting a few things that I found very interesting.
Google maintains its data centers at temperatures above 80 degrees.
Most data centers are kept below 80 degrees. Co-Lo facilities speak of keeping temperatures below 70 degrees. Google has an advantage over other large companies: among other things, their equipment is much more uniform, so airflow and other temperature management issues are less complex to manage.
Co-Lo facilities probably have to keep things cooler than corporate data centers because their tenants often won’t follow best practices for provisioning airflow, and because the hardware those tenants deploy is highly heterogeneous.
At last month’s Structure 09 conference, Google’s Vijay Gill hinted that the company has developed automated tools to manage data center heat loads and quickly redistribute workloads during thermal events (a topic covered by The Register).
Google has had a head start here, and this is an area where the major vendors are playing catch-up. Most current hardware management tools now include power and temperature monitoring as standard features, and we’re starting to see performance tuning for power and heat appear in administrator consoles as well. This trend will undoubtedly continue as components become more manageable for energy use.
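For a sense of what "redistribute workloads during thermal events" could mean in practice, here is a toy Python control loop. The rack names, sensor readings and migrate() call are placeholders I made up; a real version would sit on top of the vendor's monitoring and migration APIs rather than hard-coded values.

```python
# A toy thermal-rebalancing loop in the spirit of the tools described above.
# All readings and names are placeholders for illustration only.

TEMP_LIMIT_F = 80  # roughly the operating point mentioned for Google's sites


def read_rack_temps():
    """Stand-in for querying temperature sensors; returns degrees F per rack."""
    return {"rack-01": 78, "rack-02": 86, "rack-03": 74}


def migrate(workload, src, dst):
    """Stand-in for a live-migration call."""
    print(f"moving {workload} from {src} to {dst}")


def rebalance(workloads_by_rack):
    temps = read_rack_temps()
    coolest = min(temps, key=temps.get)
    for rack, temp in temps.items():
        if temp > TEMP_LIMIT_F and rack != coolest:
            # Shed load from the hot rack toward the coolest one.
            for workload in list(workloads_by_rack.get(rack, [])):
                migrate(workload, rack, coolest)
                workloads_by_rack[coolest].append(workload)
                workloads_by_rack[rack].remove(workload)


if __name__ == "__main__":
    placement = {"rack-01": ["batch-1"],
                 "rack-02": ["web-1", "web-2"],
                 "rack-03": []}
    rebalance(placement)
    print(placement)
```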