Monday, June 30, 2008

The Future of OS?

It would seem that I was on the right track over the past decade. The following excerpts, from a ZDNet blog by Mary Jo Foley and the Microsoft research sites it references, describe research into OS architecture I was writing about years ago, back when Microsoft was first getting legally slammed for monopolistic practices and delivering crummy software.

My points over the years have been that operating systems were being made to do things for which they were not designed, and that the use of legacy code and approaches was hobbling functionality and crippling performance, security, and innovative uses of ever-improving microprocessor designs and features.

I'd felt that starting over from the basics might solve lots of problems that are the result of renovating and building new additions to an old architecture. This should be obvious, but it's very good to see a willingness on the part of the owners of that rickety Rube Goldberg building to start over from scratch.

The first two quotes are about a "pure" research project at Microsoft: Singularity. The third quote is about a Microsoft spin-off from that research: Midori.

From ZDNet:

“The Singularity project started in 2003 to re-examine the design decisions and increasingly obvious shortcomings of existing systems and software stacks. These shortcomings include: widespread security vulnerabilities; unexpected interactions among applications; failures caused by errant extensions, plug-ins, and drivers, and a perceived lack of robustness. We believe that many of these problems are attributable to systems that have not evolved far beyond the computer architectures and programming languages of the 1960’s and 1970’s. The computing environment of that period was very different from today….”

Some more detail from Microsoft Research:

The status quo that confronted them (the Microsoft Research team) was the decades-long tradition of designing operating systems and development tools. Contemporary operating systems—including Microsoft Windows, MacOS X, Linux, and UNIX—all trace their lineage back to an operating system called Multics that originated in the mid-1960s. As a result, the researchers reasoned, current systems still are being designed using criteria from 40 years ago, when the world of computing looked much different than it does today.

“We asked ourselves: If we were going to start over, how could we make systems more reliable and robust?” Larus says. “We weren’t under the illusion that we’d make them perfect, but we wanted them to behave more predictably and remain operating longer, and we wanted people to experience fewer interruptions when using them.”

From the same ZDNet story on Midori:

“There’s a seemingly related (related to Singularity) project under development at Microsoft which has been hush-hush. That project, codenamed ‘Midori,’ is a new Microsoft operating-system platform that supposedly supersedes Windows. Midori is in incubation, which means it is a little closer to market than most Microsoft Research projects, but not yet close enough to be available in any kind of early preview form.”

There's not much information on Midori, but that's not too important. What's important is that, as we transition from the PC paradigm to something very different (and beyond the mobile device model too), there is a willingness to consider a whole new way of utilizing what we call computing technology. Without knowing more about what Microsoft is up to, and with no inside knowledge about what the other interested companies might be doing, I'd like to pose an idea:

Move away from our current hardware architectures. Find alternatives to buses and other limiting structures. Look at hardware design from the same fresh perspective Microsoft is bringing to OS design. Start over from scratch. And start by scratching the itch of the folks who might actually find this new paradigm useful.

Computers started as tools for breaking WWII ciphers and calculating ballistics. It was only in 1951 that the first business applications arrived; the first graphical computer game arrived a year later. All of these inventions were built upon engineering principles shaped by the limitations of vacuum tubes and early electronics. We're now at a point where those limitations can be transcended with a bit of imagination.

I look forward to seeing the imagination in action.

Wednesday, June 25, 2008

Tiered, Schmeared, Weird: Enterprise Storage Management Issues and Technologies

We've been doing research in the areas of enterprise storage management, with a focus on what is now called Tiered Storage and what EMC calls Information Lifecycle Management (ILM). The technology was also known, back in the day, as Hierarchical Storage Management (HSM).

The basic idea is pretty obvious. Store the most critical, performance-sensitive data on devices that are the most reliable and highest performing (and most expensive). Store less critical data on less expensive devices, and the least critical data on the least expensive, and possibly offline, devices.
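
To make the idea concrete, here is a minimal sketch, in Python, of how a tiering policy might map data to devices. The tier names, thresholds, and Dataset fields are all hypothetical, invented purely for illustration; no vendor's policy engine is anywhere near this simple.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    mission_critical: bool       # does the business stop if this data is unavailable?
    days_since_last_access: int  # rough proxy for how "hot" the data is

def assign_tier(ds: Dataset) -> str:
    """Map a dataset to a storage tier: hotter, more critical data lands on
    faster, more expensive devices; cold data drifts toward cheap or offline media."""
    if ds.mission_critical and ds.days_since_last_access <= 7:
        return "tier1-fc-san"      # high-end Fibre Channel array
    if ds.days_since_last_access <= 90:
        return "tier2-midrange"    # mid-range array or NAS filer
    if ds.days_since_last_access <= 365:
        return "tier3-nearline"    # tape library or slower disk, still connected
    return "offline-vault"         # removable media stored away from the data center

if __name__ == "__main__":
    for ds in (Dataset("orders-db", True, 1),
               Dataset("last-quarter-reports", False, 45),
               Dataset("2005-archives", False, 900)):
        print(ds.name, "->", assign_tier(ds))
```

The real complexity, of course, is in deciding what "critical" means, measuring access patterns, and moving the data without disrupting the applications that use it.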

I'll probably address several aspects of these technologies in multiple postings over the next several weeks. For now, I'll just define some terms and establish a baseline context for what follows. For these discussions, most of the storage solutions mentioned will be shared. Sharing may be via SAN, NAS, or iSCSI, but the devices themselves are shared.

First of all, let's describe the nature of the hardware typical of the different storage tiers. As mentioned previously, the logic behind tiered storage is that the most performance-sensitive, mission-critical data should be stored on devices that offer the greatest reliability and performance. This generally means Fibre Channel Storage Area Networks with the largest arrays, fastest backplanes, and fastest connections. Products in this class include the IBM DS6000 and DS8000, the HP XP series, the Sun StorageTek 9900 series, EMC Symmetrix, the Hitachi Universal Storage Platform, Pillar Axiom, and a few others.

The second storage tier, at least for purposes of this discussion, consists of systems that are not as powerful as the top-of-the-line products. They are intended for servers and applications where maximum performance is not the key driver. The hardware can be the same as listed previously, perhaps with lesser components, or older models. Depending on the scale of the organization, these devices could also be the same vendors' mid-range systems, for example HP EVA or EMC CLARiiON. This tier also opens the door for additional vendors and connection protocols, for example NetApp filers. (I know that NetApp makes high-end devices and could be included in the previous list, but this discussion is already very complicated.)

The next tiers bring us to another aspect of the story: online, near-line, and offline storage. Quickly: online storage is connected to and immediately available to the devices needing the data. Near-line storage is connected, but may not be immediately available for use, such as a Magstar tape used in legacy mainframe systems. Offline storage holds data that needs to be brought online before it can be accessed; this can be as sophisticated as a DVD stored in an automated library or as simple as a tape stored in a vault. The tiers being discussed here can consist of any combination of devices in these three states, depending upon need and design.
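
To keep those three states straight, here is another small, purely illustrative sketch. The access delays are rough orders of magnitude, not measurements from any particular product.

```python
from enum import Enum

class Availability(Enum):
    ONLINE = "online"        # connected and immediately readable (disk array)
    NEARLINE = "near-line"   # connected, but media must be mounted or loaded first (tape silo)
    OFFLINE = "offline"      # must be physically retrieved and brought online (vault)

# Illustrative, order-of-magnitude delays to first access -- not measured figures.
TYPICAL_FIRST_ACCESS = {
    Availability.ONLINE: "milliseconds",
    Availability.NEARLINE: "seconds to minutes (a robot loads the cartridge)",
    Availability.OFFLINE: "hours to days (media retrieved from the vault)",
}

for state in Availability:
    print(f"{state.value:>9}: {TYPICAL_FIRST_ACCESS[state]}")
```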

My next post will discuss some of the ways vendors are attempting to provide tiered storage solutions. After that, I'll be looking at solutions from vendors who purport to provide alternatives that greatly simplify this architecture, such as XIV Nextra, and perhaps drill down into some of the technical details. I'll also be discussing how the various vendors go to market with their interpretations of these themes. Not to belabor the obvious, but each vendor's strategy exploits its history, market, and product strengths.