Wednesday, January 28, 2009

Rackable puts desktop CPUs in low-cost servers - Network World

How’s this for a new approach to application serving:

Rackable Systems has turned to low-cost desktop components for a new server design that aims to provide a cheaper alternative for companies running busy Web applications, the company announced Wednesday. The design uses Athlon and Phenom desktop CPUs from Advanced Micro Devices and allows for highly dense servers that can be priced under US$500 because they are based on commodity PC parts, said Saeed Atashie, director of server products at Rackable.

Rackable puts desktop CPUs in low-cost servers - Network World

People have been using desktops as servers for years. Hell, my first publicly accessible web server ran on a Mac SE/30. And it ran commercial sites driven by an Oracle database that ran on a PC!

But that was a one-off test bed, not a large-scale web farm. It was also 1994.

Rackable makes high-density x86 servers. Until now, they have used Xeon and Opteron processors. The systems were designed so that as many as 84 servers could fit into a standard server cabinet. At about $3K apiece they were not cheap, but they were in the same ballpark as the much lower-density 1U servers from everyone else.

This offering fulfills the RAIC paradigm: a Redundant Array of Inexpensive Computers. While the reliability won’t be as high as with server-grade parts, that doesn’t matter much in this use model. Quickly pulling out a failed server and replacing it is the story here, and dumping a $500 box is much easier to justify to the CFO than a $3500 one, even if you could make the case that the $3500 one was cheap too.
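For the skeptical CFO, the arithmetic is easy to sketch. Here’s a toy calculation in Python; the $500 and $3500 unit prices and the 84-servers-per-cabinet figure come from the discussion above, but the annual failure rates are invented purely for illustration:

    # Toy model of the RAIC trade-off. Prices and cabinet density come from
    # the post; the failure rates are made-up numbers for illustration only.

    def annual_replacement_cost(unit_price, fleet_size, annual_failure_rate):
        """Expected yearly spend on swapping out failed servers."""
        expected_failures = fleet_size * annual_failure_rate
        return expected_failures * unit_price

    FLEET = 84  # one full cabinet

    commodity = annual_replacement_cost(500, FLEET, annual_failure_rate=0.10)
    server_grade = annual_replacement_cost(3500, FLEET, annual_failure_rate=0.03)

    print(f"Commodity boxes:    ${commodity:,.0f}/year in replacements")
    print(f"Server-grade boxes: ${server_grade:,.0f}/year in replacements")
    # Even at triple the failure rate, churning $500 boxes costs less than
    # half as much as churning $3500 ones.

And that’s before counting the up-front capital difference, which is where the real savings live.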

Very interesting.

The shifting tides of heterogeneous storage virtualization :: Wikibon

A few months ago I embarked on research concerning external virtualization, here called heterogeneous storage virtualization. I like that term better, and the TLA, HSV, works pretty well too.

The article points to the same drivers I’d identified in my research:

    1. Tiered storage - in an effort to create a default tier 2 strategy and avoid expensive tier 1 platforms;
    2. Migration capability - especially for customers facing a rolling financially-forced lease conversion every year or those with particularly frequent migrations due to acquisition strategies;
    3. Storage consolidation - in an effort to pool heterogeneous storage assets;
    4. VMware and server virtualization - to support backend storage virtualization for virtualized server environments.

So far, my research has focused on the main players: IBM with SVC, EMC with Invista, Hitachi with USP V (and HP, which sells Hitachi-built storage customized to HP specs but still based on Hitachi technology), and LSI with Storage Virtualization Manager. In the past few weeks, HP has brought to market its own version of the LSI product, renaming it SVSP and adding some features and capabilities.

It’s very hard to compare how these products differ in delivering their functionality, since the cost of hands-on testing is fairly high, but Edison hopes to begin some testing in the next several months. For now, it looks like the market will have to depend upon what the sales folks say. But in the future, perhaps, Edison will have some nuts-and-bolts information to help organizations make more educated decisions.

Sunday, January 25, 2009

Vista doesn't like to update

I've been having a problem with one of my computers that's running Vista. (I know, I know: why am I running Vista? Probably so I can go through this crap and understand what the fuss is about.)

Anyhow, a few months ago, Windows Update started having problems. It would still update automatically, but I couldn't access the tool to change settings. And when I wanted to update other Microsoft products as well, I couldn't access the Microsoft Update site, install or run Microsoft Update, or even use MS Office auto-updates.

I discovered that this might not be an unusual problem - there was a link on the error page to get support from Microsoft. I clicked it.

Microsoft support people started a long e-mail discussion of things to try, and I've tried them all over the past many weeks. Install this, run that command-line utility, download that other thing, do an upgrade installation (which failed because I'd installed SP1 and the Vista DVD was pre-SP1; MS kindly sent new media, but it too was pre-SP1 and came on CD, not DVD. Oh, and it didn't work either).

Additional research on the Microsoft forums has shown that this problem is common enough to have several threads about it.

Everyone seems to go through the same steps. Sometimes it works and sometimes it doesn't. I know that Microsoft is at a disadvantage compared to Apple here - every PC has different hardware and software installed, and not all of those products are made to the same high standards. And updating live software is always a challenge. Windows' (and earlier MS OSes') biggest advantage over the Macintosh OSes is that it runs on almost any Intel-compatible hardware. Its biggest disadvantage is that it also may not run on almost any Intel-compatible hardware.

The fruit of this conflicting agenda is the failure of Vista to gain market acceptance and the ongoing concerns about next-generation operating systems.

Sad.

Tuesday, December 2, 2008

Evangelism, Word-of-Mouth and Passion is the Next Evolution of Advertising Research

Jack Myers, a marketing and advertising expert, has written an interesting set of articles on the “irreversible progression of advertising messaging away from the 100-year tradition of mass reach and building awareness and toward an emphasis on achieving and maintaining consumer trust and passion.” While my company, Edison Group, is not engaged in advertising, the themes are very relevant to what Edison tries to do for our clients. I think these themes should be referenced in our marketing and should underlie much of our work.

The following is the meat of the article:

Rather than position these as self-contained and differentiated approaches to advertising value, it is more appropriate to consider established and emerging principles as redundant and overlapping.

Awareness = Relevance
Consumers must judge an advertised product or service to be relevant in order for that product/service to register in their consciousness, aka awareness -- a measure that researchers have defined through old-fashioned recall research.

Interest Requires Differentiation
For consumers to become actively interested in purchasing or even learning more about a product/service, its messaging will need to clearly differentiate it from its competitors. Whether that differentiation is based on price, quality, geography or other considerations, advertising will need to clearly communicate a differentiated positioning strategy.

Retention Requires that Consumers Clearly Perceive the Product/Service
Retention has also been measured in old-fashioned ways: how long do consumers remember a message after it has been communicated – and how recently does the message need to be repeated for it to have a meaningful impact. In the future, new measures will focus, as explained by BBDO president Andrew Robertson, on whether consumers can perceive the product or services fitting into the patterns and rituals they maintain throughout their lives.

Trust is Essential to Convince Wary Consumers
Once advertising establishes the unique points of differentiation and the relevance of the product/service to the lives of target consumers, a wary public will consider whether or not the advertiser can be trusted and if they can safely make a purchase decision with confidence the promise of the advertising will be fulfilled. Convincing and persuading consumers requires that consumer trust be established; continued purchase requires that trust be maintained.

The Ultimate Goal for Marketing is Passion and Active Evangelism
A brand is defined by differentiation, relevance, effectively communicated messages and fulfilled expectations. Brand equity is lost when there is a loss of trust. But the final goal of advertising – which in the past has been solely based on the actual purchase – now has an even more fully evolved goal. Consumer advocacy – the desire to be a public proponent and evangelist for a product or service – is defined by the passion consumers have for products and services. Passion goes beyond actual purchase decisions and reflects a proactive decision to seek to convince others to consume the product as well.

Marketers, creative agencies and all those involved in the creative process across all media and entertainment must begin thinking beyond traditional research metrics and traditional measures of success. They must begin evolving their strategies to move beyond reach and even beyond direct consumer actions – whether those actions be clicks or purchases. The next generation of communications will focus on passion – defined by word-of-mouth, blogs, evangelism, conversational marketing, and other forms of advocacy.

While it should be obvious, I’ll illustrate how each of these principles applies to our work and how we bring value to our customers.

Relevance – this is key to matching a technology product to real end-user needs. Not only must the products themselves be useful to our clients’ customers, but the work we do should be relevant to those customers’ real needs. For example, our work on management ease is not only a TCO issue, even though that’s how we’re presenting the concept. Ease of use is also relevant to many readers who are used to struggling with products that are just too complex or hard to use. The relevance comes from that personal connection and the comfort a user feels when a product is truly easy to use. Another example: Apple.

Differentiation – this is precisely what we do. I generally call it validation of positioning: providing demonstrable proof that the differentiating factors for our clients’ products are true, both within the product’s own context and versus competitors. By providing validation, we not only support the assertions of our clients, but enhance our demonstrations of relevance to the customer’s needs and perceptions.

Clear Perception Enables Retention – by providing a relevant, clear and validated differentiation of our clients’ products, we provide conceptual hooks by which customers can retain the impressions imparted by our work product. Whether it’s the belief, current in the customer base, that HP EVA is easy to use — something supported by our work and remembered as such by customers — or a silver bullet in a sales guide that is relevant to a salesperson’s daily effort — she made the sale because of something we wrote, and she remembers that we wrote it — the best outcomes from our projects come from our clear writing style and approach to our work.

Trust – this is a core value we bring to the table. Not only trust in our research but also, by extension, in our clients’ products. A reader won’t retain what we’ve produced if they don’t think what we’ve written is true. They won’t believe the differentiations if they don’t trust our methodologies and independence, and they can’t perceive relevance if they don’t trust that we understand their issues.

Passion and Active Evangelism – these are even more critical for our customer base. The enterprise technology products we write about are boring: they rarely inspire passion in their users, other than hatred when the products don’t work as advertised. Overcoming negative passions and inspiring word-of-mouth endorsements can go a long way toward helping our clients sell more. Helping end-user advocates close their internal sale for these expensive solutions is an important goal for most of our public documents. Our white papers don’t necessarily change the reader’s mind: instead, the reader uses what they have learned to prove to their boss the efficacy of the solutions we’re supporting.

Monday, June 30, 2008

The Future of OS?

It would seem that I was on the right track over the past decade. The following excerpts from a ZDNet blog by Mary Jo Foley, and from the referenced Microsoft research sites, describe research into OS architecture that I was writing about years ago, when Microsoft was first getting legally slammed for monopolistic practices and delivering crummy software.

My points over the years have been that operating systems were being made to do things for which they were not designed, and that the use of legacy code and approaches was hobbling functionality and crippling performance, security, and innovative ways of exploiting ever-improving microprocessor designs and features.

I'd felt that starting over from the basics might solve lots of problems that are the result of renovating and building new additions to an old architecture. This should be obvious, but it's very good to see a willingness on the part of the owners of that rickety Rube Goldberg building to start over from scratch.

The first two quotes are about a "pure" research project at Microsoft: Singularity. The third quote is about a Microsoft spin-off from that research: Midori.

From ZDNet:

“The Singularity project started in 2003 to re-examine the design decisions and increasingly obvious shortcomings of existing systems and software stacks. These shortcomings include: widespread security vulnerabilities; unexpected interactions among applications; failures caused by errant extensions, plug-ins, and drivers, and a perceived lack of robustness. We believe that many of these problems are attributable to systems that have not evolved far beyond the computer architectures and programming languages of the 1960’s and 1970’s. The computing environment of that period was very different from today….”

Some more detail from Microsoft Research:

The status quo that confronted them (the Microsoft Research team) was the decades-long tradition of designing operating systems and development tools. Contemporary operating systems—including Microsoft Windows, MacOS X, Linux, and UNIX—all trace their lineage back to an operating system called Multics that originated in the mid-1960s. As a result, the researchers reasoned, current systems still are being designed using criteria from 40 years ago, when the world of computing looked much different than it does today.

“We asked ourselves: If we were going to start over, how could we make systems more reliable and robust?” Larus says. “We weren’t under the illusion that we’d make them perfect, but we wanted them to behave more predictably and remain operating longer, and we wanted people to experience fewer interruptions when using them.”

From the same ZDNet story on Midori:

“There’s a seemingly related (related to Singularity) project under development at Microsoft which has been hush-hush. That project, codenamed ‘Midori,’ is a new Microsoft operating-system platform that supposedly supersedes Windows. Midori is in incubation, which means it is a little closer to market than most Microsoft Research projects, but not yet close enough to be available in any kind of early preview form.”

There's not much information on Midori, but that's not too important. What's important is the possibility that, as we transition from the PC paradigm to something very different (and beyond the mobile device model too), there is a willingness to consider a whole new way of utilizing what we call computing technology. Without knowing more about what Microsoft is up to, and with no inside knowledge of what the other interested companies might be doing, I'd like to pose an idea:

Move away from our current hardware architectures. Find alternatives to buses and other limiting structures. Look at hardware design from the same fresh perspective Microsoft is applying to OS design. Start over from scratch. And start over from the needs of the folks who might actually find this new paradigm useful.

Computers started as tools for breaking WWII ciphers and calculating ballistics. It was only in 1951 that the first business applications arrived. The first graphical computer game arrived a year later. All of these inventions were built upon engineering principles and the limitations of vacuum tubes and early electronics. We're now at a point where those limitations can be transcended with a bit of imagination.

I look forward to seeing the imagination in action.

Wednesday, June 25, 2008

Tiered, Schmeared, Weird: Enterprise Storage Management Issues and Technologies

We've been doing research in the areas of enterprise storage management, with a focus on what is now called Tiered Storage and what EMC calls Information Lifecycle Management (ILM). Back in the day, the technology was also called Hierarchical Storage Management (HSM).

The basic idea is pretty obvious. Store the most critical, performance-sensitive data on devices that are the most reliable and highest performing (and most expensive). Store less critical data on less expensive devices, and the least critical data on the least expensive and possibly offline devices.

I'll probably address several aspects of these technologies in multiple postings over the next several weeks. For now, I'll just define some terms and establish a baseline context. For these discussions, most of the storage solutions mentioned will be shared; the sharing may be via SAN, NAS or iSCSI, but the devices themselves are shared.

First of all, let's describe the nature of the hardware typical of the different storage tiers. As mentioned previously, the logic behind tiered storage is that the most performance-sensitive, mission-critical data should be stored on devices that offer the greatest reliability and performance. This generally means Fibre Channel storage area networks with the largest arrays and the fastest backplanes and connections. Systems in this class include the IBM DS6000 and DS8000, the HP XP series, the Sun StorageTek 9900 series, EMC Symmetrix, the Hitachi Universal Storage Platform, Pillar Axiom, and a few others.

The second storage tier, at least for purposes of this discussion, consists of systems that are not as powerful as the top-of-the-line products. They are intended for servers and applications where maximum performance is not the key driver. The hardware can be the same as previously listed, perhaps with lesser components, or older models. Depending on the scale of the organization, these devices could also be the same vendor's mid-range systems, such as HP EVA or EMC CLARiiON. This tier also opens the door for additional vendors and connection protocols, such as NetApp filers. (I know that NetApp makes high-end devices and could be included in the previous list, but this discussion is already very complicated.)

The next tiers bring us to another aspect of the story: online, near-line and offline storage. Briefly: online storage is connected to and immediately available to the devices needing the data. Near-line storage is connected but may not be immediately available for use, such as a Magstar tape in a legacy mainframe system. Offline storage holds data that needs to be brought online before it can be accessed; this can be as sophisticated as a DVD stored in an automated library or as simple as a tape stored in a vault. The tiers being discussed here can consist of any combination of devices in these three states, depending upon need and design.
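To make these definitions concrete, here's a minimal sketch of the placement logic in Python. The tier names, relative costs and the criticality scale are illustrative assumptions of mine, not any vendor's actual policy engine:

    # Illustrative tiering policy: map a criticality score to the cheapest
    # tier that still satisfies it. All names and numbers are made up.

    from dataclasses import dataclass
    from enum import Enum

    class Availability(Enum):
        ONLINE = "online"        # connected and immediately available
        NEAR_LINE = "near-line"  # connected, but not instantly usable
        OFFLINE = "offline"      # must be brought online before access

    @dataclass
    class Tier:
        name: str
        availability: Availability
        cost_per_gb: float  # relative cost, arbitrary units

    TIERS = [  # ordered from most to least capable (and expensive)
        Tier("Tier 1: FC SAN array", Availability.ONLINE, 10.0),
        Tier("Tier 2: mid-range array / NAS", Availability.ONLINE, 4.0),
        Tier("Tier 3: tape library", Availability.NEAR_LINE, 0.5),
        Tier("Tier 4: vaulted media", Availability.OFFLINE, 0.1),
    ]

    def place(criticality):
        """Criticality runs from 0 (archival) to 3 (mission-critical)."""
        index = len(TIERS) - 1 - max(0, min(criticality, len(TIERS) - 1))
        return TIERS[index]

    for score, label in [(3, "OLTP database"), (1, "old project files"),
                         (0, "seven-year-old audit logs")]:
        tier = place(score)
        print(f"{label}: {tier.name} ({tier.availability.value})")

Real products decide placement on much richer signals (access frequency, data age, policy rules) and migrate data automatically; the point here is only the shape of the decision.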

My next post will discuss some of the ways vendors are attempting to provide tiered storage solutions. In the future, I'll be looking at solutions from vendors who purport to provide alternatives that greatly simplify this architecture, such as XIV Nextra, and perhaps drill down into some of the technical details. I'll also be discussing how the various vendors go to market with their interpretations of these themes. Not to belabor the obvious, but each vendor's strategy exploits its history, market and product strengths.

Tuesday, May 20, 2008

The Server Virtualization Battle

Citrix's Xen won't cede to VMware or Hyper-V

It seems like only yesterday that I was wondering whether the price war in the hypervisor market should be worrying VMware. Wait, it was yesterday. Today I read that Citrix won't concede in the price war or on the technology.

From the TechTarget web site:

In December 2006, two startups, XenSource Inc. (now owned by Citrix) and Virtual Iron Software Inc., kicked off a virtualization price war, offering virtualization for as little as one-sixth the cost of VMware. Last fall with the entrance of Oracle Corp., Novell Inc., Red Hat Inc. and others into the battle, the price competition intensified, and then this spring, rivalry flared when Citrix cut prices again and initiated flat pricing for servers with up to four sockets. Citrix's efforts have met some success as well. Now all the players have geared up for Microsoft's August 2008 launch of Hyper-V, which is extremely low cost at the price of $28 per server. Citrix has a special mission in this new landscape. Sandwiched in between feature-rich VMware and lower-cost Hyper-V, Citrix's Xen has the daunting task of remaining price-competitive yet fully featured enough to compete.

Citrix's Xen won't cede to VMware or Hyper-V

Yesterday was about VMware not lowering prices because they "…have the right product packages with prices that allow the customer to do whatever it is they need to do with our technology…".

Today is about how "…Citrix may ultimately focus on the desktop virtualization. First, Microsoft hasn't targeted its virtualization efforts to virtual desktops, leaving an opportunity for others to gain advantage."

And that Citrix "…has an open storage interface that integrates innovations from storage vendors into the hypervisor with plug-in drivers."

The rest of the article combines some market-speak with lots of assertions about feature comparisons between Citrix XenSource, VMware and even Microsoft Hyper-V (though, to be fair, they state clearly that since Hyper-V isn't shipping, feature comparisons are invalid at this time). Lots of on-the-one-hand vs. on-the-other-hand comparisons.

Actually comparing these products, and either publishing the results for all to see or sharing them privately with our vendor clients, is what my company does best. VMware: hear those footsteps behind you? Novell: wonder why you're not in the top tier? Virtual Iron: the reviewers like you, but don't you need help getting some real market share? Microsoft: we know you're going to deliver something functional eventually; wouldn't it be good to have some support for your product claims and debunk the critics?

Why don't you give me a call?