It's too bad that the US World Cup team lost its first game, against the Czech Republic. Team USA actually played good soccer, but you'd never know that from the 0-3 final score or from looking at the game's highlights.
The loss can be chalked up to a couple of defensive mistakes (suicide against an experienced team such as the Czechs), not enough determination when attacking, and a good share of misfortune. When the US goes up against Ghana or Italy, the story could have a different ending.
I'm bringing up soccer because the digital recordings of games -- such as the clips you can enjoy at Yahoo -- are a perfect example of data eligible for CAS (content addressable storage) archiving. Other fixed content files that make good CAS candidates include a variety of contractual, scientific, and medical records (your last MRI, for example).
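To see why fixed content is such a natural fit, it helps to remember what "content addressable" means: an object's address is derived from its content itself, typically via a cryptographic hash. Here's a toy sketch (my own illustration, not any vendor's implementation) showing the idea in a few lines of Python:

```python
import hashlib

class ToyCAS:
    """Toy content-addressable store: an object's address is derived
    from its content, not from a file name or directory path."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        # The content hash becomes the object's permanent address.
        address = hashlib.sha256(data).hexdigest()
        self._objects[address] = data
        return address

    def get(self, address: str) -> bytes:
        return self._objects[address]

store = ToyCAS()
addr1 = store.put(b"final score: 0-3")
addr2 = store.put(b"final score: 0-3")  # identical content...
assert addr1 == addr2                   # ...same address: automatic dedup
assert store.get(addr1) == b"final score: 0-3"
```

Because the address never changes unless the content does, a CAS archive gets tamper evidence and deduplication essentially for free, which is exactly what you want for records that must be kept intact for years.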
Although we discussed CAS not long ago (http://www.computerworld.com.au/index.php?id=1934809641), you won't be bored by the latest news. Hitachi Data Systems recently announced an interesting approach, one that it describes as "second-generation CAS." A vendor entering a new market obviously wants to differentiate its offering from those of its competitors, and Hitachi is doing just that: Its Content Archive Platform is based on the OAIS (Open Archival Information System) standard.
OAIS is a reference model for archiving solutions initially drafted by the Consultative Committee for Space Data Systems. Several scientific organizations and the Library of Congress, among others, have adopted it.
There's more: You may remember how Caringo's cluster-based architecture stands in sharp contrast to EMC Centera's monolithic approach. Hitachi takes an interesting third path, combining a cluster of commodity servers running archive management applications from Archivas with TagmaStore WMS (Workgroup Modular Storage) hardware.
Archivas has been trumpeting applications that run on commodity hardware as well as offering an object-oriented approach to archiving since it was founded in 2003.
The basic CAP module is a cell comprising a two-node cluster plus one WMS100 storage device. Hitachi, however, suggests that a typical entry-point deployment will have at least four nodes and 5TB to 10TB of capacity spread across two WMS100s. The cluster will run Archivas' applications to provide a single policy-based archive interface to multiple applications via familiar file and Web access protocols, including CIFS, NFS, HTTP, and WebDAV.
Adding more nodes will speed performance; adding more disk drives to the WMS100 will increase data transfer rate and capacity -- up to 300TB and as many as 350 million files, according to Hitachi. The vendor estimates that CAP should outperform "first generation CAS" (Hitachi's euphemism for Centera) solutions 5-to-1.
Services provided by the cluster include searching, preserving content quality, ensuring secure custody, and verifying data removal when appropriate. CAP also provides those services for both structured and unstructured data, whereas most competitors focus only on the latter.
CAP should reach general availability this fall, at a price that will vary with node count and capacity; Hitachi suggests approximately US$225,000 for roughly 5TB of usable capacity, compliance applications included.
CAP will probably mark the beginning of a new and fierce stand-off between Hitachi and first-generation CAS vendors, but Hitachi seems to be aware that software will play a role at least as important as hardware in that battle.
In fact, on the same day it made its CAP move, Hitachi revealed "an ecosystem of ISV partners to deliver an integrated, best-of-breed portfolio of hardware and software components." That list of well-known names seems to be just the beginning of many partnerships to come.
I can't wait to see the CAS battle escalate. After all, increased competition is usually good news for customers.