Fifty years ago, data management was simple. Data processing meant running millions of punched cards through banks of sorting, collating and tabulating machines, with the results being printed on paper or punched onto still more cards. And data management meant physically storing and hauling around all those punched cards.
That began to change in 1951, when Remington Rand Inc.'s Univac I computer offered a magnetic tape drive that could input hundreds of records per second. In 1956, IBM rolled out the first disk drive, the Model 305 RAMAC. The drive had 50 platters, each 2 ft. in diameter, that could hold a total of 5MB of data. With disks, data could be accessed at random, not just sequentially, as with cards and tape.
But for decades, most firms used data only in batch runs for accounting, and it took time for the idea of navigating through data to catch on.
Data Management Is Born
In 1961, Charles Bachman at General Electric Co. developed the first successful database management system. Bachman's Integrated Data Store (IDS) featured data schemas and logging. But it ran only on GE mainframes, could use only a single file for the database, and all data tables had to be hand-coded.
One customer, BF Goodrich Chemical Co., eventually had to rewrite the entire system to make it usable, calling the result the Integrated Data Management System (IDMS).
In 1968, IBM rolled out IMS, a hierarchical database for its mainframes. In 1973, Cullinane Corp. (later called Cullinet Software Inc.) began selling a much-enhanced version of Goodrich's IDMS and was on its way to becoming what was then the largest software company in the world.
Meanwhile, IBM researcher Edgar F. "Ted" Codd was looking for a better way to organize databases. In 1969, Codd came up with the idea of a relational database, organized entirely in flat tables. IBM put more people to work on the project, code-named System/R, in its San Jose labs. However, IBM's commitment to IMS kept System/R from becoming a product until 1980.
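Codd's insight can be sketched in a few lines of modern code. This is a hedged illustration only, not System/R or anything Codd wrote: the table contents and the `join` helper are invented for the example. The point is that data lives in flat tables of rows, and relationships come from matching values across tables rather than from pointers or a parent/child hierarchy.

```python
# Two flat tables, each just a list of rows (dicts). The tables are
# related only by a shared value, dept_id -- there are no links to follow.
employees = [
    {"emp_id": 1, "name": "Ada", "dept_id": 10},
    {"emp_id": 2, "name": "Grace", "dept_id": 20},
]

departments = [
    {"dept_id": 10, "dept_name": "Research"},
    {"dept_id": 20, "dept_name": "Engineering"},
]

def join(left, right, key):
    """Relational join: pair up rows whose values match on the given key."""
    return [
        {**l, **r}          # merge the two matching rows into one result row
        for l in left
        for r in right
        if l[key] == r[key]
    ]

# Each result row combines employee and department fields purely by
# value matching, with no navigation through a hierarchy.
result = join(employees, departments, "dept_id")
```

In a hierarchical system like IMS, answering "which department does Ada work in?" means traversing a predefined parent/child structure; in the relational model, any two tables that share a key can be combined on demand, which is exactly the flexibility that later SQL databases delivered.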
But at the University of California, Berkeley, in 1973, Michael Stonebraker and Eugene Wong used published information on System/R to begin work on their own relational database. Their Ingres project would eventually be commercialized by Oracle Corp., Ingres Corp. and other Silicon Valley vendors. And in 1976, Honeywell Inc. shipped Multics Relational Data Store, the first commercial relational database.
By the late 1960s, a new kind of data software was being developed: decision support systems (DSS), designed to let managers put data to better use in their decision-making. The first commercial online analytical processing tool, Express, became available in 1970. Other DSS products followed, many developed inside corporate IT departments.
In 1985, the first "business intelligence" system was developed for Procter & Gamble Co. by Metaphor Computer Systems Inc. to link sales information and retail scanner data. That same year, Pilot Software Inc. began selling Command Center, the first commercial client/server executive information system.
Also that year, back at Berkeley, the Ingres project had mutated into Postgres, with a goal of developing an object-oriented database. The next year, Graphael Inc. shipped Gbase, the first commercial object database.
In 1988, IBM researchers Barry Devlin and Paul Murphy coined the term information warehouse, and IT shops began building experimental data warehouses. In 1991, W.H. "Bill" Inmon made data warehouses practical when he published a how-to guide, Building the Data Warehouse (John Wiley & Sons).
With the widespread adoption of PC-based client/server computing and packaged enterprise software in the 1990s, the transformation of data management was complete. Data management was no longer just storing and maintaining data, but slicing, dicing and serving it up in whatever ways users demanded.
And now, on with the story. . . .