The US, in my own estimation, spent more than $300 billion (in Australia it was around $12.5 billion) to secure a smooth computer transition into 2000. My numbers are based on comparisons between the Y2K fix-it costs that public corporations filed with the Securities and Exchange Commission and each company's administrative and sales expenses. With such financial data available for 7,600 public corporations, you can arrive at a more reliable approximation of Y2K costs than anything you'll read or hear from consultants and government officials.
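The extrapolation behind such an estimate can be sketched in a few lines. The figures below are purely hypothetical placeholders chosen to show the shape of the method, not the author's actual data: take the ratio of reported Y2K costs to administrative and sales (SG&A) expenses among SEC filers, then scale it to an economy-wide expense base.

```python
# Hypothetical sketch of the extrapolation described above. Take the ratio of
# reported Y2K fix-it costs to SG&A for firms that filed with the SEC, then
# apply that ratio to a broader SG&A base. All numbers are invented.

sec_filers = [
    # (reported_y2k_cost, sgna_expense), in millions of dollars (hypothetical)
    (120.0, 4_000.0),
    (45.0, 1_500.0),
    (300.0, 9_000.0),
]

total_y2k = sum(cost for cost, _ in sec_filers)
total_sgna = sum(sgna for _, sgna in sec_filers)
ratio = total_y2k / total_sgna  # Y2K spend as a fraction of SG&A

# Scale up to a (hypothetical) economy-wide SG&A base, in billions.
economy_sgna_billions = 10_000.0
estimated_y2k_billions = ratio * economy_sgna_billions

print(f"Y2K/SG&A ratio: {ratio:.2%}")
print(f"Economy-wide estimate: ${estimated_y2k_billions:.0f} billion")
```

The point of the method is that SG&A is a stable, audited denominator, so the ratio is less easily inflated than the headline totals consultants quoted.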
The question is whether that $300 billion-plus was a successful investment. Or, rather, was it money thrown at a problem under the hysterical conditions that one usually finds when paying ransom to rescue a kidnap victim?
The answer is important. Executives are now concerned with rationalising their Y2K programs as prudent decisions. Shareholders will also soon ask what other large payments are looming to obtain catastrophe-free computer services that their enterprises will need in order to function.
And the credibility of all computer executives could be at risk. I consider the Y2K tab a monumental failure. It demonstrates how the leadership of our information-based society couldn't cope economically with computer-based risks. There's no such thing as a perfectly functioning, completely reliable computer system, and there never will be, especially as the complexity and interconnectivity of networked systems permeate everything. Of all the inherent risks that I can list, I rank the omission of two calendar digits among the least deadly. The issue is, what have we learned so that we can deal with the dangers that are yet to emerge, such as the perils from cyber terrorism and information warfare?
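For readers who never saw the defect up close, a minimal sketch of what "the omission of two calendar digits" meant in practice: storing years as two digits makes date arithmetic wrap at the century boundary. The example is illustrative, not drawn from any particular system, and the "windowing" remedy shown is one of the common fixes applied at the time.

```python
# Illustrative two-digit-year bug: many legacy systems stored only the last
# two digits of the year, so comparisons and arithmetic broke at 2000.

def years_between_2digit(start_yy: int, end_yy: int) -> int:
    """Naive elapsed-years calculation on two-digit years, as legacy code
    often did."""
    return end_yy - start_yy

# A record dated 1999 ('99') evaluated in 2000 ('00'):
broken = years_between_2digit(99, 0)    # yields -99, not 1

def years_between_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
    """A common remediation: a 'pivot window' maps two-digit years onto a
    full century (here, 00-49 -> 2000s, 50-99 -> 1900s)."""
    def expand(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

fixed = years_between_windowed(99, 0)   # yields 1

print(broken, fixed)
```

Windowing was cheap but only deferred the problem past the pivot year, which is part of why remediation budgets varied so wildly from firm to firm.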
Our society knows how to cope with inherent risks that arise from technological progress. Steam boilers explode, trains derail, oil tankers spill their cargo, cars collide, buildings collapse and aeroplanes crash. Since hardly anyone can absorb what may appear to be a rare but huge loss, commerce has insurance.
The payment of an insurance premium allows businesses to share risks and submit to the judgment of expert risk assessors to come up with rational gambles about plausible hazards. In turn, the risk underwriters do their best to impose on society public standards, codes and tests that would minimise their own monetary losses.
In terms of dollar exposure, Y2K was initially believed to be riskier and more expensive than earthquakes or tornadoes. What, then, was the response from our executives? Instead of following the proven pattern of reliance on codes, standards, tests, competitively priced premiums and cost/benefit analyses, management abandoned sound decision-making and chose to protect each company as if it were a stand-alone fortress that had to depend on its own error-prone crews to achieve unprecedented levels of reliability.
Whipped into a spending frenzy by doomsday consultants, self-serving vendors and blame-dodging politicians, each organisation went ahead with a program of trying to attain a zero-risk status against unpredictable 'bugs'. As any actuary would tell you, the price of achieving risk-free perfection against every conceivable and unknown danger is a huge multiple of what it would cost to insure against specific perils. The difference between the pooled cost of insuring against Y2K's risks and each organisation's payment of 'protection money' for its own safety is my measure of Y2K's excess costs.
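The actuarial point can be put in toy numbers. Every figure below is invented purely to show the shape of the argument: pooled insurance prices the expected loss plus a loading, while the fortress approach has every firm pay for its own worst case.

```python
# Toy comparison of pooled insurance versus per-firm zero-risk spending.
# All figures are hypothetical, chosen only for illustration.

n_firms = 1_000
p_failure = 0.02            # assumed probability a firm suffers a Y2K loss
loss_if_failure = 50.0      # assumed loss per affected firm, $M
zero_risk_fix = 10.0        # assumed per-firm cost of a "perfect" fix, $M

# Pooled (insurance) view: everyone shares the expected loss plus a loading.
expected_loss = n_firms * p_failure * loss_if_failure
premium_pool = expected_loss * 1.3      # 30% loading for the underwriter

# Fortress view: every firm pays for its own zero-risk program.
fortress_total = n_firms * zero_risk_fix

print(f"Pooled premiums:   ${premium_pool:,.0f}M")
print(f"Fortress spending: ${fortress_total:,.0f}M")
print(f"Excess-cost multiple: {fortress_total / premium_pool:.1f}x")
```

Under these made-up assumptions the fortress approach costs several times the pooled premium; the gap between the two totals is the kind of excess the article is measuring.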
I predict that this loss will come to haunt the credibility of all computer executives whenever they try to make a new case for spending that won't yield more profits.

Paul Strassmann (firstname.lastname@example.org) believes that the Y2K experience is not a good example of how to cope with the risks of a computer-based society.