There will be several Y2K post-mortems in the coming months. Some will assess the costs of Y2K projects and the damages associated with Y2K failures. Others will investigate the puzzling success of less-prepared countries and unprepared small businesses. But the most useful form of post-mortem for IT managers will focus on the reasons for success, especially in the organizations that took Y2K seriously, spent an enormous amount of time and energy on remediation and testing and subsequently discovered that it had all paid off.
Some IT managers might retort, "Of course we succeeded! That's what we expected! What's the big deal?" But if it wasn't a big deal, 80% of large U.S. companies wouldn't have had Y2K "command centers" to monitor the rollover. Even if we exuded confidence publicly, many organizations spent considerable sums on both command centers and contingency plans, just in case of serious problems. History suggests that such precautions were well founded: We embark upon every new IT project with great confidence, but when the dust settles, many projects are delivered late, over budget or riddled with bugs.
Before we congratulate ourselves too enthusiastically for our Y2K success, we should admit that in many cases we failed from a budgetary perspective, and that it's too early to tell whether we failed in terms of bugs. Many large organizations spent two to three times their original estimates; the U.S. government, for example, estimated in 1997 that it would spend roughly $2 billion on Y2K repairs, but that figure gradually rose to approximately $8 billion by last fall. That's a polite way of saying it spent four times its original budget. As for bugs: Most organizations wait for a year of operational experience before making final judgments about the quality of a delivered system. Enthusiastic as we may be, it's a little too early to tell how many Y2K bugs will eventually be uncovered.
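For readers who never saw one of these defects up close, here is a minimal sketch of the kind of two-digit-year bug that remediation teams hunted, along with the common "windowing" repair. The function names and the pivot value of 50 are illustrative assumptions, not drawn from any particular remediation project:

```python
# Illustrative sketch of the classic two-digit-year defect behind Y2K.
# Storing years as two digits makes 2000 ("00") compare as earlier
# than 1999 ("99"), so date arithmetic across the rollover goes wrong.

def years_elapsed_buggy(start_yy: int, end_yy: int) -> int:
    """Pre-remediation logic: naive subtraction of two-digit years."""
    return end_yy - start_yy  # 00 - 99 yields -99, not 1


def years_elapsed_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
    """A common 'windowing' fix: two-digit years below the pivot are
    interpreted as 20xx, the rest as 19xx (pivot chosen per application)."""
    def expand(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)


print(years_elapsed_buggy(99, 0))     # -99: the rollover bug
print(years_elapsed_windowed(99, 0))  # 1: correct across the century boundary
```

Windowing was attractive precisely because it avoided expanding every stored date field, which is why so much remediation effort went into choosing and testing pivot values rather than rewriting file layouts.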
But one thing is clear: Most organizations did deliver and deploy Y2K-compliant systems in time for the non-negotiable Jan. 1 deadline - and most systems ran well enough to keep from crashing immediately. Even that degree of success was better than we might reasonably have expected, because virtually everyone achieved it, with no spectacular explosions, nuclear meltdowns, power blackouts, toxic leaks, plane crashes or bank failures - anywhere. So I ask again: How did we pull it off?
When I first predicted a pessimistic Y2K outcome during a conference presentation a few years ago, an IT manager in the back of the room shouted out loudly enough for everyone to hear: "This time it will be different!" I disagreed with him at the time, but I'm beginning to think he was right. This time, we really did get senior management's involvement and support, all the way up to, and including, the CEO and the board of directors. This time, we really did perform a triage to separate the "must-do" Y2K requirements from the "should-do" and the "could-do" categories. This time, we really did perform risk management and contingency planning - because this time, every decision-maker in the organization understood that failure to do so could result in bankruptcy - as compared with the typical IT project failure, which is embarrassing but not fatal. This time, we insisted that unit managers follow a disciplined project-management methodology, which included filling out weekly status reports with detailed information about progress, problems and risk. I know of one large company that used the same project-management methodology it had developed for every other project - but this time, the company insisted that it be used and the CEO talked to any team leader who balked at the paperwork involved.
This leads to an obvious question: If we could do it this time, why not do it next time and every time? In many companies, the Y2K effort could become a model for success in all future IT projects.
Yourdon heads the year 2000 service at Cutter Consortium in Arlington, Mass.
Contact him at www.yourdon.com.