Year 2000 advice - plan now for testing

If you think year 2000 testing can be ignored until 1999, think again.

Like everything else about 2000, success with testing is more a management issue than a technical issue.

One aspect of year 2000 testing is well-known to anyone who has managed a software project:

Approximately 50 per cent of the money, resources and time will be spent on testing. Thus, the organisation that dawdled in 1997 with awareness-building, inventory and assessment of its date-sensitive software will find that 12 months of effort achieved only 5 per cent of the overall task of year 2000 remediation.

The pace is picking up this year as organisations staff up for the implementation phase. But that will get them only to the halfway point, and it's likely to dribble into the first few months of 1999. Thus, half the work will be compressed into 12 months or less in 1999. And without intricate planning and management, the chances of success are very low.

Of course, that depends on how you define success. Some year 2000 managers will find themselves saying, "It's December 31, 1999, so we must be done with our testing. We hereby declare success!" This is a classic issue for any software project: How do you know when you've done enough testing? Alas, a common answer is, "We've done enough testing when we've run out of time." A less cynical version is, "We've done enough testing when we've gone several days without finding any bugs."

The appropriate definition of success involves coverage: "We've done enough testing when we can demonstrate that our test data has exercised X per cent of the instructions or X per cent of the logic paths in our program." There are several commercial tools that provide coverage analysis; the technology is well-developed, but the practice of using the technology is not. If you want to succeed with year 2000 testing in 1999, make sure that you get coverage-testing tools selected and installed in 1998 and that your project teams know how to use them.
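
To make the idea concrete, here is a minimal sketch in Python of what a coverage measurement looks like; the window_cutoff routine and the choice of test years are hypothetical stand-ins for whatever date logic your own tools would instrument.

    # A minimal sketch of coverage measurement using Python's standard-library
    # trace module. The window_cutoff routine and the test years are hypothetical
    # stand-ins, not part of any particular year 2000 tool.
    import trace

    def window_cutoff(two_digit_year):
        """Expand a two-digit year with a fixed pivot of 50."""
        if two_digit_year >= 50:
            return 1900 + two_digit_year
        return 2000 + two_digit_year

    tracer = trace.Trace(count=True, trace=False)
    for yy in (0, 49, 50, 99):                  # the test data being assessed
        tracer.runfunc(window_cutoff, yy)

    counts = tracer.results().counts             # {(filename, lineno): executions}
    src = window_cutoff.__code__.co_filename
    hit = sorted(lineno for fname, lineno in counts if fname == src)
    print("statements exercised by the test data:", hit)
    # A commercial coverage tool reports the same information as a percentage of
    # statements or logic paths - the number you track toward "done".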

While you're at it, buy some regression-testing tools, install them and make sure your people know how to use them. You need them because the attempt to fix year 2000 bugs will introduce new bugs in other parts of the software. A regression test checks software before and after a change is made to see not only if the changes work, but also whether another part of the software was broken because of the change.
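
The mechanics are straightforward even if the discipline is not. The Python sketch below shows the before-and-after idea with a hypothetical billing routine, a handful of illustrative test cases and an assumed expectations file; commercial regression-testing tools automate the same record-and-compare cycle at far larger scale.

    # A minimal sketch of a before-and-after regression check. The days_overdue
    # routine, the test cases and the expected_outputs.json file are illustrative.
    import json
    from datetime import date

    def days_overdue(due, paid):
        """Routine under test: how many days a payment missed its due date."""
        return max((paid - due).days, 0)

    CASES = [
        (date(1998, 3, 1), date(1998, 3, 15)),
        (date(1999, 12, 20), date(2000, 1, 5)),    # the span that matters in 2000
        (date(2000, 2, 28), date(2000, 2, 29)),    # leap-day case
    ]

    def record_before(path="expected_outputs.json"):
        """Run once against the unmodified system and save its answers."""
        with open(path, "w") as f:
            json.dump([days_overdue(d, p) for d, p in CASES], f)

    def check_after(path="expected_outputs.json"):
        """Run after every change: the remediated code must match the recording."""
        with open(path) as f:
            expected = json.load(f)
        actual = [days_overdue(d, p) for d, p in CASES]
        for case, old, new in zip(CASES, expected, actual):
            if old != new:
                print("REGRESSION:", case, "was", old, "now", new)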

We often overlook that risk with the simple "one-line patch" so common in maintenance projects, though legendary stories abound of cataclysmic disasters resulting from that practice. With year 2000, it's an utterly unacceptable practice because of the magnitude of the software changes required. For a typical Fortune 500 company, 80 per cent of the business applications are date-sensitive and therefore need to be remediated. And the remediation effort will typically involve modifying 5 per cent of the code.

While they're at it, programmers are tempted to fix a few other bugs that they discover in the legacy systems as well as eliminate "dead code" that may or may not turn out to be truly dead.

According to metrics guru Capers Jones, approximately 7 per cent of the code changes in year 2000 projects introduce new bugs. And when you're dealing with an enterprise portfolio of 200 million to 300 million lines of code, that means a lot of new bugs are introduced.
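
A back-of-envelope calculation shows the scale. The figures below come straight from the percentages quoted above, with the midpoint of the portfolio range assumed; how the percentages are multiplied together is an illustrative simplification, not Jones's own arithmetic.

    # A back-of-envelope reading of the figures quoted above. How the percentages
    # compose is an illustrative assumption, not Capers Jones's own arithmetic.
    portfolio_loc  = 250_000_000   # midpoint of the 200-300 million line range
    date_sensitive = 0.80          # share of applications needing remediation
    lines_modified = 0.05          # share of code touched within those systems
    bad_fix_rate   = 0.07          # share of changes that introduce a new bug

    changed_lines = portfolio_loc * date_sensitive * lines_modified
    new_bug_sites = changed_lines * bad_fix_rate
    print(f"{changed_lines:,.0f} lines changed, roughly {new_bug_sites:,.0f} introducing bugs")
    # -> 10,000,000 lines changed, roughly 700,000 introducing bugs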

Without regression testing to provide a before-and-after comparison, the project team won't know if everything that used to work still does. The IRS has already acknowledged one such experience in its year 2000 project, which resulted in 1,000 innocent taxpayers receiving an erroneous notice of late tax payments. More interestingly, a major Wall Street brokerage made an innocent mistake in its year 2000 project that resulted in a windfall $US19 million deposit being made in each of its clients' accounts.

Finally, year 2000 project managers need to implement a relatively unfamiliar form of testing now: baseline testing. If you're not familiar with the concept of baseline testing, think of it this way: If you've got a stable legacy system running in production mode, then the objective of the year 2000 effort is to replicate today's behavior with the year 2000-compliant version of the system.

But what does that mean? We can't simply declare, "Today's system works, and we want the new version to work, too, when we make the year 2000 corrections." Instead, we must say, "We have 1 million test cases, which represent 98 per cent coverage of the logic paths in today's version of the system, and here is the output of those test cases. After we finish making the year 2000 changes, we will run the same million test cases to verify that we get the same [logical] output. That's how we'll know the system still works."
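
The comparison step is worth spelling out, because "the same logical output" is not the same as a byte-for-byte match: the remediated system may legitimately print four-digit years where the old one printed two. Here is a minimal Python sketch, assuming each system writes one output line per test case to a text file and assuming a simple year-windowing rule for normalisation; both assumptions are illustrative.

    # A minimal sketch of the baseline comparison step. The file layout and the
    # normalisation rule (treating "01/02/98" and "01/02/1998" as the same
    # logical value, with a pivot of 50) are illustrative assumptions.
    import re

    def normalise(line):
        """Expand two-digit years so formatting changes don't mask real differences."""
        def expand(m):
            century = "19" if int(m.group(2)) >= 50 else "20"
            return m.group(1) + century + m.group(2)
        return re.sub(r"\b(\d{2}/\d{2}/)(\d{2})\b", expand, line)

    def compare_baseline(old_path, new_path):
        """Report every test case whose logical output changed after remediation."""
        with open(old_path) as old, open(new_path) as new:
            for case_no, (before, after) in enumerate(zip(old, new), start=1):
                if normalise(before) != normalise(after):
                    print(f"case {case_no}: baseline {before.strip()!r}, "
                          f"remediated {after.strip()!r}")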

Does this sound trivial? Consider the fact that today's "stable" system almost certainly contains bugs. Some bugs are known but not yet fixed, while others are latent and unknown. If you're dealing with 200 million to 300 million lines of code, then change control and configuration management are of paramount importance. And that means, with very few exceptions, that the year 2000 effort must replicate the buggy behavior of today's system. It also means that if you've begun remediating your applications without having conducted a baseline test, you're already out of control.

There's a lot more to this, of course, and there are some excellent year 2000 testing vendors that will be happy to lend you a hand. But don't call them, and don't begin making your plans, in 1999. Even though you may think that the primary activity for 1998 is implementation, you must begin planning for year 2000 testing now.
