Net legacy nightmares

Despite the exhortations about the Internet's revolutionary nature, IT organisations are learning the old maxim, "The more things change, the more they stay the same". Project managers have already learnt that they ignore basic software engineering principles at their peril when they succumb to the pressure of "Internet time" while building a new Web application. Now we're seeing another aspect of "déjà vu all over again": the emergence of Internet legacy systems.

In the good old days, it took two to three years to develop a new mainframe application in Cobol. If you were lucky, the original developers would hang around for another couple of years, and it would take yet another couple before business conditions changed enough to warrant major software changes. Only then did everyone realise they were dealing with a legacy system whose internal logic nobody understood. Today, every stage in an application's life has been compressed to months. The system is developed on a "death-march" schedule so the company can be first to market. When success results, the developers cash in their bonuses and disappear to greener pastures. Within a few months, the market has metamorphosed so much that major changes are needed, and nobody has a clue how the code works. Voilà: another legacy system!

We never solved this problem before, and perhaps it's naive to imagine we can do so in the world of the Internet. But because the legacy-creating process is inevitable and the Internet's compressed time scale means that some of the original senior managers will still be around to suffer the consequences of legacy code, perhaps there will be a greater sense of urgency about minimising the problem, if not eliminating it altogether.

In a perfect world, we would insist on a formal, disciplined analysis-and-design process, combined with meticulous documentation. But Internet-time projects are clearly not part of this perfect world, and it's unrealistic to assume we're going to see much discipline in a development effort measured in weeks or months.

But that doesn't mean we have to abandon order and discipline altogether. Instead, the prudent project manager should focus on "light" methodologies, such as those described in Kent Beck's new book, eXtreme Programming eXplained (Addison-Wesley, 2000), or Jim Highsmith's Adaptive Software Development (Dorset House, 2000). One of the biggest problems with legacy systems is the lack of documentation, and remember that documentation has one real aim: to transfer knowledge from the original developer to the maintenance programmer. There is an old technique that accomplishes the same transfer without the paperwork. It's called dual programming: two developers work together on each application component. In the old days, we sometimes did this simply because we didn't have enough terminals to go around.

Now we realise that it also provides a built-in peer review and, just as important, transfers knowledge: from a senior developer who is likely to depart when the project is finished, for example, to a junior developer who is expected to stay and help maintain the system.

Or consider videotaping all important design review meetings, joint application development sessions and other key meetings in which the system's technical aspects are discussed. Assuming that an iterative, prototyping approach is used, schedule a mini-post-mortem after each new prototype is implemented and videotape it for posterity. Use voice-recognition technology to transcribe the meeting's audio content, then index those documents so future generations can quickly track down key information.

None of these strategies is perfect, but they're better than ignoring the issue. If nothing else, Y2K taught us the consequences of letting undocumented legacy systems live on. It would be nice if we could avoid that in the brave new world of the Internet.
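As a rough illustration of the transcript-indexing idea above, here is a minimal sketch in Python. The "transcripts" directory, the one-plain-text-file-per-meeting layout and the tokenisation rule are all assumptions for the example, not features of any particular speech-recognition product.

```python
import re
from collections import defaultdict
from pathlib import Path

def build_index(transcript_dir):
    """Map each word to the set of transcript files that contain it.

    Assumes (hypothetically) one plain-text file per meeting, as
    produced by the voice-recognition transcription step.
    """
    index = defaultdict(set)
    for path in Path(transcript_dir).glob("*.txt"):
        # Crude tokenisation: lower-case words, digits and apostrophes.
        for word in re.findall(r"[a-z0-9']+", path.read_text().lower()):
            index[word].add(path.name)
    return index

def search(index, *terms):
    """Return the transcripts that mention every query term."""
    sets = [index.get(term.lower(), set()) for term in terms]
    return set.intersection(*sets) if sets else set()

# Example: find every recorded meeting in which "session state" came up.
index = build_index("transcripts")
print(sorted(search(index, "session", "state")))
```

Even something this simple would let a maintenance programmer, years later, find the one design review in which a puzzling decision was actually explained.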
