SOA (service-oriented architecture), we agree, is the way of the future. We'll build loosely coupled Web services now and wire them up into composite systems later. The benefits are clear: scalability, OS and language neutrality, easy integration. But as "later" starts to resolve into a date like 2003 or 2004, it's also becoming clear that SOA raises challenging issues. How, for example, do you monitor, test, and debug a distributed system when only some of its components are under your direct control?
In one sense, there's nothing new here. As GUI programmers have known for many years, event-driven software is fiendishly tricky stuff. Messages trigger more messages. The worst nightmare is a bug that manifests itself only when messages arrive in a particular sequence. The large-scale event-driven systems that we are now proposing to build using Web services will, without a doubt, give developers more of these bad dreams.
At the component level, there's a lot of progress being made. I recently met with Amitabh Srivastava, a Microsoft Distinguished Engineer who runs the company's Programmer Productivity Research Center (PPRC). His group has developed technology to correlate programs with their test suites and to prioritize the tests performed when programs change. The details are fascinating, but here's the gist of it: when an altered Microsoft Word or SQL Server DLL is checked in, a quick analysis of the binary code can correlate it with affected tests and can rank the tests by degree of impact. The technique works agnostically with x86, IA-64, and MSIL (.Net's instruction set) -- a good thing, since Microsoft software will increasingly and for a long time to come mix these code types.
To see why it's important to prioritize tests, think about shipping a security patch. Sure, your testbed can uncomplainingly grind through the whole test suite, but that could take a week and you need to deliver something in hours. Focusing on the tests actually affected by the patch is a huge win.
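The core idea can be sketched in a few lines. Here's a toy model of test-impact analysis, assuming only that we can map each test to the functions it covers and identify which functions a change touched (this is an illustration of the general technique, not Microsoft's actual PPRC tooling):

```python
from typing import Dict, List, Set

def prioritize_tests(coverage: Dict[str, Set[str]],
                     changed: Set[str]) -> List[str]:
    """Rank tests by how many changed functions they exercise.

    Tests that touch nothing in the change set are dropped entirely --
    that's the time savings when you need a patch out in hours.
    """
    impact = {test: len(covered & changed)
              for test, covered in coverage.items()}
    affected = [t for t, n in impact.items() if n > 0]
    # Highest-impact tests run first.
    return sorted(affected, key=lambda t: impact[t], reverse=True)

# Hypothetical coverage data for three tests.
coverage = {
    "test_save":   {"Doc.save", "Io.write"},
    "test_open":   {"Doc.open", "Io.read"},
    "test_export": {"Doc.save", "Doc.export", "Io.write"},
}
changed = {"Doc.save", "Io.write"}  # what the patch actually altered

print(prioritize_tests(coverage, changed))
# -> ['test_save', 'test_export']  (test_open is skipped: unaffected)
```

The real systems derive the coverage map and the change set from binary analysis rather than hand-built tables, but the prioritization step is essentially this.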
Granular data about software change, once available, could be used in all sorts of interesting ways. For example, how do you currently decide when to upgrade to a patched version of a program or to apply a service pack? Unless it fixes some urgent problem, you'll probably talk to friends, scan newsgroups, or just flip a coin. Given a quantitative view of the impact of the change, you could make a more informed decision. Conceivably, you could use that information to prioritize testing of your own dependent software.
As we combine individual components into federated systems, the distinction between open source and closed source begins to blur. Components built using either methodology offer services, and it is the openness of those services that will finally matter. XML buys us transparency in the messaging layer. It enables Mindreef's SOAPScope to debug any collection of endpoints, and Confluent's CORE Manager to enforce service-level agreements. But WSDL interfaces, notes Microsoft's Srivastava, don't tell you how to test services -- only how to invoke them.
It's no longer possible to carve up the software life cycle into distinct phases. Individually we may still design, develop, test, and deploy, but collectively it becomes a continuous process. Today, the test-management technologies the PPRC has invented are helping Microsoft's own developers deal with that reality. If Microsoft can push those technologies into its commercial tools, tomorrow's third-party developers will be similarly empowered.
The open-source communities, meanwhile, have their own rich testing traditions. JUnit, for example, isn't just a harness for regression tests. For practitioners of test-driven development, it's a framework that supports incremental exploration of design space. These tests can, and will, serve other purposes too.
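JUnit itself is Java, but the style it popularized translates directly. Here's a minimal sketch of the test-driven idiom using Python's unittest module, with a hypothetical parse_order function standing in for the component under design -- the point is that each test records an assumption about how the component behaves:

```python
import unittest

def parse_order(text: str) -> dict:
    """Hypothetical component under test: parses 'SKU:quantity' strings."""
    sku, qty = text.split(":")
    return {"sku": sku, "qty": int(qty)}

class ParseOrderTest(unittest.TestCase):
    def test_well_formed_order(self):
        # Encodes the assumption: orders arrive as "SKU:quantity".
        self.assertEqual(parse_order("A100:3"), {"sku": "A100", "qty": 3})

    def test_quantity_must_be_numeric(self):
        # Encodes the assumption: a non-numeric quantity is rejected.
        with self.assertRaises(ValueError):
            parse_order("A100:three")
```

A suite like this is a regression harness, yes, but it's also an executable record of the design decisions made along the way -- which is exactly what makes tests worth exchanging.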
Smooth interoperation of distributed services will require more than well-formed SOAP packets and WSDL-compliant interfaces. We'll also need ways to describe the assumptions encoded into our services. That's what software tests do. The ability to exchange tests and test metadata will, I predict, be one of the defining characteristics of an open service.
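To make the prediction concrete, here is one purely hypothetical shape such exchanged test metadata might take: the WSDL tells a consumer how to invoke the service, while attached test cases record the behavioral assumptions the consumer can verify against a live endpoint. None of these field names come from any real standard; this is a sketch of the idea, nothing more.

```python
# Hypothetical bundle: an endpoint description plus shareable test cases.
service_contract = {
    "wsdl": "http://example.org/quote?wsdl",
    "tests": [
        {
            "operation": "GetQuote",
            "input": {"symbol": "ACME"},
            "expect": {"currency": "USD", "price_is_positive": True},
        },
    ],
}

def check_case(case: dict, invoke) -> bool:
    """Run one exchanged test case against a service via `invoke`."""
    result = invoke(case["operation"], case["input"])
    ok = result.get("currency") == case["expect"]["currency"]
    positive = result.get("price", 0) > 0
    return ok and positive == case["expect"]["price_is_positive"]

# A stub stands in for a real SOAP call.
def fake_invoke(operation, params):
    return {"currency": "USD", "price": 42.0}

print(check_case(service_contract["tests"][0], fake_invoke))  # -> True
```

A consumer who can run such cases before wiring a service into a composite system knows far more than a WSDL file alone can tell them.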