Therefore is the name of it called Babel, because the Lord did there confound the language of all the earth. - Genesis 11:9

And we know what happened next: work on the greatest high-rise corporate office block of biblical times ground to an immediate halt as the thousands of planners, architects and designers, logistics experts, teamsters, stonemasons, carpenters, bricklayers and hod-carriers (not to mention the system administrators) shouted at each other in mutual incomprehension.
This exacerbated an already difficult problem, that of barter: tough enough getting agreement that one donkey equals 100 bushels of wheat, without trying to do it in a mix of Hebrew and Chaldean!
Then two things happened.
First, some lateral thinker invented money, which was really nothing more than a lump of easily transported metal with the king's picture on it, guaranteeing it to be of a certain weight and purity, and therefore of fixed exchange value.
Then the Romans came along, invented Latin, and conquered a big chunk of the world. Suddenly it was very cool to know Latin: it meant you could talk to anybody else who knew it too, no matter where they came from. And ecce! (that is, voila!), standards were born.
Standards were such a good idea that we've been inventing them in droves ever since. In fact, we've had such fun developing them that we often have several for any given topic we want to agree on. Of this the Romans would have disapproved: good standards, and the interoperability they allow, only work when they are pretty universal and guaranteed by an umbrella structure such as, say, the Roman Empire. When that fell in a heap, people went back to a non-standardised way of doing things: unrelated local dialects, and meaningless local currencies not worth a scrap of tin outside their own province. We called it the Dark Ages, and it was very, very bad for business.
To a degree, we're still in that situation in IT today, though not, perhaps, as feudal as all that. In the absence of imperial decrees specifying exactly how systems and products should be designed so that they work seamlessly together, we're faced with a variety of costs associated with running heterogeneous environments - costs which may be incurred at any of several levels: staff, software, hardware, management solutions. Where do those costs lie, and can they be avoided?
According to Graham Penn, director, storage research for Asia Pacific at IDC, the financial effect of standardisation or the lack of it can be seen at five main points: cost of acquisition, cost of deployment, cost of management, daily operational costs, and opportunity cost. ("If you waste a lot of time doing mundane, stupid things, you can't get on with doing high business-value things.")

Common sense and industry orthodoxy suggest that, of course, a highly uniform environment must be more economical to run. After all, that's in effect what Henry Ford did to drive down the cost of the automobile so dramatically. But there's a dissenting opinion held by some, a heresy suggesting that many of the savings have been illusory, or at least eaten up by increased expenses in other areas.
Similarly with network management. Bruce Boardman of Syracuse University in the US, in a white paper entitled "Management Standards Come Together", says with regard to network management frameworks proposed since the early 90s, "it never happened. Well, it kind of happened, but most of these frameworks required so much heavy lifting and money that the solution failed".
One size fits all? Or alphabet soup?
What to do? Two broad approaches seem open to enterprises wishing to standardise. Most straightforwardly, they can, as far as possible (usually not very far at all), choose a single vendor.
Natasha David, senior analyst, software at IDC, says "the bottom line is that apart from greenfield sites and start-up companies, most organisations are operating within a heterogeneous environment," and the older the legacy systems in place, the more that is likely to be the case. As Penn adds, "there's a trade-off between committing yourself to a single vendor that may offer total interoperability, as against being caught in the vendor's clutches.
"Somewhere along that continuum is the best, lowest-cost point. But bear in mind that over time - due to technology and product cycles - no one vendor necessarily has the lowest-cost solution."
Or, enterprises can embrace their diversity, and rely on the rapidly evolving arsenal of communication protocols and middleware standards that promise increasing levels of connectivity and interoperability, along with a forest of new acronyms.
This implies that 'the industry' is ready for collaboration. But for developers, standardisation may be a mixed blessing, depending on their size. Barb Goldworm of Network World has observed that "because standards help developers build products that run across different platforms from different vendors, they help level the playing field.
"This is great for small developers and for users. For major players, however, a level playing field goes against their competitive advantage. Standards, therefore, become a love/hate passive/aggressive arena. Everyone says they'll implement the standards. But definitions are painfully slow and actual implementations by major vendors are even slower."
Frank Hayes, a senior news columnist at US Computerworld, puts it more colourfully: "Standardisation is an advantage for little guys, not for 800-pound gorillas who can get whatever they demand."
IDC's Penn partly concurs, but says he has observed big players, while trying to control the game, begin in time to lose their competitive advantage to smaller, more agile start-ups, and niche companies with a good idea.
"Take Fibre Channel," he says. "Over the past three or four years it has been touted as the preferred storage interconnect, and everyone moved in that direction. But out of left field came Fibre Channel-across-IP or SCSI-across-IP, and suddenly Fibre Channel is just one of a number of alternatives. The people that were big in Fibre Channel, such as Brocade, are now embracing the new technology as well, because they can see that if they don't, in two or three years they'll be marginalised."
Penn adds: "It is fair to say that over the last three years, increasingly we have seen the various vendors, responding to user requirements, prepared to work together to establish interoperable standards. We're not there yet. But look at the SNIA - the Storage Network Industry Association - or the Fibre Channel Association. There, organisations which were totally proprietary are prepared to give up their own trade secrets for the benefit of the industry overall. And I see that as very positive, and we shouldn't trivialise that, because it means that through time, organisations with disparate legacy equipment will be able to connect them together. Some of that is driven by the users, some by a desire for the large IT companies to move away from being islands by themselves, and some by start-ups such as Brocade and Veritas which provide some of the pieces in the middle."
More developers are attempting to build bridges across a number of divides. Take the example of Dublin-based Iona. Its XMLBus should, by the end of the year, enable developers to build Web services on the Iona iPortal, BEA Systems' WebLogic, or IBM's WebSphere, and take advantage of services built on .Net.
"XMLBus has passed interoperability testing with nine Web service implementations," according to Rebecca Dias, Iona's product manager for XMLBus. "Iona's XMLBus currently supports Java applications and EJBs (Enterprise JavaBeans)," Dias said. "The next two desires for us in the short term are JMS and Corba, and eventually CICS and IMS."
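The interoperability Dias describes rests on the platform-neutral XML envelope at the heart of SOAP. A minimal sketch of the idea, assuming a made-up service and method name (only the SOAP 1.1 envelope namespace is real; nothing here reflects XMLBus's actual API):

```python
# Minimal sketch of the interoperability idea behind SOAP-based Web
# services: a platform-neutral XML envelope that any compliant stack
# (WebLogic, WebSphere, .Net, etc.) can parse. The namespace URI is
# the real SOAP 1.1 envelope namespace; the method and parameter
# names below are illustrative, not any vendor's actual interface.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(method, **params):
    """Wrap a method call and its parameters in a SOAP envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, method)
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

msg = soap_request("getQuote", symbol="IONA")
print(msg)  # an envelope any SOAP-aware platform can consume
```

Because the envelope is plain XML with an agreed namespace, the caller needs to know nothing about whether the receiving end is Java, Corba or .Net, which is the whole point of the standard.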
But in the long run it's not enough, according to Kevin McIsaac, Meta Group's program director, server infrastructure strategies, for third parties to produce connectivity tools. Such standards will have 'truly arrived' when vendors offer them along with their applications. "At the moment," he says, "there's a dilemma.
"On one hand we've got products from some of the top integration companies, which provide a very robust, stable, rich set of tools. But they're all proprietary. On the other side there's the emergence in the last year or so of the recognition that we could actually have a standard integration server, which may not be quite as good as the proprietary tools, but which will be good enough. The dilemma today is: do I go with what I know is a robust, rich solution that will work now and be simple to deploy, but in five years time will definitely be a legacy? Do I wait and do nothing at all? Or do I buy into these new standardised approaches, which are very immature?"
Time, resources, cash
Sometimes it's not a choice. Gary Whatley is CIO at Corporate Express, which he describes as "the one-stop shop for the office space, for major corporates" in Australia and New Zealand, with major European links. About 40 per cent of its business is done electronically, most of that via its e-procurement site; given the variety of customers it does business with, the company went with XML.
Corporate Express's B2B mechanism is based on Intel machines running Linux. "Our e-procurement - the Linux side - is connecting in real time to our ERP system on [IBM] RS6000," he says. "In some cases, we have significant numbers of customers who are coming into the site and may be using other software. Also, they could be sending us electronic orders via one of the XML or EDI formats."
Corporate Express chose XML as the basis for an ERP initiative, which began at the start of 2001. "Our e-procurement side was growing significantly," says Whatley. "We also had a lot of customers talking about wanting to integrate their back-ends with our back-ends. In the past that had been on a peer-to-peer basis: a one-off solution each time. What I wanted to build was some infrastructure that we could reuse and leverage.
"Our approach has been to push standards as much as we can," he adds. "That's been an issue because everybody talks about XML, but there are just so many flavours of XML."
Lack of integration is often cited as a cost, but at Corporate Express, the cost was in time.
With close to 50 integration projects with customers in progress at once, Whatley says, they were taking months to do. So the main driver was trying to use standardisation to accelerate customer integration. WebMethods allowed Corporate Express to address the different XML 'flavours' they encountered.
"We support most of them in regards to the major trading environments of our customers," he says. "Most of them are SAP customers, so we support CommerceOne and xCBL3; we also have a lot of Ariba customers, so we support cXML. We support probably six or seven different formats."
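Supporting "six or seven different formats" comes down to mapping each XML flavour onto one canonical internal shape. A minimal sketch of that normalisation step, using two hypothetical order fragments (the element names are loosely cXML-like and xCBL-like but are invented for illustration, not the real schemas):

```python
# Minimal sketch: normalising two hypothetical XML order "flavours"
# into one canonical form. Element names are illustrative only.
import xml.etree.ElementTree as ET

CXML_STYLE_ORDER = """<OrderRequest>
  <ItemOut quantity="10">
    <ItemID><SupplierPartID>PEN-01</SupplierPartID></ItemID>
  </ItemOut>
</OrderRequest>"""

XCBL_STYLE_ORDER = """<Order>
  <OrderDetail>
    <Quantity>10</Quantity>
    <PartNumber>PEN-01</PartNumber>
  </OrderDetail>
</Order>"""

def normalise(xml_text):
    """Map either flavour onto the same canonical dict."""
    root = ET.fromstring(xml_text)
    if root.tag == "OrderRequest":          # cXML-like flavour
        item = root.find("ItemOut")
        return {"part": item.find("ItemID/SupplierPartID").text,
                "qty": int(item.get("quantity"))}
    if root.tag == "Order":                 # xCBL-like flavour
        detail = root.find("OrderDetail")
        return {"part": detail.find("PartNumber").text,
                "qty": int(detail.findtext("Quantity"))}
    raise ValueError(f"unknown order format: {root.tag}")

# Both flavours yield the identical canonical order.
print(normalise(CXML_STYLE_ORDER))
print(normalise(XCBL_STYLE_ORDER))
```

Once every inbound format funnels into the one canonical shape, each new customer integration is a mapping exercise rather than a one-off project, which is the acceleration Whatley was after.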
But for Whatley, it's not technology that's the problem. He says: "The biggest issues, we've found, really aren't the technology, they're the alignment of processes between us and our customers. Technology, from my point of view, is easy and it's all there; it's really the process side that takes the time and significant work."
Doctor, heal thyself
Although - perhaps because - Dimension Data is itself a provider of network integration and i-commerce solutions, it has taken its own medicine and moved towards standardisation, thus becoming a user of the solutions it recommends. Says CIO Scott Petty, "actually standardising on a platform is less important than standardising processes: having a consistent, repeatable way to build desktops or servers is the key.
"When you talk about interconnectivity between companies," he says, "it's very difficult to say you'll only do it one way. In our business, our two major vendors are Cisco and Microsoft, and we integrate our ERP into their back-end systems. They happen to be different, so as a trading partner we have to support both of those. But our processes to interface and deliver the information are consistent across both pieces of technology."
In contrast to Gary Whatley's haste, Petty thinks interoperability is best appreciated in small, repeated doses.
"If your primary project implementation driver is pushing down costs through standardisation," he suggests, "you probably will fail. I think you've got to build standardisation into existing business-value projects. Have a clear blueprint that you're working towards, and as projects kick off that can assist you in that standardisation, you're going to get them implemented at a low cost, and still reap the benefits of a lower cost of ownership or management."
He is suspicious of radical revisions.
"For example," he says, "a lot of people will run a 'server standardisation' project: they're going to cut everything over to NT or Windows 2000, and they spend five or six million dollars doing that. I'd say, yes, by the time you get that piece of work done, you're probably going to fail [in cost saving]. But our approach was that as we rolled out new applications, new platforms and new tools into the business, we'd migrate all of them to a common platform: 'standardisation by stealth', if you like. The end result is we still got the cost of ownership benefits through a common platform, but we did that at a pace that made sense, with very little incremental cost to the projects that we were running anyway."
Over at the banks they know how to count costs. But at St George Bank, Colin Cain, chief manager, Internet solutions, agrees that the actual costs of improving interoperability aren't in cash: in his case, they're in people - people who had to waste time doing what Penn calls "mundane, stupid things".
"In 99 per cent of cases," Cain says, "technology and vendor interoperation at certain levels is easily achieved. No issue there at all; the biggest cost to us is maintaining and managing the environments that we're responsible for.
"We've got one of everything," says Cain. "From a host system point of view, we're an IBM shop; we also use Tandem/Compaq. Mid-tier, we're an AIX shop but we also have Solaris. At Intel-layer, we're typically IBM at the server and the desktop, but we share the desktop space with Dell.
"In the telecommunications space, we're a Cisco environment. In storage, EMC. We try to choose best or near-best-of-breed of all the vendor technology deliverables."
So it's hardly surprising that Cain feels the biggest success factor within St George's technology infrastructure is establishing a standard approach to middleware.
"In other words," he says, "trying to ensure that we move our business rules layer to our back-end systems and that we establish an architecture that has a presentation layer, a middleware layer and a business rules layer. In executing that, XML is becoming almost a default for everything that we do in that interaction between the layers.
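The architecture Cain describes can be sketched in miniature: a presentation layer that never touches policy directly, exchanging XML with a middleware layer that calls into a business-rules layer. Everything here (the account format, the approval limit, the element names) is invented for illustration, not St George's actual systems:

```python
# Minimal sketch of a three-layer split: presentation -> middleware ->
# business rules, with XML as the interchange format between layers.
# All names and rules here are hypothetical.
import xml.etree.ElementTree as ET

def business_rules(account, amount):
    """Business-rules layer: the only place policy lives."""
    APPROVAL_LIMIT = 500.0  # illustrative threshold
    return "approved" if amount <= APPROVAL_LIMIT else "referred"

def middleware(request_xml):
    """Middleware layer: parse the XML request, invoke the rules,
    and reply in XML, isolating each layer from the others."""
    req = ET.fromstring(request_xml)
    account = req.findtext("account")
    amount = float(req.findtext("amount"))
    reply = ET.Element("reply")
    ET.SubElement(reply, "status").text = business_rules(account, amount)
    return ET.tostring(reply, encoding="unicode")

def presentation(account, amount):
    """Presentation layer: builds the XML request and renders the
    reply; it knows nothing about the business rules themselves."""
    req = ET.Element("request")
    ET.SubElement(req, "account").text = account
    ET.SubElement(req, "amount").text = str(amount)
    return middleware(ET.tostring(req, encoding="unicode"))

print(presentation("123-456", 120.0))
print(presentation("123-456", 900.0))
```

Because the layers only ever see XML, any one of them can be replaced (a new front end, a different back-end host) without disturbing the others, which is the payoff of making the middleware approach the standard.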
"Standardisation is absolutely the way to go," he concludes, "but it has to be looked at in terms of 'what is your business', and what the business gets out of it. Doing it for its own sake is not the Holy Grail; ultimately it has to be tested against the business requirements."