When Novartis AG needed extra processing power, the pharmaceutical giant found it -- 5 trillion floating-point operations per second of unused capacity, to be precise -- in 2,700 desktop PCs at its headquarters in Basel, Switzerland. The company lashed the PCs together into a compute grid that it now uses to run number-crunching supercomputer applications, which model the interactions between proteins and other chemicals that might be used in drugs.
"The grid has opened up a number of opportunities for us which were just not there before," says Manuel Peitsch, head of informatics and knowledge management at subsidiary Novartis Research. "People couldn't imagine doing the things that we are doing today on a routine basis."
The Novartis drug research software is loaded onto the desktops by way of a server running Grid MetaProcessor software from United Devices Inc. in Austin. By investing US$400,000 in grid technology, Novartis avoided spending $2 million on a new Linux cluster.
The Novartis success story is far from unique. Drug companies, university computation centers, product development and engineering departments, federally funded research consortia and a few financial services firms have set up computer grids. They report big savings in hardware costs and sometimes productivity improvements as well.
Grids consist of geographically dispersed computers linked dynamically to present users with a unified view of computational resources such as compute cycles, disk space, software or data. There are intracompany grids, such as the one at Novartis, and partnership grids, such as the National Science Foundation-sponsored TeraGrid.
Utility grids, which proponents say could provide unlimited on-demand access to computer resources in much the same way the U.S. electric power grid provides on-demand access to electricity, are a dream of companies such as IBM and Hewlett-Packard Co. However, such grids don't yet exist.
Today, most grid applications share three characteristics. First, they are computationally intensive. Second, most are written for parallel or massively parallel execution. Third, like the Novartis grid, most are built to harvest unused compute cycles. Some, however, focus on getting at distributed data or disk resources.
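The parallel, cycle-harvesting pattern the article describes reduces to an embarrassingly parallel master/worker design: a job is split into independent work units that idle machines can score in any order. The sketch below illustrates that pattern only -- the function names and the toy scoring kernel are invented for illustration, not taken from Novartis' software or United Devices' actual API.

```python
# Illustrative sketch of a cycle-harvesting grid workload: a master splits
# a job into independent work units and farms them out to spare processors.
# (Toy example; not Novartis' software or the United Devices API.)
from concurrent.futures import ProcessPoolExecutor

def score_candidate(compound_id: int) -> tuple[int, float]:
    """Stand-in for a protein-interaction scoring kernel (pure CPU work)."""
    score = sum((compound_id * k) % 97 for k in range(1, 10_000)) / 10_000
    return compound_id, score

def run_grid_job(candidates: list[int]) -> dict[int, float]:
    # Each work unit carries no shared state, so units can run on any
    # node in any order -- the property that makes cycle harvesting work.
    with ProcessPoolExecutor() as pool:
        return dict(pool.map(score_candidate, candidates))

results = run_grid_job(list(range(8)))
print(len(results))  # prints 8: one score per candidate compound
```

The key design property is in the middle comment: because no work unit depends on another, a scheduler can hand units to whichever desktops happen to be idle and merge results as they trickle back.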
Although IT vendors tout grids for all kinds of applications, grids have barely begun to move beyond scientific, engineering and mathematical/statistical applications. One reason is that most business applications weren't written with parallel processing in mind, so they're less able to take advantage of the many semi-independent processors that form grids.
"Parallelizing these applications is a major rewrite," says Carl Greiner, an analyst at Meta Group Inc. in Stamford, Conn. "That's why grids are having a difficult time in the commercial space." It will be five years before applications such as supply chain systems become suitable for grid computing, he predicts.
Another impediment is that tools for monitoring usage, charging for usage and even ensuring security in grids aren't well developed, Greiner says. The lack of such capabilities is especially troublesome when a grid spans multiple departments or companies, he adds. In a survey of 50 companies sponsored by Platform Computing Inc., a developer of grid software in Markham, Ontario, 89 percent of respondents cited organizational politics as a barrier to implementing grids. Objections included fear of losing control of IT resources -- "server hugging" -- and fear of a reduction in the IT budget.
Ahmar Abbas, managing director of Grid Technology Partners in South Hadley, Mass., sums up the obstacles to more widespread adoption of grids this way: "You have to really understand your applications -- Can I distribute them?" But, Abbas says, vendors are helping users get applications grid-enabled. For example, IBM recently announced a new release of WebSphere Application Server that lets users bring a collection of servers into a grid to balance the workloads across several WebSphere applications. A future enhancement will also support non-WebSphere applications in the grid, IBM says.
Web services hold the key to grid computing for commercial applications, Abbas says. "The way business applications will take advantage of the grid is through XML, UDDI, SOAP and WSDL. The Open Grid Services Architecture (standard) takes all the capabilities that grid can offer and makes them appear in the same nomenclature as a Web services application," he says.
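The idea Abbas describes is a facade: grid capabilities exposed through standard Web-services plumbing, so business applications speak XML rather than a grid-specific protocol. A minimal sketch of such a request follows; the envelope shape and the `SubmitJob` operation name are hypothetical, SOAP-style illustrations, not an actual OGSA message format.

```python
# Hypothetical sketch: a grid job submission expressed as a SOAP-style
# XML request, so callers need only standard Web-services tooling.
# Element and operation names are invented for illustration.
import xml.etree.ElementTree as ET

def build_submit_request(app: str, work_unit: str) -> bytes:
    env = ET.Element("Envelope")
    body = ET.SubElement(env, "Body")
    call = ET.SubElement(body, "SubmitJob")  # hypothetical operation name
    ET.SubElement(call, "Application").text = app
    ET.SubElement(call, "WorkUnit").text = work_unit
    return ET.tostring(env)

req = build_submit_request("docking-sim", "compound-42")
print(req.decode())
```

In a real deployment the operation and its parameters would be published in a WSDL document and discovered via UDDI, which is what lets a business application treat the grid like any other Web service.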
Considerable work on grid standards is now under way among vendors, users and researchers. But many applications don't yet conform to the standards, and even some grid product vendors say the standards aren't mature enough for commercial applications.
While commercial applications aren't yet ready, traditional grid applications continue to grow. Researchers at Purdue University in West Lafayette, Ind., have a hierarchy of distributed computing resources, with supercomputing at the top, six 48-node Intel/Linux clusters in the middle and a 2,300-PC grid running on United Devices software at the bottom. The goal, says David Moffett, associate vice president for research computing, is to move jobs down the hierarchy, where computing is cheaper.
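Moffett's policy of pushing jobs down the hierarchy can be sketched as a simple cost-based router: pick the cheapest tier that still satisfies a job's requirements. The tier names mirror the article, but the cost figures, core counts and the shared-memory requirement are illustrative assumptions, not Purdue's actual scheduling rules.

```python
# Illustrative sketch of routing jobs down a compute hierarchy: try the
# cheapest tier first, escalate only when a job's needs demand it.
# Costs and capability limits below are invented for illustration.
TIERS = [  # ordered cheapest first
    {"name": "pc_grid", "cost": 1, "max_cores": 1, "shared_memory": False},
    {"name": "linux_cluster", "cost": 10, "max_cores": 48, "shared_memory": False},
    {"name": "supercomputer", "cost": 100, "max_cores": 1024, "shared_memory": True},
]

def route(job: dict) -> str:
    """Return the cheapest tier meeting the job's requirements."""
    for tier in TIERS:
        if job["cores"] <= tier["max_cores"] and (
            not job.get("needs_shared_memory") or tier["shared_memory"]
        ):
            return tier["name"]
    raise ValueError("no tier can run this job")

print(route({"cores": 1}))                                # pc_grid
print(route({"cores": 32}))                               # linux_cluster
print(route({"cores": 16, "needs_shared_memory": True}))  # supercomputer
```

Single-threaded work units fall to the nearly free PC grid, freeing the clusters and supercomputers for the jobs that genuinely need them -- the economics Moffett describes.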
"I have very high hopes that we can move the whole stream of jobs out of the cluster space down into the United Devices space," Moffett says. Although the PC grid requires a United Devices software license and two dedicated grid servers, "those are close to free cycles," he says.
Moffett plans to expand the grid to include PCs in faculty and administrative offices. And he says he'll make the compute cycles freed up on research computers by the existing PC grid available to business applications. "We've cleared off enough resources high in that stack that they will run up there," he says.