Gunning for the grid

Melbourne University last year unveiled what it claimed to be one of the first "stretched" clusters consisting of two active-active nodes some 1km apart.

Chris Pivec, Melbourne University's IT infrastructure team leader for the University Systems Project, said grid computing is "certainly" coming in 2005, driven by newly available low-cost hardware options.

"This is playing catch-up to the traditional players," Pivec said. "We're a little while away - probably one or two years - from stretched grids for mission-critical business applications."

Although admitting that adoption is low at this point, Pivec said grid computing concepts for business hold great promise, adding: "Expect to see penetration over the next few years.

"At least for the next few years we will see growth in load-balancing hardware which is not inexpensive," he said. "Most business grids rely on hardware load-balancing."

Pivec said that, because not all applications are grid- or cluster-aware, the reality is that the application vendor has to provide grid support for it to be of any use.

"Complex ERP applications can't be just thrown onto a grid," he said.

When it comes to grid infrastructure, Pivec sees a sweet spot of four or fewer nodes for the database and eight or fewer nodes for the application server.

"One thing to watch is memory addressing," he said.

"The 4GB memory limit is problematic, particularly with Java applications [server technologies]. Look at Opteron and Itanium. Also consider disk and backup. There is a basic challenge of disk I/O for a large database [as] you're not going to put business data on a consumer disk."

With most vendors releasing management tools and supporting grid and clustering at the operating-system level, Pivec believes the next step is to abstract the operating system; however, he questions whether that is good for the end user in the long run.

"It is a goal, or nirvana, to be able to run any application off a grid, but it will be some time, if ever, before that is possible," he said.

Pivec advises those looking at grid computing to understand the risk profile of the organization, and not to get burnt by marketing.

"For most of us in the enterprise space a proof-of-concept with help from vendors is the place to start," he said.

"Grid computing [for business] is an emerging field so it is risky to commit to it without a formal proof-of-concept."

Oracle Asia Pacific's technology solutions manager, Tim Blake, said although there are a lot of definitions of what a grid is, Oracle can show scalability beyond the needs of most organizations for the next two years.

"Organizations will move beyond a siloed view of the grid. They may have hundreds of databases that could be put on one infrastructure," Blake said.

Blake said Telstra's Raptor-E project (CW Sept 20, 2004, p3) has proven the commercial viability of grid computing and, as Telstra has a number of databases, there is an opportunity to save money.

"Reallocation of nodes and dynamic provisioning is all about better resource utilization," he said.

"The ability to send a workload to a different location is in the application server and Oracle 10g is the most advanced grid-enabled application server."

On the question of the rise of trans-data centre grids, Blake said stretched clusters have physical limitations but will, in time, become the norm.

"The grid management technology is key - from the database to the app server," he said. "Unless you can manage the grid as one system, the power of the message is diminished."

Glenn Wightwick, an IBM distinguished engineer, said the next phase in grid computing will see it become a more integral part of the computing environment.

"As many as 30 percent of organizations are starting to explore grid computing and the data centre will become nodes," he said. "In theory there is nothing stopping aggregation of remote resources but there are practical limitations around bandwidth availability and the amount of data needed to be transferred. It may not make sense to do small amounts of work this way."

Wightwick said there is a continuum between scientific and enterprise grids. "For example, in the financial sector, more business analytics and stock market analysis is being done on a grid. That unit of computation can be easily distributed," he said.
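The kind of workload Wightwick is pointing to decomposes into independent units that can be farmed out and gathered back. The minimal Java sketch below is a hypothetical illustration, not from the article: the per-symbol "analysis" is a placeholder, and threads stand in for grid nodes.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only: each unit of analysis is independent, so units
// can be handed to whatever workers are available and collected at the end.
public class DistributedAnalytics {
    // Hypothetical per-symbol analysis; a real grid job would do far more work.
    static double analyse(String symbol) {
        return Math.abs(symbol.hashCode() % 100) / 100.0; // placeholder "score"
    }

    public static void main(String[] args) throws Exception {
        List<String> symbols = Arrays.asList("BHP", "TLS", "CBA", "WES");
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Submit each independent unit of work; threads stand in for grid nodes.
        List<Future<Double>> results = new ArrayList<>();
        for (String symbol : symbols) {
            results.add(pool.submit(() -> analyse(symbol)));
        }

        // Gather the results once every unit has completed.
        for (int i = 0; i < symbols.size(); i++) {
            System.out.println(symbols.get(i) + " -> " + results.get(i).get());
        }
        pool.shutdown();
    }
}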

"Early adopters of grid used the technology to improve utilization and improve productivity. Different sites may be underutilized - this lends itself beautifully to grid computing." From 2005 onwards we will see grids become a more integral part of computing with more ISVs, Wightwick said. Andrew Brockfield, senior sales specialist for IBM's deep computing business development, said there is a lot of activity in Australia around data grids to manage data transparent to the application.

"For 2005, there are two issues that are slowing the adoption of grid computing," Brockfield said.

"Firstly VPN access issues - which is a matter of getting agreement from everyone. Secondly, the cost of IT is typically funded around projects so when you bring in shared access it requires a change in thinking and organizational culture."

Brockfield referred to Sydney University's trial of 90 grid-enabled PCs as an example of using existing infrastructure to perform computationally intensive tasks with "near linear scalability".

"In 2004 and 2005, the customer trend is to see that grid computing isn't the answer to every computational problem and the industry will see refinement of where the true value of the grid is," he said.
