Intelligent foundations

Keeping your network up and running used to be a physical task supported by an army of techies who ran around the building ripping wires out of the walls while swapping network cards and hubs from a secret stash in their tool bags. That was then. Nowadays network hardware is considerably more robust, and attention is increasingly focused on ensuring that the network is meeting business needs.

Making sure the network is connected and working is still important, but the measure of 'working' is no longer simply a question of bits and bytes being received without errors. Now the focus is on the right bits getting to the right place in a timely manner, assisting the business rather than hindering it. Along with this shift, a new breed of network management tools has arisen to support these metrics. Sometimes these tools are software that you run on one of your servers. Other incarnations are stand-alone network appliances with their own smarts, designed to help you keep your network and applications running at peak efficiency.

The current crop of network management tools attempts to filter out the false alarms which quickly pile up when there is a network failure. These alarms are tagged false because the device reporting the failure is reacting to a genuine fault further along the network, rather than to a failure of its own. Without filtering, support staff can waste valuable time responding to every alarm until they reach the actual fault. Beyond straight fault reporting, a growing number of vendors are attempting to predict problems and faults before they occur by monitoring against a known 'good' network state and raising an alarm when conditions vary from the established norms.
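To make the filtering idea concrete, the sketch below shows one way downstream alarm suppression might work in Python, walking each alarming device's upstream path and reporting only the alarms closest to the root cause. The topology and device names are invented for illustration, not any vendor's implementation.

```python
# A minimal sketch of downstream alarm suppression: given a map of which
# device sits upstream of which, report only the alarm closest to the
# root cause and suppress the sympathetic alarms behind it.
# The topology and device names here are hypothetical.

def root_cause_alarms(alarming: set[str], upstream: dict[str, str]) -> set[str]:
    """Return only the alarms with no alarming device upstream of them."""
    roots = set()
    for device in alarming:
        node = upstream.get(device)
        # Walk towards the network core; if anything on the way is also
        # alarming, this device's alarm is a symptom, not a cause.
        while node is not None and node not in alarming:
            node = upstream.get(node)
        if node is None:          # reached the core without finding a culprit
            roots.add(device)
    return roots

# Example: a failed core switch drags down everything behind it.
topology = {"switch-2": "core-1", "server-a": "switch-2", "server-b": "switch-2"}
print(root_cause_alarms({"core-1", "switch-2", "server-a", "server-b"}, topology))
# -> {'core-1'}
```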

"We try and understand the normal behaviour of a particular device, therefore we can raise an alarm on abnormal behaviour," says Chris Gibbs, managing director Concord Communications.

"We try and understand the behaviour of the device and raise an alarm when it is outside that tolerance. This is the basis of what the market has been after for a long period of time, and we seem to be on the cusp of a number of organizations embracing it; they want what we call intelligent proactive fault management."

It is analogous to the technology in modern automobiles. Not too long ago you just drove your car until it broke down and then got it fixed. Now your car is likely to tell you to pull in for repairs before the fault occurs, and when you go for a regular service the onboard computer will notify the mechanic that some components are unlikely to survive until the next visit.

"Depending on what kind of car you’re driving around in you’ve got a series of proactive alarm systems in there, and that technology is completely analogous to where we’re coming from in network monitoring and management," Gibbs says.

"We’re finding that it has broadened beyond just network infrastructure. IT shops these days have to focus on applications as well." For that reason, network management vendors now offer to monitor your software as well as your hardware.

"We typically cover everything," Gibbs says.

"We work with organizations such as Cisco from a network standpoint and Seibel from an application standpoint. Customers are telling us that the whole notion of fault management is dead and a bygone era.

"The issue that we’re talking about now is intelligent fault management, that’s what’s really important. We say predict failure before it affects the users."

Gary Mitchell, managing director of networking specialist Enterasys, also believes that hardware faults are no longer the main issue. Enterasys believes that continuity and context are important issues that need to be understood in order to keep a network healthy.

"We believe that the days of capacity and connectivity being a major issue have certainly gone," says Mitchell.

"When I talk about continuity I’m coming from an angle not just about reliability because if you look at when the networks were under duress recently, whether it was Sasser or any other variants of worm attack, the networks stayed up 99.999 percent of the time, they were very reliable and did a fantastic job of passing around rubbish.

"Reliability is still just a given, but it’s continuity of good information being sent around to keep the business going as opposed to rubbish being sent around the network. There have been a lot of advances in technology around what I would call content inspections, having a look at what’s in the packet, but one of the keys to providing enhanced security of a network is being able to understand the context by which traffic is flowing through the networks."

That means being smart enough to recognise what sorts of traffic particular devices and users are likely to produce, and to raise alarms when something unexpected turns up.

"The end user in the accounts payable department should probably be sending different traffic than a printer.

"Context is an important thing in terms of determining who should be allowed to send what type of information, because there’s probably a whole bunch of very valid network protocols that are going across the network, but if a photocopier is sending a truck load of SQL commands, that might be valid network traffic, but it’s probably not valid to be coming from the photocopier."

Enterasys is also pushing the proactive approach to fault management.

"All of our switches have security technology so that we can deploy a policy right across the entire network that says don’t allow this type of traffic to flow or say that these types of devices should never be able to do more than send these three or four types of traffic."

Mercury Interactive has been providing testing and optimisation services for software systems for a number of years, and has recently acquired technology from Allerez to take the company further towards the holy grail of a one-stop business technology monitoring solution.

"Systems management has moved away from a tiered monitoring model to more of a business focussed, customer impact model," says Tim Van Ash, Mercury’s Asia Pacific practice manager for application management.

"The network is really begging to become a utility service. People expect it to work, and when it doesn’t work, the first question someone will ask you is 'What is its biggest impact?'"

In the past that has been incredibly difficult to achieve; however, Van Ash considers the reality is closer than it has ever been.

"The only way to develop business IT alignment is to start with the user and work back. Don’t start at the infrastructure and try and work up through event correlation or whatever else. I think the days of the old enterprise management system framework – while they’re certainly still relevant, have service models that are almost become impossible to maintain."

As enterprises move their expectations of fault prevention further up the network food chain from hardware to applications to end-user experience, SMEs are also beginning to ask for more than the basics in network infrastructure management.

"The more established enterprise customers have now found with complex applications that the users actually complain about what they see and what it delivers to them, in the useability sense, because lets face it, the end users don’t know or care about the servers and the network," says Van Ash.

"But then you’ve got the smaller organizations, which are now starting that process, needing to be able to see if something is up or down and we’re starting to see some of them go to the next stage to bandwidth monitoring and those sorts of things."

However, there are dangers inherent in a purely user-focused view of network management, and there is still a role for traditional network monitoring systems.

"The real end-user perspective only provides half the equation because real users make cups of coffee and they don’t turn up to work every day. It is important to have a robotic user that can be used as an availability measure as you need a perfect user that is predictable; you know what their performance trends look like because that really provides the benchmark that you can compare real end-user performance against. It’s not a case of one or the other."

Brett Oberstein, general manager of operations management for systems and network integrator Dimension Data, also believes that there is more to managing a network than checking the red lights on the routers.

"I think that organizations have been too technology-centric in the past so they’ve invested in network management systems to manage the LAN or WAN from a connectivity perspective without actually focussing on what the end-to-end service is," Oberstein says.

"The user on the keyboard isn’t getting what they expect from a quality of service perspective."

Part of the problem is the current fad for consolidation of services, according to Oberstein.

"We’ve seen consolidation of connectivity onto VPN services that are shared network services. We’ve seen consolidation of voice and data onto common platforms. We’ve seen consolidation of infrastructure in data centres, for example, server consolidation. We’ve seen storage consolidation, so you now have more use of shared infrastructure. You’ve got more users dependent on common infrastructure which is, I think, one of the things pushing IT departments to provide service management. At the same time business has become more dependent on IT and it is now demanding end-to-end service rather than just availability of discrete IT devices."

Along with the shift from managing devices towards managing services comes a new set of terms and standards. The IT Infrastructure Library (ITIL) defines the processes organizations should adopt in managing their IT environment; examples would be an availability management process or a problem handling process.

"ITIL is a common set of processes that IT departments are starting to adopt which are now applied to the management of the infrastructure and the focus is leaning towards service management. ITIL focuses on service management across all infrastructure technologies, and with IT departments now adopting ITIL principles, that’s inference towards a service management approach rather than discrete IT management approach.

"We’re seeing a big need for the adoption of ITIL processes so we help organizations define strategies around that adoption which is leading to the adoption of service management rather than technology-centric management, and it’s also leading to the need to identify usage against which costs can be allocated.

"In achieving service management, you need to be able to monitor your network and identify the root cause, so the root cause helps you to manage your network more effectively and helps you to achieve service level management. There’s a whole load of building blocks that you need to achieve the service management layer."

Vendors are increasingly trying to provide managers rather than techies with monitoring products that they can relate to, and the so-called dashboard is one of the most popular delivery mechanisms. Built around the familiar metaphor of the warning lights and gauges on a car’s dashboard, these products aim to provide a high-level check on the status of the network infrastructure. Micromuse is one such vendor.

Felix Marks, ANZ technical services manager for Micromuse, says the company’s Real Time Active Dashboard is a tool that enables you to create a service model describing, in a visual format, what a service logically consists of.

"It does that in a way that means you can see green and red summaries showing the status of a service," Marks said.

"But you’re also able to visually identify where service problems start to occur, whether it’s a degradation of the performance of a service or if one of the key components of the service starts to fail, or actually fails, you can see whether or not it actually matters as far as the service is concerned, and if it does, where it is and what it is."

The dashboard approach is proving popular with executives and techies alike, according to Marks.

"For our enterprise customers this has really been a key tool that’s been delivered by IT managers to more senior people in the organization who are normally in the position of trying to understand why it is that the service is there or not. It’s something that is Web-based which you can provide on an intranet. People can click on it and immediately understand what’s going on with those key applications and services. "There are many different types of software out there in the marketplace and one of the key differentiators is the simplicity with which they can be implemented.

"If you are deploying a software solution that takes you a year or 18 months to actually make useful, then you may as well not have started. It’s really about delivering that in a matter of days rather than weeks."

Networking infrastructure is the lifeblood of most companies these days, but managing that resource means looking beyond the wires and switches towards the applications and the users that rely on them. You need an end-to-end picture of the network and you need to be able to see what is going on inside the wires. Fortunately there are plenty of vendors ready and willing to help you provide a solution. As Joe the Gadget man used to say, "Bring your money with you!"
