Testing takes a tumble

Systematic application testing gets lost in the pincer-like crush of unrealistic deadlines and limited resources.

A survey of 300 IT professionals found fewer than 25 per cent of respondents undertake appropriate application testing, reinforcing a perception that software is buggy, unreliable and prone to failure in unpredicted ways.

While the industry as a whole should shoulder much of the blame, the survey also found that very few IT departments use automated tools to ensure quality and to generate metrics for tracking.

According to an anonymous IT manager who recently completed contract work for an Australian financial institution, limited resources mean projects are monitored rather than fully tested and quality assured.

He admitted this introduced a high degree of risk, because testing should be done in the early stages to manage all possible scenarios.

Despite the importance of these applications -- from financial reconciliation and budgeting to ERP, CRM, B2B and B2C -- more than 50 per cent of respondents blamed "unrealistic deadlines" for the lack of testing.

Australian IT managers who spoke to Computerworld agreed with the findings of the survey, which Mercury Interactive conducted in the US.

The survey results showed the criteria used to evaluate the success of applications include:

* meeting application requirements (95 per cent)
* meeting deadlines (87 per cent)
* meeting quality assurance (QA) standards (84 per cent)
* signoff from management (82 per cent)
* measurable customer satisfaction (81 per cent)
* financial return or ROI (69 per cent).

Sam Higgins, applications architect at Queensland Transport (QT), warned that failure to take testing and other quality procedures into account can lead to a number of problems.

"There are two scenarios: either there is no time to test and overall product quality is reduced, or the time is taken to test but lack of early planning means this becomes an addition to the original project schedule and overruns occur," he said.

Higgins said all QT schedules and project management plans include minimum testing phases, with the time required based on metrics established over previous projects.

He said systematic testing of enterprise applications is enforced through "sign offs for each stage of testing".

Higgins said QT also has a dedicated group required to ensure that all applications undergo a minimum of three levels of testing as part of the organisation's overall testing methodology.

"We use tools such as JUnit and CA Verify. We also track defects encountered in testing. At this point, no detailed metrics are generated from this data," Higgins said.
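The kind of automated unit check that tools such as JUnit formalise can be sketched in a few lines. The example below is a minimal, self-contained illustration, not code from QT: the `LedgerReconciler` class and its `reconcile` method are hypothetical names chosen to echo the financial-reconciliation applications the survey mentions, and the assertion is written by hand so the sketch runs without the JUnit library itself.

```java
// Minimal sketch of the pattern a JUnit test automates: a small,
// repeatable check that runs on every build and fails loudly.
// LedgerReconciler and reconcile() are hypothetical examples.
public class LedgerReconcilerTest {

    // Toy reconciliation logic: credits minus debits.
    static int reconcile(int[] debits, int[] credits) {
        int balance = 0;
        for (int d : debits) balance -= d;
        for (int c : credits) balance += c;
        return balance;
    }

    public static void main(String[] args) {
        // A fully reconciled ledger should balance to zero.
        int result = reconcile(new int[]{100, 50}, new int[]{150});
        if (result != 0) {
            throw new AssertionError("expected balance 0, got " + result);
        }
        System.out.println("test passed");
    }
}
```

In a real JUnit suite the check would be a method annotated with `@Test` using `assertEquals`, and a build server would run every such test automatically, generating exactly the pass/fail metrics the survey found most departments lack.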

The survey found that companies consider the largest problems in deploying high-quality enterprise applications to be real-world complexity (28 per cent), not knowing the needs of users (22 per cent), inadequate testing time (16 per cent), and a lack of executive commitment (16 per cent).

Given these constraints and challenges, finding best practices and options is a high priority for most organisations.
