Today, business and IT strategies are becoming increasingly inseparable, one driving the other. Businesses must select the appropriate IT architectures and infrastructures for each project, and these decisions and choices demand a systematic approach to project and system evaluation. This article describes the Constructive Evaluation (CE) method, which provides a sound basis for such decision making and is applicable to the evaluation of several types of IT system.

This is the decade in which IT has come of age. Today, business strategy and IT strategy are inseparable. In some companies the two are indistinguishable -- they are one and the same thing. In others, business and IT strategies are mutually interdependent, with each driving the other.
Decisions concerning IT-related projects and individual IT systems are of crucial importance to the company's performance. During periods of economic growth, good decisions will help the company to seize new opportunities and increase revenues, while bad decisions may leave the company falling behind. And in times of economic downturn, good decisions will help the company to thrive despite the difficult conditions, while the consequences of poor decisions may be disastrous.
Companies are constantly facing decisions and choices in their use of IT. They must select between possible alternative projects. They must decide whether a particular project is worth pursuing -- whether it will deliver real business benefit. During the developmental stage, they must continually assess whether the project is on track and whether any adjustments are required. On completion of development, they must decide whether the system is acceptable and ready for roll-out. And throughout the system's lifetime, they must monitor continuously to check whether it is still delivering value to the business and to determine whether it needs to be upgraded or replaced.
All these decisions and choices demand a systematic approach to project and system evaluation. In the absence of proper evaluation, decisions are inevitably ill-informed and become little more than acts of faith. Some form of evaluation, yielding some comparison or quantitative assessment, is essential to provide a sound basis for judgements and decision making.
However, the available evidence suggests that evaluation is not widely practised. Few organisations make any attempt at systematic evaluation of their IT systems; most rely on subjective judgement. And even where an organisation does attempt evaluation, the approach is usually ad hoc -- the "typical" evaluation concentrates on just a few characteristics that are easily measured, often without first establishing whether these measures provide a realistic picture of the system's overall impact. There is no defined evaluation method that is generally accepted.
The aim in developing Constructive Evaluation (CE) was to provide a method suitable for widespread use. This general aim was then translated into three inter-related goals. The CE method should:
• provide a sound basis for decision making

Evaluation is intimately associated with decision making. Some existing evaluation methods implicitly adopt the stance that the evaluation exercise should effectively generate the decision as its main output. By contrast, CE aims only to provide the information needed for informed decision making. Specifically, it provides a clear, complete and accessible picture of the IT system and its impact on the business. But the method does not itself generate the decision -- it simply provides the information on which a decision can be based.
• be simple and easy to apply
Amongst the obvious barriers to IT systems evaluation are cost and time. So a main goal with CE was to keep the method very simple. It should be easy to learn, easy to apply. And the various stakeholders in the system should themselves be able to rapidly confirm the validity of the evaluation exercise and its results without any significant overhead in first learning the details of the method.
• be applicable in almost any evaluation context.
An explicit goal was that CE should be applicable to almost any IT system, size of company and evaluation context. Some decisions demand predictive evaluation -- the evaluation exercise must predict the likely characteristics and likely impact of an IT system that is still to be acquired. Others require only retrospective evaluation -- the system is now in place and both its characteristics and impact can be measured. CE addresses both these modes of evaluation. Further, the method can also be used for "continuous" evaluation throughout all the various stages of system procurement or development, thus helping to ensure that the system has the desired characteristics "by construction". It is this possible mode of usage that gives the method its name of Constructive Evaluation.
The method combines four distinct techniques -- Quality by Design, Quality Profiles, standard risk management, and checklists of common characteristics (see panel). None of the techniques is new -- they are all well-established and proven. The primary innovation of the CE method is to combine the techniques into a coherent whole.
Over the past 18 months, CE has been subject to five separate field trials:
• In Scotland, the method was used in predictive mode to assess the impact on a "typical" user organisation of the ScotWeave CAD system for the design of woven fabrics. The evaluation showed that ScotWeave pays for itself within seven months while simultaneously allowing the organisation to improve its customer service (see the case study).

• In Italy, the method has been applied in predictive mode at the Aquila Savings Bank to assess the impact on the business of introducing an Intranet. The evaluation helped to clarify those features of an Intranet service that would be key in helping the bank achieve its strategic objectives. The bank is now considering the possibility of using CE as a tool in all its future IT procurements.
• In Greece, the method was used in retrospective mode to investigate the impact of a new configuration management system at Singular SA, one of the country's leading software companies. The evaluation showed that the new system increased developer productivity and improved product quality. Singular will now use the method to evaluate other proposed changes to its software development processes and toolset.

• In England, the method was used for a retrospective evaluation of new IT systems at the Hampshire Chronicle Group, a newspaper group with four weekly titles. This evaluation showed how the new systems have enabled the group to increase advertising volumes and revenues, improve financial controls and increase business efficiency. The group is now publicising the results of this evaluation in the newspaper trade press.

• In Italy, the method was used in predictive mode at the financial company FINITER to evaluate the impact of a planned system to support the decision on whether to grant financial credit to company clients. The evaluation identified several potential risks that had not previously been recognised. The results of the evaluation acted as one input to the specification of the required system, which is now in development.
These trials have been highly encouraging. The evidence is that the method is easy to use and does indeed provide the "clear, complete and accessible picture" that is needed as a basis for informed decision making. Equally, it suggests that persons who are unfamiliar with the method, and indeed with IT systems, have no difficulty in understanding, judging and interpreting the results. Given these successes, the method is now being promoted for more widespread use.
The CE method
In contrast to methods that are "flat" -- focusing on a single level of consideration, such as the IT system itself or its overall financial impact -- CE encourages multi-level evaluation. The goal is to present a clear "chain of argument" that might show, for example:

• a particular IT system
• the impact of that system on the business process it supports
• the impact of that process on the company's overall operations
• and the resulting contribution to achieving the company's strategic objectives.
CE proceeds by first identifying the sequence of levels on which the evaluation will focus. For each of those levels it then identifies a set of characteristics that can be used to describe that level. (In the case of an IT system, these characteristics would include the main facilities that the system provides. In the case of a business process, they might include both the time and cost of executing the process.) It then forms a set of spreadsheets, one for each pair of adjacent levels; each such spreadsheet shows the relationships between the various characteristics at the two levels. The method then provides concise definitions of the characteristics at each level and performs a risk analysis.
Quality by Design
Quality by Design (QBD) is an established approach to the monitoring and constructive achievement of quality objectives. The central tool in this approach is a series of interlocking spreadsheets. Each individual spreadsheet takes the form of a two-dimensional matrix, where the rows show the characteristics at one level and the columns show the characteristics at the next lower level. The individual elements of the matrix are then marked with symbols indicating relationships between the characteristics at the two levels. Figure 1 shows an example.
Collectively the spreadsheets form a chain, with the columns from one spreadsheet in the chain becoming the rows of the next spreadsheet in the chain. This linked chain of spreadsheets provides the framework of the multi-level evaluation and the basic structure of the chain of argument. An example is shown in figure 2.
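As a rough illustration of this linked-chain idea, the sketch below models each QBD spreadsheet as a matrix of relationship symbols and checks the chaining rule that the columns of one spreadsheet become the rows of the next. The class and example characteristics are my own illustrative assumptions, not taken from the CE handbook.

```python
# Illustrative sketch of a chain of QBD spreadsheets (names are assumptions).

class QbdSpreadsheet:
    """One two-dimensional matrix relating characteristics at two levels."""

    def __init__(self, rows, cols):
        self.rows = list(rows)   # characteristics at the higher level
        self.cols = list(cols)   # characteristics at the next lower level
        self.cells = {}          # (row, col) -> relationship symbol

    def relate(self, row, col, symbol):
        """Mark a relationship between a higher- and a lower-level characteristic."""
        assert row in self.rows and col in self.cols
        self.cells[(row, col)] = symbol   # e.g. "strong", "weak"

def check_chain(spreadsheets):
    """The columns of each spreadsheet must become the rows of the next."""
    return all(a.cols == b.rows
               for a, b in zip(spreadsheets, spreadsheets[1:]))

# Example chain: strategic objective -> process characteristic -> system feature.
top = QbdSpreadsheet(["improve customer service"], ["order turnaround time"])
bottom = QbdSpreadsheet(["order turnaround time"], ["online order tracking"])
top.relate("improve customer service", "order turnaround time", "strong")
print(check_chain([top, bottom]))   # the two spreadsheets link correctly
```

A real evaluation would of course hold many characteristics per level; the point of the sketch is only the linking structure that carries the chain of argument.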
Quality Profiles

Quality profiles can be used to provide concise "descriptions" of any kind of system (in the broad sense of systems thinking rather than the narrow sense of IT systems). In CE, these profiles are used to fully define the characteristics at each level. So each profile takes the list of characteristics for one level, as shown in the QBD spreadsheets, and provides a definition of each characteristic.
Characteristics are of two distinct kinds. Some are either "present" or "absent" -- either the system has this characteristic or it does not, and there are no intermediate states. Others are scalar -- the system has the characteristic to a greater or lesser degree. Characteristics of the "binary" kind are termed features, while those of the "scalar" kind are termed attributes. Each feature has a name, a concise textual description and, optionally, a specification at whatever level of detail and precision is deemed appropriate for the context.
The attributes cover, for example, cost, availability, reliability, accuracy, usability, and so on. Each attribute is specified by defining the scale on which the attribute is measured, the means of measurement, the actual or target value, and the worst acceptable value.
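The feature/attribute distinction can be sketched in code. The field names below follow the definitions in the text (scale, means of measurement, target value, worst acceptable value), but the concrete representation is my own assumption rather than anything prescribed by CE.

```python
# Illustrative sketch of a quality profile's two kinds of characteristic.

from dataclasses import dataclass

@dataclass
class Feature:
    """A "binary" characteristic: the system has it or it does not."""
    name: str
    description: str
    specification: str = ""   # optional, at whatever detail suits the context

@dataclass
class Attribute:
    """A "scalar" characteristic: held to a greater or lesser degree."""
    name: str
    scale: str                # the scale on which the attribute is measured
    measurement: str          # the means of measurement
    target: float             # the actual or target value
    worst_acceptable: float   # the worst acceptable value

    def acceptable(self, measured, higher_is_better=True):
        """Check a measured value against the worst acceptable value."""
        if higher_is_better:
            return measured >= self.worst_acceptable
        return measured <= self.worst_acceptable

availability = Attribute("availability", "% uptime per month",
                         "monitoring logs", target=99.5, worst_acceptable=98.0)
print(availability.acceptable(98.7))   # above the worst acceptable value
```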
Risk management

Risks are the "other side of the coin" of quality. Quality is the aim. Risks are the things that could prevent that aim being achieved. So assessment of risks is an essential part of a balanced evaluation. In CE, risk management is performed separately for each level in the QBD chain.
Risk management entails four activities: identification, quantification, planning and monitoring. Identification simply identifies the possible risks at each level. Quantification assesses the severity of each risk by estimating both its probability and its likely impact -- severity is then calculated by multiplying the probability and the impact. Planning decides whether measures should be taken to reduce the severity of some risks.
Avoidance measures can reduce the probability of the risk materialising, while contingency measures can reduce the impact if the risk does materialise.
Finally, monitoring tracks the changing status of each risk. The results of all four activities are recorded in a risk register.
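The four activities and the severity calculation can be sketched as a simple risk register. The 0-1 probability and 1-10 impact scales below are assumptions for illustration; the text does not prescribe particular scales.

```python
# A minimal sketch of a risk register, assuming a 0-1 probability scale
# and a 1-10 impact scale. Severity = probability x impact, as in the text.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str               # identification
    probability: float             # quantification: chance of materialising (0-1)
    impact: float                  # quantification: likely impact if it does (1-10)
    avoidance: str = ""            # planning: measure reducing the probability
    contingency: str = ""          # planning: measure reducing the impact
    status: str = "open"           # monitoring: tracked as the project proceeds

    @property
    def severity(self):
        return self.probability * self.impact

register = [
    Risk("key developer leaves mid-project", probability=0.2, impact=8),
    Risk("supplier misses delivery date", probability=0.5, impact=4),
]

# Rank the register so that planning can focus on the most severe risks first.
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"{risk.description}: severity {risk.severity:.1f}")
```

Note how a low-impact risk can still outrank a high-impact one once probability is factored in, which is exactly why CE quantifies both.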
Checklists of common characteristics

A key aspect of constructive evaluation is the identification of the characteristics to be included in quality profiles. By their very nature, many of these characteristics will be unique to the individual evaluation. Nevertheless, there are some characteristics that occur with some frequency across many evaluations. Checklists of such common characteristics are an integral part of the CE method, and can be helpful in three distinct ways. They may yield characteristics that are directly relevant to a given evaluation. They may suggest other, broadly analogous characteristics. And they can provide guidance on how some characteristics (such as the usability of an IT system) can be measured.
The CE method is in the public domain. Details, including a full method handbook, can be found at www.anshar.com

This article has been communicated by Professor Vijay Varadharajan, Technical Board Director of Computer Science, Australian Computer Society.