Educational Technology & Society 3(4) 2000
ISSN 1436-4522

Towards a New Cost-Aware Evaluation Framework

Charlotte Ash
School of Computing and Management Sciences
Sheffield Hallam University
Stoddart Building, Howard Street
Sheffield, S1 1WB, United Kingdom
Tel: +44 114 225 4969
Fax: +44 114 225 5178
c.e.ash@shu.ac.uk

 

ABSTRACT

This paper proposes a new approach to evaluating the cost-effectiveness of learning technologies within UK higher education. It identifies why we, as a sector, are so unwilling to base our decisions on the results of other studies, and how these problems can be overcome using a rigorous, quality-assured framework which encompasses a number of evaluation strategies. This paper also proposes a system of cost-aware university operation, including integrated evaluation, attainable through the introduction of Activity-Based Costing. It concludes that an appropriate measure of cost-effectiveness is essential as the sector increasingly adopts learning technologies.

Keywords: Cost-effectiveness, Learning technologies, Evaluation


Introduction

Over a number of years the cost-effectiveness of using technology to support, or in some cases supplant, conventional higher education teaching and learning has been hotly debated. Although a number of attempts have been made to establish a measure of cost-effectiveness, the fact that we are still searching for conclusive, reliable results testifies that no one has yet succeeded. Previous attempts have aimed to establish separate methodologies for measuring costs and effectiveness (unfortunately sometimes by doing no more than noting that both should be accounted for) and then to bond the two together with the educational equivalent of sticky-tape.

This paper presents, albeit in an embryonic state, a new approach to establishing cost-effectiveness. It reflects upon a number of flaws in effectiveness research to date and then recommends a set of principles to guide the development of cost-aware evaluation.

 

What is cost-aware evaluation?

Cost-aware university operation is an overarching framework within which each action, decision and outcome is treated in a cost-aware manner. This includes cost-aware evaluation.

The ‘Costs of Networked Learning’ (CNL) project team believe they have solved the problem of accurately recording costs within UK universities (Bacsich et al., 1999). Once universities adopt an Activity-Based Costing (ABC) system, delivering accurate costing information becomes a relatively easy process. Costing individual activities, such as a single lecture, will then be embedded into university culture and operation, and cost-aware operation will become a reality.
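As an illustration of the principle, a minimal sketch of how an ABC approach might trace resource costs to a single lecture is given below. The activities, cost drivers and rates are hypothetical figures invented for this sketch; they are not taken from the CNL project or the financial schema discussed later.

```python
# Illustrative sketch of Activity-Based Costing (ABC) applied to one lecture.
# All drivers, quantities and rates below are hypothetical examples, not CNL figures.

from dataclasses import dataclass


@dataclass
class CostDriver:
    """A resource consumed by an activity, costed at a unit rate."""
    name: str
    quantity: float      # e.g. hours, room-hours, copies
    unit_cost: float     # cost per unit, in pounds

    @property
    def cost(self) -> float:
        return self.quantity * self.unit_cost


def activity_cost(drivers: list[CostDriver]) -> float:
    """Total cost of an activity = sum of the resources it consumes."""
    return sum(d.cost for d in drivers)


# Hypothetical drivers for a single one-hour lecture.
lecture = [
    CostDriver("lecturer preparation and delivery (hours)", 3.0, 35.0),
    CostDriver("lecture theatre (room-hours, incl. depreciation)", 1.0, 50.0),
    CostDriver("AV equipment and software licences (hours)", 1.0, 8.0),
    CostDriver("handout printing (copies)", 120.0, 0.05),
]

print(f"Cost of one lecture: £{activity_cost(lecture):.2f}")
```

The same pattern scales upwards: a module or course is simply a collection of such activities whose costs can be aggregated.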

Whereas previous methodologies have attempted to ‘marry’ costing and effectiveness together (the left side of Figure 1) and failed, this new approach (the right side of Figure 1) assumes that costing, in the form of ABC, is already integral to the institution and therefore happens as part of the evaluation of effectiveness.

Figure 1. The past and future of cost-effectiveness

 

What is needed now is a consistent range of methods for measuring effectiveness, built into an accepted, quality-assured framework. Once this too is embedded into university culture, cost-effectiveness becomes a reliable guidance tool.

 

Why do we need to know if our courses are cost-effective?

The higher education sector has changed dramatically during the last decade. The use of learning technologies is now permeating courses in traditional institutions, where academics are eager to develop better learning experiences and students are demanding greater access and more flexibility. This change is being actively encouraged by the UK government through schemes such as the University for Industry and the e-University initiative. Introducing technology into courses is no longer simply a matter of academic enhancement but one of political and competitive advantage.

However, learning technologies are being used without any consideration of effectiveness (not unlike most teaching methods). A recent report noted that, “while the debate [about effectiveness] will continue, it is too late to turn back. Recent history suggests that both the variety of offerings and the number of individuals availing themselves of these alternative forms of learning will not only increase, but will increase dramatically. The alternatives are entering - and in some circumstances, becoming - the mainstream” (NCHEMS, 2000). Timely, accurate and conclusive cost-effectiveness information about conventional as well as innovative methods of teaching and learning is imperative in the modern academic community.

 

What is cost-effectiveness?

Cost-efficiency is the ratio of output to input (Rumble, 1997). A system becomes more cost-efficient if outputs can be increased with a less than proportional increase in inputs. Cost-efficiency does not attempt to document quality or value other than financial gain.
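Written out as a worked formulation (a sketch of the standard textbook ratio rather than a formula reproduced from Rumble):

```latex
% Cost-efficiency expressed as the ratio of output to input.
% A change improves cost-efficiency when outputs grow proportionally
% faster than inputs, so that the ratio E rises.
\[
  E = \frac{\text{output}}{\text{input}},
  \qquad
  \frac{\Delta\,\text{output}}{\text{output}} > \frac{\Delta\,\text{input}}{\text{input}}
  \;\Longrightarrow\; E \text{ increases.}
\]
```

On this measure, a course that doubles its completions while raising its costs by only half becomes more cost-efficient, even though the ratio says nothing about the quality of those completions.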

Cost-effectiveness, in contrast, is mainly based upon subjective judgements about value and quality. To refer again to Rumble (1997), “an organisation is cost-effective if its outputs are relevant to the needs and demands of the clients and cost less than the outputs of other institutions that meet these criteria.”

To establish a measure of cost-effectiveness that is agreeable to all stakeholders, both these differing aspects should be accounted for. Angela McFarlane of the British Educational Communications and Technology Agency (BECTa) recently stated, “people want to see some measurable improvements from their investment and the key element is how we define effectiveness” (Cole, 2000). Therefore, I propose the following definition of cost-effectiveness for discussion:

Cost-effectiveness is a mode of cost-aware institutional operation, which takes into account quality and benefits to all stakeholders, and allows comparisons with similar institutions to be drawn.

 

What is wrong with previous effectiveness research and evaluation?

A brief literature search will show hundreds of studies examining the effectiveness of using learning technologies within distance and classroom-based situations. Unfortunately, we still cannot agree about the effectiveness of learning technologies. As already mentioned, educational providers are moving increasingly towards learning technologies without reliable, concrete data about their effectiveness - what the TES Online supplement recently referred to as “a leap of faith” (Cole, 2000).

Many studies produce results that, under analysis, produce more questions than they originally set out to answer. Often the way the problem is approached, institutional biases and the specific variables being addressed skew the evaluation criteria so that the results are only applicable to that one situation and context. (Some studies intentionally set out to look at local issues and do not attempt to provide generalisable or transferable solutions, in recognition of these problems.) Educators and policy makers are so aware of this issue that the ‘not-invented-here’ attitude is visible across the sector. There is a distinct unwillingness to accept the findings from another study, however similar, as the basis for decision making.

This inconclusiveness is fuelled by books such as Thomas Russell’s ‘The No Significant Difference Phenomenon’ (1999). Russell details 355 studies, conducted between 1928 and 1998, concentrating on the effectiveness of various types of learning technologies, each concluding that no significant difference could be found between conventional learning experiences and those which employed some form of learning technology. The Institute for Higher Education Policy (IHEP) in the United States picked up on this annotated bibliography in its 1999 report ‘What’s the Difference?’, which examined recent, original research into the effectiveness of learning with technology and found a number of flaws that go some way towards explaining why we are reluctant to base major policy decisions on this kind of research. The key shortcomings it identified are:

  • much of the research does not control for extraneous variables and therefore cannot show cause and effect;
  • most studies do not use randomly selected subjects;
  • the validity and reliability of the instruments used to measure student outcomes and attitudes are questionable; and
  • many studies do not adequately control for feelings and attitudes of the students and faculty.

The report also suggests that a set of commonly accepted and adhered to principles “are essential if the results of studies are to be considered valid and generalisable” (IHEP, 1999). Without these principles, research into - and the evaluation of - the effectiveness of learning technology will remain inconclusive and unutilised. I therefore propose the following principles of good practice:

  • The most suitable approach must be selectable under a rigorous, quality-assured framework.
  • The evaluation must be undertaken with specific use and users in mind.
  • Evaluation must be an activity that is integral to university operation.
  • Evaluation must be cost-aware.
  • Evaluation must be situation and context aware.
  • This framework must be accepted by evaluators, educators, policy makers and funders as producing conclusive results that will then be acted upon.

 

The most suitable approach must be selectable under a rigorous, quality-assured framework

The biggest fear of any professional evaluator, when discussing the issues raised by this paper, is a uniform, universally imposed stricture decreeing that every evaluation be undertaken using the same strategy so as to provide results that are comparable across the sector. As Patton (1997) warns, “beware the evaluator who offers essentially the same design for every evaluation”. But as the Learning Technology Dissemination Initiative’s (LTDI) ‘Evaluation Cookbook’ (1998) illustrates, there are numerous approaches to evaluation, and differing situations or problems demand different approaches. I therefore propose a rigorous, quality-assured framework that encompasses numerous different evaluative approaches. These approaches should be selected for suitability to task and situation, including the funding available for the evaluation, and they should be capable of operating on a number of different levels, allowing the evaluation of individual modules, courses and whole faculty offerings within the same framework. Thus, depending on the evaluation problem being approached, a number of suitable strategies (three is recommended as a minimum, allowing triangulation to expose any irregularities in the data) can be selected from within the framework. The selection of complementary strategies should also allow the collection and analysis of both qualitative and quantitative data, leading to more conclusive and reliable findings.

 

The evaluation must be undertaken with specific use and users in mind

Many evaluative studies are simply not utilised. Many are undertaken for the wrong reasons, are unfocused, produce unpalatable results that are quickly buried, or produce results that are completely meaningless. At the root of this is a problem of communication and understanding. The commissioner of an evaluation study needs to be very clear about:

  • why they want an evaluation;
  • the appropriateness of an evaluation;
  • what the focus of the evaluation should be;
  • what they are going to do with the results; and
  • what they are going to do if the results produce something unexpected.

In turn, evaluators need to:

  • listen and be aware of these aspects and others;
  • focus the evaluation towards the needs of the stakeholders involved; and
  • continue this process of communication and discussion, possibly refocusing and adapting to change, throughout the study (what Patton refers to as “active-reactive-adaptive” evaluators).

These stakeholders, or “primary intended users” (Patton, 1997), include commissioners, funders, policy makers, educators, and students. As well as being aware of the stakeholders likely to use evaluation findings, and the preconceptions they hold, evaluators need to understand the expected use of their findings. Another problem with evaluation studies at present is that they produce results which appear interesting to the evaluator but do not address the concerns of the funders, commissioners and policy makers, who are often interested in more than pure academic effectiveness. Mason (1995) has repeatedly called for evaluations that have a practical application and that will actually lead to changes in policy and practice. Patton (1997) would add that without taking these two aspects - “intended use by intended users” - into account, the value and success of any evaluation is limited.

 

Evaluation must be an activity that is integral to university operation

Many evaluative studies are commissioned to judge the viability of an innovative activity. When these studies are published, analysts cry out that the results are due to the Hawthorne Effect (a phenomenon whereby students and academics are motivated to achieve more solely because they know they are being studied, which makes them feel special), and not to the effect of using a particular aspect of learning technology. The only way to eliminate this bias is to make evaluation an everyday part of all activities, rather than something applied to innovative ones in isolation. Evaluation undertaken year after year within accepted university operation becomes an embedded activity, increasing validity and acceptance and, as Patton (1997) points out, decreasing costs. In this way, the mystique currently surrounding evaluation and evaluators can be dispelled and expertise built up within individual organisations; hostility and resistance can then be replaced by a feeling of ownership (Patton, 1997).

As part of the CNL project team, I was involved in designing and testing the three-phase course lifecycle model (Figure 2). This cyclic model includes evaluation as one of its phases and operates equally well at the university level as at the course level. Assuming this model is adopted across the sector, evaluation becomes embedded in university culture, providing continuity of operation, from planning to evaluation, for all activities.

Figure 2. The CNL three-phase course lifecycle model

 

Evaluation must be cost-aware

A number of centrally funded initiatives, such as the Transparency Review funded by the Higher Education Funding Council for England (HEFCE, 1999), have recently highlighted the growing importance of costing educational provision. Previous attempts to cost education have been quite narrow in scope, and costing - as far as research projects are concerned - has been very much a ‘bolt-on’ aspect. The main barrier to date has been the lack of a usable, comprehensive costing methodology, but this barrier is in the process of being broken down. In early 1999 the Joint Information Systems Committee (JISC) funded a short study (Bacsich et al., 1999) to identify the hidden costs of networked learning. The outcomes of the project were a financial schema (see Figure 3) and a planning document which together accurately record the costs of networked learning. The project team proposes that ABC be adopted by UK universities (a recommendation in line with other international costing studies in Australia and the US). With this method of recording both income and expenditure, individual activities and groups of connected activities can be costed; the ability to accurately cost individual courses or modules - including tutor time, the amenities of the room where teaching takes place and the additional resources involved - thus becomes a reality. With this system in place, universities will be able to extract the costs of different courses, compare income with expenditure, and calculate their efficiency.


Expenditure dimension | Stakeholder dimension: Institution | Stakeholder dimension: Staff | Stakeholder dimension: Student | Total
Staff costs | Salaries, wages, pensions etc. | Unpaid overtime | Opportunity cost of learning not earning |
Depreciation | Lecture theatre, computing provision | Skills in teaching and materials development | Own home computer and accessories |
Expenses | Subsistence, registration | Out of hours development using home consumables | Computer consumables, connection charges |
Overhead | Software licences | Additional space for PC | Additional insurance |
Total | | | |

Figure 3. The CNL Financial Schema

 

Another major breakthrough was the recognition that people other than the institution - namely staff and students - also incur costs, and that these differ according to the way a course is offered. The report advises that costs incurred personally by staff and students should also be recognised by the institution and documented as part of the financial schema (see Figure 3, above, where examples of costs specific to teaching are given). This in itself has caused more controversy than the ABC methodology, as it raises questions about how these personally incurred costs will be monitored and dealt with.
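To make the two-dimensional shape of the schema concrete, the sketch below represents it as an expenditure-by-stakeholder grid with row and column totals. The expenditure heads and stakeholder columns follow Figure 3; the amounts and the dictionary-based representation are hypothetical illustrations, not part of the published CNL schema.

```python
# Illustrative model of the CNL financial schema as an expenditure x stakeholder grid.
# The expenditure heads and stakeholder columns follow Figure 3; the amounts and this
# dictionary-based representation are invented for illustration only.

EXPENDITURE_HEADS = ["Staff costs", "Depreciation", "Expenses", "Overhead"]
STAKEHOLDERS = ["Institution", "Staff", "Student"]

# Costs (in pounds) recorded against each head for each stakeholder.
schema = {head: {s: 0.0 for s in STAKEHOLDERS} for head in EXPENDITURE_HEADS}

# Example entries: a salary cost borne by the institution, an opportunity cost of
# 'learning not earning' borne by the student, and home-working overhead borne by staff.
schema["Staff costs"]["Institution"] += 1200.0
schema["Staff costs"]["Student"] += 300.0
schema["Overhead"]["Staff"] += 45.0


def row_total(head: str) -> float:
    """Total expenditure under one head, across all stakeholders."""
    return sum(schema[head].values())


def column_total(stakeholder: str) -> float:
    """Total cost borne by one stakeholder, across all expenditure heads."""
    return sum(schema[head][stakeholder] for head in EXPENDITURE_HEADS)


for head in EXPENDITURE_HEADS:
    print(f"{head:12s} total: £{row_total(head):8.2f}")
for s in STAKEHOLDERS:
    print(f"{s:12s} bears: £{column_total(s):8.2f}")
```

The column totals make visible exactly the point at issue: how much of the cost of a course is being carried by staff and students rather than by the institution.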

 

Evaluation must be situation and context aware

Few evaluation studies refer to evaluations undertaken outside the educational sector, yet developments in evaluative practice elsewhere could usefully inform practice within it. This introspectiveness also extends to communication with other strategic partners, such as university planners and students. Without such awareness, for example, an evaluator might attribute the failure of a programme to the technology rather than to the difficulties encountered by students from deprived areas who may never have used a computer for learning before.

 

This framework must be accepted by evaluators, educators, policy makers and funders as producing conclusive results that will then be acted upon

Without consensus between the various stakeholders concerned with the cost-effectiveness of using learning technologies, any developments become meaningless. At present we have myriad disparate studies, with no measure of their worth or value, which claim to demonstrate that learning technologies are, or are not, cost-effective.

This paper proposes a quality-assured framework that encompasses numerous different approaches to evaluation, together with a set of guiding principles. Working within such a framework will allow evaluators to arrive at conclusions and recommendations about the effectiveness of learning technologies which can then be acted upon. This requires an understanding between the various stakeholders and the developers of the framework about what constitutes a reliable and usable set of evaluation results, about which aspects should be taken into account to substantially reduce institutional or evaluator biases, and an agreement to act upon conclusive results about the effectiveness of learning technologies. The resulting studies should highlight the conditions and circumstances under which learning technologies become cost-effective, offering plentiful information upon which other institutions can draw when planning or redeveloping courses.

 

Conclusions

The need for timely, accurate and informative cost-effectiveness information is increasing dramatically. With the Government also supporting this development, there are serious competitive advantages to running effective courses. Yet at present there is no accepted methodology for measuring that effectiveness or for comparing it with other examples. This paper therefore proposes, for academic debate, a new approach to the evaluation of learning technology which is both rigorous and cost-aware whilst operating under a quality-assured framework.

 

References

  • Bacsich, P., Ash, C., Boniwell, K., Kaplan, L., Mardell, J., & Caven-Atack, A. (1999). The Costs of Networked Learning, Sheffield: Sheffield Hallam University.
  • Cole, G. (2000). Heard the Evidence? TES Online Supplement, February 11, 11.
  • HEFCE (1999). Transparency Review of Research, Bristol: HEFCE.
  • IHEP (1999). What’s the Difference? Washington, DC: Institute for Higher Education Policy.
  • LTDI (1998). Evaluation Cookbook, Edinburgh: LTDI.
  • Mason, R. (1995). Evaluating Technology-Based Learning. In Collis, B. & Davis, G. (Eds.) Innovative Adult Learning with Innovative Technologies, Holland: Elsevier Science B.V., 191-199.
  • NCHEMS (2000). Procedures for Calculating the Costs of Alternative Modes of Instructional Delivery, Unpublished draft, National Center for Higher Education Management Systems, USA.
  • Patton, M. Q. (1997). Utilization-Focused Evaluation: The New Century Text, Thousand Oaks, CA: Sage Publications.
  • Russell, T. L. (1999). The No Significant Difference Phenomenon, Raleigh, NC: North Carolina State University.
  • Rumble, G. (1997). The Costs and Economics of Open and Distance Learning, London: Kogan Page.
