Educational Technology & Society 2(3) 1999
ISSN 1436-4522

Computer-Assisted Assessment: Impact on Higher Education Institutions

Joanna Bull
CAA Centre, Teaching and Learning Directorate
University of Luton, Luton, United Kingdom
Email: joanna.bull@luton.ac.uk
http://caacentre.ac.uk/

Introduction

As an increasing number of higher education institutions (HEIs) look to computers to solve some of the problems associated with the burden of expanded student numbers, advances in technology are increasingly shaping the way in which the curriculum is delivered and assessed (Hartley et al., 1999). More contentious than using computers to deliver content and support student learning is computer-assisted assessment (CAA). CAA can include a range of activities, such as the collation, analysis and transmission of examination grades across networks and, most desirably, the use of computer-based assessment, where students complete assessments at workstations and their answers are automatically marked. CAA is, in comparison to computer-aided learning (CAL), a relatively new development and is often pioneered by enthusiastic individual academics (Stephens et al., 1998). Its successful implementation is often hindered or abandoned because of time and funding restrictions, or because it depends on a single academic.

There is a range of advantages and disadvantages to using CAA, depending on the type of technology and the type of assessment. The advantages can include enhanced feedback to students and staff, time savings, reduced administrative loads and an improved balance of assessment methods. Disadvantages include concerns about the validity of CAA, the risks associated with using technology and the cultural shift required to invest time in designing new assessments rather than marking traditional ones. The benefits and limitations vary depending on whether the assessment is formative, summative or diagnostic.

Perhaps the most valuable benefit of CAA is the ability to provide focused and timely feedback to students and staff. Feedback can be used to direct future learning, motivate students to investigate other resources and identify students who need additional support. In an environment of rising student numbers, CAA offers the opportunity to give constructive, detailed and consistent feedback to every student, a task which many academics may otherwise find difficult to achieve. The time savings of electronic marking are clearly important but need to be offset against the time invested in writing challenging and effective questions, composing meaningful feedback and structuring appropriate tests. In addition, there may be a need to master software and to liaise with the necessary support staff to enable efficient delivery of the assessment.


Current CAA Activities in HE

In the UK the majority of CAA activity has been the result of the efforts and achievements of individual academics, and in some instances of collaboration between staff from academic and support services. Rarely have departments, faculties or whole institutions shown a commitment to implementing CAA. Where ‘commitment’ is expressed, its level and form can vary from simple permission to proceed, to funding and/or released staff time. Those involved often face serious inhibitors to their attempts to introduce innovative teaching and assessment methods. These inhibitors can take the form of cultural and organisational barriers - CAA does not sit easily within the remit of existing organisational structures. The preserve of an enthusiastic minority, progress to date has relied on willing and committed individuals working in a system which rarely formally recognises marking as a timetabled activity. Case studies and papers which detail how CAA is being used are quite common (McCabe, 1993; Stephens, 1994; Lloyd et al., 1996; Neill, 1996; Callear and King, 1997; Thelwell, 1996; Sutcliffe et al., 1999), but those which address the broader issues of development and strategic implementation are rare (see, for example, Bennett, 1998; Stephens et al., 1998; Sandals, 1992; King, 1998).

Already-pressured academic staff rarely have time to investigate formally the impact of CAA on their students and teaching. There is a lack of evaluative research with which to support the further development and implementation of CAA, and little measurement of student or academic experiences and perceptions has been carried out. Heywood (1988) proposed that assessment was the ‘afterthought of higher education’. It appears that evaluation is now the afterthought of learning technology. There is an even more pressing need to rigorously evaluate the use of CAA than the use of CAL, as the implications and impact are wide-reaching and of concern to a range of parties.


CAA Centre

An attempt to address some of these issues at a strategic level is the Computer-assisted Assessment Centre project. The CAA Centre is a Teaching and Learning Technology Programme (TLTP) project (HEFCE, 1998) which aims to provide models which can be used by individuals, departments and institutions to implement and evaluate CAA. By identifying existing good practice in the use of CAA, the project aims to demonstrate how to overcome the organisational, pedagogic and technical difficulties of using CAA in higher education. The project will identify and develop good practice in embedding CAA within the curriculum, and will develop and pilot a range of models and guidelines which focus on the strategic implementation of CAA within departments, faculties and institutions. The project is led by the University of Luton, and the consortium includes the University of Glasgow, Oxford Brookes University and Loughborough University (http://caacentre.ac.uk).

Together with the experience of consortium partners, the University of Luton’s experience of developing and implementing a university-wide computer-based assessment system has enabled many of the issues which the project is addressing to be identified. The University of Luton currently assesses approximately 6,000 students each year using summative computer-based examinations; a further 3,000 to 4,000 take formative and self-assessments. A wide range of subjects is assessed, predominantly at Level One (Pritchett and Zakrzewski, 1996; Zakrzewski and Bull, 1998).


Organisational Impact of CAA on HEIs

The full potential of CAA has yet to be realised, and its implementation within HEIs can be fraught with difficulties. The implementation of CAA should be pedagogically led, not technology-driven; allowing technology to drive the assessment process is highly undesirable. Merely transferring traditional (and possibly flawed) assessments to electronic format, with little thought to the potential for enhancing the assessment in terms of the skills and abilities tested, is not a solution to the problems of assessing large numbers. The emphasis must be placed on using CAA to deliver appropriate assessment, as part of a balance of assessment methods which clearly relate to the skills, abilities and knowledge which need to be tested. CAA can provide academic staff with the opportunity to review and refine their assessment strategies holistically. It is clearly important to recognise the limitations and to accept that there are benefits and drawbacks to all assessment methods.

It is essential to gain the support of all the staff who will be involved in designing, implementing and maintaining the system. There is clearly a need to involve computing services staff early in the implementation process. Staff developers and administrative staff should also be included in the planning and implementation phases and where summative assessments are undertaken, quality assurance staff must also be consulted.

Organisational difficulties can arise when the responsibilities and services that need to be provided do not sit comfortably within the established remit of academic or technical staff. The lack of definition in terms of the roles and responsibilities of academic and support staff can lead to CAA falling between the two groups with no-one driving through the initiatives. There are cultural and political implications which often defy institutional structures, and assessment traditions can also act as an additional hurdle to effective implementation.

The logistical issues, which may be both operational and technical, require planning and trialling. Collaboration between academic and support staff is the key to successful implementation. Students require appropriate support and training, and it is important to construct methods for evaluating the cost and learning effectiveness of CAA during the initial stages of development. New procedures and policies may be required to guide the implementation and evaluation of CAA, particularly where summative assessments are used.


Conclusions

Many of the issues associated with implementing CAA are common across the HE sector. Often the effective implementation of CAA appears to have been hindered by a lack of institutional commitment, strategic direction and easy-to-use, established methodologies.

CAA has the potential to impact the way in which assessment is managed within institutions. It can influence the format and type of assessments which are delivered to students – in both a positive and a negative manner. The quality and speed of feedback which students receive can be enhanced by CAA, and the extent to which academics are aware of their students’ progress and deficiencies may be increased. Correctly employed, CAA has the power to ensure that curriculum modifications take place at a time when they can benefit the current students rather than subsequent cohorts. In terms of quality assurance, CAA could drive institutions to reconsider their existing assessment methods in light of the wealth of monitoring and evaluative data which can readily be obtained from it. For academic staff, CAA can provide a stepping stone to greater use of computers for teaching, as those who may be reluctant to explore CAL use CAA out of necessity and discover that the benefits outweigh the challenges.

CAA challenges organisational structures and, managed effectively, should result in greater collaboration between support and academic staff. This process may be challenging in the first instance, as disparate groups are forced to find ways of working together, but such collaboration is essential to the progress of CAA, as its future depends on both pedagogical and technological advances.


References

  • Bennett, R. (1998). Reinventing Assessment: Speculations on the Future of Large-Scale Educational Testing, Princeton, New Jersey: Policy Information Center, Educational Testing Service.
  • Callear, D. & King, T. (1997). Using computer-based tests for information science. Association for Learning Technology Journal, 5 (1), 27-32.
  • Hartley, J. R. (Moderator) & Collins-Brown, E. (Summarizer) (1999). Effective Pedagogies for Managing Collaborative Learning in On-line Learning Environments. Educational Technology & Society, 2 (2).
    http://ifets.gmd.de/periodical/
  • HEFCE (1998). TLTP Phase 3: Funded Projects. Report May 98/20, UK: Higher Education Funding Council for England.
  • Heywood, J. (1988). Assessment in Higher Education, Chichester: John Wiley.
  • King, T. (1998). Developing and Evaluating a CAA Protocol for University Students. In W. Wade & M. Danson (Eds.) Proceedings of the Second Annual Computer-assisted Assessment Conference, Loughborough University, Loughborough, 17-23.
  • Lloyd, D., Martin, J. G. & McCaffery, K. (1996). The introduction of computer-based testing on an engineering technology course. Assessment and Evaluation In Higher Education, 21 (1), 83-90.
  • McCabe, M. (1993). Computing practical exams for the over-forties (large student numbers). Monitor, 4 (Winter), 82-88.
  • Neill, N. T. (1996). The use of technology within the formal grading process. Paper given at the Association of Learning Technology Conference, September, Glasgow.
  • Pritchett, N. & Zakrzewski, S. (1996). Interactive Computer Assessment of Large Groups: Student Responses. Innovations in Education and Training International, 33 (3), 242-247.
  • Sandals, L. H. (1992). An overview of the uses of computer-based assessment and diagnosis. Canadian Journal of Educational Communication, 21 (1), 67-78.
  • Stephens, D. (1994). Using computer-assisted assessment: time saver or sophisticated distraction? Active Learning, 1, 11-15.
  • Stephens, D., Bull, J. & Wade, W. (1998). Computer-assisted Assessment: suggested guidelines for an institutional strategy. Assessment and Evaluation in Higher Education, 23 (3), 283-294.
  • Sutcliffe, R. G., Leonard, E. M., Tierney, A., Howe, C. W., Reid, I., Goodwin, S. T. & Mackenzie, D. M. (1999). Introduction of a range of computer-based objective tests in the examination of Genetics in first year biology. In M. Danson & R. Sherratt (Eds.) Proceedings of the Third Annual Computer-assisted Assessment Conference, Loughborough University, Loughborough, 193-206.
  • Thelwell, M. (1996). Computer-based assessment at the University of Wolverhampton. Paper given at the Association of Learning Technology Conference, September, Glasgow.
  • Zakrzewski, S. & Bull, J. (1998). The Mass Implementation and Evaluation of Computer-based Assessments. Assessment and Evaluation in Higher Education, 23 (2), 141-152.

