Educational Technology & Society 3(4) 2000
ISSN 1436-4522

An evaluation model for supporting higher education lecturers in the integration of new learning technologies

Gordon Joyes
Teaching Enhancement Advisor and Lecturer in Education
School of Education, University of Nottingham
Jubilee Campus, Wollaton Road
Nottingham, NG8 1BB United Kingdom
Gordon.Joyes@nottingham.ac.uk
Tel: +44 115 9664172
Fax: +44 115 9791506

 

ABSTRACT

This paper provides a description of, and some reflections on, the ongoing development and use of an evaluation model. The model was designed to support the integration of new learning technologies into courses in higher education. The work was part of the Teaching and Learning Technology Programme (TLTP), funded by the Higher Education Funding Council for England (HEFCE). The context and the rationale for the development of the evaluation model are described with reference to a case study of the evaluation of the use of new learning technologies in the civil and structural engineering department of one UK university. Evidence of the success of the approach to evaluation is presented and the learning media grid that arose from the evaluation is discussed. The paper concludes with a description of the future use of this tool within a participatory approach to developing learning and teaching materials that seeks to embed new learning technologies.

Keywords: Evaluation, New Learning Technologies, Participative


Introduction

The success of any learning material depends upon the ways the learner is able to use it, and this ultimately depends upon the ways lecturers incorporate its use within their courses. There is a tendency to think that computer aided learning (CAL) materials that have been shown to work successfully within the institutions where they were developed can then be used by lecturers on very similar courses at other institutions. After all, CAL can often be a self-contained learning experience that might be considered almost lecturer-proof. Experience indicates that this is the exception rather than the rule (Coopers & Lybrand, 1996). An initial barrier to effective use of any new learning materials is the lecturers themselves. The Dearing committee report (Dearing, 1997) stated that information and communications technology (ICT) is not effectively “embedded in the day to day practice of learning and teaching in most higher education institutions” in the UK, and that “the main reason is that many academics have had no training and little experience in the use of communications and information technology as an educational tool.” The report recognises that ICT can be used within courses in a variety of ways and is used most commonly on campus-based undergraduate courses alongside other more conventional learning and teaching approaches.

Experience suggests that a prerequisite for embedding ICT in learning and teaching is that the academics teaching the course recognise the need for appropriate holistic evaluation, providing them not only with an understanding of how best to use the ICT, but also with a more general understanding of how to develop effective learning environments. In this way the evaluation process can be an aid to curriculum planning.

 

Background

The Teaching and Learning Technology Programme (TLTP) is a major multi-million-pound UK government initiative launched in 1992. The aim of the programme was to encourage the higher education (HE) sector to work collaboratively to explore how new technologies could be exploited to improve and maintain high-quality learning provision. An aim of the current TLTP Phase 3 (TLTP3) is to evaluate and ‘embed’ a selected number of the CAL products produced in the earlier phases of TLTP into everyday teaching in UK universities. One of the 35 projects that received TLTP3 support was the ‘Embedding COMPACT’ project. COMPACT stands for COMPuter Aided Concrete Teaching. It is a suite of 11 CAL modules covering concrete technology and the design of concrete structures, a major section of the syllabus for civil engineering degree courses in the UK and overseas.

A consortium of four UK universities (University of Nottingham, Imperial College of Science, Technology and Medicine, London, University of Leeds and University of Sheffield) was tasked with ‘embedding’ COMPACT into a wider range of UK civil engineering departments (Joyes et al., 1998). From the outset, it was anticipated that this would be done by providing paper- and web-based materials to facilitate the implementation of the software into civil engineering teaching, mainly at first-degree level.

A detailed survey in 1998 of the 50 UK HE institutions to which COMPACT had been made freely available indicated that the software is being used in a variety of ways, namely:

  • as self-learning and revision tools by students;
  • as a teaching tool to improve student understanding;
  • by lecturers to prepare ‘traditional’ lectures; and
  • as the basis of new taught courses.

An example of the latter approach is a third-year half-module in use at the University of Sheffield Department of Civil & Structural Engineering, where COMPACT and electronic communication with students are fully integrated into the teaching of the half-module.

Apart from this latter approach, the use of COMPACT by most university departments is mainly passive and supportive, in that the software is simply made available on file servers for access by students on a formal, semi-formal or totally informal basis. As such, the potential of the materials is not being exploited.

 

Development of the evaluation model

The literature commonly identifies five purposes for evaluation of CAL, i.e. formative, summative, illuminative, integrative and quality assurance (Draper, 1996; Oliver, 1997). The choice of approach depends upon the primary purposes of the evaluation and its primary audience. Within this project the primary audience for the evaluation was the lecturers themselves, as the project intended to provide quality assured case study exemplar materials to support the integration of the COMPACT software within courses.

In order to provide these case study materials the evaluation had to quality assure the materials by providing:

  • Ongoing feedback to the lecturers delivering the half-module. This was to support decision making for revision of the approach whilst the half-module was being taught;
  • Feedback to the lecturers about the overall effectiveness of the learning and teaching approach to inform future use.

The success of the approach to evaluation would be measured by whether the results made sense to the lecturers and whether they were able to ‘trust’ the outcomes and therefore use them to modify their learning and teaching approach. A starting point for the evaluation, and for further understanding of the approach taken, is the following more detailed description of the learning and teaching culture and context that was to be evaluated.

 

The context for the evaluation: The culture

The quality of teaching, learning and assessment within civil engineering higher education courses in the UK has recently been criticised, with only 6 out of the 44 providers demonstrating best practice (QAA, 1998). The evidence is that there is little consideration of pedagogy in module or course design. If the Embedding COMPACT project was to be successful it needed to engage lecturers in a consideration of pedagogy, but this had to be grounded within their own culture. This, coupled with our own survey results on the use of COMPACT, had a major influence upon the design of the project and the evaluation.

The intention of the project was to involve civil engineering lecturers in developing, understanding and describing best practice in relation to effective learning and teaching, in order to support them in integrating ICT into their own teaching. This needed to be a participatory process: the design of the evaluation had to be grounded in an understanding of the context in which it took place, so that the lecturers themselves could develop and contribute fully to the evaluation.

The project in its first year was to start by working with lecturers who had been identified as using the COMPACT software. In the second year these lecturers would work with invited lecturers from other HE institutions to share their expertise and develop more materials for piloting within the new institutions. In the final year all the materials, together with case studies of their use, would be shared with the civil engineering HE community through workshops and the project web site, and an evaluation would take place to develop further case studies for dissemination.

 

The context for the evaluation: The case study half-module

The evaluation process began through a dialogue with the lecturers to gain an understanding of:

  • The half-module, i.e. the learning and teaching materials, the learning objectives and the ways ICT was integrated within these;
  • The learning and teaching approaches generally used within the department;
  • The student cohort i.e. year, background, culture, ICT skills, academic ability etc.;
  • The issues that the lecturers wished to explore through the evaluation;
  • The ways in which the evaluation could best involve and inform them about effective practice.

The half-module could best be described as a complex integration of different learning and teaching activities, many of which appeared to duplicate functions for the students; it is described in some detail here. In effect, students were provided with a choice of learning and teaching resources and approaches through which to achieve the learning outcomes. This was a by-product of the iterative nature of the development of the half-module rather than an explicit intention of the developers.

The central focus was a problem-solving activity that students were introduced to sequentially through the half-module. The basic scenario of the problem was the same for all students; however, each student had a set of unique parameters and had to produce their own solution in accordance with these. Students could therefore support each other, but had to work out their own solutions individually. The procedures for solving the problem, and the concepts associated with it, were introduced through lectures and lecture notes, which contained worked examples. The COMPACT software provided an alternative approach to the lectures, notes and worked examples. The ‘problem’ was given to the students as a set of four problem sheets to be solved and assessed as the course progressed. The answers to these were provided through an interactive feature in COMPACT, and the students had to work through the full solution in order to calculate the correct answer. The new problem sheets, as well as feedback on the previously completed one, were provided through four compulsory small-group tutorials spaced at approximately three-week intervals throughout the half-module. In this way students were provided with opportunities to develop a full understanding of the analysis procedure and to demonstrate this by presenting a set of calculations on calculation sheets as used in industry. This exercise was aimed at enhancing the students’ numeracy and communication skills and represented 15% of the final exam mark.
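To make the mechanism of unique parameter sets concrete: one common way to issue each student a distinct but reproducible problem is to seed a random generator from the student's ID. The Python sketch below is purely illustrative; the parameter names, ranges and seeding scheme are assumptions, as the paper does not describe how COMPACT's per-student parameters were actually generated.

```python
# Hypothetical sketch: deterministically derive a unique parameter set for
# each student from their ID, so peers can discuss the method but cannot
# copy each other's numbers. Parameter names and ranges are invented for
# illustration; the paper does not document the actual scheme used.
import hashlib
import random

def student_parameters(student_id: str) -> dict:
    # Seed a private RNG from a stable hash of the student ID, so the same
    # student always receives the same parameters across sessions.
    seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return {
        "beam_span_m": round(rng.uniform(4.0, 8.0), 1),       # hypothetical
        "imposed_load_kN_per_m": round(rng.uniform(10, 30)),  # hypothetical
        "concrete_grade": rng.choice(["C25", "C30", "C40"]),  # hypothetical
    }

if __name__ == "__main__":
    print(student_parameters("eng-2000-042"))  # stable across runs
```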

This system was supported by the lecturers and by post-graduate students acting as teaching assistants, and it catered for approximately 75 students, who between them submitted some 300 sets of calculations for review each semester (four sheets per student). The use of COMPACT to check calculations, coupled with the worked examples, made this assessment workload feasible. It was clear that the lecturers held a strong belief that students needed experience of completing these types of complex problems and that COMPACT was the means by which this quantitative element of the curriculum could be included.

The lecturers felt it necessary to provide students with further opportunities to discuss issues, both technical and conceptual, through the use of WebCT, a virtual learning environment (VLE). The VLE had a management function and provided continuous access to the lecture notes in PDF format, subject-oriented internal e-mail notices, a question-and-answer help line through the bulletin board, a computer-assessed revision exercise, a timetable which could easily be modified throughout the half-module, and the COMPACT software.

The student cohort comprised third-year BEng and MEng students in their final semester, accustomed to a diet of traditional lectures and study activities. Student evaluation of teaching in the department was carried out through a voluntarily completed five-point Likert-scale questionnaire measuring general satisfaction, or otherwise, with some aspects of the teaching. Such an approach provided an incomplete and unreliable glimpse of the half-module: no question related to the usefulness of ICT, and the sample was self-selecting and therefore likely to be unrepresentative. Importantly, this post-module quantitative approach to student evaluation represented the only departmental form of evaluation. Clearly the project had to develop further the lecturers’ understanding of alternative approaches and to support them in understanding the benefits of these.

 

The evaluation model

University courses that attempt to integrate new learning technologies may employ a range of learning and teaching media; such courses are likely to be the most effective learning environments (Laurillard, 1993). The half-module described typifies such a multiple-media approach. The problem sheets, and the ways these were supported within the half-module by COMPACT and the other media, were to be the focus for the evaluation, in that it was these resources within the learning and teaching approach that had to be quality assured before they were disseminated more widely. An accurate picture of the value of the new learning technologies within the half-module could only be gained by an approach to evaluation that sought to discover the particular functions these media had in the learning and teaching process, from both a lecturer and a student perspective. Thus there was a need to evaluate the whole learning experience (Draper, 1996). This was to be an educative process for the lecturers, because the feedback they were to receive would be richer and more immediate than that from the process to which they were accustomed. In fact, they had no clear vision of which aspects of their course were effective, partly because of the iterative way they had developed the half-module and partly because of their usual approach to evaluation.

The evaluation model used was essentially iterative and adaptive, and is shown in Figure 1. High value was to be given to data collected interactively through interviews with students and lecturers. However, other data about the use of WebCT provided a focus for the interviews.

Figure 1. The evaluation model

 

Data were collected to inform the evaluation of the half-module through a mixture of face-to-face interviews with a group of fourteen students, interviews with the two lecturers, content analysis (Mason, 1992) of all of the e-communications within WebCT, and analysis of the use of the self-assessment questions within WebCT. Students were selected carefully to ensure they were representative of the half-module cohort, and the evaluation was carried out by an independent and experienced evaluator from the School of Education, University of Nottingham in weeks three and eleven of the half-module. Further rationale for the approach used within the project is described elsewhere (Joyes et al., 1998).
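As an illustration of the more quantitative side of this mix, the sketch below shows one way researcher-coded bulletin-board messages might be tallied by category and by week. It is a minimal sketch only: the CSV layout, column names and file name are hypothetical stand-ins, as the actual WebCT export format and coding scheme are not described here.

```python
# Minimal sketch: tally researcher-coded bulletin-board messages by category
# and by week. The CSV layout (week,category) is a hypothetical stand-in for
# whatever export the evaluators actually worked from.
import csv
from collections import Counter

def tally(path: str):
    by_category, by_week = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'week' and 'category' columns
            by_category[row["category"]] += 1
            by_week[int(row["week"])] += 1
    return by_category, by_week

if __name__ == "__main__":
    cats, weeks = tally("webct_messages_coded.csv")  # hypothetical file
    print(cats.most_common())    # e.g. [('technical', 12), ('admin', 9), ...]
    print(sorted(weeks.items()))
```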

Contextualisation was a critical phase and resembled an ‘outer method’ (Draper, 1996). It served an initial educative function, in which the evaluators and the lecturers developed a common understanding of the half-module and how it might best be evaluated. This established a cooperative framework and a basis for further regular dialogue. The initial interviews with students were carried out to inform the half-module organisers about the value of the learning that was taking place and to support their decision-making in relation to pedagogy for the rest of the half-module. The interview with the lecturers was carried out prior to the interview with the students; this provided a view of the course from the lecturers’ standpoint and helped structure the student interview. The immediacy of the feedback the lecturers received from the initial interviews was critical in establishing their confidence in the evaluation approach.

The initial interview was followed up at the end of the half-module and prior to the examinations. At this final interview the initial findings were presented to the same set of students in order to validate the data as well as to find out any changes that had occurred.

 

Evaluation results: The initial interview

The findings from the initial evaluation (after only three weeks) revealed the functions and the relative value of the key features of the whole learning environment provided in the half-module, from both a student and a lecturer perspective. These are shown in Table 1.

Table 1. Learning media grid: A map of the key features of the learning environment and the functions they fulfil for users

 

The initial interviews indicated that for the students:

  1. Lectures, lecture notes and the problem sheets were all valued;
  2. COMPACT was beginning to be used as support for the half-module. Its value was that it gave a different perspective to that provided elsewhere on the half-module;
  3. The use of WebCT was problematic for students. This was due to the slow speed of the network file servers as well as access to computers on campus. Students preferred standard e-mail communication, as it was much faster;
  4. Student perceptions of WebCT were negatively influenced by the fact that they had to download the lecture notes through this. This meant that they had to pay for the printing and additionally this took up their time;
  5. Most students used peer support as an initial strategy for sorting out any problems they had. If they still had problems they would resolve this through face-to-face contact with the lecturer;
  6. WebCT e-communication was not used to discuss conceptual difficulties and was only occasionally used to discuss technical difficulties (e.g. installation of the COMPACT software, problems with network printing).

The lecturers had hoped that e-communication within WebCT might have been used to support learning. It was thought that students might share problems with lecturers who could then provide answers that would then be available to all the students. It was clear that these 3rd year students were uncomfortable about publicly acknowledging they were having any difficulties. The majority used peer support with ‘trusted’ colleagues and resorted to lecturer help on a one to one confidential basis. This reveals much about the learning culture of these students and little about the value of e-communication via bulletin board discussions for supporting learning (Crook, 1997). It is important to note that students did feel that on courses where lecturers were less available this form of e-communication would have been useful. Additionally as one student stated:

 “I cannot easily describe my difficulty without referring to an equation, a diagram or the problem I am working on. Electronic communication does not allow this. By seeing my lecturer I can quickly get the help I need when I need it. Electronically this would take much longer.”

Lecturers had been concerned that the WebCT e-communication was not being used effectively, and needed to decide whether to promote its use further or simply to accept the low usage and focus on other aspects of support. It was clear that these students did not value the e-communication, but they did value aspects of the course that would enhance their end-of-half-module exam performance. Mindful of this, the lecturers decided to create a self-assessment module: a bank of Objective Test Questions (OTQs) delivered through WebCT to support students in their revision for the exam.

 

Evaluation results: The final interview

This interview took place towards the end of the half-module and prior to the examinations. At this final interview the results from the initial interviews in Table 1 were presented to the students in order to validate the data as well as to find out any changes that had occurred. As part of this process students were asked to rate each of the learning media on a Likert scale of 1 to 5 (5 indicating high value and 1 indicating low value), and these responses were then discussed within the group to gain an insight into the reasons behind them (a sketch of how such ratings might be summarised follows the list below). These interviews confirmed the original data, but also revealed that initial perceptions of value had evolved over the course as the students gained more experience of the role of each of the media. The interviews additionally revealed that:

  1. Students liked the multiple media nature of the course. This was due to the fact that there was an element of choice in how they could study as well as providing alternative coverage of the same concepts/ideas.
  2. Although the continual assessment approach was demanding of their time throughout the course, they felt it did support their learning.
  3. The students did not generally value the four small group tutorials that were scheduled in order to provide feedback about the problem sheets.
  4. The lecture notes were not of a consistently high enough quality.
  5. It was felt that the first three problem sheets were quite trivial compared with the final one, which therefore took much longer to complete at a time when students were under workload pressure.
  6. COMPACT was used in a variety of ways other than simply calculating the answers for the problem sheets. This included support for the problem sheets by using the worked examples, developing ongoing conceptual understanding during the course and for revision. However some students did not use it at all.
  7. It was generally felt that the OTQs would be useful for revision and over half said they would use this during revision, but not before. (This was borne out by the WebCT management log of usage.)
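To illustrate the rating exercise referred to above, the following minimal sketch summarises 1-to-5 ratings from fourteen students for each learning medium, as they might be tabulated before the group discussion. The figures are invented for illustration; the paper reports the discussion, not the raw scores.

```python
# Hypothetical sketch: summarise 1-5 Likert ratings of each learning medium
# for a group of fourteen students. All ratings below are invented.
from statistics import mean, median

ratings = {
    "lectures":       [4, 5, 4, 3, 4, 5, 4, 4, 3, 4, 5, 4, 4, 3],
    "lecture notes":  [3, 4, 3, 3, 4, 4, 3, 2, 3, 4, 3, 3, 4, 3],
    "problem sheets": [5, 4, 5, 4, 4, 5, 5, 4, 3, 5, 4, 4, 5, 4],
    "COMPACT":        [4, 3, 5, 4, 2, 4, 4, 1, 3, 4, 5, 4, 3, 4],
    "WebCT e-comms":  [2, 1, 2, 3, 2, 1, 2, 2, 3, 2, 1, 2, 2, 3],
}

for medium, scores in ratings.items():
    # Report central tendency per medium as a starting point for discussion.
    print(f"{medium:14s} mean={mean(scores):.2f} "
          f"median={median(scores)} n={len(scores)}")
```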

 

An evaluation of the evaluation

Within the institution the success of the evaluation could be measured in two ways:

  1. the effect on the delivery of the half-module, i.e. whether the lecturers ‘trusted’ the data and acted on it;
  2. the effect on the department.

 

The effect on the delivery of the half-module

The lecturers valued the approach to evaluation in that the data it revealed made sense even though some of the messages it was giving were problematic. Two of these, and the actions taken by the lecturers, are discussed below.

The tutorials: Why did the students not value the tutorials? Was the quality of the tutorial provision in question? Could the department afford to remove them completely?

The reality was that even though the quality of the tutorials given by different tutors varied, particularly when the tutors were post-graduate students, students felt that the tutorials should not be a compulsory feature of the course. In effect the tutorials were made compulsory by the fact that problem sheets were handed in and collected at them. Students actually felt that the most important feedback was that obtained through attempting to complete the problem sheets by working through the answer given by the COMPACT software. The lecturers were surprised that face-to-face feedback was not felt by the students to add significantly to their learning.

The lecturers decided for future delivery of the half-module that attendance for students at the feedback tutorials would be voluntary. As part of the marking process lecturers would identify those students who needed specific support and use the tutorial time to provide this.

WebCT: Should WebCT continue to be used? If so, what should it deliver? Should conventional email communication replace the e-communication in WebCT or should both be used?

VLEs, such as WebCT, come with a wide range of tools for learning, and the tendency is to attempt to use all of them independently of the context. WebCT was convenient for the lecturers in that all course materials, i.e. notes and OTQs, as well as e-communications, could be directed through it, but the students found only the OTQs helpful.

The lecturers decided that in future deliveries of the half-module WebCT would not be the central delivery tool; its use was to be peripheral, with conventional e-mail and CD-ROM delivery of notes and software used instead. However, the useful function of delivering the OTQs was to remain.

 

The effect on the department

The department’s involvement in the project served as a major catalyst for some academics in the host institution, who were not project members, to start using new learning technologies (Pavic, 2000). Other well-documented factors were also undoubtedly influential, for example prior experience of ICT (Scott, 1999) and other aspects of ‘cultural flow’ (Trowler, 1998), leadership support (Silver, 1999) and the national climate and agenda for change (Austin, 1992). The department felt that, in addition to increasing academics’ confidence in utilising learning technology, ‘exposure’ to the project and to advanced educational evaluation methodologies also increased general awareness of good practice in providing learning.

 

Reflections on the evaluation

The evaluation was designed to operate at two distinct levels:

  1. To quality assure learning and teaching materials that effectively integrate ICT into the curriculum;
  2. To support lecturers in their pedagogic understanding.

The choice of approach was influenced by the need to operate at these two levels. The lecturers needed to feel close to the evaluation data in order to trust it and then to act upon it. From the project perspective it was important that these academics took on the role of ‘missionary characters’ (Dixon, 1998) or ‘agents of change’ (Fullan, 1991). The success of this can be seen in the effect they had on their department, where they served as the main driver of the development of apparently more effective learning provision. Such processes are in accordance with the ‘diffusion perspective’ of university change (Van Vugt, 1989) and more general notions of bottom-up and incremental cultural change (Trowler, 1998).

The evaluation approach is context- and people-centred and, although it is empirical, i.e. it aims to collect observational data, it is not experimental in its treatment (Oliver, 1998). It draws on the adaptive methodology of illuminative evaluation (Parlett & Hamilton, 1972) and on the need to evaluate the whole course and to work in a collaborative way with lecturers, both of which are central to integrative evaluation (Draper, 1996). However, the emphasis on the educative process as part of the development of lecturers as change agents has resulted in a model for evaluation that could be termed participative. Participatory design originated in Scandinavia as part of the workplace democracy movement (Clement & Van den Besselaar, 1993), but the term now covers a continuum of user-centred design approaches (Carmel, 1993). It has until now focused on the participative design of software, but the principles can usefully be applied to the participative design of the learning environment into which the software is integrated.

The participatory design process recognises that all the stakeholders, in particular the users, i.e. the lecturers and students, should be involved in the development process. Importantly, the lecturers who are to implement the software need to have a central involvement in its development, evaluation and dissemination. There is a recognition that such a process involves teams of experts, i.e. subject experts (the lecturers), pedagogic and evaluation experts, etc. It views curriculum design as an iterative and ongoing process, rather than a stepwise one of design, evaluation and dissemination. In effect it empowers lecturers to engage in all aspects of curriculum design rather than leaving this to ‘experts’. The notion is that ‘experts’ outside the lecturers’ culture are unlikely to produce software, or teaching materials supporting the use of ICT, that are matched to the lecturers’ needs (Harrison, 1994). The process also recognises that there is no single correct way to implement software into the curriculum and that lecturers need support in the form of exemplars of use that they can develop and adapt.

Within the case study described in this paper, the lecturers ‘intuitively’ attempted to embed both COMPACT and the WebCT VLE into their teaching. A pedagogic expert might well have predicted some of the outcomes of the evaluation at the curriculum planning stage, though in this case study it took a considerable period of contextualisation to get near to achieving this. For example, the analysis of the learning and teaching approach revealed that the lecturers’ reasons for insisting on compulsory tutorials were cultural rather than pedagogical: there was a sense that the lecturers still needed to teach face-to-face, even though much of this role had been taken over by other media. Equally, the use of the many functions within WebCT on the course had more to do with a naïve enthusiasm for experimenting with a new toy than with any sound pedagogic reason.

The evaluation process enabled lecturers to engage with an analysis of the learning and teaching environment they had created, so that they could recognise the features that were supporting learning, those that were not, and those that had a useful yet duplicate function. The learning media analysis grid, which has similarities to the learning resources questionnaire used in the Teaching with Independent Learning Technologies (TILT) project (Draper, 1996), supported this process. On reflection, it has the potential to be a powerful tool in helping lecturers to engage in a pedagogic analysis of the design of effective learning environments. It is the introduction of new elements within more traditional approaches that necessitates this type of analysis, as it is through their introduction that traditional pedagogic assumptions will be challenged: the roles of traditional media such as lectures or tutorials are likely to be duplicated by the ICT and the support materials.

 

Conclusions and further developments

Ongoing involvement of both lecturers and students in the evaluation process not only clarified the functions of each aspect of the complex learning environment, but provided the lecturers with a believable picture of the value of each of the learning media in the learning process. Lecturers were able to trust the information they were receiving from the students through the evaluation and felt confident in making decisions on the basis of this. The learning media grid was a useful tool in supporting this process in that it provided an overview of the complex learning environment and clarified the inter-relationships between the learning media.

The author is currently working as part of a central team supporting the development of five new learning technologies projects in a range of faculties at the University of Nottingham. The learning media grid is being developed as an analytic tool to support the lecturers developing these modules. The media comparison table devised by Conole and Oliver (1997) could be used to support lecturers who wish to replace more traditional media with ICT. However, it is at the preliminary planning stage that lecturers need support in considering the functions of the learning media they intend to use, from both the lecturer and the student perspective. It is at this stage that cultural norms and assumptions about learning and teaching need to be challenged. The learning media grid is to be modified to involve the lecturers in considering, at this planning stage, how each of the media they are using will achieve the learning outcomes for the module. In addition they will be asked to consider how this matches the needs of the student culture with which they are working. This will then form the basis for a discussion of appropriate evaluation strategies and tools, extending the evaluation model used within the case study to a more fully participative one.

It is not suggested that more considered planning will always provide the most effective learning environment, but the intention is to provide a framework that encourages lecturers to experiment with new approaches to learning and teaching and at the same time engages them in considering the most effective approach to evaluation so that they can judge the value of these experiments.

 

Acknowledgements

Thanks go to Drs Aleksandar Pavic and Paul Reynolds, Department of Civil & Structural Engineering, University of Sheffield, UK who developed and delivered the half-module.

Additional thanks go to the UK HEFCE, which funded TLTP3 Project 88, ‘Embedding COMPACT’, and to the other members of the project Executive team, who have been central to the developments described in the case study:

Drs Ban Seng Choo (Project Director) and Rachael Scott, University of Nottingham, UK; Dr John Newman, Imperial College of Science, Technology and Medicine, London, UK; Professor Andrew Beeby and Dr Peter Wainwright, University of Leeds, UK.

 

References
  • Embedding COMPACT web site,
    http://www.shef.ac.uk/~compact
  • Austin, A. E. (1992). Faculty Cultures. In Clark, B. R. & Neave, G. (Eds.) The Encyclopaedia of Higher Education, Vol. 3, Oxford: Pergamon Press, 71-89.
  • Carmel, E., Whittaker, R. D. & George, J. F. (1993). PD and Joint Application Design: a Transatlantic Comparison. Communications of the ACM, 36 (4), 40-48.
  • Clement, A. & Van den Besselaar, P. (1993). A retrospective look at PD projects. Communications of the ACM, 36 (4), 29-37.
  • Conole, G. & Oliver, M. (1998). A pedagogical framework for Embedding C & IT into the curriculum. Association for Learning Technology Journal, 6 (2), 4-16.
  • Coopers & Lybrand and the Tavistock Institute (1996). Evaluation of the Teaching and Learning Technology Programme, Bristol: Higher Education Funding Council for England (HEFCE).
  • Crook, C. (1997). Making hypertext lecture notes more interactive: undergraduate reactions. Journal of Computer Assisted Learning, 13 (4), 236-244.
  • Dearing, R. et al. (1997). Higher Education in the Learning Society: Report of the National Committee of Inquiry into Higher Education, London: HMSO and NCIHE Publications.
  • Dixon, M. (1992). The Uptake of IT as a Teaching Aid in Higher Education, MA Thesis, Oxford: CTISS.
  • Draper, S. W., Henderson, F. P., Brown, M. I. & McAteer E. (1996). Integrative evaluation: an emerging role for classroom studies of CAL. Computers and Education, 26 (1-3), 17-32.
  • Fullan, M. G. (1991). The New Meaning of Educational Change, London: Cassell.
  • Harrison, C. (1994). The role of learning technology in planning change in curriculum delivery and design. Association for Learning Technology Journal, 2 (1), 30-7.
  • Joyes, G., Choo, B., Pavic, A., Newman, J., Beeby, A. & Wainwright, P. (1998). Learning outcomes and the role of evaluation in Embedding COMPACT. Active Learning, 8, 32-35.
  • Laurillard, D. (1993). Rethinking University teaching - a framework for the effective use of educational technology, London: Routledge.
  • Mason, R. (1992). Methodologies for Evaluating Applications of Computer Conferencing, Buckingham: Open University.
  • Oliver, M. (1997). A framework for evaluating the use of educational technology, BP ELT Report no. 1, University of North London,
    http://www.unl.ac.uk/tltc/elt/.
  • Oliver, M. & Conole, G. (1998). The evaluation of learning technology - an overview. In Oliver, M. (Ed.) Innovation in the evaluation of learning technology, London: University of North London Press, 5-22.
  • Parlett, M. & Hamilton, D. (1987). Evaluation as Illumination: a new approach to the study of innovatory programmes. In Murphy, R. & Torrance, H.(Eds.) Evaluating education: issues and methods, London: Harper and Row Ltd, 57-73.
  • Pavic, A., Reynolds, P., Choo, B., Joyes, G., Scott, R., Newman, J., Beeby, A. & Wainwright, P. (2000). The unforeseen effects of hosting a new learning technology project. Paper presented at the ACED Conference, 26-28 April, University of Southampton, UK.
  • QAA (1998). Quality Assessment of Civil Engineering 1996 to 1998, Subject Overview Report, Cheltenham, UK: The Quality Assurance Agency for Higher Education.
  • Scott, R. (1999). Towards a Greater Understanding of Learning Technology Implementation Within Higher Education, Unpublished PhD Thesis, Cambridge, UK: University of Cambridge.
  • Silver, H. (1999). The Challenges of Innovation. Paper presented at the History 2000 Conference, 15 April, Bath Spa University College, UK.
  • Trowler, P. R. (1998). Academics Responding to Change, Buckingham & Philadelphia: The Society for Research into Higher Education & Open University Press.
  • Van Vugt, F. A. (1989). Creating Innovations in Higher Education. European Journal of Education, 24 (3), 249-270.
