Educational Technology & Society 3(4) 2000
ISSN 1436-4522

Mapping the Territory: issues in evaluating large-scale learning technology initiatives

Charles Anderson, Kate Day, Jeff Haywood, Ray Land and Hamish Macleod
Department of Higher and Further Education
University of Edinburgh, Paterson's Land
Holyrood Road, Edinburgh EH8 8AQ
United Kingdom
Tel: +44 131 651 6657/6661
Fax: +44 131 651 6664
kate_day@education.ed.ac.uk

 

ABSTRACT

This article details the challenges that the authors faced in designing and carrying out two recent large-scale evaluations of programmes designed to foster the use of ICT in UK higher education. Key concerns that have been identified within the evaluation literature are considered and an account is given of how these concerns were addressed within the two studies. A detailed examination is provided of the general evaluative strategies of employing a multi-disciplinary team and a multi-method research design and of how the research team went about: tapping into a range of sources of information, gaining different perspectives on innovation, tailoring enquiry to match vantage points, securing representative ranges of opinion, coping with changes over time, setting developments in context and dealing with audience requirements. Strengths and limitations of the general approach and the particular tactics that were used to meet the specific challenges posed within these two evaluation projects are identified.

Keywords: Evaluation practice, ICT, Higher Education


Introduction

This article addresses some key challenges confronting researchers engaged in assessing the impact made by learning technology programmes upon attitudes and professional practice among large and diverse populations. Drawing on our experiences of two evaluations commissioned by the Higher Education Funding Councils, we will discuss the nature and implications of these challenges, and illustrate the strategies that we chose to employ.

The first study (September 1996–February 1997 for the Scottish Higher Education Funding Council) evaluated the achievements of the Learning Technology Dissemination Initiative (LTDI) over its initial two-year funding period (£450k). It examined the extent to which LTDI met its objectives, represented value for money, and impacted upon the Scottish higher education sector. The second study (carried out February–August 1998 for the Higher Education Funding Council for England) was more substantial in that it concerned the outputs of 76 projects with a total funding of £11 million. It was geared to evaluating the use made across British higher education institutions (HEIs) of the materials produced during phases 1 and 2 (1992-96) of the Teaching and Learning Technology Programme (TLTP). By considering aspects common to these two evaluations of large-scale learning technology initiatives (the effects of LTDI’s activities and the uptake of TLTP’s products) we hope to reap the benefits of linking insights to examples taken from more than a single case study (Yin, 1989).

The details of the evaluative studies and their substantive findings are readily available (see http://www.flp.ed.ac.uk/LTRG; Use of TLTP Materials in UK Higher Education, June 1999, HEFCE Report 39). We have also disseminated the pictures obtained of the use of, and attitudes towards, information and communication technology (ICT) in learning and teaching in higher education (e.g. Haywood et al., 2000), since evaluating the impact of these programmes both necessitated, and legitimated, broader investigation of the factors and developments, at every level from the institution to the individual academic, which in combination affect the deployment of ICT.

Our current concern, however, is with strategic and methodological aspects of the evaluations – what issues did we identify as particularly problematic and how did we seek to resolve them? The objective is to tease out and reflect upon the ‘middle ground’ of the iterative interactions between principles, challenges and practicalities that always occur in evaluative research, and as much of the literature now indicates, have to be resolved on the ground in pragmatically viable ways. “Evaluations need to be conducted so that they are ‘good enough’ to answer the questions under study” (Rossi and Freeman, 1993, p.405). Of course, as Scriven points out, “given demands for credibility, comprehensiveness, validity, and so on, there may not be a solution within the constraints of professionality, time, and budget”. But as he goes on to say “more commonly there are many, and it is this that must lead one to recognise the importance of the notion of multiple levels (of analysis, evidential support, documentation) in coming to understand the nature of evaluation” (Scriven, 1983, p.257).

 

General challenges presented by the evaluative contexts

From the outset we were very much aware of the general challenges that we faced; whether in tracing out the effects of LTDI’s efforts to inform academic staff about the potential of ICT for enhancing teaching and learning and to promote its usage, or in establishing how widely the various TLTP products had been disseminated, the ways they were being deployed and the degree of their integration into day-by-day learning and teaching activities.

Both studies were of the ‘short, fat’ variety, with quite tight time frames given the size and the underlying complexity of the evaluative tasks. Ways had to be found within the allocated resources to get a sensible handle on the extent to which a set of activities and publications, in the case of LTDI, and a large array of different learning materials emanating from seventy-six projects in the case of TLTP, had affected the ways in which teaching and learning were being carried out; across the multitude of modules and courses, the welter of subjects, departments, schools and faculties, in the assortment of institutions that make up higher education. Even within a single university setting it would rarely be a straightforward matter to obtain an accurate picture of academic and managerial views about the appropriate role of ICT, of what was taking place in diverse classrooms, and the uses made of specific teaching and learning resources. There would also be a large number of possible contextual variables, operating independently and interactively at several levels, which could account for the particular patterns uncovered.

The very difficulties that confronted us in trying to determine the current state of play across higher education with respect to the extent and types of use made of ICT also meant that there was a lack of appropriate ‘baseline’ data. Relatively little systematic information existed about the prevailing situation prior to the work of the initiatives whose impact we were seeking to evaluate. With LTDI we were far removed from the controlled world of pre- and post- measurement, especially since this project was operating not in a vacuum but alongside other agencies similarly aimed at fostering positive attitudes towards ICT, raising awareness of what was available, and encouraging the uptake and integration of suitable materials for teaching and learning. When trying to gauge the impact made by TLTP products, it was important to have some awareness of what else was available at the same time, and of the extent to which and how ICT materials generally were being used. Again this kind of sector-wide knowledge was not to hand. In addition we were dealing with a moving rather than a static situation since the TLTP projects created a broad range of different products that came on stream in different incarnations over a four year period (and indeed continued to do so as we worked).

The retrospective nature of these evaluations further complicated a situation in which any data would be highly dependent on the position and perspective of observers across a diverse sector, who could only be expected to have a partial view of the impact of LTDI or TLTP. We clearly needed to obtain a complementary (and maybe contrasting) mix of viewpoints, ‘takes’ on situations, and reactions to initiatives - not simply in terms of going right across the disciplines and types of institutions, but in terms of producers and consumers, academic and support staff, policy makers and chalk-face workers, insiders and outsiders, bottom up and top-down. We also had to bear in mind, and be realistic about, the limits of knowledge among different groups of informants, especially in relation to happenings some time in the past. Other tricky boundary issues arose in connection with the requirement, identified as critical for a fair assessment of the LTDI service or the TLTP project materials, to both dissect out the objects of enquiry from, and to nest them within, the broader contexts of knowledge, attitudes and developments concerning the use of learning technologies in higher education. A further concern that shaped both studies was how to undertake the research and report on findings in ways that would prove useful not only to the commissioning agents (funding agencies and policy makers), but also to on-the-ground practitioners (academic and related staff in higher education) and other researchers in the field.

 

General evaluative strategies

There were two fundamental (and familiar) aspects of our approach to evaluation which we felt – both in prospect and in retrospect – put us in a good position to tackle the general challenges outlined above. The first was to have a multi-disciplinary research team, whose members would bring to the investigation not only knowledge about educational technology, evaluation, and learning and teaching in higher education, but also sets of research skills and approaches that were distinctive as well as complementary. The second broad strategy was to have a multi-method research design, which involved capitalising on documentary, statistical and bibliographic materials already in the public domain, reviewing records held and reports produced by the projects themselves, as well as devising our own survey questionnaires and interview protocols in order to elicit new information.

 

A multi-disciplinary team

Reference to the wide range of skills that need to be brought to bear upon evaluation is a feature of much of the writing about evaluation, regardless of whether quantitative systems, qualitative ethnographic or pluralist approaches are favoured. Often this leads on to mentions of the usefulness of having an evaluation team, but usually in a taken-for-granted rather than an elaborated way, which reflects its generally uncontested status – at least in comparison to other aspects of evaluation practice. Particularly welcome therefore is Guba and Lincoln’s discussion of the rationale for having teams, which concludes that while there are “some problems to be taken into account when forming a team... they do not cancel out the benefits” (Guba & Lincoln, 1983, p.289). The identified advantages – providing for multiple roles, multiple perspectives, multiple strategies, rigour, methodological and substantive representation, and mutual support - were all mirrored in our own experience. Fortunately, too, although the evaluation processes we engaged in were not without their frustrations and fleeting frictions, our disagreements tended to be productive in their outcomes. Existing academic interests in the area under scrutiny, the strength of professional and personal relationships amongst team members, and the buzz of getting to grips with a challenging but stimulating task within a short timespan, meant that few of the potential difficulties materialised.

The LTDI team, different combinations of whom had either worked together on previous research projects or as part of the same department (the Centre for Teaching, Learning and Assessment), was made up of people with academic backgrounds in biochemistry, education, history, information management, psychology and sociology. For the TLTP products evaluation, the research team that had worked on the LTDI project was joined by an academic development specialist from an arts background with shared interests and experience in learning technologies.

Whilst the familiarity, proximity, and stability of the research team were undoubted assets, the differences among them in terms of assumptions, opinions, and discourse were an equally important feature. As a result there was at all stages and in most aspects of the projects substantial debate and close scrutiny of materials, as we worked to develop and implement the design specification, and then to make sense of the findings. The involvement of people from different backgrounds resulted in the lack of an immediately consensual mindset, so that evaluative approaches had to be argued for very explicitly and co-constructed to everyone’s reasonable satisfaction. Few suggestions or interpretations could be made without being elaborated upon, queried, or challenged by someone else, and the collective judgements arrived at through subsequent discussion determined whether they were accepted, modified, or discarded. All this happened with remarkably little rancour because contestation and debate were balanced by a sense of trust and shared purpose, which included a determination to find an efficient and effective balance between the time and effort expended and the benefits derived, in terms of the quantity and quality of the data generated and the explanatory models developed.

As well as the positive consequences for the intellectual framing of the task, the bringing together of various sets of academic skills had considerable practical benefits. We were able to play to individuals’ particular strengths, and either share out responsibilities or pair up as appropriate, in order to carry through the different elements of a complicated research process. For example, whilst most of the interviewing was done in tandem for reliability reasons, in the first pass through the survey data, one person took charge of the statistical analyses and another the theming of the open comments. Once preliminary findings were available, a third party and then the whole group would become involved in testing out their robustness and teasing out their implications. For the larger TLTP study, however, it was essential to have a team leader who would help orchestrate, and to some extent control, these multiple iterations; keeping tabs on the progress actually being made, and ensuring that the lively interrogation of strategies and findings contributed to, rather than distracted from, the meeting of project milestones.

 

A multi-method design

The decision to take an eclectic approach to the selection and combination of research methods was potentially more contentious than the team-based strategy, because of its link to a continuing debate within the evaluation literature. The ‘great paradigm wars’ (Gage, 1989) have been underway for well over twenty years, fuelled by ideological commitments, territorial claims, and the forging of professional identities; although now moderating in intensity, they have yet to come to an end. In a recent publication, for example, Ian Shaw accepts Silverman’s pragmatic conclusion that “there are no principled grounds to be either qualitative or quantitative in approach. It all depends on what you are trying to do” (Silverman, 1997, p.14). Yet Shaw moves on to assert that “what we are trying to do – evaluative purpose – will likely as not demand a qualitative approach,” thus rendering “the pluralist horses-for-courses approach... not adequate” (Shaw, 1999, p.15).

As Shadish et al. (1991, p.42) have observed, “terms like ontology and epistemology bore many evaluators, because they conjure up images of sterile philosophical debates”. But whilst not needing to follow every twist and turn in the controversy, practitioners cannot afford to ignore the paradigmatic arguments entirely, because they are rooted in fundamentals such as the nature of reality, knowledge, understanding, and evidence. The main areas of disagreement particularly pertinent to evaluation are, first, the extent of the contrasts in ‘world-views’, and second, their implications for the conduct of practical research.

Some commentators remain convinced that the distinctiveness of the two orientations is conceptually so great as to constitute a chasm. Other observers, including Bryman, argue that the differences have been exaggerated, and are assumed rather than demonstrated by reference to substantive studies. “The tendency to talk about quantitative and qualitative research as though they are separate paradigms has produced ideal-type descriptions of each tradition with strong programmatic overtones, and consequently has obscured the areas of overlap, both actual and potential between them” (Bryman, 1988, p.172). A separate, and particularly germane, question concerns the empirical significance of the debate for evaluation practice. As early as 1979, Cook and Reichardt were urging evaluators to be “flexible and adaptable; why not use both qualitative and quantitative methods?” (Cook and Reichardt, 1979, p.19). Patton in particular has long championed a “paradigm of choices”, which “rejects methodological orthodoxy in favor of methodological appropriateness as the primary criterion for judging methodological quality” (Patton, 1990, p.38).

The strong inclination of our research group is to agree with Patton’s view that evaluators need to be “aware of their methodological biases and paradigmatic assumptions so that they can make flexible, sophisticated, and adaptive methodological choices” (Patton, 1988, p.119). We would also add the observation that the explication of connections between individuals’ belief systems and methodological preferences is greatly helped by, if not integral to, working in a multi-disciplinary team.

But endorsement of the idea that “fitting the approach to the research purposes is the critical issue” (Rossi and Freeman, 1993, p.437) does not imply that the implementation of a ‘pick-and-mix’ strategy is unproblematic. Issues to be dealt with include the appropriate matching up and detailed tailoring of specific research tools and tasks, as well as the overall configuration of the different methodologies and instruments employed. “An evaluation is a place for every kind of investigation… The difficulty for evaluators, however, is that they must decide on the distribution of investigative effort in a particular project at a particular time; so the trade-offs in design must be very much the center of concern” (Stufflebeam and Shinkfield, 1984, pp.120-21).

Further down the line there is the issue of what happens when the information obtained by different methods is brought together. “The use of multiple methods, often referred to as ‘triangulation’,” is appropriately advocated as “a means of off-setting different kinds of bias and measurement error” (Rossi and Freeman, 1993, p.437), but it does not always result in congruent findings. As Clarke has recently emphasized, “different methods may produce contradictory results when applied to the same evaluation problem” (Clarke, 1999, p.35). This was exemplified by Trend’s 1978 case study which he published in order to help “dispel the notion that using multiple methods will lead to sounder explanations in an easy additive fashion” (p.68). In the event, however, as Trend suggested and we discovered for ourselves, “if the accounts mesh this provides an independent test of the validity of the research. If they do not, the areas of disagreement provide points at which further analytic leverage can be exerted” (p.69).

 

Meeting challenges in day-to-day evaluation practice

As indicated in the opening section, tackling the general issues highlighted in the preceding paragraphs required us to focus on the ‘middle ground’ where principles and practicalities meet. Turning now to consider in detail the specific challenges faced and the kind of strategies adopted, the following sections of the article will discuss how we went about:

  • tapping into a range of sources of information
  • gaining different perspectives on innovation
  • tailoring enquiry to match vantage points
  • securing representative ranges of opinion
  • coping with changes over time
  • setting developments in context
  • dealing with audience requirements

Of course in practice the challenges did not present themselves neatly packaged in an issue-by-issue way. However, we did try to strike the necessary compromises between different desiderata explicitly rather than implicitly. In other words, these challenges were an ever present preoccupation and a touchstone for much of our decision making about how best to proceed.

 

Tapping into a range of sources of information

We were keen to get as broad and varied a view of the effects of LTDI and of the pattern of TLTP product usage as we could, which meant not only finding effective means of eliciting new information but also making good use of existing sources of information.

The data that informed our assessment of the scale and type of use of TLTP materials included:

  • four sets of surveys involving:
    • all the projects in phase 1 and phase 2
    • all teaching departments/schools in all British universities
    • teaching and learning technology support personnel in universities
    • heads of FE colleges delivering HE courses
  • six case studies of exemplar TLTP products (contexts and details of HE usage)
  • documentary analysis of:
    • records, reports, feedback and dissemination materials (from the central co-ordinating unit and individual projects)
    • journal articles, reports, and other publications (ICT in learning and teaching and TLTP related)

A similar blend of methods and sources, using different routes into different pools of informants and information, characterised the LTDI evaluation. Direct information about LTDI and its activity areas (awareness workshops, an implementation support service, an information and advice service, a resource collection of courseware/software) came from talking on several occasions with the Director and staff (current and some past), both individually and as a group. The interviews simultaneously allowed LTDI to expand on their aims, achievements and perceptions of the contexts in which they were working, and enabled us to check (both for factual accuracy and interpretative plausibility) information derived by other means or from elsewhere. We also had open access to all of LTDI’s internal files, including feedback from workshops and ‘implementations’, materials from which reports were compiled, publicity literature, and project publications. The twin sources of information from the universities were a postal questionnaire survey of teaching staff across the sector (982 respondents) plus face-to-face interviews (c. 70) of staff with ICT-related responsibilities (senior managers, staff/educational development units, CAL/IT support units and designated ‘LTDI contacts’). The information thus obtained was amplified by, and compared with, additional data gathered via telephone interviews and e-mail from 50 of LTDI’s ‘implementations’ and a sample of the subject specialists listed in the LTDI Information Directory, personal interviews with directors of parallel initiatives, and the analysis of documents such as Teaching Quality Assessment Reports and other formal publications.

The process of consulting an array of different sources in an assortment of ways quite deliberately involved the generation of a broad mix of different types of quantitative and qualitative data. It made available to us many sets and sorts of findings that could be brought together, cross-checked and triangulated, often many times over and in different combinations, in order to construct as accurate and reliable a snapshot as possible, within the prevailing time and resource constraints.

 

Gaining different perspectives on innovation

As well as maximising access to the available information, we also wanted to capitalise on the different perspectives associated with differently situated vantage points. The collection of data from various viewpoints was likely to allow the construction of a fuller, and more nuanced, picture of the impact made by LTDI or the TLTP products. On the other hand, practical decisions and compromises had to be made in a situation where it was clearly important to avoid wasting anyone’s time and effort – our own as well as that of respondents.

So we tried to think through carefully and identify what we most needed to be informed about, as well as who and which documentation would be best placed to supply what was required. Ideally we hoped to end up without either yawning gaps or a lot of duplication in the evidence available to us, but it is hard to predict in advance of data collection just how many different views from which sets of information sources ought to be brought to bear on particular aspects of the evaluation. Quite often we would carry out cross checks to make sure that we were covering the ground as best we could, and emphasising those aspects that looked like being most salient.

We took account of the particular vantage points occupied by different groups or sub-sets of groups in both the overall choice of research methods and the detailed design of research instruments. In particular we were alert to the variability in both the content and substance and the focus and grain of different perspectives, and thus in the knowledge held by, or contained in, different data sources.

 

Tailoring enquiry to match vantage points: content and knowledge base

Whether analysing documentary materials or making contact with groups and individuals, we were guided by the content and substance of what each source could reasonably be expected to yield, as can now be illustrated.

One technique employed productively both in interviews and questionnaires was to combine a core set of ‘common ground’ questions with another set of more tailored queries concerning matters on which particular sources would be in a good position to shed some light. In the LTDI evaluation, for instance, institutional perspectives were investigated by a series of interviews with groups who had different ranges of institutional responsibilities. The common core issues addressed in the interviews (about which all groups were likely to be concerned and to have something useful to say) were the extent and nature of their interactions with LTDI, the value and quality of the LTDI service, the future of funding council support for ICT in learning and teaching, and the preferred funding mechanisms. The additional tailored questions for the institutional managers concerned the potential for ICT in learning and teaching, together with their HEI’s infrastructure and planning processes. The supplementary queries for the support units asked about institutional provision in the area of either staff development or learning technology, and further discussion with those named as LTDI institutional contacts centred on their practical roles and activities.

A similar technique was used in the survey of academic staff across Scotland, which we carried out by means of a single questionnaire. This incorporated four nested sections, answered according to the level of a respondent’s engagement with LTDI and knowledge about ICT. The first section was for everyone, the second for those simply aware of LTDI, the third for those able to comment on experiences of LTDI, and the fourth afforded the opportunity to comment more extensively on ICT generally and LTDI in particular. Respondents simply worked through the survey, which moved from more closed questions at the beginning to more open ones at the end, and exited after completing the section relevant to their degree of knowledge.
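
To make this routing concrete, the sketch below (in Python, with illustrative section labels and engagement levels rather than the wording of the actual instrument) shows the nested-section logic: everyone answers the opening section, and each subsequent section is completed only by respondents whose engagement with LTDI reaches the corresponding level.

    # A minimal sketch of the nested-section routing in the LTDI staff
    # questionnaire; section labels and level thresholds are illustrative,
    # not those of the actual instrument.

    SECTIONS = [
        ("A: background and general views on ICT", 0),   # everyone
        ("B: awareness of LTDI", 1),                      # aware of LTDI
        ("C: experiences of LTDI services", 2),           # direct experience of LTDI
        ("D: extended comments on ICT and LTDI", 3),      # fullest engagement
    ]

    def sections_for(engagement_level):
        """Return the sections a respondent completes before exiting."""
        return [name for name, minimum in SECTIONS if engagement_level >= minimum]

    # Example: someone aware of LTDI but with no direct experience of its
    # services answers sections A and B only.
    print(sections_for(1))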

 

Tailoring enquiry to match vantage points: focus and grain

Obtaining views from differently located ‘fixed points’ allowed information to be gathered at contrasting levels of definition and description, combining wide-angled and relatively coarse-grained observations with more detailed ‘close-up shots’ illuminating narrower fields of vision in greater depth.

In the TLTP evaluation, for example, our surveys of the Higher Education sector were designed to capture information from three differently located levels within institutions in order to provide:

  • an institution-wide perspective;
  • a departmental/school perspective;
  • a course/module perspective.

Gathering information in this way enabled us to gain views on the use of TLTP products that varied in their ‘grain’ of definition. Informants with an institution-wide ICT role had a wide angle of vision of the impact of TLTP, and were able to give us a broad-brush account that placed TLTP usage within the wider context of a university's overall use of ICT for learning and teaching. Course organisers, by contrast, were in a position to give us a more narrowly-focused, fine-grained account of product use. (The data from the course/module level survey, however, was not always as fine-grained as we had anticipated, because course organisers responsible for several courses or modules did not always fill in a separate questionnaire for each one.)

In addition to surveying at course/module level, we also conducted case studies to provide a more detailed level and finer grain of description of at least a sample of TLTP product usage. Case studies allowed for a more ‘bottom-up’ approach to research which stayed close to the experience of the actual users of TLTP materials. Also, because of their more exploratory approach, the case studies encouraged the emergence of new insights, offering new ways of viewing and framing the use of TLTP materials and of recognising issues that otherwise might have escaped us. The consultation of published articles and reports on the incorporation of TLTP materials into courses added another perspective to the study, given that they, by and large, are written from the viewpoint of developers, enthusiastic adopters and those making innovative use of the products.

 

Securing a representative range of opinion within information sources

As well as the contrasts in viewpoints between different sources and groups of people, there were also likely to be some differences within any given category. Thus in order to create a balanced awareness of the situation as seen from various perspectives, it was important that our sampling of opinion ran across whatever range existed, and was not skewed in any particular direction. And if the internal differences seemed to be more systematic than random, we would wish to be able to attribute them to particular variables or characteristics.

The issue of representativeness arose particularly in connection with our intentions to make contact with, and obtain information from, a cross-section of academic teaching staff. We did not have a known survey population, either globally or in terms of all the different features (institutional, professional, and personal) that might well influence the responses made to our enquiries. With any kind of stratified or systematic sampling approach a non-starter, the best we could do was to throw the net wide, so as to reach as many people as possible, and to try to avoid the introduction of response bias. The extent of our success would have to be checked out subsequently by analysing the ‘background data’ supplied in the questionnaire returns and comparing it with whatever sector-wide information was publicly available.
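
One simple form that such a check can take is sketched below, assuming hypothetical discipline groupings and invented figures rather than our actual returns: the distribution of respondents across broad categories is compared against published sector-wide proportions, with a goodness-of-fit test giving a rough indication of whether the achieved sample is noticeably skewed.

    # A sketch of a representativeness check; the discipline groupings,
    # respondent counts and sector shares are illustrative only.
    from scipy.stats import chisquare

    observed = [310, 420, 252]            # questionnaire returns per discipline group
    sector_share = [0.30, 0.45, 0.25]     # proportions reported for the sector as a whole
    expected = [share * sum(observed) for share in sector_share]

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")  # a large p suggests no marked skew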

The practicalities of getting our questionnaires distributed were non-trivial, since we had first to construct, with the help of colleagues in other institutions, a database of all teaching units – in Scottish universities for the LTDI evaluation and the whole of the UK for TLTP. For LTDI we decided to sample the spread of experience, attitudes, and views in a balanced manner by asking Heads of Departments to distribute questionnaires to ‘matched pairs’ of staff (one noticeably involved in ICT and one with little or no involvement). By asking Heads to return a slip indicating how many questionnaires had been distributed (the requested number was 20% of the size of the department), we could estimate the return rate (60%) and coverage of the total population of teaching staff (10% of c. 8,000). As can be seen from the percentage figures, this constituted a more than respectable result for a mail survey. The representativeness of the resultant sample (N=982), in terms of personal, departmental and institutional characteristics, was duly calculated, and again was very much within acceptable tolerances.
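
The arithmetic behind these estimates is simple division, as the brief sketch below indicates; the 'distributed' total shown is a hypothetical stand-in for the figure aggregated from the Heads' return slips, so the printed values only approximate the percentages quoted above.

    # A sketch of the return-rate and coverage estimation; 'distributed' is a
    # hypothetical stand-in for the total reported on the Heads' slips.

    returned = 982          # completed questionnaires received
    distributed = 1_640     # hypothetical total handed out (from the return slips)
    population = 8_000      # approximate teaching staff across the Scottish sector

    return_rate = returned / distributed   # proportion of distributed questionnaires returned
    coverage = returned / population       # proportion of all teaching staff responding

    print(f"return rate ≈ {return_rate:.0%}, coverage ≈ {coverage:.0%}")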

In the process of constructing the database it became evident just how much diversity exists across the sector in the administration of teaching and curriculum organisation, which proved to be an important factor in trying to assess the representativeness of the data secured about the use of TLTP products. Since in the first instance we wanted to achieve maximum coverage, it was essential that our research instruments applied equally well in all institutional contexts; otherwise people simply would not bother to respond. This concern meant giving very precise attention to the wording of individual questions since, for instance, while many HEIs still organise their teaching on a departmental basis, teaching responsibilities elsewhere belong with schools or with teaching organisations that cover broad disciplinary groupings.

The representativeness of the data generated, in terms of comparability, was also an issue with TLTP. We had to be cautious lest estimates of the extent of TLTP usage be contaminated by the variations across institutions in the number of modules or courses taken by students in any given academic year. In one institution, for example, a first year student might take only three courses, whereas in another the same student would be taking quite a large number of individual modules. There was a danger, therefore, that in asking questions of an individual department or school about the number of its courses or modules employing TLTP materials, a measure was being taken not solely of TLTP usage but of the way in which teaching happened to be organised. The wording of individual questionnaire items was designed to ‘manage’ this problem, but we were appropriately cautious in our analysis and conservative in our interpretation of questions about the number of students enrolled on courses/modules which had taken up TLTP materials.
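
The comparability problem can be illustrated with invented figures (these are not survey results, and proportional reporting was not necessarily the adjustment used in our own analysis): two departments with the same underlying level of TLTP take-up return very different raw course counts, whereas expressing usage as a share of each department's provision puts them on a more comparable footing.

    # Illustrative figures only: raw counts of courses/modules using TLTP
    # materials reflect how teaching is packaged as much as actual uptake,
    # whereas the share of a department's provision is more comparable.

    departments = [
        # (label, courses/modules offered, number using TLTP materials)
        ("Dept X: a few large courses", 12, 3),
        ("Dept Y: many small modules", 48, 12),
    ]

    for label, offered, using_tltp in departments:
        share = using_tltp / offered
        print(f"{label}: raw count = {using_tltp}, share of provision = {share:.0%}")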

 

Coping with changes over time

The task of tailoring enquiry to match informants’ situations was complicated by the need to take into account not simply current vantage points, but also their ‘history’ of contact with LTDI or TLTP products. If it was only recent, this would restrict the scope of what they could usefully contribute, and they might also have difficulty in associating activities or materials with their actual originators. Difficulties in identifying products as coming from the TLTP stable, for instance, might occur if these products had been deployed and embedded in learning activities a while ago, or if they were videos rather than the more common computer-based materials. To help with product recognition, the last page of the departmental-level, course-level, HEI key informant and Further Education questionnaires identified the individual projects by number and listed them under their general discipline area.

Some people who would have been ideal informants had moved on to other positions or locales, and even if people had been in post for the duration (with the same institutional responsibilities as regards ICT, support for teaching, running particular courses, or associated with the same projects) it is undoubtedly the case that memories tend to fade. As well as the problem of observers remembering accurately what happened in the past, events and reactions to them can take on different colourations with the passage of time, and shifts occur in their perceived significance. Sometimes we simply had to bear in mind our reservations about the reliability of recall, and insofar as possible take account of the effects of hindsight and post-hoc rationalisation.

An additional consideration in relation to the retrospective nature of our research was the way that the objects of enquiry themselves, together with their surrounding contexts, had changed over time; what might be termed the ‘moving target’ element present in many evaluations. One general way of coping with this was to try, on a consistent basis, to register the stage in the life and development of LTDI project activities or TLTP materials to which any particular piece of information referred. With survey informants, for example, we directed them to pin their comments to a precise period in time, while in interviews we might seek clarification by using a prompt phrase such as, “and when would this be?” For TLTP, where the pace of technological change meant that products created in the earlier phases could have become less relevant and salient, or the size of the potential user base could have altered, it was especially important to take account of the changing contexts in which these initiatives were operating.

 

Setting developments in context

A further challenge that proved difficult to resolve was how to ensure that the objects of our evaluative enquiry were both situated within, and differentiated from, the wider contexts that shaped the use of learning technologies in higher education. Meeting it involved trying to draw boundaries so as to determine data collection priorities with regard to what was central, more peripheral, or outside the areas of both immediate and backdrop concern.

We grappled in both studies with setting the ‘figure’ of the innovation against its contextual ‘ground’, although the form taken by the task was somewhat different in the two evaluations. With LTDI the prime requirement was to separate out its impact on the sector from that of a number of other related ICT initiatives. In the case of TLTP, there was a need to delineate the wider context of ICT use in UK higher education because of the lack of any reliable up-to-date information.

Although LTDI was unique in having a Scottish sector-wide remit, it was not alone in endeavouring to alert staff within universities to the potential benefits of using learning technologies, to enhance their awareness of what was available to suit particular teaching and learning purposes, and to increase the amount of actual usage. Moreover, most changes in attitudes and practice are brought about in cumulative and multi-causal ways. While it might prove possible to tease out some direct consequences of LTDI’s work, we were doubtful about the viability of dissecting out cleanly effects on the sector that were solely attributable to LTDI – as distinct from those of other agencies and factors operating at the same time. But we thought that useful pointers concerning penetration, and thus potential impact levels, could be gained by examining the comparative profiles of the four main agencies who were concurrently involved in the dissemination of ICT related information. We therefore asked academic staff, for example, whether they had heard the name of, had direct contact with, or had literature from CTI (Computers in Teaching Initiative), ITTI (Information Technology Training Initiative), TLTP, and LTDI. The findings were then cross-tabulated to show the penetration patterns in terms of staff and institutional characteristics. In the final report we also drew attention to the more distal background influences such as JISC (Joint Information Systems Committee), TQA (Teaching Quality Assessment) and various Scottish Funding Council teaching and learning initiatives (e.g. Flexibility in Teaching and Learning Scheme, Regional Strategic Initiative, Staff Development Initiative, Uses of the MANs Initiative).
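
The kind of cross-tabulation involved is sketched below in Python with pandas, using mock responses and hypothetical variable names rather than the survey data: each respondent's reported contact with an initiative is tabulated against a staff or institutional characteristic to yield a penetration profile.

    # A sketch of the penetration cross-tabulation; the rows and the variable
    # names are mock data for illustration, not findings from the survey.
    import pandas as pd

    responses = pd.DataFrame({
        "institution_type": ["pre-92", "post-92", "pre-92", "post-92", "pre-92", "post-92"],
        "heard_of_LTDI":    [True,     False,     True,     True,      False,    True],
    })

    penetration = pd.crosstab(
        responses["institution_type"],
        responses["heard_of_LTDI"],
        normalize="index",   # proportions within each institution type
    )
    print(penetration)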

In order to understand the extent and pattern of TLTP product penetration, we needed to build up from our surveys a clear picture of the current amount and types of use of ICT for learning and teaching within UK higher education. Accordingly, measures were taken of departments' reported levels of both general ICT use and ICT use specifically for learning and teaching. We also needed to clarify the contextual resources and constraints that were influencing both ICT use and the progress of educational innovation. This we did at several operational levels, paying attention to the discipline/subject area (e.g. the perceived scope for and interest in using ICT and courseware within an individual discipline), course characteristics (e.g. level, size, and mode of delivery), departmental orientations (e.g. openness to innovation and access to ICT resources) and the characteristics of individual higher education institutions (e.g. size, age, ICT strategy and infrastructure in relation to teaching and learning).

 

Dealing with audience requirements

In common with any large-scale evaluation we faced the challenge of trying to satisfy the expectations and serve the requirements of different ‘audiences’. For while “nearly all of the literature on evaluation speaks of it as an attempt to serve a policy maker” (Cronbach, 1982, p.5) and the importance of remaining ‘client-centered’ is self-evident, the existence of other ‘stakeholders’ with a legitimate interest in an evaluation is also acknowledged. “Evaluations should serve the interests not only of the sponsors but also of the larger society” (House, 1993, p.128). Thus we hoped, in the terminology used by Shadish and Epstein (1987), to combine our work as ‘service evaluators’ with being ‘academic evaluators’. In particular we saw our ‘audiences’ as including not just the agencies who commissioned the studies, but also university teachers and other staff professionally concerned with the use of ICT, as well as fellow evaluators and other researchers interested in how ICT impacts upon everyday practice.

The contrasts in what audiences want as regards the level and nature of description and explanation are often most apparent at the reporting and presentation of findings stage, but they also have implications for data gathering and analysis. A recent paper detailed the way in which the design of the TLTP investigation was influenced by the need to keep in mind the needs and expectations of different audiences (Anderson et al., 1999). In the following paragraphs we will focus on some of the ways in which the write-up of both projects was tailored to meet the needs of particular audiences.

Our previous research experience confirmed the importance of considering carefully how to reach mutually satisfactory compromises in seeking to meet a range of audience requirements, both in terms of the issues addressed and the form that the account of the evaluation takes. Policy makers will anticipate the delivery of a crisply-written summary report that provides a clear overview of the impact of an innovation and is guided by an appreciation of the constraints within which policies have to be set and implemented. Researchers will usually base many of their judgements about an evaluation report on the degree to which observations and claims are well supported by a full, appropriate presentation of evidence. Practitioners will see value in a report which takes a close look at day-to-day problems and dilemmas and comes up with suggestions and insights that are of evident relevance to practice. It is very unlikely in an evaluation that you will be able to optimise on all fronts.

For both studies, despite the short time-frames, we made interim presentations to the funders prior to writing up the final report, partly to prepare the ground but also to get a better feel for what would best serve their requirements in terms of format, content and emphasis. As a result the balance of the LTDI report, for example, shifted quite considerably. Whilst we certainly addressed the primary aims set out in the tender document (assessing the extent to which LTDI met its objectives, the value for money given by LTDI, its impact on the Scottish HE sector and the options for future SHEFC funding of learning technology), the analysis of the options was a more detailed and substantial element in the final report than originally envisaged.

In the case of TLTP we were providing the materials to inform policy making but were not being asked to give any analysis of options or recommendations as to the decisions that could/should be made. Our main concern was to strike a sensible balance between on the one hand giving a clear, crisp account of the findings, which would be read by policy makers, go into the public domain, and inform practitioners, and on the other hand providing sufficient evidence, both methodologically and in terms of raw data, for people to have confidence in the validity and reliability of the account given. In the event the main report ran to under 30 pages, while Appendices A to H took up an accompanying 170 pages intended as a resource for researchers with an interest in this area. The reporting of the case studies that were conducted within the TLTP evaluation was explicitly designed to highlight issues and insights for practice in the development and everyday use of ICT in learning and teaching within higher education.

 

Conclusion     

This article has described the main challenges that we faced in two recent large-scale evaluation studies and the ways in which we sought to meet them. We felt that what we managed to achieve, within quite tight time and resource constraints, was due in no small part to the broad strategic approach of having a multi-disciplinary team and using a variety of methods for data collection and analysis.

As regards the value of team working we feel with Stufflebeam and Shinkfield that “Evaluation at its best involves much team work” (1984, p. 25) and “the single evaluation agent is hard pressed to coordinate and do the necessary fieldwork while also fulfilling all required specialties” (p. 29). We would want to emphasise, however, that it is not simply a matter of being better placed to get the job done in a technical sense. For what we also found constantly beneficial was the questioning of assumptions, and the cross-checking of each other’s efforts and arguments, not only at the stage of design, but also when carrying out the research, analysing and reporting the findings. We very much came to appreciate the process aspects of teamwork: the ways we could bounce ideas off one another, debate and hammer out the details of exactly how we were going to go about things, support one another through the more tedious bits, receive periodic injections of new insights and energies, and work together at making sense of the findings. When it came to developing and giving presentations, or structuring and writing reports, being able to share out the workload was always a great relief. But again it did seem important to have a team leader/co-ordinator prepared to take overall responsibility; someone who was able to keep the big picture in view and to chivvy and chide as necessary.

The multi-method approach worked well for us and it did indeed seem, as Bryman argues, that “by and large, the two research traditions can be viewed as contributing to the understanding of different aspects of the phenomena in question” (Bryman, 1988, p.170). As Patton says, “the skilled evaluator works to design a study that includes any and all data that will help shed light on the evaluation questions being investigated, given constraints of limited resources and time” (Patton, 1987, p.169). We would also endorse Patton’s view that “multiple methods and triangulation of observations can contribute to methodological rigor” (p.169), but add the important rider that this does not necessarily happen in a straightforwardly confirmatory manner. Indeed it is at times when different sets of ‘evidence’ seem to be pointing in different directions, particularly when the evaluator is having trouble reconciling them and a satisfactory methodological reason is unavailable, that advances in understanding and explanations are often most readily achieved.

As anticipated, our methodological tactics yielded a mix of perspectives about many key aspects to do with LTDI and TLTP. It was the case, however, that other aspects remained comparatively less well illuminated, either because we had considered them less important or because the requisite information proved elusive. Despite successful efforts to make imaginative use of a wide spectrum of documentary and personal sources, some information remained beyond our reach. Partly this was due to the customary time and resource pressures; sometimes it was simply a matter of things having gone unrecorded initially and, as time went on, becoming less amenable to being satisfactorily reconstructed. For the most part, however, we ended up in the satisfactory position of having the informational resources available to discuss effectively and convincingly what we had anticipated focussing on in the evaluation report, as well as aspects that surfaced in the course of data accumulation and analysis. Perhaps inevitably there were some lacunae, but luckily the missing evidence was of the ‘nice to have’ kind, rather than anything that vitally affected the roundedness of the accounts given or the robustness of the assessments made.

Most of the challenges that have been discussed are well recognised in the research methods literature as being key concerns that need to be addressed by any large evaluation project. However, while these evaluative concerns may be ‘hardy perennials’, the precise form that they will take may vary considerably from evaluation study to evaluation study. While general guidelines on sound evaluative practice may assist to a considerable degree in guiding attempts to tackle these concerns, every evaluative project needs to respond to these challenges in a principled way that is well tailored to its individual set of purposes, object of study and audience(s). This article has provided an outline description of the strategies that we adopted to meet the specific form that these challenges took within our own projects; but we recognise that the evaluation of a very different type of ICT initiative might have required a somewhat different plan of action. Indeed, it could be claimed that a strategic alertness and creativity of response to the methodological difficulties and dilemmas associated with the particular domain that is being examined is a central element of the ‘craft’ of evaluation.

 

References

  • Anderson, C., Day, K., Haywood, J., Land, R. & Macleod, H. (1999). Going with the grain? Issues in the evaluation of educational innovations. Paper presented at the 8th European Conference for Research on Learning and Instruction, 24-28 August, Göteborg, Sweden.
  • Bryman, A. (1988). Quantity and Quality in Social Research, London: Unwin Hyman.
  • Clarke, A. & Dawson, R. (1999). Evaluation Research: an introduction to principles, methods and practice, London: Sage.
  • Cook, T. D. & Reichardt, C. S. (1979). Qualitative and Quantitative Methods in Evaluation Research, London: Sage.
  • Cronbach, L. J. (1982). Designing Evaluations of Educational and Social Programs, London: Jossey-Bass.
  • Gage, N. (1989). The paradigm wars and their aftermath. Educational Researcher, 18, 4-10.
  • Guba, E. G. & Lincoln, Y. S. (1983). Effective Evaluation, San Francisco: Jossey-Bass.
  • Haywood, J., Anderson, C., Coyle, H., Day, K., Haywood, D. & Macleod, H. (2000). Learning Technology in Scottish Higher Education – a survey of the views of senior managers, academic staff and ‘experts’. ALT-J, 8 (2), 5-17.
  • House, E. R. (1993). Professional Evaluation: Social Impact and Political Consequences, Newbury Park: Sage.
  • Patton, M. Q. (1987). How to Use Qualitative Methods in Evaluation, London: Sage.
  • Patton, M. Q. (1988). Paradigms and pragmatism. In Fetterman, D. M. (Ed.) Qualitative Approaches to Evaluation in Education: The Silent Scientific Revolution. New York: Praeger, 116-137.
  • Patton, M. Q. (1990). Qualitative Evaluation and Research Methods, 2nd Ed., Newbury Park: Sage.
  • Rossi, P. H. & Freeman, H. F. (1993). Evaluation: A Systematic Approach, 5th Ed., London: Sage.
  • Scriven, M. (1983). Evaluation Ideologies. In Madaus, G. F., Scriven, M., Stufflebeam, D. L. (Eds.) Evaluation Models: Viewpoints in Educational and Human Services Evaluation, Lancaster: Kluwer-Nijhoff, 229-260.
  • Shadish, W. R., Cook, T. D. & Leviton, L. C. (1991). Foundations of Program Evaluation: Theories of Practice, London: Sage.
  • Shadish, W. R. & Epstein, L. (1987). Patterns of program evaluation practice among members of the Evaluation Research Society and Evaluation Network. Evaluation Review, 11, 555-90.
  • Shaw, I. F. (1999). Qualitative Evaluation, London: Sage.
  • Silverman, D. (1997). The logics of qualitative research. In Miller, G. and Dingwall, R. (Eds.) Context and Method in Qualitative Research, London: Sage, 12-25.
  • Stufflebeam, D. L. & Shinkfield, A. J. (1984). Systematic Evaluation, Lancaster: Kluwer-Nijhoff.
  • Trend, M. G. (1978). On the Reconciliation of Qualitative and Quantitative Analyses: A Case Study. Human Organization, 37, 345-54. Reprinted in Cook, T. D. & Reichardt, C. S. (Eds.) (1979) Qualitative and Quantitative Methods in Evaluation Research, London: Sage, 68-86.
  • Yin, R. (1989). Case Study Research: Design and Methods, Revised Ed., London: Sage.
