Improving use of learning technologies in higher education through participant oriented evaluations
David Dwayne Williams
Evaluating learning technologies used in higher education
A review of higher education literature reveals a growing interest in instruction using learning technologies (Bullock and Schomberg, 2000; Spooner, 1999; Pea, Tinker, Linn, Means, Bransford, Roschelle, Hsi, Brophy, and Songer, 1999). Learning technologies include a wide variety of Internet-based and other communications tools, such as “N-way video streaming, digital library and museum database management, simulations, teleconferencing, telephony, and wireless communications” (Harley, 2001, p. 10), which educators hope will enhance student learning and reduce costs (Guskin, 1997).
How best to evaluate the worth and merit of learning technologies in higher education programs is a continuing concern (Kenny, MacDonald and Desjardins, 1997; Jackson, 1990). Oliver (2000) and others in the same issue of Educational Technology and Society provide an excellent review of the most pertinent issues and establish a basis for this article and issue of the journal. Oliver points out the need for participant-oriented evaluation of learning technologies, and this article expands upon his work by including an example of practitioners doing just that.
Ehrmann (1995) of the Flashlight Project has argued for several years that asking certain questions about learning technologies is a waste of time, while asking others can be very helpful to some participants. For example, he claims that because universities, and the classes within them that might employ learning technologies, vary tremendously in their purposes and procedures, evaluations cannot answer the question “‘How well is this technology-based approach working, relative to the norm?’ since there isn't a norm” (p. 22). Instead of asking these and other typical but essentially unanswerable questions, such as those seeking universal cost comparisons, Ehrmann suggests we ask, “Has [the] educational potential [of courses and programs using learning technologies] been realized in improved learning outcomes?” He further asserts, “There is no substitute for each faculty member asking that question about his or her own students” (p. 25).
There are ongoing efforts to accumulate lessons learned from faculty studying their own practices using various learning technologies (e.g., Spencer and Hiltz, 2001; The Sloan Asynchronous Learning Networks Consortium at http://www.aln.org/alnweb/aln.htm; the American Evaluation Association’s Topical Interest Group on Distance and Other Educational Technologies with studies at http://www.courses.dsu.edu/TIG/default.htm), as well as efforts to encourage higher education institutions to improve their own evaluation efforts (e.g., the American Association of Higher Education Teaching, Learning, and Technology roundtable group, http://www.tltgroup.org/ and a growing number of affiliates such as the Joint Information Systems Committee’s exploration of roundtables in the United Kingdom http://www.roundtable.ac.uk).
Thus, an important lesson from the literature seems to be that higher education uses of learning technologies in particular educational programs and courses should be evaluated based on the specific needs and questions of local participants. Otherwise, participants may be resistant to, or unable to use, the results to improve program performance. This article addresses that lesson by reviewing a growing literature on participant-oriented evaluation and illustrating a participant-oriented evaluation of eleven web-based courses that employed various learning technologies.
Evolving evaluation theory
Over the last three decades, several new approaches to the theory and practice of educational evaluation have emerged to address these concerns. In virtually every case, the changes have encouraged greater attention to the interests and values of participants who hold stakes in the things being evaluated. The author has found that attending to participant or stakeholder interests can resolve many of the other problems identified earlier.
One of the first participant-oriented approaches was Stake’s Responsive Evaluation. Robert Stake (1975, chapter 2 and 1984) argued against automatic use of social science to test achievement of objectives. He noted that often evaluation clients (or stakeholders) need help understanding their programs and how to improve them more than they need to know if their program is better than all possible alternatives or if it has achieved official objectives. He urged evaluators to be responsive to the shifting concerns and questions of clients regarding their actual programs, subjugating method to clients’ evolving agendas.
Guba and Lincoln (1981) expanded upon Stake’s ideas as they created Effective Evaluation and further refined their approach in Fourth Generation Evaluation (Guba and Lincoln, 1989). They claimed that through a rigorous process called the “hermeneutic/dialectic,” (p. 72) evaluators could assist participants with conflicting value perspectives and questions to identify the most important foci for an evaluation so all would be motivated to act on the results.
Over nearly 30 years Patton (1997, 2001) has drawn upon theories of change, economics, diffusion of innovations, psychological ownership, organizations, utilization, reinforcement, systems, contingency and others in developing his Utilization Focused Evaluation approach. He focuses on identifying and working with key participants in organizations who can develop a vision for the value of gathering information and using it to improve the functions of the organization in ways responsive to each situation.
Others have explored philosophical and sociological foundations for involving participants in defining their particular interests in evaluations of educational programs they care about (House and Howe, 1999; Ryan and DeStefano, 2000). Fetterman (1996) took these notions further by proposing Empowerment Evaluation to coach stakeholders in building capacity to become their own evaluators rather than depending upon external experts. As Fetterman (1996, p. 5) states, “Empowerment evaluation has an unambiguous value orientation—it is designed to help people help themselves and improve their programs using a form of self-evaluation and reflection.”
Finally, Cousins and his associates have summarized many of these ideas into Participatory Evaluation (Cousins and Whitmore, 1998) and Collaborative Evaluation (Cousins, Donohue, and Bloom, 1996). These approaches celebrate the involvement of participating stakeholders in evaluating programs and objects they value. They assume that if the people most involved in creating, obtaining, using, and changing educational objects and programs seek feedback on the quality and performance of these evaluands they will address the most important issues and will want to use what they learn to do even better work.
Although these approaches to evaluation differ in many ways, they all emphasize that evaluations are done for particular participants whose values vary. Thus, on a criterion of fairness, participant-oriented evaluations try to take discrepant value positions seriously and systematically. Additionally, these approaches assume that attention must be paid to participants’ values if participants are to have sufficient interest to use the evaluation results. Indeed, over time, evaluation theories have become increasingly attentive to the needs and interests of wider and more diverse groups of people associated with the things being evaluated.
A natural result of this focus on stakeholders is attention to their definitions of the evaluand, use of their values in setting the criteria for judging the evaluand, and the asking of their questions. These definitions, values, and questions are exactly what higher education personnel need to refine and choose among educational programs in general and those that employ learning technologies in particular. Taken together, these theories of evaluation highlight the importance of using the following key evaluation elements or activities to enhance evaluation use by higher educators as they judge programs using learning technologies:
An example from a university study illustrates one way in which these key evaluation elements can be employed to enhance programs that use learning technologies. During one semester, the author directed a major evaluation of eleven Internet-based courses for over 450 students at Brigham Young University. The evaluation has continued, involving other courses and students in subsequent semesters. This example is described below in terms of the context of need for the evaluation and how the key evaluation elements were used to carry out the study.
Context of need
Like many other universities, Brigham Young University recognized important changes in the higher education landscape over the last few years. Because of increased global demand for quality adult education, increased competition from alternative providers, fast-paced developments in learning technologies (particularly the Internet) that made eLearning possible in more and more formats, and the need to keep costs minimal, the administration of Brigham Young University initiated a series of efforts to develop and offer web-based instruction. With over 45,000 students per year off-campus taking “Independent Study” courses by correspondence, as well as 30,000+ students on campus, the administration decided to target several “bottleneck” courses required of most students, which had traditionally been taught on campus in large sections.
The administration organized a cross-college unit for instructional design and hired several professional and about a hundred student designers, artists, and computer programmers to work with faculty to convert Independent Study and regular campus courses into Semester Online courses. These courses were designed to use multimedia on compact discs, Internet discussion boards and grade rolls, electronic mail, textbooks, and occasional face-to-face sessions in various configurations to provide students with multiple alternative formats for meeting required General Education and other high volume courses.
Just weeks after initiating this effort, the new administrator of the instructional design unit contacted the author to request evaluation assistance on the first eleven Semester Online courses his staff had hastily organized. The courses were to begin instruction in less than a week! There was a small budget for student assistants if needed. Whatever we could learn during this first semester would be applied to these eleven and more than 100 other courses that would be designed, refined, made over, or otherwise created in coming semesters.
Key evaluation elements
The key evaluation elements (who cares, about what, getting them involved) that come from participant-oriented evaluation theories and that guided this evaluation effort are described briefly here.
Who are the audiences/stakeholders/information users who care about the evaluand and its evaluation? Several parties with interests in the results were identified in the early stages of the evaluation while others emerged throughout the study:
What do these stakeholders care about?
What do they want to evaluate (the evaluand), what values or criteria should be used to evaluate it, and what questions do they want answered so they will use results to help them judge and improve learning technologies associated with their programs?
What was to be evaluated? The eleven courses were the evaluands and the following dimensions of the courses became the focus as the stakeholders were consulted:
What criteria did the stakeholders have for judging the evaluand? Based on their questions and issues discussed earlier, the following criteria for success of the courses were derived and synthesized across all the stakeholders by the evaluation team through a series of focus group discussions, observations, and personal interviews:
Based on the criteria and related issues raised by the stakeholders, what questions should this evaluation address? Although many others are possible, the study was organized to begin answering the following key questions which should help answer the overall question, "How well do these courses meet the stakeholders' criteria?"
Somewhat to our surprise, costs did not turn out to be the key issue we thought they might be. However, we recognize that cost is a key criterion for many administrators, and ongoing efforts are being made to clarify the costs involved and to estimate the benefits relative to those costs (Campbell and Williams, 2001).
Getting stakeholders involved
With the stakeholders and their values identified, the next key evaluation element in a participant-oriented evaluation is to involve them in collecting, interpreting, and using the data addressing the evaluation questions. This can be done in many different ways, depending on the stakeholders and their needs.
In this example, the stakeholders were involved from the beginning as they identified their stakes and definitions of what aspects of the courses, the learning technologies, and this new university program they cared about. They were also involved in helping refine questionnaires and interview protocols, providing information about their own participation in and assessments of the courses by engaging in various data-gathering activities (focus groups, interviews, questionnaire responses, email discussions, and many more), allowing their course materials and performance to be examined, responding to interpretations of the data collected from them and from others, and deciding what to do with the evaluation data as it was presented.
Some of the participants were more involved than others, of course. The Director of the Instructional Design unit and several of his staff members participated most actively in some of the data gathering, analysis and interpretation (such as reviewing critiques of the eleven courses by students who tried out all the online options and discovered technical problems, exploring reports from students who dropped out of courses, refining the multimedia offerings, and so on). Likewise, the faculty were involved in adjusting their roles, answering student questions, exploring alternative solutions to problems with the Instructional Design unit representatives, etc. But in spite of these differences, the evaluators searched for ways to involve representatives of all the participants in as many evaluative activities as possible.
The high level of participation involved in this evaluation led quite naturally to the building of evaluation into these courses and into the process of creating future courses by the Instructional Design unit participants. Following many discussions with the Director, a full-time evaluator was hired to carry on future evaluations, based in great measure on this first semester’s experience. Instructional designers began working with this internal evaluator to build more evaluative activities into the design of instruction, using learning technologies for these and other courses.
Faculty who had participated actively realized the value of obtaining feedback on their use of technology in all their courses. They looked for ways to capitalize on electronic mail, discussion boards, and online testing to gather feedback from their students to improve their courses formatively. Students who participated began to realize that their voices could be heard in ongoing systematic evaluations, and that faculty would respond more readily than they could to regular end-of-semester course/instructor evaluation forms, which were scored and sent to faculty weeks after the courses ended.
Unlike the majority of courses at Brigham Young University, the Semester Online courses were thoroughly evaluated throughout the semester and ongoing evaluations have taken place for these and other courses that have been developed subsequently. The evaluation unit is accumulating a good database for making cross-course, system-wide estimates of the value of courses that employ a variety of learning technologies.
The results from this example, set in the context of the literatures on learning technologies in higher education and the evolving field of program evaluation, suggest a few implications for future researchers and higher educators to consider:
Copyright by the International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the authors of the articles you wish to copy or firstname.lastname@example.org.