Educational Technology & Society 5 (3) 2002
ISSN 1436-4522

Improving use of learning technologies in higher education through participant oriented evaluations

David Dwayne Williams
Associate Professor of Instructional Psychology and Technology
Brigham Young University, USA
Tel: +1 801-378-2765
Fax: +1 801-378-8672
David_Williams@byu.edu


ABSTRACT

Use of learning technologies is growing in higher education.  Simultaneously, educational evaluation approaches have evolved to become increasingly useful for higher education decision-making.  Elements of evaluation are emerging that enhance its use for judging higher education programs that include learning technologies.  This article reviews several key elements by examining an example of a major evaluation of eleven Internet-based courses.  Implications are proposed for expanding evaluation of programs that employ learning technologies in higher education.

Keywords: Evaluation, Higher education, Learning technologies


Evaluating learning technologies used in higher education

A review of higher education literature reveals a growing interest in instruction using learning technologies (Bullock and Schomberg, 2000; Spooner, 1999; Pea, Tinker, Linn, Means, Bransford, Roschelle, Hsi, Brophy, and Songer, 1999).  Learning technologies include a wide variety of Internet-based and other communications tools, such as “N-way video streaming, digital library and museum database management, simulations, teleconferencing, telephony, and wireless communications” (Harley, 2001, p. 10), which educators hope will enhance student learning and reduce costs (Guskin, 1997).

How best to evaluate the worth and merit of learning technologies in higher education programs is a continuing concern (Kenny, MacDonald and Desjardins, 1997; Jackson, 1990).  Oliver (2000) and others in the same issue of Educational Technology and Society provide an excellent review of the most pertinent issues and establish a basis for the current article and this issue of the journal.  Oliver points out the need for participant-oriented evaluation of learning technologies, and this article expands upon his work with an example of practitioners doing just that.

Ehrmann (1995) of the Flashlight Project has been arguing for several years that asking certain questions about learning technologies is a waste of time, while asking others can be very helpful to some participants.  For example, he claims that because the universities and the faculty classes that might employ learning technologies vary tremendously in their purposes and procedures, evaluations cannot answer the question “‘How well is this technology-based approach working, relative to the norm?’ since there isn't a norm” (p. 22).  Instead of asking these and other typical but ultimately unanswerable questions, such as those seeking universal cost comparisons, Ehrmann suggests we ask, “Has [the] educational potential [of courses and programs using learning technologies] been realized in improved learning outcomes?”  He further asserts, “There is no substitute for each faculty member asking that question about his or her own students” (p. 25).

There are ongoing efforts to accumulate lessons learned from faculty studying their own practices using various learning technologies (e.g., Spencer and Hiltz, 2001; The Sloan Asynchronous Learning Networks Consortium at http://www.aln.org/alnweb/aln.htm; the American Evaluation Association’s Topical Interest Group on Distance and Other Educational Technologies with studies at http://www.courses.dsu.edu/TIG/default.htm), as well as efforts to encourage higher education institutions to improve their own evaluation efforts (e.g., the American Association of Higher Education Teaching, Learning, and Technology roundtable group, http://www.tltgroup.org/ and a growing number of affiliates such as the Joint Information Systems Committee’s exploration of roundtables in the United Kingdom http://www.roundtable.ac.uk).

Thus, an important lesson from the literature seems to be that higher education uses of learning technologies in particular educational programs and courses should be evaluated based on the specific needs and questions of local participants.  Otherwise, those participants may be unwilling or unable to use the results to improve program performance.  This article addresses that lesson by reviewing a growing literature on participant-oriented evaluation and illustrating a participant-oriented evaluation of eleven web-based courses that employed various learning technologies.

 

Evolving evaluation theory

Over the last three decades, several new approaches to the theory and practice of educational evaluation have emerged to address these concerns.  In virtually every case, the changes have encouraged greater attention to the values of the participants who have interests or stakes in the things being evaluated.  The author has found that attending to participant or stakeholder interests can resolve many of the other problems identified earlier.

One of the first participant-oriented approaches was Stake’s Responsive Evaluation.  Robert Stake (1975, chapter 2 and 1984) argued against automatic use of social science to test achievement of objectives.  He noted that often evaluation clients (or stakeholders) need help understanding their programs and how to improve them more than they need to know if their program is better than all possible alternatives or if it has achieved official objectives.  He urged evaluators to be responsive to the shifting concerns and questions of clients regarding their actual programs, subjugating method to clients’ evolving agendas.

Guba and Lincoln (1981) expanded upon Stake’s ideas as they created Effective Evaluation and further refined their approach in Fourth Generation Evaluation (Guba and Lincoln, 1989).  They claimed that through a rigorous process called the “hermeneutic/dialectic” (p. 72), evaluators could assist participants with conflicting value perspectives and questions to identify the most important foci for an evaluation so that all would be motivated to act on the results.

Over nearly 30 years Patton (1997, 2001) has drawn upon theories of change, economics, diffusion of innovations, psychological ownership, organizations, utilization, reinforcement, systems, contingency and others in developing his Utilization Focused Evaluation approach.  He focuses on identifying and working with key participants in organizations who can develop a vision for the value of gathering information and using it to improve the functions of the organization in ways responsive to each situation.

Others have explored philosophical and sociological foundations for involving participants in defining their particular interests in evaluations of educational programs they care about (House and Howe, 1999; Ryan and DeStefano, 2000).  Fetterman (1996) took these notions further by proposing Empowerment Evaluation to coach stakeholders in building capacity to become their own evaluators rather than depending upon external experts.  As Fetterman (1996, p. 5) states, “Empowerment evaluation has an unambiguous value orientation—it is designed to help people help themselves and improve their programs using a form of self-evaluation and reflection.”

Finally, Cousins and his associates have summarized many of these ideas into Participatory Evaluation (Cousins and Whitmore, 1998) and Collaborative Evaluation (Cousins, Donohue, and Bloom, 1996).  These approaches celebrate the involvement of participating stakeholders in evaluating programs and objects they value.  They assume that if the people most involved in creating, obtaining, using, and changing educational objects and programs seek feedback on the quality and performance of these evaluands they will address the most important issues and will want to use what they learn to do even better work.

Although these approaches to evaluation differ in many ways, they all emphasize that evaluations are done for particular participants whose values vary.  Thus, on a criterion of fairness, participant-oriented evaluations try to take discrepant value positions seriously and systematically.  Additionally, these approaches assume that attention must be paid to participants’ values if they are to have sufficient interest to use the evaluation results.  Indeed, over time, evaluation theories have become increasingly attentive to the needs and interests of wider and more diverse groups of people associated with the things being evaluated.

A natural result of this focus on stakeholders is attention to their definitions of the evaluand, use of their values in setting the criteria for judging the evaluand, and the asking of their questions.  These definitions, values, and questions are exactly what higher education personnel need to refine and choose among educational programs in general and those that employ learning technologies in particular. Taken together, these theories of evaluation highlight the importance of using the following key evaluation elements or activities to enhance evaluation use by higher educators as they judge programs using learning technologies:

  • Who cares?  Clearly identifying the participants or stakeholders who care about the program to be evaluated and its associated learning technologies.
  • About what?  Working with stakeholders to clarify their needs, values, definitions, and questions so they want to use results from associated evaluations to help them judge and improve learning technologies associated with their programs.
  • Getting involved.  Involving the stakeholders in designing evaluation studies, gathering and analyzing data, interpreting results, applying selected criteria/values to clarify implications for their own and their organization's practice, and using evaluation results as part of their organizational decision-making.

 

An example

An example from a university study illustrates one way in which these key evaluation elements can be employed to enhance programs that use learning technologies.  During one semester, the author directed a major evaluation of eleven Internet-based courses for over 450 students at Brigham Young University.  The evaluation has continued, involving other courses and students in subsequent semesters.  This example is described below in terms of the context of need for the evaluation and how the key evaluation elements were used to carry out the study.

 

Context of need

Like many other universities, Brigham Young University has recognized important changes in the higher education landscape over the last few years.  Because of increased global demand for quality adult education, increased competition from alternative providers, fast-paced developments in learning technologies (particularly the Internet) that made eLearning possible in more and more formats, and the need to keep costs minimal, the administration of Brigham Young University initiated a series of efforts to develop and offer web-based instruction.  Noting that over 45,000 off-campus students per year take “Independent Study” courses by correspondence, in addition to more than 30,000 on-campus students, the administration decided to target several “bottleneck” courses required of most students, which have traditionally been taught on campus in large sections.

The administration organized a cross-college unit for instructional design and hired several professional and about a hundred student designers, artists, and computer programmers to work with faculty to convert Independent Study and regular campus courses into Semester Online courses.  These courses were designed to use multimedia on compact discs, Internet discussion boards and grade rolls, electronic mail, textbooks, and occasional face-to-face sessions in various configurations to provide students with multiple alternative formats for completing required General Education and other high-volume courses.

Just weeks after initiating this effort, the new administrator of the instructional design unit contacted the author to request evaluation assistance with the first eleven Semester Online courses his staff had hastily organized.  The courses were to begin instruction in less than a week!  There was a small budget for student assistants if needed.  Whatever we could learn during this first semester would be applied to these eleven courses and to the more than 100 other courses that would be designed, refined, made over, or otherwise created in coming semesters.

 

Key evaluation elements

The key evaluation elements (who cares, about what, getting them involved) that come from participant-oriented evaluation theories and that guided this evaluation effort are described briefly here.

 

Who cares?

Who are the audiences/stakeholders/information users who care about the evaluand and its evaluation?  Several parties with interests in the results were identified in the early stages of the evaluation while others emerged throughout the study:

  • Central administrators were interested in the quality of the Semester Online courses because they hoped to expand the number of such courses if they were successful.  They were also interested in taking advantage of the Internet for instructional purposes.  Many of them were interviewed regarding their expectations for online courses in general (not just the eleven being evaluated in this study).
  • The Director of the Instructional Design unit requested the evaluation and immediately identified several issues of concern, hoping that the evaluation would help establish his organization as a useful entity on campus.
  • Faculty associated with the eleven Semester Online courses were considered to be principal stakeholders.  They not only taught the courses but, in many cases, helped develop them.  Interviews were held with the faculty early in the evaluation to clarify their questions and concerns.
  • Students, particularly those from the eleven courses and others who might take similar courses, were identified as important stakeholders.  The current students were invited to clarify their expectations of their online courses and to express their opinions about their experience through focus groups and questionnaires.
  • Department chairs and service departments from across campus were important stakeholders who were not approached regarding their questions and issues due to the haste with which this evaluation was initiated.

 

What do these stakeholders care about? 

What do they want to evaluate (the evaluand), what values or criteria should be used to evaluate it, and what questions do they want answered so they will use results to help them judge and improve learning technologies associated with their programs?

 

Evaluand

What was to be evaluated?  The eleven courses were the evaluands and the following dimensions of the courses became the focus as the stakeholders were consulted:

  • How well the Semester Online courses were implemented
  • How well students participated, learned and felt about the experience
  • How the faculty felt about teaching this way
  • What should be done to enhance such courses in the future

 

Criteria

What criteria did the stakeholders have for judging the evaluand?  Based on their questions and issues discussed earlier, the following criteria for success of the courses were derived and synthesized across all the stakeholders by the evaluation team through a series of focus group discussions, observations, and personal interviews:

  • The courses should all function well technically and any technical problems should be resolved quickly and satisfactorily.
  • Instructionally, the courses should all be designed and function in ways that make sense to the learners and instructors and help them with their instructional objectives, including the use of tests and other assessments, instructional materials and experiences, multimedia, collaboration, communications with the instructor and other students, etc.
  • A high proportion of the students should complete all courses in the semester timeframe, comparable to traditional classes.
  • The workload, quality, rigor, administrative support, and expectations for both students and instructors should be reasonable for all courses and comparable to traditional classes.
  • Students should achieve most if not all the courses' objectives.
  • Students should have positive attitudes regarding the topics they study in the courses and regarding taking similar courses online in the future.
  • Students who might not be served otherwise should be served by these courses.
  • Instructors should end up with positive attitudes regarding teaching similar courses in the future.
  • Multimedia used and associated benefits should be worth the costs of adding them.
  • Courses should provide several learning opportunities such as individualized pacing, automatic feedback, convenience, more faculty focus on individuals, increased interaction between faculty and instructional support teams, greater emphasis on instructional design by faculty, and access to web resources.

 

Evaluation Questions

Based on the criteria and related issues raised by the stakeholders, what questions should this evaluation address?  Although many others are possible, the study was organized to begin answering the following key questions which should help answer the overall question, "How well do these courses meet the stakeholders' criteria?"

 

  1. Technically, do the courses all function smoothly and are problems resolved quickly and satisfactorily?
  2. Instructionally, are the courses designed and do they function in ways that make sense to the learners and instructors to help them with their instructional objectives, including the use of tests and other assessments, instructional materials and experiences, multimedia, collaboration, communications with the instructor and other students, etc.; and are associated benefits judged to be worth the costs of adding them?
  3. Do these courses provide learning opportunities such as individualized pacing, automatic feedback, convenience, more faculty focus on individuals, increased interaction between faculty and instructional support teams, greater emphasis on instructional design by faculty, and access to web resources?
  4. Are the workload (for students and faculty), quality, rigor, administrative support, and expectations for both students and instructors reasonable for all courses and comparable to traditional classes?
  5. Do a high proportion of the students complete the courses within the semester timeline, comparable to traditional classes, and achieve most if not all of the courses' objectives?
  6. Are students who might not be served otherwise served by these courses?  What are their profiles?
  7. Do students have positive attitudes regarding the topics they study in the courses and regarding taking similar courses online in the future?
  8. Do instructors have positive attitudes regarding teaching similar courses in the future?

Somewhat to our surprise, costs did not end up being the key issue we thought they might be.  However, we recognize that cost is a key criterion for many administrators, and ongoing efforts are being made to clarify the costs involved and to estimate the benefits relative to those costs (Campbell and Williams, 2001).

 

Getting stakeholders involved

With the stakeholders and their values identified, the next key evaluation element in a participant-oriented evaluation is to involve them in collecting, interpreting, and using the data addressing the evaluation questions.  This can be done in many different ways, depending on the stakeholders and their needs. 

In this example, the stakeholders were involved from the beginning as they identified their stakes and definitions of what aspects of the courses, the learning technologies, and this new university program they cared about.  They were also involved in helping refine questionnaires and interview protocols, providing information about their own participation in and assessments of the courses by engaging in various data-gathering activities (focus groups, interviews, questionnaire responses, email discussions, and many more), allowing their course materials and performance to be examined, responding to interpretations of the data collected from them and from others, and deciding what to do with the evaluation data as it was presented.

Some of the participants were more involved than others, of course.  The Director of the Instructional Design unit and several of his staff members participated most actively in some of the data gathering, analysis and interpretation (such as reviewing critiques of the eleven courses by students who tried out all the online options and discovered technical problems, exploring reports from students who dropped out of courses, refining the multimedia offerings, and so on).  Likewise, the faculty were involved in adjusting their roles, answering student questions, exploring alternative solutions to problems with the Instructional Design unit representatives, etc.  But in spite of these differences, the evaluators searched for ways to involve representatives of all the participants in as many evaluative activities as possible.

 

Implications

The high level of participation involved in this evaluation led quite naturally to the building of evaluation into these courses and into the Instructional Design unit’s process of creating future courses.  Following many discussions with the Director, a full-time evaluator was hired to carry on future evaluations, based in great measure on this first semester’s experience.  Instructional designers began working with this internal evaluator to build more evaluative activities into the design of instruction using learning technologies for these and other courses.

Faculty who had participated actively realized the value of obtaining feedback on their use of technology in all their courses.  They looked for ways to capitalize on electronic mail, discussion boards, and online testing to gather feedback from their students and formatively improve their courses.  Students who participated began to realize that their voices could be heard in ongoing systematic evaluations, and that faculty would respond more readily than they could to regular end-of-semester course/instructor evaluation forms, which were scored and returned to faculty weeks after the courses ended.

Unlike the majority of courses at Brigham Young University, the Semester Online courses were thoroughly evaluated throughout the semester and ongoing evaluations have taken place for these and other courses that have been developed subsequently.  The evaluation unit is accumulating a good database for making cross-course, system-wide estimates of the value of courses that employ a variety of learning technologies.
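The article does not describe how such a database is structured or queried.  As a purely hypothetical sketch (the course labels, record fields, and five-point satisfaction scale below are illustrative assumptions, not data from the study), cross-course summaries of the kind mentioned, such as completion rates and mean student satisfaction per course, might be computed from per-student records along these lines:

  from collections import defaultdict

  # Hypothetical per-student evaluation records: (course_id, completed, satisfaction on a 1-5 scale).
  records = [
      ("Course A", True, 4),
      ("Course A", False, 2),
      ("Course B", True, 5),
      ("Course B", True, 3),
  ]

  # Group the records by course.
  by_course = defaultdict(list)
  for course, completed, satisfaction in records:
      by_course[course].append((completed, satisfaction))

  # Cross-course summaries: completion rate and mean satisfaction for each course.
  for course, rows in sorted(by_course.items()):
      completion_rate = sum(1 for done, _ in rows if done) / len(rows)
      mean_satisfaction = sum(score for _, score in rows) / len(rows)
      print(f"{course}: completion {completion_rate:.0%}, mean satisfaction {mean_satisfaction:.1f}")

In practice, such a database would also need to record the contextual information emphasized throughout this article (the stakeholders, their criteria, and the course formats) so that cross-course comparisons are not stripped of the local circumstances that give them meaning.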

These results, set in the context of the literatures on learning technologies in higher education and the evolving field of program evaluation, suggest a few implications for future researchers and higher educators to consider:

  1. Learning technologies mean different things to different people.  They should not be evaluated without taking their context of use and users into account.
  2. Instructional programs employing learning technologies also involve many other elements of instruction that should be included in any evaluation of the particular learning technologies.
  3. Using the key evaluation elements discussed in this article (identifying and involving the participants in clarifying what to evaluate, what their values are relative to the evaluand, and in gathering and interpreting data) has been shown to enhance the participants’ use of the evaluation results and their interest in building evaluation into their ongoing system.
  4. Lessons learned regarding the evaluation of learning technologies probably apply to many other aspects of higher education.  Evaluation can be a powerful partner for improving higher education if all the relevant participants are involved systematically in the evaluation process.

 

References

  • Bullock, C. D., & Schomberg, S. (2000). Disseminating learning technologies across the faculty. International Journal of Educational Technology, 2 (1), 1-12.
  • Campbell, J. O., & Williams, D. D. (2001). An evaluation of factors in cost effectiveness of eLearning. Paper presented at the annual meeting of the Association for Educational Communications and Technology, Nov. 8-10, Atlanta, Georgia.
  • Cousins, J. B., Donohue, J. J., & Bloom, G. A. (1996). Collaborative evaluation in North America: Evaluators' self-reported opinions, practices, and consequences. Evaluation Practice, 17 (3), 207-226.
  • Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. In E. Whitmore (Ed.), Participatory evaluation approaches. San Francisco: Jossey-Bass.
  • Ehrmann, S. C. (1995). Asking the right questions.  Change, 27 (2), 20-27.
  • Fetterman, D. M. (1996). Empowerment evaluation: Knowledge and tools for self-assessment and accountability, Thousand Oaks, CA: Sage.
  • Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches,  San Francisco, CA: Jossey-Bass.
  • Guba, E. G., & Lincoln, Y. S. (1989).  Fourth generation evaluation, Thousand Oaks, CA: Sage.
  • Guskin, A. E. (1997). Restructuring to enhance student learning (and reduce costs). Liberal Education, 83 (2), 10-19.
  • Harley, D. (2001). Higher education in the digital age: Planning for an uncertain future. Syllabus: New Dimensions in Education Technology, 15 (2), 10-12.
  • House, E. R., & Howe, K. R. (1999). Values in education and social research, Thousand Oaks, CA: Sage.
  • Jackson, G. A. (1990). Evaluating learning technology: Methods, strategies, and examples in higher education.  Journal of Higher Education, 61 (3), 294-311.
  • Kenny, R. F., MacDonald, C. J., & Desjardins, F. J. (1997). Integrating information technologies to facilitate learning: Redesigning the teacher education curriculum. Canadian Journal of Educational Communication, 26 (2), 107-124.
  • Oliver, M. (2000). An introduction to the evaluation of learning technology. Educational Technology and Society, 3 (4), http://ifets.ieee.org/periodical/vol_4_2000/intro.html.
  • Patton, M. Q. (1997). Utilization-focused evaluation, 3rd ed., Thousand Oaks, CA: Sage.
  • Patton, M. Q. (2001). Theories of action in program evaluation, presentation as part of a panel at the annual meeting of the American Evaluation Association, Nov. 9, St. Louis, Missouri.
  • Pea, R. D., Tinker, R. L., Linn, M. C., Means, B., Bransford, J., Roschelle, J., Hsi, S., Brophy, S., & Songer, N. (1999). Toward a learning technologies knowledge network. Educational Technology Research and Development, 47 (2), 19-38.
  • Ryan, K. E., & DeStefano, L. (2000). Evaluation as a democratic process:  promoting inclusion, dialogue, and deliberation. New Directions for Program Evaluation, number 85, San Francisco: Jossey-Bass.
  • Spooner, F. (1999). Oh, give me a home where technology roams. Teacher Education and Special Education, 22 (2), 97-99.
  • Stake, R. E. (1975). Evaluating the arts in education: A responsive approach, Columbus, OH: Merrill.
  • Stake, R. E. (1984). Program evaluation, particularly responsive evaluation. In G. F. Madaus, M. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models. Boston: Kluwer-Nijhoff.


