An Evaluation Instrument for Hypermedia Courseware
Anastasios A. Economides
The number of products from the educational software industry has increased significantly over the last decade, and in particular numerous hypermedia courseware titles are available on the market on almost any educational subject. (Courseware is a relatively recent appellation for Computer Based Learning, which refers to the use of computers for the delivery of instruction in an interactive mode.) The reason for this increase is closely related to the basic policy assumption that the educational system should serve the overall target of an ‘information society for all’. Schools should prepare students to actively use new information and communication technologies (ICTs), taking advantage of the lifelong learning process that these technologies support. As a result, in most countries curricula are under continuous development, adopting ICTs in teaching and learning.
Nowadays, hypermedia systems provide the necessary technology for highly interactive and potentially adaptive learning environments. Yet authors of educational hypermedia are often tempted to impress rather than educate the user. As often stated, the failure of so many instructional programs has been the result of an emphasis solely on content, with little regard for principles of instructional design to produce effective, efficient, and appealing instruction. If hypermedia is not well designed, it will create difficulties for users, such as memory overload and divided attention, or it will fail to suit the variety of ways that people work together or alone (Preece, 1993).
The media and learning debate has carried on for several decades. In 1983, Richard Clark reviewed the research to that date on media-delivered education and concluded that instructional designers gain no learning benefits from employing a specific medium to deliver instruction (Clark, 1983). Kozma (1991) challenged this position, arguing that the particular capabilities of a medium can interact with learners' cognitive processes, while Clark (1994) maintained that media will never influence learning.
Despite the intricacies of the debate, beginning in the early 1980s several meta-analyses of the effects of computers on learning were published by Kulik and his associates at the University of Michigan (Kulik, Kulik & Cohen, 1980; Kulik & Kulik, 1987; Kulik & Kulik, 1991), which showed that computer-based instruction made small but significant contributions to the course achievement of students at all levels. Moreover, a meta-analysis of thirty-five empirical studies published from 1986 to 1997 on the effects of hypermedia versus traditional instruction on students’ achievement showed that the effects of using hypermedia in instruction are positive and greater than those of traditional instruction (Liao, 1997). Taking also into consideration cost-effectiveness and access issues, it can be argued that hypermedia courseware can be an effective learning tool. However, instructional designers must carefully design hypermedia content to take full advantage of it. Moreover, it can be argued that one of the main reasons for the lack of high-quality hypermedia courseware is that research often cannot keep pace with the advances of technology, and as a result existing evaluation methods are often inadequate. Therefore, the development of evaluation criteria is very important for employing hypermedia courseware to best effect.
Systematic evaluation of computer-based education (CBE) in all its various forms often falls behind development efforts (Flagg, 1990). There are several reasons for this lack of evaluation. Producers of CBE products often invest more money in marketing them than in evaluating them. Moreover, consumers of technological innovations for education seem to assume that because these innovations are advertised as effective, they are effective. Also, evaluation of CBE has often been reduced to a number of indicators wherein the value of CBE is represented by the amount of money spent on hardware and software, the ratio of students to computers, etc. (Becker, 1992). Another reason for the lack of evaluation of CBE is the limited utility of the evaluations that have previously been conducted. Evaluation reports are usually presented in the format of social science research reports, a “format that is almost useless for most educators” (Scriven, 1993).
However, besides the general trend, there are some important evaluation studies that either focus only on interface design or are broader and also address the pedagogical value of hypermedia systems. For example, the heuristic evaluation suggested by Nielsen (Nielsen & Molich, 1990; Nielsen, 1994) looks at usability problems in a user interface, while Reeves’s pedagogical dimensions (Reeves, 1992; Reeves & Harmon, 1994) are used as criteria for evaluating different forms of computer-based education.
Heuristic evaluation is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process. Heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the ‘heuristics’). The ten Usability Heuristics defined by Nielsen are: Visibility of system status; Match between system and the real world; User control and freedom; Consistency and standards; Error prevention; Recognition rather than recall; Flexibility and efficiency of use; Aesthetic and minimalist design; Help users recognize, diagnose, and recover from errors; Help and documentation.
Reeves, on the other hand, proposes fourteen pedagogical dimensions of computer-based education that can be used to compare one form of CBE with another or to compare different implementations of the same form of CBE. Each dimension is based on some aspect of learning theory or a learning concept that can serve as a criterion for evaluating different forms of computer-based education. These pedagogical dimensions are as follows: epistemology; pedagogical philosophy; underlying psychology; goal orientation; experiential value; teacher role; program flexibility; value of errors; motivation; accommodation of individual differences; learner control; user activity; cooperative learning; cultural sensitivity (Reeves, 1992; Reeves & Harmon, 1994).
Another important study is the ‘conversational framework’ for the analysis of teaching media developed by Laurillard (1993). Laurillard suggests that teaching media can be divided into four categories: discursive, adaptive, interactive and reflective. Discursive media should allow student and teacher to exchange views freely. Students must be able to act on, generate and receive feedback appropriate to the topic goal, whilst the teacher must be able to reflect upon the student’s actions and descriptions in order to adjust his/her own descriptions, making them more accessible for the student. Adaptive media allow the teacher to use the relation between his/her own and the student’s understanding to determine topic goals for the continuing session. Interactive media enable students, acting to achieve topic goals, to receive meaningful intrinsic feedback. Finally, reflective media facilitate teacher support for the process by which students link feedback on their actions to the topic goal. This list of required media characteristics was designed by Laurillard to encompass a complete specification of what is required of a learning situation. The ‘conversational framework’ incorporates all four categories of media. Adaptation and reflection are internal to both teacher and student, while the two levels in their dialogue, the discursive and the interactive, are external processes transmitted over the media. Several other studies are based on the ‘conversational framework’, such as a framework for the pedagogical evaluation of virtual learning environments (Britain & Liber, 1999) and the evaluation of computer-supported collaborative learning.
Besides the above evaluation studies, there are also studies with a narrower focus. For example, the South Carolina Statewide Systemic Initiative developed an evaluation instrument for instructional material in mathematics; the Children’s Software Review (1998) developed an evaluation instrument for children's Internet sites; Biner (2002) created a web course evaluation questionnaire; the Southern Regional Educational Board (2002) developed criteria for evaluating computer courseware; Beaudin and Quick (1996) formed an evaluation instrument for instructional video; etc.
It is also worth mentioning the standardization efforts of several organizations, such as the IMS Global Learning Consortium (IMS, 2001) and the International Organization for Standardization (ISO). IMS focuses on interoperability, defining technical specifications and supporting the incorporation of these specifications into products and services worldwide. ISO set up the ISO/IEC JTC1 SC36 sub-committee (ISO/IEC JTC1 SC36, 2001) on “Standards for Information Technology for Learning, Education and Training”. Again, the focus here is on the interoperability and reusability of resources and tools. In addition, the European Committee for Standardization’s Information Society Standardization System (CEN/ISSS) is also active in the standardization of learning technologies.
As shown, although systematic evaluation of computer-based education (CBE) often falls behind development efforts, several evaluation studies do exist. However, some of the evaluation models described above require background knowledge of instructional technology, while the latter references have very specific targets (e.g. instructional material for mathematics, or for children). At the same time, with the growth in the use of learning technologies and the availability of hypermedia courseware, an increasing number of teachers with no particular knowledge of instructional technology want to use such courseware in their teaching. The authors of this paper attempt to provide an evaluation instrument for hypermedia courseware, based on an evaluation framework, that can also serve teachers with no particular knowledge of instructional technology as a structured way of assisting them to make an initial assessment of a new piece of courseware that they want to use in their teaching. Next, the evaluation framework is discussed.
The Evaluation Framework

The effectiveness of hypermedia courseware depends on many issues. In order to build the evaluation instrument, the authors attempted to integrate into a framework a number of important issues that have emerged from research on instructional design and system evaluation over the past fifteen years, and which should be considered by evaluators of hypermedia courseware (H.C.) that delivers mainly content knowledge (Georgiadou & Economides, 2000). It has to be acknowledged that this framework is necessarily selective, as there are numerous articles in the literature on instructional design and system evaluation. However, in order to develop the framework, the authors reviewed a large number of them and then focused on the most often cited authors and articles. Moreover, the framework is not rigid: new parts can be added, and existing ones altered, as research advances in the area of educational hypermedia.
This framework is concerned with both the social and the practical acceptability of hypermedia courseware, based on Nielsen’s idea that “the overall acceptability of a computer system is a combination of its social and practical acceptability” (Nielsen, 1990). The term social acceptability relates to the social basis of an educational system. Where that basis is teacher-centred, software that provides high levels of learner control and undermines the teacher’s authority is probably socially unacceptable. On the other hand, where the basis is student-centred, courseware that limits the student’s potential for independent discovery is socially unacceptable. As a further example, whereas constructivist pedagogy advocates persistent questioning on the part of learners, questions, especially ‘why?’ questions, are inappropriate in cultures such as that of the Torres Strait Islanders of Australia. Although computer-based education may not be able to adapt to every cultural norm, it should be designed to be as culturally sensitive as possible (Powell, 1993).
Given that a piece of hypermedia courseware is socially acceptable, its practical acceptability is examined through the evaluation of the following four sectors: a) content, b) presentation and organization of the content, c) technical support and update processes and, finally, d) the evaluation of learning. All sectors are equally important, as hypermedia courseware has to be simultaneously pedagogically and technically sound. Each sector includes a number of criteria, incorporated in the evaluation instrument, which should be met at a satisfactory level for a piece of hypermedia courseware to be characterized as being of high quality. Furthermore, cost-effectiveness should always be examined when similar products seem to have the same educational value. Figure 1 presents a diagram of the sectors included in the framework and the factors associated with them.
Figure 1. Diagram of the Evaluation Framework
Before we proceed to present the evaluation instrument, it is necessary to discuss the theory underlying the criteria used for the ‘presentation and organization of the content’ and ‘evaluation of learning’ sectors. It is not necessary to do the same for the criteria for the evaluation of ‘content’ and those for the ‘technical support and update processes’, as the relevant items in the evaluation instrument, which are based on guidelines available in the literature (ANSI Standards Committee on Dental Informatics, 2002; Southern Regional Educational Board, 2002), are self-explanatory.
Presentation and Organization of the Content
The factors associated with this sector are the pedagogical factor, which is concerned with learning and instructional design theories, and the interface design factor.
Pedagogical Factor: This is a complicated factor, as there are different beliefs about how humans learn. However, cognitive theories stress that learning is an active, constructive, cumulative, self-regulated process in which the learner plays a critical role. Moreover, current instructional theory, with its focus on learner-centred approaches, depends on information access and on learning environments that encourage free interaction with information. Agreement with the principles of a particular instructional design theory depends heavily on the subject matter. In addition, teachers’ beliefs are of great importance, especially when the hypermedia courseware is part of the curriculum.
Nevertheless, the two core elements that are important in all educational settings are ‘motivation’ and ‘structure’, which largely define the instructional nature of an information environment. A typical way to motivate learners is to inform them of what they will achieve by the end of the instruction by stating its aims and objectives (Gagné, Briggs & Wager, 1988). As far as the structure of the hypermedia courseware is concerned, that is, how the instructional information is organised, this again depends on the subject matter. However, in cases where the instructor wants to permit learners to advance, review, see examples, repeat the unit, or escape to explore another unit, Jonassen (1992) suggests the network type or structured hypermedia as most appropriate. Structured hypermedia consists of sets of nodes, each set accessible from any other set. The node sets can be structured in any number of ways, such as node-link, hierarchical or network, depending on the nature of the processing the designer wants to elicit from the user. The structure of each node set, with the various options available within it, needs to be conveyed on every screen. Another method for structuring the node sets is to combine related concepts, tie them together in an introductory block, and then permit access within the set only to concepts contained within the set.
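As an illustration of this organisation, the following is a minimal sketch, not part of the instrument, with hypothetical set names and concepts: structured hypermedia can be modelled as node sets that are mutually reachable, while links inside a set stay within that set's concepts.

```python
# A minimal sketch of Jonassen's (1992) "structured hypermedia": sets of
# nodes, each set reachable from any other set, with access *within* a set
# restricted to the concepts the set contains. All example data is invented.

from dataclasses import dataclass, field


@dataclass
class NodeSet:
    name: str                                    # e.g. an introductory block for related concepts
    concepts: set = field(default_factory=set)   # concepts tied together in this set

    def link_allowed(self, source: str, target: str) -> bool:
        # Within a set, only links between its own concepts are permitted.
        return source in self.concepts and target in self.concepts


# Hypothetical example data: every set is accessible from every other set.
node_sets = {
    "feedback": NodeSet("feedback", {"intrinsic", "extrinsic"}),
    "navigation": NodeSet("navigation", {"links", "maps", "history"}),
}


def jump_allowed(from_set: str, to_set: str) -> bool:
    # "each set accessible from any other set"
    return from_set in node_sets and to_set in node_sets
```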
In hypermedia learning systems another important element is ‘learner control’, which is central to the design of interactive learning, as it allows students to tailor the learning experience to their own individual needs. However, there are dangers in surrendering too much control to the user, as low-ability students may get confused when control depends on a wide range of options (Gray, 1989; Litchfield, 1993); a high level of learner control may result in disorientation and distraction. The amount and type of learner control depend on the learner characteristics (age and cognitive capabilities), the content, and the nature of the learning task (Poncelet & Proctor, 1993). Content that must be mastered and unfamiliar tasks often require more program control, compared to content with no qualified mastery levels or familiar learning tasks. Learner control is more appropriate than program control when learners are more capable and are familiar with the learning task. Moreover, advisement should be provided to assist learners in making decisions, and control should be used consistently within a lesson (Ross & Morrison, 1989). In general, the more control is given to the learners, the more feedback about their decisions should be given (McAteer & Shaw, 1995).
Moreover, the issues of ‘accommodation of individual differences’ and ‘cooperative learning’ are highly important to the effectiveness of hypermedia-based learning. In most educational contexts learners are not homogeneous in terms of prerequisite knowledge, motivation, experience, learning styles and cognitive styles. Also, evidence suggests that when hypermedia learning systems are structured to allow cooperation, learners benefit both instructionally and socially.
Interface Design Factor: Interactivity - Navigation - Feedback: Interactivity in instruction comprises the nature of the activity performed by the technology and the learner, as well as the ability of the technology to adapt the events of instruction in order to make that interaction more meaningful (Reigeluth, 1987). It is important to design as much meaningful interactivity as possible into instructional software (Orr, Golas, & Yao, 1994). The amount of navigational assistance needed is a function of the size of the knowledge base, the usefulness of the navigational aids that are already part of the authoring software, and the types of links the software allows (Locatis, Letourneau & Banvard, 1989). Guidelines for increased interactivity have been produced by researchers (Shneiderman & Kearsley, 1989; Tessmer, Jonassen & Caverly, 1989) and are used as evaluation items in the relevant section of the instrument.
The basic factors that determine the effectiveness of feedback are the type and frequency of the feedback given and the delay between instruction and feedback (Jonassen & Hannum, 1987). Feedback is closely related to the issue of interaction, as action without feedback is completely unproductive for a learner. Laurillard (1993) identifies two types of feedback, ‘intrinsic’ and ‘extrinsic’. Intrinsic feedback is given as a natural consequence of an action; to illustrate the concept, Laurillard uses the example of a child playing with water, where the physical world responds to the child's actions of filling, pouring, etc. Extrinsic feedback, on the other hand, does not occur within a situation but as an external comment on it: right or wrong. She suggests that extrinsic feedback is not a necessary consequence of the action, and therefore is not expressed in the world of the action itself but at the level of the description of the action. In computer-based instruction, intrinsic feedback relates to navigation and interactivity with the instructional program, while extrinsic feedback relates to feedback on the user's performance. Schimmel (1988) identifies three types of extrinsic feedback: (a) confirmation feedback, which simply confirms whether a learner's answer is correct or incorrect; (b) correct response feedback, which presents the correct answer; and (c) explanatory feedback, such as a step-by-step solution to an incorrectly answered question. Many actions require more extended extrinsic feedback than confirmation feedback: simple ‘right’ or ‘wrong’ answers cannot provide any information about how learners should correct their performance. A more helpful form of extrinsic feedback gives the learner information about how to adapt and correct their performance, such as correct response and explanatory feedback.
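The distinction between Schimmel's three types can be made concrete with a small sketch; this is purely illustrative, and the function names and messages are hypothetical rather than drawn from the instrument.

```python
# Illustrative sketch of Schimmel's (1988) three types of extrinsic feedback
# for a single question. All names and messages are invented examples.

def confirmation_feedback(is_correct: bool) -> str:
    # (a) Simply confirms whether the answer is correct or incorrect.
    return "Correct." if is_correct else "Incorrect."


def correct_response_feedback(is_correct: bool, answer: str) -> str:
    # (b) Also presents the correct answer.
    if is_correct:
        return "Correct."
    return f"Incorrect. The correct answer is {answer}."


def explanatory_feedback(is_correct: bool, answer: str, steps: list) -> str:
    # (c) Adds a step-by-step solution so learners can correct their performance.
    if is_correct:
        return "Correct."
    return f"Incorrect. The correct answer is {answer}.\n" + "\n".join(steps)
```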
‘Screen design’ is also an important evaluation factor. Different screen elements should be used to present stimulating information that will motivate and assist the learners in retaining and recalling the information. The psychological limitations to consider when designing hypermedia learning systems include: (a) memory load, i.e. how many different control icons is it reasonable for learners to remember at any one time? (b) perception, i.e. what colours and fonts provide the best readability? and (c) attention, i.e. how can the users' attention be drawn to relevant information when there is a lot of different information on the screen? (Preece, 1993). A large number of screen design guidelines produced by several researchers on educational technology exist in the literature, and the relevant items in the evaluation instrument are based on these (Morris, Owen & Fraser, 1994; Cox & Walker, 1993; Clarke, 1992; McAteer & Shaw, 1995).
Evaluation of Learning
Marchionini (1990) argued that the interactivity of hypermedia systems provides learners with access to vast amounts of information in varied forms, control over the process of learning, and the potential for collaboration with the system and other people. Such empowerment of learners forces evaluators of learning to adopt a broad-based set of methods and criteria to accommodate 'self-directed' learning. He proposes a 'multi-faceted' approach to the evaluation of hypermedia-based learning that addresses both the outcomes and the processes of learning.
Learning outcomes are evaluated through performance tests typically used to judge the quality and quantity of learning; these usually take the form of ‘pre-tests’, used to determine learning outcomes prior to the intervention, and ‘immediate’ and ‘delayed post-tests’, which examine learning outcomes after the intervention. The learning process refers to the usability of a product and should be evaluated by observing and measuring the end-users’ attitudes. Usability is usually associated with five parameters (Nielsen, 1990): (1) easy to learn: users can quickly get some work done with the system; (2) efficient to use: once users have learnt the system, a high level of productivity is possible; (3) easy to remember: the casual user is able to return to the system after some period without having to learn everything all over again; (4) few errors: users do not make many errors during use of the system, or if they do, they can easily recover from them; and (5) pleasant to use: users are subjectively satisfied when using the system.
The Evaluation Instrument

The criteria selected from the literature for every sector of the evaluation framework were used as the basis for the design of the initial version of the evaluation instrument. This initial version was disseminated for comments to academics, postgraduate students and researchers in the field of educational technology.
The instrument has the form of a five-point suitability-scale questionnaire, where (1) is assigned to ‘strongly agree’ and (5) to ‘strongly disagree’. The scale also includes (0) for those items in the questionnaire that cannot be evaluated because they do not apply to the particular hypermedia courseware. One hundred and twenty-four items are included in the instrument, covering both stand-alone and web-based hypermedia courseware: one hundred items refer to both kinds, while the extra 24 items refer only to web-based courseware, as such applications have some distinct characteristics regarding screen design and technical support and update processes. However, the instrument does not include items regarding social acceptability, because the criteria for such an evaluation cannot have universal application: different educational systems have different beliefs about what is socially acceptable or unacceptable, so these criteria should be determined each time by the evaluators of each educational system.
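To make this structure concrete, here is a minimal sketch of how the instrument's items and scale could be represented; the example item text is invented, and the labels for the intermediate points (2-4) are assumptions, since only the two endpoints and zero are defined above.

```python
# Sketch of the instrument's rating scale and item structure, as described
# in the text: 124 items, 100 common and 24 web-only, rated 0-5.

from dataclasses import dataclass


@dataclass
class Item:
    number: int             # numbering continues across sections (1..124)
    text: str
    web_only: bool = False  # 24 of the 124 items apply only to web-based courseware


SCALE = {
    0: "does not apply to this courseware",
    1: "strongly agree",
    2: "agree",         # assumed intermediate label
    3: "neutral",       # assumed intermediate label
    4: "disagree",      # assumed intermediate label
    5: "strongly disagree",
}

# A hypothetical item for illustration:
example = Item(88, "The screen layout is consistent across units.")
```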
The different sections of the instrument and the items they include are presented next. It should be noted that the numbering of the items continues across sections, in order to be more helpful to potential evaluators.
A. Evaluation of the Content
B. Organization and Presentation of the Content
B.1 Pedagogical Parameters
B.1.1 Instructional Theories – Curriculum
B.1.3 Learner Control
B.1.5 Collaborative Learning
B.2 Design Factors
B.2.1 Interactivity - Navigation - Feedback
The H.C. includes:
B.2.2 Screen Design
C. Technical Support and Update Process
D. Evaluation of Learning
D.1 The Process of Learning
In cases where the hypermedia courseware is web-based, the following additional items are also examined in the Screen Design section.
Moreover, for web-based hypermedia courseware, the following additional items need examination in the Technical Support and Update Process section.
Evaluation Process and Analysis of the Results
As the items included in the instrument show, a number of people should be involved in the evaluation of a hypermedia courseware application, i.e. content experts, instructional technologists, educators and interface designers. However, the items are quite straightforward and, as a result, the instrument can be used by educators with no particular knowledge of instructional technology as a structured way of assisting them during the initial evaluation of a new piece of courseware that they want to use in their teaching. After this initial stage, an evaluation with the students is required in order for educators to gain a better understanding of the courseware's value and potential.
In order to analyse the results, the evaluators have to consider that not all the factors carry the same weight: content is the most important of all. If the content does not meet the educator's criteria, then there is no need to evaluate the organization and presentation of the educational material any further. However, to form an overall idea of the value of the courseware, at the end of the evaluation process the sum of the scores on all items (except those resulting from the evaluation of the content) is compared with the total sum, that is, the maximum possible marks on all items. By excluding the 13 items for the evaluation of content, the total sum is 435 (87*5) for stand-alone applications and 555 (111*5) for web-based ones (Table 1). These two figures need alteration when not all the items are used during an evaluation, as some of them may not apply to certain pieces of hypermedia courseware. For example, if only 80 items are used, then the total sum is 400 (80*5).
Table 1. Assessment table for all the items of the evaluation instrument
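As a minimal sketch, not part of the paper, the arithmetic above can be expressed as follows; the function name is hypothetical, and scores of 0 mark items that were not applicable.

```python
# Sketch of the maximum-score arithmetic described above: content items
# (13 of them) are excluded beforehand, items scored 0 (not applicable)
# are dropped, and each remaining item contributes at most 5 marks.

def maximum_score(non_content_scores):
    used = [s for s in non_content_scores if s != 0]  # skip "not applicable" items
    return 5 * len(used)


assert maximum_score([1] * 87) == 435    # stand-alone courseware, all 87 items used
assert maximum_score([1] * 111) == 555   # web-based courseware, all 111 items used
assert maximum_score([1] * 80) == 400    # only 80 items found applicable
```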
When evaluating two or more courseware applications on the same subject, the above figures can be a useful starting point in determining the most appropriate one. Yet the most important part of the evaluation is the examination of the scores resulting from the evaluation of the four sectors separately: a) content, b) presentation and organization of the content, c) technical support and update processes and, finally, d) the evaluation of learning. The examination of these scores is important in order to detect cases where an application is technically sound but has no pedagogical value, and vice versa. Table 2 can be used to compare the results.
Table 2. Assessment table for the different sectors of the evaluation instrument
It has to be mentioned that, in order to ensure the high quality of hypermedia courseware, the evaluators' team (or the teacher) could agree on some standards and set a threshold for the comparison of the results. For example, if the score resulting from the evaluation of an application does not meet a threshold of two thirds of the total sum in all sectors, then the application is not used for teaching and learning.
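One possible reading of such a rule is sketched below. Since (1) denotes 'strongly agree', lower totals indicate better courseware on this scale, so in this sketch a sector passes when its score stays within two thirds of its maximum; the evaluators' team would fix the exact rule and threshold themselves.

```python
# Hypothetical threshold check for one sector of the instrument; the
# two-thirds figure follows the example in the text, and the direction of
# the comparison assumes that lower scores are better (1 = strongly agree).

def sector_passes(sector_score: int, sector_maximum: int) -> bool:
    return sector_score <= (2 * sector_maximum) / 3
```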
Conclusions

This paper presented an evaluation instrument for hypermedia courseware, designed according to an evaluation framework developed from the integration of a number of important issues that have emerged from research on instructional design and system evaluation over the past fifteen years, and concerned with both the social and the practical acceptability of hypermedia courseware. The instrument has the form of a suitability-scale questionnaire of one hundred and twenty-four items, concerned with the evaluation of four main sectors: a) content, b) presentation and organization of the content, c) technical support and update processes and, finally, d) the evaluation of learning.
Postgraduate students and secondary schoolteachers have already used the instrument to evaluate hypermedia courseware.
As research progresses in the field of hypermedia courseware evaluation, new items can be added to the instrument presented here. It is therefore a flexible tool that can easily be adapted to an educational environment, and its improvement can be an ongoing process.
References

ANSI Standards Committee on Dental Informatics - Working Group Educational Software Systems (2002). Guidelines for the design of educational software.

Beaudin, B. P., & Quick, D. (1996). Instructional Video Evaluation Instrument. Journal of Extension, 34 (3).

Becker, H. J. (1992). Computer education. In M. C. Alkin (Ed.) Encyclopedia of Educational Research (6th Ed.), New York: Macmillan.

Biner, P. (2002). Web Course Evaluation Questionnaire.

Britain, S., & Liber, O. (1999). A Framework for Pedagogical Evaluation of Virtual Learning Environments. JISC Technology Applications Programme, Report 41.

CEN/ISSS, The European Committee for Standardization: Information Society Standardization System.

Children's Internet Site Evaluation Instrument (1998). Children’s Software Review, January/February.

Clark, R. E. (1983). Reconsidering Research on Learning from Media. Review of Educational Research, 53 (4), 445-459.

Clark, R. E. (1994). Media will Never Influence Learning. Educational Technology Research and Development, 42 (2), 21-29.

Clarke, A. (1992). The Principles of Screen Design for Computer Based Learning Materials.

Cox, K., & Walker, D. (1993). User Interface Design (2nd Ed.).

Flagg, B. (1990). Formative Evaluation for Educational Technologies. Hillsdale, NJ: Lawrence Erlbaum.

Gagné, R. M., Briggs, L. J., & Wager, W. W. (1988). Principles of Instructional Design. New York: Holt, Rinehart and Winston.

Georgiadou, E., & Economides, A. (2000). Evaluation Factors of Educational Software. In Kinshuk, Jesshope, C., & Okamoto, T. (Eds.) Proceedings of the International Workshop on Advanced Learning Technologies (IWALT 2000).

Gray, S. H. (1989). The effect of locus of control and sequence control on computerised information retrieval and retention. Journal of Educational Computing Research, 5 (4), 459-471.

IMS Global Learning Consortium (2001). http://www.imsproject.org/.

ISO/IEC JTC1 SC36 (2001). http://jtc1sc36.org/.

Jonassen, D. H. (1992). Designing Hypertext for Learning. In Scanlon, E., & O’Shea, T. (Eds.) New Directions in Educational Technology. Berlin: Springer-Verlag.

Jonassen, D. H., & Hannum, W. H. (1987). Research-based Principles for Designing Computer Software. Educational Technology, 1 (18), 42-51.

Kozma, R. (1991). Learning with Media. Review of Educational Research, 61 (2), 179-211.

Kulik, C.-L., & Kulik, J. A. (1991). Effectiveness of computer-based instruction: an updated analysis. Computers in Human Behavior, 7, 75-94.

Kulik, J. A., & Kulik, C.-L. (1987). Review of recent research literature on computer-based instruction. Contemporary Educational Psychology, 12, 222-230.

Kulik, J. A., Kulik, C., & Cohen, P. A. (1980). Effectiveness of Computer-based College Teaching: A Meta-Analysis of Findings. Review of Educational Research, 50 (4), 525-544.

Laurillard, D. (1993). Rethinking University Teaching. London: Routledge.

Liao, Y. C. (1997). Effects of Hypermedia vs. Traditional Instruction on Student Performance: A Meta-Analysis. CD-ROM Proceedings of WebNet97 World Conference on the WWW, Internet, and Intranet.

Litchfield, B. (1993). Design Factors in Multimedia Environments: Research Findings and Implications for Instructional Design. Annual Meeting of the American Educational Research Association, 1-10.

Locatis, C., Letourneau, G., & Banvard, R. (1989). Hypermedia and Instruction. Educational Technology Research and Development, 37 (4), 65-77.

Marchionini, G. (1990). Evaluating Hypermedia Based Learning. In Jonassen, D., & Mandl, H. (Eds.) Designing Hypermedia for Learning. Berlin: Springer-Verlag.

McAteer, E., & Shaw, R. (1995). The Design of Multimedia Learning Programs. University of Glasgow.

Morris, J. M., Owen, G. S., & Fraser, M. D. (1994). Practical Issues in Multimedia User Interface Design for Computer-Based Instruction. In Reisman, S. (Ed.) Multimedia Computing: Preparing for the 21st Century.

Nielsen, J. (1990). Evaluating Hypertext Usability. In Jonassen, D., & Mandl, H. (Eds.) Designing Hypermedia for Learning. Berlin: Springer-Verlag.

Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., & Mack, R. L. (Eds.) Usability Inspection Methods. New York: John Wiley & Sons.

Nielsen, J., & Molich, R. (1990). Improving a human-computer dialogue. Communications of the ACM, 33 (3), 338-348.

Orr, K. L., Golas, K. C., & Yao, K. (1994). Storyboard Development for Interactive Multimedia Training. Journal of Interactive Instruction Development.

Poncelet, G. M., & Proctor, L. F. (1993). Design and Development Factors in the Production of Hypermedia-based Courseware. Canadian Journal of Educational Computing, 22 (2), 91-111.

Powell, G. C. (1993). Incorporating learner cultural diversity into instructional systems design: An investigation of faculty awareness and teaching practices. Unpublished doctoral dissertation, The University of Georgia.

Preece, J. (1993). Hypermedia, Multimedia and Human Factors. In Latchem, C., Williamson, J., & Henderson-Lancett, L. (Eds.) Interactive Multimedia. London: Kogan Page.

Reeves, T. C. (1992). Effective dimensions of interactive learning systems. Invited keynote paper presented at the Information Technology for Training and Education Conference (ITTE '92).

Reeves, T. C., & Harmon, S. W. (1994). Systematic Evaluation Procedures for Interactive Multimedia for Education and Training. In Reisman, S. (Ed.) Multimedia Computing: Preparing for the 21st Century.

Reigeluth, C. M. (1987). Instructional Theories in Action. Hillsdale, NJ: Lawrence Erlbaum.

Ross, S. M., & Morrison, G. R. (1989). In Search of a Happy Medium in Instructional Technology Research. Educational Technology Research and Development, 37 (1), 19-33.

Schimmel, B. J. (1988). Providing Meaningful Feedback in Courseware. In Jonassen, D. H. (Ed.) Instructional Designs for Microcomputer Courseware. Hillsdale, NJ: Lawrence Erlbaum.

Scriven, M. (1993). Hard-won lessons in program evaluation. San Francisco: Jossey-Bass.

Shneiderman, B., & Kearsley, G. (1989). Hypertext Hands-On! Reading, MA: Addison-Wesley.

Southern Regional Educational Board (2002). Criteria for Evaluating Computer Courseware.

Tessmer, M., Jonassen, D., & Caverly, D. A. (1989). Nonprogrammer's Guide to Designing for Microcomputers.