Adaptive Web-based Engine for the Evaluation of eLearning Resources
Lampros K. Stergioulas, Hasan Ahmed and Costas S.
The rapidly increasing use of web-based learning in Higher Education has spurred a currently small, but continuously growing, interest in the effective evaluation of university-level (web-based) eLearning resources (eLRs) (Berge and Collins, 1995; Hiltz, 1994; Moore and Kearsley, 1996; Harasim, 1989; Pritchard, Micceri, and Barrett, 1989). Recently a number of web-based tools and applications for evaluating web-based eLRs in specific areas of learning have been described (Doughty, Magill, and Turner, 1994; Brown, Doughty, Draper, Henderson, and McAteer, 1996). This paper contributes to this research area by describing a generic eLearning resources evaluation approach, which effectively covers the wide spectrum of Higher Education eLearning applications.
The paper addresses the issues involved in the development of a generic on-line evaluation engine for live, packaged as well as hybrid eLRs, and the use of this engine as an integral part of an eLR web-based exchange/brokerage system. Both high level strategic and low level technical design challenges are discussed and new approaches and solutions are introduced. System design trade-offs are also discussed between the conflicting requirements of high levels of evaluation data complexity/completeness, on the one hand and the need to simplify the data collection process on the other, in order to maximize user satisfaction and user-friendliness. User feedback results from real-life tests are also reported.
Specifically, the proposed engine aims to provide an online, adaptive, centralized service for the evaluation of eLRs in Higher Education for the benefit of (i) eLR consumers (including students, faculty staff and university administrators) and (ii) eLR content providers (e.g. individual academics, Higher Education Institutions, etc.). The evaluation engine collects eLR-related evaluation data from users, which is then stored in an evaluation database. This data can be analyzed by the engine and transformed into meaningful eLR evaluation information. The engine can then respond to requests from prospective consumers and eLR providers with evaluation information on specific eLRs, enabling users to judge the quality of a given eLR.
Section 2 discusses the important strategic and technical challenges encountered in developing such an engine. Section 3 outlines the generic architecture of the eLearning resource exchange/brokerage system. The eLR evaluation engine part is discussed in detail in Section 4, which effectively addresses the issues of evaluation data collection, processing and presentation. Furthermore, advanced functionality tools which can significantly enhance the capabilities of such eLR evaluation systems are also proposed. Section 5 presents an example of such a prototype engine, as implemented in a new European Higher Education eLR brokerage system (UNIVERSAL). User feedback results obtained from real trials involving eLR delivery and the use of the proposed eLR evaluation prototype engine are also presented in this section. Conclusions are given in Section 7.
eLR Evaluation Engine: Strategic and Technical Challenges
Within the UNIVERSAL framework, the main objective of the eLearning resource evaluation engine (eLR-EE) is to provide eLR quality feedback to consumers and providers and by doing so, maintain a high quality eLR catalogue/brokerage service. Thus, it was necessary to address the following issues and suggest sustainable and technically feasible solutions:
A Generic System Architecture for an eLR Exchange /Brokerage Service
An exchange/brokerage system generally consists of various engines and information systems, such as delivery, evaluation, user interface, and administration engines (see Figure 1). Within this context, the term engine refers to an application, which provides a number of services to other applications.
The user interface engine (UIE) is responsible for the dialogue between the eLR brokerage system and eLR consumers or providers. Thus the UIE allows users to interact with the brokerage system in order to obtain or provide information. When interacting with users, the UIE establishes relevant background knowledge and guides users in the selection of eLRs. Potential customers are presented with choices according to the pre-requisites and conditions attached to different types of eLR (eLR metadata), the suitability of the different institutions offering a given course, and the different delivery modes available for a particular eLR. In the case of eLR providers, the UIE is mainly used to supply eLR-related information and handle the offer of content provision to the brokerage system.
The Administration Engine maintains the platform’s data depository. Thus it processes records of users and transactions and makes these records available to other system engines. Administration of users is also performed by the administration engine. The User Profile engine is responsible for creating and maintaining users’ profiles and user authentication files. The Contract Engine enables formal transactions between users and system and invokes contract formulation, acceptance and billing mechanisms.
The eLR metadata engine stores the eLR metadata information. Its main function is to store the metadata information in a database, enable searching of this database and provide information in response to UIE requests.
The brokerage system may or may not store eLR contents. In the latter case, it offers an interface layer which provides communication functionality between the brokerage system and the various delivery systems. This is the responsibility of the delivery engine, which provides authentication and authorization services, delivery negotiation and delivery supervision. In such a system, the role of an evaluation engine is to collect and store eLR evaluation data and provide the necessary analysis and presentation facilities.
Data related to these engines are stored in the system data resources which comprise the following databases:
Architecture and Design of an Adaptive eLR Evaluation Engine
The proposed architecture for the evaluation engine is shown in Figure 2. The evaluation engine consists of four parts: (a) a data collection tool, (b) a data analysis and presentation tool, (c) questionnaire forms database and (d) an evaluation database.
Data collection tool
The data collection tool is responsible for collecting evaluation-related data from eLR users and/or providers (via the user interface engine) and also from other system engines (for example users’ background data is provided by the user profile engine). Data from users is collected at specific, predefined times. For example, the collection process can be initiated by the Administration Engine either at pre-specified intervals during eLR delivery or following completion of delivery. Providers can supply eLR evaluation-related data (such as validation by independent experts, accreditation, previous evaluation, etc.) during the eLR provision process.
Questionnaire forms database
Online questionnaire forms are used to collect data from the users (consumers/providers). The evaluation questionnaire forms are adaptive with respect to “past evaluation history” and the current, specific evaluation requirements for each eLR, in the sense that their structure and level of detail varies for different eLRs according to perceived system needs. Furthermore, the selection/formulation of questionnaires depends on the type of user (student/academic tutor/provider), since the evaluation engine requires different types of data from different types of users. The selection of the questionnaire form is also dependent on the type of eLR (e.g. live, packaged or hybrid).
More specifically, evaluation data consist of:
According to this scheme, the evaluation engine initially presents students with the short questionnaire (see Figure 3). Depending on subsequent entries and following certain criteria/rules (pre-specified during the eLR provision process in terms of thresholds for perceived shortfalls/poor performance defined on each first-level attribute), the system might (or might not) activate the second-level detail structure in the questionnaire. In this way, questionnaires supported by such flexible, deployable structures are adapted to the situation at hand and provide the necessary information for providers/tutors to focus on specific perceived eLR weaknesses, with a view to introducing targeted improvement, replacement, or complementary support. This kind of questionnaire adaptation provides a good reconciliation of the conflicting requirements of (a) a short, user-friendly questionnaire/evaluation data size and (b) a detailed but time-consuming evaluation of all eLR quality aspects. Furthermore, providers are able to define their own questions at the second level, thus enhancing the overall feedback process. Data collection from tutor-users follows a similar scenario, with the notable exception that in this case second-level questionnaires are used as standard. The process of data collection from eLR consumers is shown in Figure 3.
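The threshold-based activation rule described above can be sketched as follows. This is a minimal illustration, not the engine's actual implementation: the attribute names and threshold values are hypothetical, and in practice the thresholds would be set by the provider during the eLR provision process.

```python
# Illustrative sketch of second-level questionnaire activation.
# Attribute names and threshold values below are assumptions for
# the example, not taken from the UNIVERSAL engine itself.

FIRST_LEVEL_THRESHOLDS = {         # provider-defined during eLR provision
    "content_quality": 3.0,        # scores below a threshold indicate a
    "delivery_quality": 3.0,       # perceived shortfall for that attribute
    "usability": 2.5,
}

def second_level_attributes(first_level_scores):
    """Return the first-level attributes whose scores fall below their
    provider-defined thresholds; only these trigger the detailed
    second-level questionnaire sections."""
    return [attr for attr, score in first_level_scores.items()
            if score < FIRST_LEVEL_THRESHOLDS.get(attr, 0)]

# Example: only 'usability' is perceived as weak, so only its
# second-level questions would be activated.
print(second_level_attributes(
    {"content_quality": 4.2, "delivery_quality": 3.5, "usability": 1.8}))
```

For tutor-users, the same structure applies except that every attribute's second-level section is presented as standard, regardless of the first-level scores.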
The attributes (first-level) and their detailed characterization (second level) were chosen in agreement with recent evaluation methodologies (Berge and Collins, 1995; Moore and Kearsley, 1996; Anderson et al., 1999; Jones et al., 1997; Chen-Lin and Kulik, 1991, see also reference to IEEE LTSC standards). The following (first-level) attributes are employed:
Each first-level attribute is split into a sufficient number of second-level details to target more specialized areas of evaluation. General user reviews/comments are also collected and stored in the evaluation database in the form of open text.
Evaluation data from providers
A priori eLR evaluation data is collected from providers. This collection process is optional and occurs within the eLR provision phase (Evaluation information menu). This process is presented in Figure 4.
Providers also have the option to adjust the collection process and modify the questionnaire according to the needs and specific characteristics of their eLRs, or to suppress the evaluation process altogether. The various scenarios are given in the flowchart of Figure 5.
Data analysis and presentation
Users can extract information from the eLR evaluation database via a versatile query interface. Two modes of interaction are provided: (i) a summary mode and (ii) a user-defined query (UDQ) mode, which are supported by Help, Zoom and Scroll facilities.
The summary mode provides instant reference to aggregate evaluation results. Users are able to access important evaluation statistics (indicators of evaluation quality) using a simple and direct query environment. Queries are initiated through the activation of single function buttons in a web page interface and the system will supply specific evaluation summary data. Figure 6 shows the user interface, which consists of two sections: a query interface and a graphical/text display area. Users are able to select a query from a fixed set of queries and results will be displayed in a pre-defined graphical mode. The results/graphs to be displayed correspond to the first level data of questionnaire forms.
The second mode, UDQM, is more advanced, involving custom-made queries. The user is presented with various options for extracting, analyzing and displaying assessment data. In this way the system enables users to customize queries to their specific needs. User-driven queries not only provide users with specific information, but also reduce the chances of data being extracted erroneously or misinterpreted. Due to its more complex nature, and in addition to the standard Help facility, UDQM also includes example demos (based on popular queries) for the benefit of the user. An example of such a query formulation screen is given in Figure 7.
A number of numerical/statistical functions are available for the task of processing the extracted dataset(s). This analysis stage is however optional, since users may only want to view the unprocessed (raw) data and do the processing using their own tools.
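The optional analysis step can be pictured as a small library of functions applied to an extracted dataset, with a pass-through option for users who prefer the raw data. The function names below are an assumption for illustration; the paper does not enumerate the engine's actual function set.

```python
# Hypothetical sketch of the optional analysis stage: a set of
# numerical/statistical functions applicable to an extracted dataset,
# plus a 'raw' pass-through for users who do their own processing.
import statistics

ANALYSIS_FUNCTIONS = {
    "mean": statistics.mean,
    "median": statistics.median,
    "stdev": statistics.stdev,
    "raw": lambda data: data,    # skip processing, return raw data
}

def analyse(dataset, function="raw"):
    """Apply the chosen analysis function to the extracted dataset."""
    return ANALYSIS_FUNCTIONS[function](dataset)

scores = [4, 3, 5, 4, 2]         # e.g. one eLR's 'delivery quality' ratings
print(analyse(scores, "mean"))   # 3.6
print(analyse(scores))           # [4, 3, 5, 4, 2] (raw data unchanged)
```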
The data analysis and presentation facility is in the form of an interactive interface/display window (Figure 8). The user has a variety of display options, including capabilities to process/display more than one dataset at the same time and to cross-examine data across different eLRs.
Automatic Alert System (AAS)
An automatic alert system is an additional feature that enhances the functionality of the evaluation engine. Users (consumers/providers) are able to set up and customize alert actions and, more particularly, specify the type of alerts to be generated by the system and the conditions that trigger an alert. This feature is valuable to eLR providers, as it helps them track eLR evaluation results and take appropriate action when certain conditions are met. On the other hand, an eLR consumer is able to obtain up-to-date, instant information on eLRs and reconsider consumption of an eLR (e.g. in cases where quality indicators fall below a certain threshold).
The alert system relies heavily on the user-defined query (UDQ) mode functionality. An authorised user can extract a dataset by formulating queries in this mode and perform numerical and statistical operations on the extracted dataset. For an alert action to be established, the user must specify rules on the extracted dataset, which are then periodically checked. These rules can be very simple, e.g. based on whether the average of a specific dataset is less than a predefined value, or more complex, for example combining different datasets and numerical functions and checking them against a range of values. Generally, the rules a user can define are limited only by the UDQ mode functionality.
After defining a set of rules for a particular alert action, the user can store it in the alert depository as part of their profile. Popular alerts are also stored in the evaluation database as standard rules which can be employed by all users. Once an alert has been set up, users can associate it with a particular eLR and specify the action to be taken when the condition applies. Two main types of alert are supported: (a) generation of an email message to be sent to an appropriate person and (b) automatic system action, such as termination of consumption/provision of an eLR. For example, if the delivery quality indicator of an eLR falls below a certain level, the provider is notified so that immediate action can be taken to restore delivery to acceptable levels.
A user can also specify the frequency at which the AAS should check the conditions for specific alerts. For example, the checking procedure can be activated every time new evaluation data becomes available for a particular eLR (see Figure 9).
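A single alert check of the kind described above can be sketched as pairing a UDQ-style dataset extraction with a user-defined predicate and action. The rule structure, field names and callbacks below are illustrative assumptions, not the AAS's actual interface.

```python
# Hedged sketch of one AAS rule check. A rule couples a dataset query
# with a trigger predicate and an action; all names are hypothetical.

def check_alert(rule, extract_dataset, notify):
    """Evaluate an alert rule against freshly extracted evaluation data
    and fire the configured action if the trigger condition holds.
    Returns True when the alert fired."""
    data = extract_dataset(rule["query"])
    if data and rule["predicate"](data):
        notify(rule["action"], data)
        return True
    return False

# Example rule: notify the provider when the mean delivery-quality
# rating drops below 2.5 (the simple 'average below a predefined
# value' case mentioned above).
rule = {
    "query": "delivery_quality",
    "predicate": lambda d: sum(d) / len(d) < 2.5,
    "action": "email_provider",
}

fired = check_alert(
    rule,
    extract_dataset=lambda query: [2, 3, 2, 2],   # stand-in for a UDQ call
    notify=lambda action, data: print("ALERT:", action),
)
```

In the engine itself, `extract_dataset` would correspond to a UDQ-mode query against the evaluation database and `notify` to either the email mechanism or an automatic system action.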
Although such an alert system requires a tight real-time communication between the various engines of the system, it significantly enhances the functionality of the evaluation engine, towards an automated eLR quality monitoring system.
eLR Evaluation Engine Development
Lancaster University has recently developed a prototype evaluation engine, which is currently in use in large-scale trials of an eLR brokerage service with participating Higher Education Institutions across Europe (IST project UNIVERSAL).
The UNIVERSAL Exchange for Pan-European Higher Education aims to demonstrate the feasibility of an open exchange system for eLearning resources between institutions of higher education across Europe and elsewhere in the world. The system embraces offers, enquiries, booking and actual delivery of course units. The main aim is to develop and validate a model and standards that could later be widened to embrace other learning groups from industry, commerce and government. The key innovation is to create and manage an open market by introducing a brokerage system employing a standardized description of the pedagogical, administrative and technical characteristics of course units. The system enables institutions to enrich their curricula with remotely sourced material. It is compatible with a variety of business models pursued by different institutions, including open universities and alliances between peer institutions. In addition, the common catalogue and continuous evaluation mechanisms allow institutions to selectively grant credits for course units delivered through the system.
The UNIVERSAL model is based on the generic architecture described in Section 2 and the brokerage system is implemented using modern XML:RDF metadata models, open-source components and web server technologies (Guth et al, 2001).
The functionality and operation of UNIVERSAL eLR evaluation service is provided by the Lancaster prototype engine, which is built using the architecture and functionality presented in Section 3. The engine allows the UNIVERSAL system to interact with: (a) students, (b) university administrators and academic staff and (c) eLR providers.
More specifically, the service facilitates the collection of evaluation data from consumers (a, b) and providers (c). The availability of collected eLR evaluation data to users is conditional on provider approval. A flexible form for first-level and second-level questionnaires is used to accommodate all types of eLRs (live, packaged and hybrid). Although the brokerage system deals with eLRs of variable granularity levels, for practical purposes in the current trials the evaluation engine collects data related only to eLRs that constitute complete courses of equivalent lecturing duration longer than 10 hours.
eLR Evaluation engine real-life testing
Recent results from the on-going trials suggest that:
Furthermore, testers raised a number of issues which are discussed below:
An adaptive, multifunctional design for an on-line eLR evaluation engine has been developed. The main characteristics of the engine are:
Furthermore, we presented the implementation and use of such an engine within an eLR exchange/brokerage framework. The approach is generic enough to be easily applied to other subjective quality evaluation application areas.
The work described in this paper was undertaken as part of the UNIVERSAL IST European research project, funded by the European Commission.
Copyright by the International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the authors of the articles you wish to copy or firstname.lastname@example.org.