Educational Technology & Society 2(1) 1999
ISSN 1436-4522

The role of student knowledge in the design of computer-based learning environments

Moderator & Summarizer: John Eklund
Lecturer, The University of Technology, Sydney, Australia.

Discussion Schedule
Discussion: 26 Oct. - 4 Nov. 98
Summing up: 5 - 6 Nov. 98

Pre-discussion paper

This discussion centres on a number of points made in a recent conference paper by John Eklund (The University of Technology, Sydney) and Peter Brusilovsky (Carnegie Mellon University, Pittsburgh), available on the WWW.

Firstly, teaching and learning are knowledge-based endeavours. As teachers we deliver knowledge and encourage its active construction by offering information and resources in structured ways. The knowledge embodied in the teaching process consists of knowledge of the domain, of strategies, and of students. Student knowledge is critical for individualising the instructional process, and it often makes the difference between an experienced teacher and a novice. Experienced teachers have well-developed 'student models', both for individuals and for classes. They keep these as mental models, records of achievement, or student profiles. These models are constantly updated as the teacher interacts with the students, and are used to select appropriate domain knowledge and teaching strategies. Experienced teachers balance student, domain and strategic knowledge.

This simple model of teaching knowledge has been the basis for the design of intelligent tutoring systems since the late 1970s. Current applications of multimedia to education provide highly interactive and engaging environments, using techniques such as strong metaphor, simulation, and game playing to hold the interest of the learner. However, they generally remain as ignorant of the individual learner as the most simplistic tutorial software of the 1980s. Multimedia and web-based instruction are increasingly being used to augment, and in some cases replace, face-to-face instruction in the context of the flexible delivery of courses. Because they lack a student model, and hence any student knowledge in the system, these tools are defined as learning environments, not teaching environments. Yet they are increasingly being used to replace teaching.

Adaptive systems are hypermedia-based learning environments which are capable of altering some part of the instructional process on an individual basis through the use of individual student models. The paper introduces the notion of adaptivity in learning environments, and in particular examines the InterBook tool for authoring and delivering adaptive electronic textbooks. These textbooks offer adaptive navigation support through the annotation of links. The argument is that, in a climate of increasing use of technology to replace traditional forms of instruction, adaptive systems may be able to individualise the instructional process to some extent, to account for individual learner knowledge, preferences and cognitive abilities.
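The idea of adaptive navigation support through link annotation can be illustrated with a minimal sketch. This is an illustration only, not InterBook's actual implementation: the concept names, the prerequisite table, and the annotation labels below are all invented for the example. The core idea is that each link is classified for a particular learner by comparing the link's target concept against what the student model says that learner already knows.

```python
# A minimal, hypothetical sketch of adaptive link annotation.
# Concepts, prerequisites, and labels are invented for illustration;
# they do not reproduce InterBook's real data or annotation scheme.

PREREQUISITES = {
    "intro": [],
    "variables": ["intro"],
    "loops": ["variables"],
    "recursion": ["loops"],
}

def annotate(concept, known):
    """Classify a link for one learner, given the set of concepts they know."""
    if concept in known:
        return "learned"      # e.g. rendered with a checkmark
    if all(p in known for p in PREREQUISITES[concept]):
        return "ready"        # e.g. a green bullet: prerequisites met
    return "not-ready"        # e.g. a red bullet: prerequisites missing
```

A learner who knows only "intro" and "variables" would see the link to "loops" annotated as ready to learn, while the link to "recursion" would be flagged as not yet appropriate.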

Post-discussion summary

John Eklund opened the discussion by stating that "the original purpose of this forum was to bring together the two communities of AI and education for fruitful discussion. Clearly from the talk so far on this mailing list there is also a significant component of people who might be described as instructional technologists. (..then again they seem to be on most mailing lists these days). The paper for discussion briefly examines some non-technical issues that should be of interest to many subscribers, specifically: Why is a personalised dialogue with a learner important? How is a knowledge of individual students commonly built into web-based instruction? How can student knowledge be represented in an 'intelligent' system? What is an adaptive textbook? Do knowledge-based systems have a role to play in flexible delivery of subjects using technology?"

In response, Jennifer Hofmann made the point when talking about online learning that "... the instructors cannot see their students but must maintain a high level of engagement...", and this is one of the central points for this discussion, and one that connects the instructional technologist's agenda to that of the AI & related community. In a climate of globalisation and budgetary restraints (Arun-Kumar Tripathi), we are increasingly using computer mediated communication and instruction to augment (and sometimes replace) traditional forms of pupil-teacher interaction. Because we cannot 'know' these students as well, we attempt to 'engage' them with technology and enable an increased level of 'interaction' with each other instead.

Alfred Bork, in a keynote-like email, reminded us that there has really been nothing new in education in classrooms or between student and teacher since Socrates, but changes in the social contexts of equity, access, work and economics are facilitated *and* furthered by new technologies. His argument is that the "... importance of highly interactive learning cannot be exaggerated. The central problem of learning at any moment is to determine what the student knows and does not know, and to offer appropriate help based on that knowledge. This allows us to be responsive to the needs of the student. This knowledge can be gained only with careful interaction with the student." ... I like the idea that adaptivity might be a dimension of this often-used term 'interactivity'.

Dimiter Bogdanov provided an accurate summary of the discussion paper and suggested that we may not be able "to model the Web-based learning on the basis of a process only." I agree that what we can model and know of our students through machines is rather primitive compared with the complex student knowledge used by experienced teachers. Is it adequate? And how should it be accomplished, through 'successive approximations' (David Wiles)? In the case of InterBook, we infer knowledge about an individual purely through a history-based mechanism (where the user has been), although more recent versions employ user knowledge gathered through embedded tests. Time to hear from the user-modellers out there: Is it possible to provide meaningful levels of personalised interaction with such a paucity of information about learners? Is it meaningful in anything other than trivial domains? Can interactivity be enhanced in computer-based learning environments much more readily by non-modelling methods?
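The contrast between the two sources of evidence mentioned above, page-visit history versus embedded tests, can be sketched as a simple overlay student model. The class, its mastery scale, and the evidence weights below are all assumptions made for illustration; they are not InterBook's actual model.

```python
# A hedged sketch of a history-based overlay student model.
# A page visit gives only weak, indirect evidence that its concepts were
# encountered; an embedded test gives direct, stronger evidence.
# The 0.0-1.0 mastery scale and the 0.5 "visited" weight are invented.

class OverlayModel:
    def __init__(self):
        self.knowledge = {}  # concept -> estimated mastery in [0, 1]

    def visited(self, page_concepts):
        """Record a page visit: weak evidence for each concept on the page."""
        for c in page_concepts:
            self.knowledge[c] = max(self.knowledge.get(c, 0.0), 0.5)

    def tested(self, concept, passed):
        """Record an embedded test result: direct evidence, overriding history."""
        self.knowledge[concept] = 1.0 if passed else 0.0

model = OverlayModel()
model.visited(["variables", "loops"])   # history alone: mastery unknown, assume 0.5
model.tested("loops", passed=True)      # a test result sharpens the estimate
```

The paucity-of-information problem is visible even in this toy: after a visit alone, the model cannot distinguish a learner who read carefully from one who merely clicked through, which is exactly why later versions supplement history with tests.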

In talking about adaptivity as a functionality of a computer-based instructional system, Alfred Kobsa noted that adaptivity is not a goal in itself, but a means to the higher end of helping students learn better. There are good reasons (e.g. the constancy principle of HCI, and probably pedagogical reasons as well) to use this means sparingly, i.e. only when it is likely to be beneficial.

Dimiter Bogdanov stressed how important the issue of learner knowledge is: it is a milestone in education, and we are not able to separate it from the general issues of learning environment design. He writes that he "started the discussion having in mind the experience of generations in teaching and all the time comparing if our computer-based model is like the classical teacher-centered model (and which is working very well). This is one of the feedback loops that limit our vision. The second one is, of course, the present level of the technologies - what we could reach in education using the existing technologies. And here there are a lot of flaws and failures of technical/technological nature that absorb our efforts. The third limitation comes from the leading ICT companies which are not able (or are not willing) to elucidate the proprietary aspects of their technologies. Of course, they have strong interest to deploy their technological solutions over education area, but this is a rather commercial interest to cover a big market. Everybody knows, I think, the examples where a Hw/Sw donation from a company could reflect on the whole design of a learning environment."

Much of the ensuing discussion centred on the problems of authenticating learners undertaking assessments. Several comments were made, and Michael Scriven described a prophylactic model recommended for use at Claremont Graduate University:

1. We give the usual tests to the offsiters and onsiters and grade all the papers the same way at the same time. Offsiters are given "virtual credit" for the course, and a letter to that effect.

2. This can be converted into actual credit in only one way: coming to the campus and taking a two-hour oral for each four-credit course (full student load is 12 credits per semester); the oral focuses on their papers and is the 'authentication' process.

3. We charge, or will charge, much less for the offsite course; when the credits are converted to real ones, there is a fee that bridges some but not all of the gap (we can't charge the whole difference, because offsiters did not use the libraries, rooms or tutorials), which also pays the cost of the faculty who give the oral.

4. I'm considering allowing offsiters to take a supervised written exam instead of the oral, if they feel very nervous about the oral.

Alfred Bork wrote that several recent articles have raised the question of assessment in distance learning, worrying about the problem of cheating.

This problem has long been addressed by the UK Open University. For newly developed computer-based courses, there is a promising future approach: combining learning and assessment in an intimate blend, so that the student never has the impression of taking a 'test' or doing an assignment, and cheating does not arise. Clearly very little such material exists. But we know how to produce it, with no technology beyond what we already have.

Following this, several IFETS members responded with their views on assessment in online learning.