Learning Objects and Instruction Components
Moderator: Clark Quinn
Summariser: Samantha Hobbs
A new concept in Educational Technology is the 'learning object'. Learning objects, as defined by the IEEE's Learning Technology Standards Committee (http://ltsc.ieee.org), are "any entity, digital or non-digital, which can be used, re-used or referenced during technology supported learning."
In this paper I introduce the concept, review current work in the area, and discuss ways in which our research is leading us to push the standard in a particular way. I conclude with some questions that arise from this work.
Learning Object Model
The learning object (LO) model is characterised by the belief that we can create independent chunks of educational content that provide an educational experience for some pedagogical purpose. Drawing on the object-oriented programming (OOP) model, this approach asserts that these chunks are self-contained, though they may contain references to other objects; and they may be combined or sequenced to form longer educational interactions. These chunks of educational content may be of any type—interactive, passive—and they may be of any format or media type. A learning object is not necessarily a digital object; however, the remainder of this paper will focus on learning objects that are exclusively digital.
An associated requirement for learning objects is that of tagging, or metadata. For these objects to be used intelligently, they must be labelled as to what they contain, what they teach, and what requirements exist for using them; hence the need for a reliable and valid scheme for tagging learning objects.
The LO model provides a framework for the exchange of learning materials between systems. If LOs are represented in an independent way, conforming instructional systems can deliver and manage them. The learning object activities are a subset of broader efforts to create learning technology standards for such interoperable instructional systems.
The first major benefit provided by the LO model is the one imported from OOP: reuse. A learning object designed by one person is made available to other instructors, who can use it for different educational purposes. For example, a learning object that discusses how autos behave differently with and without anti-lock brakes might be used in several different educational domains: the physics of friction, automotive design, or insurance liability.
One of the benefits of the LO model is that it has the potential to reward the best educational content, by allowing objects to 'compete' in a market economy. In this scheme, there are costs to the consumer for the object, costs that are then delivered to the author as rewards. Rights to the objects are made clear, as is the financial responsibility. The objects can be customised, aggregated to produce courses, etc., as the Intellectual Property (IP) owner dictates. Then, as different authors produce different versions of the same content, the economy rewards those authors who produce the most effective objects. The Educational Object Economy (http://www.eoe.org) implements parts of this scheme, though it is limited to Java applets.
Another benefit is the ability to search for objects that meet a particular need. Instead of doing a web search on "+Railroad +US +Western +Expansion", for example, a teacher might specify a search for educational material aimed at fourth graders which described the western expansion of the railroads in the US, particularly material incorporating maps. This same capability could be used by learners to aid in their own educational processes. The richer the tag set, the higher the likelihood of being able to craft a query that generates a precisely targeted set of candidates.
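The kind of metadata-driven query described above can be sketched as a simple filter over tagged objects. The catalogue, tag names, and values below are all hypothetical, purely for illustration:

```python
# A minimal sketch of metadata-based search over a small catalogue of
# learning objects. Tag names and values are hypothetical, not drawn
# from any actual standard.

catalogue = [
    {"title": "Railroads and the American West",
     "audience_grade": 4,
     "topics": ["railroads", "western expansion", "US history"],
     "media": ["text", "maps"]},
    {"title": "Steam Engine Thermodynamics",
     "audience_grade": 11,
     "topics": ["railroads", "physics"],
     "media": ["text", "diagrams"]},
]

def search(catalogue, grade, topic, medium):
    """Return objects matching an audience grade, a topic, and a media type."""
    return [lo for lo in catalogue
            if lo["audience_grade"] == grade
            and topic in lo["topics"]
            and medium in lo["media"]]

results = search(catalogue, grade=4, topic="western expansion", medium="maps")
```

The richer the tag set, the more such fields a query can constrain, which is the point made above about precisely targeted candidate sets.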
In any system that uses learning objects, the objects are manipulated by the system independent of their content, at least until delivery to the learner. Consequently, the objects must be tagged to indicate many things about the content. Tags have a syntax that indicates the name of the field or domain of the tag, and the value attached to that label. For example, the field might be author, and the value for this article would be "Clark Quinn".
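The field/value syntax just described can be represented, as a sketch, in a Python dictionary; the field names here are illustrative, not any standard's serialisation:

```python
# A hypothetical field/value representation of this article's tags.
tags = {
    "author": "Clark Quinn",                               # field: author
    "title": "Learning Objects and Instruction Components",
    "format": "text/plain",   # a technical tag, independent of educational use
}
```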
Some tags are necessary independent of educational use. Such tags would include technical issues of format, size, and delivery requirements. Other categories cover authorship and ownership, and might include information about who did the tagging. Information might also track version number, status, and other issues associated with the lifecycle of the object. It might also indicate whether there have been annotations or aggregations.
While tags like this are certainly useful, one can imagine a number of additional tags that might be useful for educational purposes. For example, it would be desirable to tag learning material as to the content. For objects at the level of courses or books, we might consider using any established library scheme, such as the Library of Congress subject headings. If our objects are smaller, how do we address this? Any librarian can tell you (and you should talk to them, they've been trying to solve this problem for years) that there is no overarching ontology that accounts for all knowledge. So unless we aggregate individual objects into larger buckets and label the buckets, we haven't solved the problem of semantic content tagging. If we do aggregate, we limit the flexible reuse of objects.
There are other tags to consider, as well. One, particularly for smaller objects, is the instructional role of the object, as well as instructional characteristics. Is it informational, or does it require activity on the part of the learner? Other questions might include how focused it is, whether it has navigation requirements, or whether and what the form of feedback is.
Others have supported the learning object approach, notably Merrill (1998), but there is a lack of agreement on what needs to be indicated. While theoretically it might be valuable to err on the side of over-specification, pragmatically there are reasons to limit the amount of detail. The trade-off, of course, is that for greater effort, you get greater power. The question is: where to draw the line?
There are several activities in progress to develop a tagging scheme for LOs, including the Dublin Core, the Instructional Management System (IMS) project, and the Learning Technology Standards Committee (LTSC).
The Dublin Core initiative was an early effort to standardise on what the core tags for any information object should be, and has been remarkably successful, to the point that most standards efforts start with the Core. The Dublin Core community is now separately investigating the special case of educational objects (independently of the other ongoing work).
The Instructional Management Systems project of EduCause has made a tagging proposal that has achieved the level of a first specification (http://www.imsproject.org/metadata/index.html). Their work has passed on to the IEEE's LTSC, particularly working group 12, and is the basis for further work in this area. The LTSC have a draft that is close to voting standard (http://ltsc.ieee.org/doc/wg12/LOM-WD3.htm). Notably, the LTSC is having the work forwarded to ISO to work towards an internationally accepted standard.
The bottom line is that there is considerable work going into object metadata that the educational technology community needs to be aware of.
Currently, the LTSC proposal includes tag categories of: General, LifeCycle, MetaMetaData, Technical, Educational, Rights, Relation, Annotation, and Classification. Most of these are true of objects regardless of purpose, and would be true of knowledge objects as well as learning objects. It is only the fifth category, Educational, that really concerns us, though I will occasionally point to some other issues.
I will here note that the Classification category allows the introduction of other classifications for use in tagging. This allows people to propose and use new sets, and it is an explicit goal of the current tagging exercises to leave some difficult issues vague and allow actual use to drive further specification.
The educational category has several types of tags for objects. The first is interactivity type, covering the flow of information between resource and user, with restricted values of active, expositive (passive), or mixed. Then comes learning resource type, describing the specific kind of resource (which can be a prioritised list); it allows any terminology, but the recommended values are exercise, simulation, questionnaire, diagram, figure, graph, index, slide, table, narrative text, exam, or experiment. Next comes interactivity level, defining the degree of interactivity, which ranges from very low, through low, medium, and high, to very high. Semantic density has the same values, and is meant to capture a subjective measure of a resource's usefulness relative to its size or duration. There are categories for intended end users (teacher, author, learner, manager), context of use (an open vocabulary, with examples including primary education, secondary education, higher education, different university levels, technical schools, etc.), typical age range, difficulty (again, a range from very low to very high), and typical learning time. Also included are a space for a text description of the resource, and a language choice from the international standard codes.
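These restricted vocabularies lend themselves to simple machine validation. The following sketch encodes the values listed above and checks a record against them; the snake_case field names and the validator itself are my own illustration, not part of the LTSC draft:

```python
# Restricted vocabularies for (some of) the Educational category fields,
# as described in the text; the code structure is illustrative only.

EDUCATIONAL_VOCAB = {
    "interactivity_type": {"active", "expositive", "mixed"},
    "interactivity_level": {"very low", "low", "medium", "high", "very high"},
    "semantic_density": {"very low", "low", "medium", "high", "very high"},
    "difficulty": {"very low", "low", "medium", "high", "very high"},
    "intended_end_user": {"teacher", "author", "learner", "manager"},
}

def validate(record):
    """Return the fields whose values fall outside the restricted vocabulary."""
    return [field for field, value in record.items()
            if field in EDUCATIONAL_VOCAB and value not in EDUCATIONAL_VOCAB[field]]
```

A record tagging interactivity level as, say, "extreme" would be flagged, which is exactly the kind of automatic consistency check a controlled vocabulary makes possible.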
Not surprisingly, a number of issues arise. These issues naturally divide into issues about the characteristics of the objects and characteristics of the tagging of the objects. Under object issues is the issue of level of granularity. Under tagging issues is the problem of vocabulary.
Currently, people tend to develop instruction where a complete course is the smallest independent level of learning object. Certainly, that's the easy way. Can we find value in pursuing a finer level of granularity?
Several arguments can be made for a finer level of granularity. First, with smaller granularity, there's greater potential for reuse of objects. If the anti-lock brakes example discussed above had incorporated several problems specific to the insurance domain, for example, its reusability in the engineering domain would be limited. By keeping objects smaller, they are more likely to be able to be reused in different contexts.
Second, there's the opportunity to allow flexibility on the part of the learner, or even to support intelligent processing. If the objects are small enough, and instructional experiences are composed of these objects, then different learners can have different instructional experiences.
While developing an online course, I was trying to move beyond traditional instructional design to consider principles that might support people's choices in sequencing. Perusing different instructional design theories, I was struck that 'problem-based learning' (e.g. Barrows, 1986) provides problems first, before conceptual material, while Laurillard (1993) suggests conceptual material first. It seemed clear that one way I could support learners in determining their preferred learning path was to break material up along the lines of its role in the instructional process, and allow learners flexibility (while preserving a lifeline of a default path that followed a safe and standard approach). That led me to propose that instruction is composed of the following components: Introduction, Concept, Example, Practice, and Reflection.
Introduction is material that motivates, activates relevant knowledge, and lists objectives. Concept is a presentation of the relevant abstraction. Examples are applications of the concept to problems. Practice is opportunity for the learner to practice the skill, including feedback. Reflection (as I use it here) is material that cements the learning and prepares the learner to transition beyond the learning experience. This includes reviewing concepts, pointing to further directions for exploration, suggesting ways to practice and keep the knowledge active, and a graceful segue from the learning experience. The smaller granularity provided greater opportunities for learner control.
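Under this component model, a default 'lifeline' path can coexist with learner-chosen orderings. A minimal sketch, where the component names are the paper's and the sequencing logic is illustrative:

```python
# The five instruction components, in the safe, standard default order.
COMPONENTS = ["Introduction", "Concept", "Example", "Practice", "Reflection"]

def sequence(preference=None):
    """Return a component ordering: the default path, or a learner-chosen
    one that still covers every component exactly once."""
    if preference is None:
        return list(COMPONENTS)  # the default 'lifeline' path
    assert sorted(preference) == sorted(COMPONENTS), "must cover all components"
    return list(preference)

# A problem-based learner might ask for practice before concepts:
pbl_path = sequence(["Introduction", "Practice", "Example", "Concept", "Reflection"])
```

The same components thus serve both Laurillard-style concept-first sequences and Barrows-style problem-first ones.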
Granularity is independent of object use, and the tagging standards have granularity (called Aggregation Level), under the General category. They talk about atomic units (raw media data or fragments), collections of atoms (molecules?), collections of collections, and full courses. Here, I am suggesting that granularity at the collection level is the one in which instructionally different individual choices would be made.
With many tags proposed for learning objects, one stumbling block is whether to determine a fixed and controlled vocabulary for each tag, or to allow authors to extend labelling to meet their own needs (called "open vocabulary with best practice"). Although this is not an easy goal, I argue for a robust fixed vocabulary instead of the alternative, a lack of interoperability. We need categories designed so that authors or 'taggers' (a new job category that's part editor, part administrator) can easily discriminate how a potential object should be labelled, and so that the objects are labelled consistently.
As an example that illustrates the issues related to vocabulary, consider the description of 'interactivity level'. We might have objects that are interactive, and we'd like to categorise this. However, I see several problems with using the interactivity level tag as it is now defined. First, it is difficult to imagine anyone using the 'low' category without guidance. If someone creates an interactive object, they are hardly likely to consider it only minimally interactive.
Second, it is not clear what distinguishes a ‘high’ interactivity object from a ‘medium’ one. Interactivity can come from several sources, whether navigation, or type of response, or quality and speed of feedback; and any of these sources can vary independently, and be more or less important than the others.
Ideally, we would have conceptual distinctions in a fixed vocabulary, but the definition of interactivity is currently unsolved. In the next best case, we would have categorical, demonstrated examples; and, here, I would argue, you can get traction (like pornography, you know interactivity when you see it). I'll argue that we can create rough examples for such categories. For interactivity level, this might be: no interactivity, page turning/linear progression, multi-dimensional navigation like web pages or multiple choice questions, or rich interaction such as SimCity or Doom/Quake with rich (or seemingly limitless) choice interaction possibilities and rapid feedback. While I am not committed to this particular set of distinctions, I believe this is an achievable and desirable intermediate stage on the path to a fixed vocabulary.
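Such an example-anchored vocabulary might be encoded as follows; the level names are my own labels for the paper's exemplars, and the whole mapping is a sketch rather than a proposal:

```python
# Example-anchored interactivity levels: each level pairs a label with a
# concrete exemplar, so taggers discriminate by comparison, not theory.
INTERACTIVITY_LEVELS = {
    0: ("none", "static page with no interaction"),
    1: ("linear", "page turning / linear progression"),
    2: ("navigational", "web-style navigation or multiple-choice questions"),
    3: ("rich", "SimCity- or Quake-like open-ended interaction, rapid feedback"),
}

def describe(level):
    """Render a level as a human-readable label with its anchoring exemplar."""
    name, exemplar = INTERACTIVITY_LEVELS[level]
    return f"{name}: {exemplar}"
```

The exemplars, not the labels, carry the meaning: a tagger asks "is this more like page turning or more like SimCity?" rather than "is this 'medium' or 'high'?".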
It's not easy to determine categories, nor to attempt to apply them to the myriad types of potential objects, but the guidelines for accomplishing the task can be by example as well as by theoretical principle. In places where the theory is still controversial, we'll need to do it by example.
I recognise that what I propose is not an easy task, but if we do not control the vocabulary, we ensure that systems cannot operate on the data. One important future use of learning object tagging is for intelligent systems, which will only be possible if the tagging is through a predictable vocabulary.
Just briefly, let me extend my interactivity level examples to two other categories, semantic density and difficulty level, to indicate that this is a generalisable approach. For semantic density, we could indicate something to the effect of: concept material implicit but not explicit, or buried in additional detail, as in a story; narrative and illustrated content; and direct representations such as expositive text, charts, tables, or graphs. For difficulty, we could consider: introductory material; initial application or overview material; scaffolded practice or detailed example; and full application or expert-only material.
The sum total of what I'm proposing is a fixed vocabulary for a finer granularity and the discriminating feature (in addition to technical and IP properties) being the instructional role of the object. I'd like to stop here and suggest some questions for discussion.
What about a new instructional design? This suggests a different approach to instructional design, where the components of the instructional process are designed separately and designed to stand alone. Is that a good direction, and why or why not?
What about granularity? This level of granularity provides greater individualization of learning, but at an overhead for authoring. Is it worth it, and why or why not?
What about vocabulary? The powers of a controlled vocabulary are greater automatic processing. The costs are significant debate and perhaps premature limitations. Is the goal obtainable, and why or why not? Is it worthwhile, and why or why not?
What questions haven't we asked? What tradeoffs have I missed, and what are their pros and cons?
While I wrote the first draft, important revisions have been made by Brendon Towle, Cindy Mazow, Edwin Bos, and Dan Christinaz. They substantially improved it; all remaining errors, of course, are mine. It is hoped that they will participate in the discussion as well.
This paper was designed to spread awareness of the growing movement towards LOs, and to make some suggestions related to the existing proposals. Three issues were deliberately raised in the paper: the adequacy of the tagging schemes in certain areas, the granularity appropriate for system sequencing, and the potential for tagging on instructional role. These issues were covered to some extent by the discussion, and a number of other issues surfaced.
It seemed from the discussion that there was a general level of agreement that we were discussing Learning Objects (LOs) to facilitate both learning from, and the preparation of, online learning. It also seemed to be agreed that these LOs would be combined or sequenced in some way before they are presented to learners. There were some who did not agree that this was the way forward, though the focus of the debate was agreed. But as soon as this was 'unpacked' to a level at which it could be implemented, views fragmented along several lines. What follows are some of the 'cracks'.
What defines an LO?
Agreement on the definition of LOs seems to be muddied in several dimensions: the content/tool relationship, LOs versus not-LOs, size, and the presence or absence of metadata tags appear to be the main ones. Ip expresses it well when he says "I am still struggling with an operational definition of an LO".
Kahn and Lowney draw comparisons between software components and LOs, with Kahn suggesting that LOs may experience the same lack of success as independent software components such as word processing packages and spell checkers. Quinn extends this analogy but puts forward the view that a library of GUI widgets is nearer the size he would envisage. He later goes on to suggest a list of pros and cons of a "finer level of granularity than a complete instructional sequence".
In contrast, Rowley and Gilbert appear to see an LO as a whole course, or at least a large, complex and coherent part of a course, and therefore reject the applicability of re-use to 'commercial-grade' courses, since the expected standards of harmonisation and customisation of courses would not be feasible. Gilbert goes on to suggest that since anything designed for educational use is designed with specific learner groups and learning aims in mind, successful use for learning outside that scenario will be purely accidental. Thompson and Knox both ask whether a complete 'session' such as a class time or tutorial should be considered as an LO, whereas Quinn would have a basic LO at a much smaller level. This view is not shared by others, who feel that for success, courses need to be decomposed into smaller objects that would be the LOs. These would then be combined, perhaps in hierarchies (Thompson and Downes), where at the higher level the student integrates lower-level LOs.
Shafer suggests that neither 'type' nor 'size' of LO need be determined in advance if the right structure is used. Quinn expands a statement of Shafer's to suggest two ways of creating LOs: firstly at a 'natural level', and secondly through guidelines and context.
Other suggestions as to what might be considered LOs include 'non-educational' or knowledge objects (Ip), and bulletin boards and collaborative activities (suggested by Quinn, rejected by Dalgarno). Ip, taking input from Lian and Schuyler, suggests that an LO must have at least four subcomponents: content, functions, learning objectives, and 'look and feel'. This is rejected by Downes, who draws a distinction between 'components', which are self-contained entities (such as LOs), and 'variables', which are values and properties, such as colour or font size, that cannot exist on their own.
Parson uses a 'Web as Museum' metaphor to show that different types of LO are not a problem for either learners or educators. Ip, amongst others, suggests that tools should be permissible as LOs and is supported by Cooper, who sees no problem with either including or excluding a particular tool as part of an LO, and in fact would welcome specific tools for some purposes. He goes on to say 'in fact I would go further and exclude nothing'.
There is some disagreement as to whether LOs can be tools or must be contentful, and Dalgarno only accepts the relevance of LOs to subjects where there is 'individual learning of concepts', such as the sciences. A long discussion evolved between Quinn and Ip concerning the relevance, admissibility and definition of LOs that have not been specifically designed for educational purposes. Definitions were exchanged of NEFs, Knowledge Objects, Knowledge Assets and Learning Assets, to name a few. Ip argued for the integration of non-educationally-specific objects, and Quinn against, saying "I can't see a system capable of stringing together knowledge objects … into a learning experience".
Existence of Metadata
The relationship between tagging and LOs is a confusing one: some suggest that tagging only identifies the object for the system or user; others that without tagging an object cannot be an LO; and others still that it is the 'kind' of information in the tags/metadata which makes the difference. Cooper, for example, has a very inclusive approach to LO types, on the condition that information concerning the 'scope' and 'level of instructional support' is tagged.
There seems to be an onion-like view of how something becomes an LO. For Quinn, a Knowledge Object becomes an LO when it is given 'instructional wrapping'. It then needs a layer of metadata tags to become complete, a stage with which others agree.
Labelling an LO: Metadata
Both Weston and Cooper mention the need for metadata standards to be extensible, to adapt to future needs. Thompson raises the issue of a standardised metadata language not matching users' usual vocabulary, and suggests that experts in the area of LOs and their metadata will be needed, with which Quinn agrees. There seems to be a tension between rigidly standardised metadata (best for machines) and less standardised, more flexible tagging (usable by humans), although Hobbs does not see these as incompatible. Quinn refers to maintaining the balance between these aspects as 'a fine tension', but also suggested that the metadata is for systems even in the non-system-sequenced cases, and that a machine should provide an interface for other tasks.
Shafer suggests that by concentrating on 'what it does' rather than 'how it can be used', the generation of tags for LOs will be made more manageable. Brusilovsky, in his machine/human sequencer discussion, claims that if a computer is to sequence, tight, rich, defined vocabularies will be needed (Quinn's view), but if a human agent is involved it would not be necessary to work to such a level of rigour.
Other issues raised include the difficulty of defining any restricted vocabulary such as levels of difficulty and levels of interactivity of LOs.
Combining LOs into a 'course'
There are issues about the benefits of LOs, which seem to revolve around the role they play.
Pincas raised the issue of starting point, suggesting that many developers begin with the LOs and only look at the instructional design later, herself preferring to work the other way round. Ip suggests a cyclical approach oscillating between the materials and the pedagogy in response to these concerns.
Martinez sees the migration towards a new instructional design appropriate to the use of LOs as an opportunity at which we might integrate and capitalise on the growing awareness of the influence of affect and emotion on learning.
Others also suggest that rich metadata and small LOs should permit adaptability of content to the user, although more research into the user characteristics to which it should adapt will be necessary. Quinn suggests two approaches to combining LOs: 'adaptation', in which the system leads, and 'adaptability', where students set their own parameters. Where the system leads, either the LOs are 'simple but richly tagged' or the objects are very smart and the flow of information very complex, making the former more likely to be realised.
The issue of the need to design LOs specifically for re-use was also raised (Kahn, Lowney and Schuyler).
Brusilovsky states "…the answers to a number of questions posted into this forum depend on who is doing the sequencing". He then offers two computer-based possibilities and one human-based. Both computer-based approaches rely on a student model of some kind. One generates the 'string' of LOs 'on the fly' (presumably in response to real-time student actions), and the other does so in 'one shot' before the student starts. Moving on to the human educator as the sequencing 'agent', he suggests that here we can be much more flexible in our definition of LOs and their tags. Quinn speaks for the computer-based sequencing agent and suggests that the sequencer could complement the use of the standard 'knowledge taxonomies' with learner characteristics and other taxonomies as they develop. He suggests that these could be further enhanced through the development of templates, heuristics and definitions of best practice.
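The contrast between the two computer-based modes can be sketched as follows, assuming a toy student model with a single 'level' and a difficulty-matching selection rule; both are hypothetical placeholders, not anything proposed in the discussion:

```python
# "One shot": plan the whole string of LOs before the student starts.
def one_shot(objects, student_model):
    """Order all objects once, best-matched to the student's current level."""
    return sorted(objects, key=lambda lo: abs(lo["difficulty"] - student_model["level"]))

# "On the fly": pick each next object from the current (updating) model.
def on_the_fly(objects, student_model):
    """Yield one object at a time, re-consulting the student model each step."""
    remaining = list(objects)
    while remaining:
        nxt = min(remaining, key=lambda lo: abs(lo["difficulty"] - student_model["level"]))
        remaining.remove(nxt)
        yield nxt
        student_model["level"] += 1  # stand-in for a real model update

objects = [{"id": "A", "difficulty": 3},
           {"id": "B", "difficulty": 1},
           {"id": "C", "difficulty": 2}]
plan = one_shot(objects, {"level": 1})
```

The structural difference is that `one_shot` commits to an ordering up front, while `on_the_fly` can change course after every object as the model updates.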
Weston suggests that the 'AI approach' (depending on the standard of AI) would suit independent learners, whilst the 'authored' approach (presumably human) is better suited to an institutional context. Plot also voices concern, suggesting that when intelligent systems assemble courses, 'in this context coherence is a big problem'. Whilst Quinn acknowledges concerns about the AI contribution to current systems, he believes that "a system can create an individualised learning experience on-the-fly from tagged learning objects". This dichotomy of approach, whether human- or computer-based 'course construction', seems to affect all other aspects of LOs: their definition, use and tagging.
Concerns were raised about the resulting coherence of courses produced by juxtaposing LOs (Lowe, Lian, Rowley and Gilbert). This resulted in some interesting discussion, with many contributors, as to what constitutes 'cohesion', who or what should be responsible for providing it, and how important it is. Cohesion is seen as two different things: the similarity between LOs in terms of fonts, graphics, etc. (type 1), and the information and guidance which 'glues' the LOs together to form a 'course' (type 2).
Lian sees the development of coherence between LOs in a 'course' as a 'core issue' and raises the question of what creates coherence as a research issue. Others propose that the educator is responsible for providing the necessary cohesion (type 2) between LOs when they combine them to form a 'course' (Plot and Parson). Thompson gives an example of such a course evolving. Quinn, who believes that the system itself can produce the cohesion, suggests that the coherence may be composed of (amongst other things) the look and feel of LOs, their thematic relationship, and their relevance to a particular task.
Others see the degree of cohesion between LOs as being of less urgency (type 1). Parson equates a student integrating different kinds of LOs with a similar experience with different object types in a museum. Schuyler suggests that learners already integrate very different resources on the internet and Lowney points out that although we currently may have coherence throughout a course, this rarely extends between courses, the content of which students are still expected to integrate.
Some lovely images have been shared relating to how LOs come into being, either as individual objects or combined into larger units of study. Weston picks up on the concept of 'wrapping' objects (implying an action affecting areas external to the LO) to give a group or sequence of objects coherence. Lowney, on the other hand, suggests that LOs might be 'massaged' to suit an environment of use (implying an internal effect on the LO).
D'Aquin proposes distinguishing between 'education' and 'training', suggesting that LOs are not appropriate for training. Quinn prefers to see these as different ends of a continuum, with 'most' forms of education leading to 'performance on test'. He believes that the new instructional design which can be developed using LOs will help educators address the needs of different types of learners through the same objects, saying "I think what we want to achieve is a mechanism where content can meet widely varying performance objectives, in flexible ways."
The complicated and thorny issue of the relevance of current IP law to complex LOs and how this might affect their use was raised.
This discussion benefited from the knowledgeable contributions of a variety of participants. While there was large interest in, and acceptance of, the view that Learning Objects have benefits, there was considerable discussion about the important characteristics of learning objects, and the role they play in the pedagogical experience. There is also a clear need for rich metadata standards. It is hoped that this discussion helps spread awareness of the issues involved in moving to an LO model.