Educational Technology & Society 5 (1) 2002
ISSN 1436-4522

Automated Tutorial and Assignment Assessment

Roger F. Browne
Senior Lecturer, Institute of Information Sciences and Technology
Massey University, Palmerston North, New Zealand
Tel: +64 6 350 5799 ext.2456
Fax: +64 6 350 5604
r.f.browne@massey.ac.nz

 

ABSTRACT

Computer simulation is used extensively both as an educational tool and within industry.  It can be employed as a means of developing a new process or system or as a means of experimenting with an existing system.  The simulation exercises described in this paper aim to provide students with the ability to use and create computer models.  The input parameters for the simulations are based on student identification numbers and the assessment is automated.  Ideally students would receive immediate feedback, but the facility has only been partially implemented.  Automated “reverse-engineering” of incorrect answers can suggest the source of the error in many cases, and this is in the process of being put into operation.

Keywords: Automated assessment, Modelling, Simulation


Introduction

Three developments are significant for the context described in this paper. Firstly, intelligent tutoring systems (ITS), which originated from the computer assisted learning systems of the 1970s, have the ability to respond to the needs of individual students. Secondly, powerful computer-based modelling packages such as Matlab are available that provide immediate feedback when applied to relatively simple problems (that is, at a level appropriate to undergraduate engineering students). Thirdly, the Internet provides an attractive means of managing tutorials and assessments, especially for large classes.

Real-world systems in engineering and technology are typically complicated, so direct experiments on entire systems may not be practical. Examples include:

  • The reaction of a large engineering structure such as a bridge to loads in excess of its design limit.
  • The effect of changes of process temperatures on the quality of milk powder from a dairy plant.
  • The effect of the addition of heavy metals on an ecological system.

Clearly these experiments cannot be performed on real-world systems, owing to safety, economic, and environmental concerns.  In some cases the use of physical models may be appropriate, although computer models now dominate in the engineering industry.

Within the context of a university engineering course, computer modelling has two aims:

  • To instruct students in the way in which systems operate, and allow students to perform simulated experiments on those systems.
  • To provide students with the skills needed by those industries that use computer modelling as a tool.

Thus, mathematical modelling is an important tool, both within universities and in industry. An introduction to mathematical modelling is provided in “Technological Mathematics” (Massey University Calendar, 2001), a core paper for all options of the Bachelor of Engineering (BE) and Bachelor of Technology (BTech) degrees at Massey University. This paper is available to internal students only, and in recent years the class roll has varied between 150 and 280. Within the context of the Technological Mathematics course the objectives are to provide the students with the ability to formulate mathematical models, code the resultant equations in Matlab, and interpret the outputs of the simulations. The use of simulations purely as a teaching tool is not considered in this paper, as it is the modelling and assessment process that is relevant to this discussion. A Matlab-based assignment, which is used both as a tutorial and as part of the course assessment, is reviewed below, followed by some proposed improvements to this process.

 

Literature Review

Computer simulations are important in both education and industry.  According to Alessi & Trollip (1991), “In a simulation the student learns by actually performing the activities to be learned in a context that is similar to the real world” (p. 119).  The authors observed that, compared to conventional tutorials, simulations provide better student motivation, offer a better transfer of learning, and are more efficient as a learning experience.

The choice of simulation language is important. Various special-purpose languages have been developed for learning through simulation.  For example, SMISLE, a System for Multimedia Integrated Simulation Learning Environments, has been developed (see de Jong et al., 1994).  However, Matlab has the advantage of being industrial-strength software that provides a rich library of built-in functions.

Peña & Alessi (1999) investigated the effects of three different presentation formats, Microcomputer-based Laboratory (MBL), simulation, and computer-based text, on individuals’ ability to understand concepts in physics. They concluded that the MBL and the simulation presentation formats were of equal effectiveness, and both were more effective than computer-based text in the context of the desired learning actions.

 

Basis of the mathematical modelling assignment

The objective of the mathematical modelling section of the technological mathematics course is that after completing the course the students will be able to:

  1. Develop a mathematical model of a real-world application, based on either an analytical description of the application or an empirical approach using data obtained from measurements on the actual system.
  2. Express the equations describing the model in a form suitable for solution in Matlab.
  3. Use the model to derive the values of system parameters subject to specified initial conditions.
  4. Make judgements concerning the accuracy of the simulation.

The original modelling exercises had been in existence for over ten years.  They were not re-evaluated given this established usage.  The motivation for providing the automated process described in this paper was:

  1. To decrease the turn-around time (to the advantage of both assessor and students).
  2. To provide students with feedback that was both useful and timely.

 

Current implementation

In its present form the assignment involves the input of numerical parameters from which, through computer-based experimentation, appropriate solutions will be derived. When the assignment was first introduced, plagiarism was a significant problem. Subsequently a scheme was implemented in which a portion of each student's ID (identification number) was used to generate a selection of the input parameters. However, various constraints must be imposed on the values of the parameters, and simple procedures were introduced to allow students to derive the values of their parameters.

The first assignment question involves simulating the flight of a golf ball (which is subject to the forces of gravity and air friction) such that it travels 100 metres before striking the ground (achieving a hole in one on a 100 metre fairway). An infinite number of combinations of initial angle (relative to the horizontal) and initial velocity will achieve this result, but constraining the angle to be a function of the ID yields a velocity that is unique to that ID. The technique initially adopted for determining the value of the initial angle was as follows. Let the last two digits of the ID be ‘pq’. Then the initial angle of the ball is ‘4p.q’ degrees. For instance, an ID of 12345678 will result in an initial angle of 47.8 degrees. The task for the students is then to estimate the appropriate velocity, perform a computer simulation, and use the result to refine their estimate. Usually an answer within the required bounds is obtained within four or five iterations. A typical simulation result is shown in Figure 1.

 

Figure 1. A typical golf ball simulation
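The guess-and-refine loop described above can be sketched as follows. The course itself uses Matlab; this Python sketch is illustrative only, and the gravity and air-friction constants are assumed values (the paper does not give the model's coefficients), so the velocities it produces will not match those quoted in the text.

```python
import math

G = 9.81        # gravity, m/s^2
K = 0.005       # assumed air-friction coefficient per unit mass, 1/m
TARGET = 100.0  # required carry, metres

def carry(v0, angle_deg, dt=0.001):
    """Simulate the flight with quadratic air drag (Euler steps);
    return the horizontal distance travelled before landing."""
    a = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        vx -= K * v * vx * dt
        vy -= (G + K * v * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

def find_velocity(angle_deg, lo=10.0, hi=200.0, tol=0.05):
    """Bisect on launch speed until the ball lands within tol of TARGET.
    Carry increases with speed for this drag model, so bisection converges."""
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2.0
        if carry(mid, angle_deg) < TARGET:
            lo = mid
        else:
            hi = mid
        if abs(carry((lo + hi) / 2.0, angle_deg) - TARGET) < tol:
            break
    return (lo + hi) / 2.0
```

For an ID ending in ‘78’ one would call `find_velocity(47.8)`. Here bisection replaces the students' manual trial-and-error, but the simulate–compare–refine structure is the same.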

 

This has worked well in most cases. However certain input parameters provide fortuitously simple solutions.  For instance, an ID ending in ‘80’ will give an initial angle of 48.0 degrees, and the corresponding velocity is 70 (within 0.1 metres/second). If the student uses initial guesses of velocity in multiples of ten then the correct value will be arrived at with very few iterations, and possibly on the first guess. In contrast, an ID ending in ‘76’ results in a velocity of 69.5, which is unlikely to be arrived at by guessing.  So a more robust parameter-derivation scheme has been developed.

Using a Java applet accessed through an HTML page, the students input their ID numbers and have a suitable parameter set returned to them, based on the full eight digits of their ID.  A portion of this process is shown in Figure 2. During the automated tutorial marking, correct answers can then be uniquely associated with a given ID. The process of establishing the table of acceptable parameters is fully automated; the system uses the ID as a pointer into the parameter table.

 

Figure 2. Parameters obtained from a Java applet
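The table-lookup scheme can be sketched as follows. This is a hypothetical reconstruction — the real applet's table contents and exclusion rules are not published — but it illustrates how ‘fortuitously simple’ parameter sets can be screened out in advance, and how the eight-digit ID acts as a pointer into the table.

```python
def build_parameter_table():
    """Tabulate acceptable launch angles (30.1 to 59.9 degrees),
    skipping whole-degree values, whose matching velocities tend to
    be round numbers that students could hit by lucky guessing."""
    return [a / 10.0 for a in range(300, 600) if a % 10 != 0]

def parameters_for(student_id, table):
    """Use the full eight-digit ID as a pointer into the parameter table,
    so each ID maps deterministically to one pre-screened angle."""
    return table[int(student_id) % len(table)]
```

Because the lookup is deterministic, re-entering the same ID always returns the same parameter set, so a student's answers remain uniquely associated with their ID.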

 

The web-based assignment submission has been fully implemented. Currently the marking process is semi-automated, but it is hoped that a fully automated marking system will be in place soon. The only marking operation that would not be automated is the checking of graphs, but it has been found that virtually 100% of the computer-generated graphs in the submitted assignments are correct if the simulation results are correct.  The production of a graph provides the student with visual confirmation that they are on the right track.

During this first year of electronic submission a number of problems have arisen:

  • Students are not confident that the electronic submission has worked correctly, and this has frequently led to multiple submissions.
  • Sometimes the submission itself has failed, and the sender’s Internet service provider has provided confusing messages.
  • Answers have been entered in a variety of formats. The JavaScript used in the submission page checks for the presence of a response but not its format. For instance, the second assignment question involves entering the value of an electrical capacitance (in Farads). The response should be in engineering format, for instance “550E-9”, but actual responses included “550 * 10^-9” and “550E‑9 Farads”. Thus, fully robust automation will require carefully coded parsing of the input.
  • A minor error may lead to a result that is drastically wrong. For instance, the calculation of frequency in Hertz (cycles per second) involves calculating the wave period (the distance between zero crossings) in seconds, the frequency being the inverse of the period. If the correct response in a particular case was 1000 but this was submitted as 0.001, then the calculations are probably correct apart from the final step of inverting the period to obtain frequency. Similarly the capacitance might be reported as “550.0” instead of “550.0E-9”.
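The format problem noted above can be addressed with a small normalising parser. This sketch accepts the three response formats quoted in the text; the function name and regular expressions are ours, not part of the existing submission software.

```python
import re

def parse_answer(text):
    """Normalise a student's numeric answer to a float, accepting plain
    engineering notation ('550E-9'), '* 10^n' notation ('550 * 10^-9'),
    and a trailing unit name ('550E-9 Farads')."""
    s = text.strip()
    # Drop a trailing unit word such as 'Farads' or 'Hz'.
    s = re.sub(r'\s*[A-Za-z]+\s*$', '', s)
    # Rewrite '550 * 10^-9' as '550E-9'.
    s = re.sub(r'\s*\*\s*10\s*\^\s*(-?\d+)', r'E\1', s)
    return float(s)
```

A real deployment would also need to reject empty or non-numeric input with a helpful message rather than raising an exception, but the normalisation step is the essential part.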

 

Improvements

In addition to the observations in the previous section, various improvements are under investigation:

  1. Certain errors (such as a wrong sign in an equation) will usually lead to seriously wrong answers. If the student has derived their answer on the basis of a single error, but has proceeded to perform the simulation correctly, the incorrect answer can often be ‘reverse-engineered’ to identify the source of the error. In this procedure the set of probable errors is first tabulated (five such errors, covering an estimated 80% of all submissions that have large errors, have been identified in the case of the golf ball simulation). Then any wrong answer is compared to this set to derive the possible source of the mistake. Details of this procedure are outlined in the next section. The procedure has been partially implemented.
  2. There is an opportunity to extract any global patterns or trends in incorrect responses. The reporting of the oscillation period instead of the frequency as described in the previous section would be a good example of this.
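Point 1 above can be sketched as a lookup against precomputed wrong answers. The error descriptions and numeric values below are invented for illustration; in practice the table would be generated per student from their ID-derived parameters by re-running the simulation with each probable error deliberately introduced.

```python
# Hypothetical table for one student's parameter set:
# description of the error -> the answer that error produces.
KNOWN_ERRORS = {
    "sign of gravity term reversed": 23.71,
    "drag term omitted": 31.42,
    "angle used in degrees, not radians": 57.30,
}

def diagnose(submitted, correct, rel_tol=1e-3):
    """Return 'correct', the probable cause of a wrong answer,
    or None if the answer matches no tabulated error."""
    if abs(submitted - correct) <= rel_tol * abs(correct):
        return "correct"
    for cause, value in KNOWN_ERRORS.items():
        if abs(submitted - value) <= rel_tol * abs(value):
            return cause
    return None
```

Answers matching no entry would fall back to manual marking; the tolerance must be chosen against the ambiguity probabilities discussed in the next section.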

 

Error identification

In general, error identification in faulty answers is likely to provide useful information only if the number of probable (or common) faults is constrained within some bound that can, in principle, be estimated. An incorrect answer arising from an erroneous calculation is ambiguous if it is indistinguishable, within some specified accuracy, from the answer arising from a different error in the same calculation. For simulation problems having a random distribution of input parameters, and in which the simulation answers are reported to a specified number of digits, the probability that two different errors would result in the same answer is as shown in Table 1.

 

Number of significant figures:      2         3         4         5         6
Maximum error:                      1%        0.1%      0.01%     0.001%    0.0001%

Total number of possible errors:
      9                             0.1863    0.0313    0.0045    0.0006    0.0001
     16                             0.3202    0.0621    0.0093    0.0013    0.0002
     25                             0.4624    0.1071    0.0167    0.0023    0.0004
    100                             0.8172    0.4398    0.1011    0.0138    0.0018

Table 1. The probability of coincident errors

 

This table has been derived on the basis that all errors are equiprobable. Although this is only approximately true in the case of the modelling problem described in this paper the values in the table do indicate the levels of confidence with which the causes of erroneous answers can be correctly identified. For instance, if simulation results are considered as being accurate to four significant figures then the probability of two erroneous answers being the same is relatively small (approximately 1 in 60 or 0.0167) if there are 25 or fewer possible sources of error but is much larger (approximately 1 in 10 or 0.1011) in the presence of 100 possible sources of error.
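The full derivation behind Table 1 is not reproduced here, but the qualitative behaviour can be checked with a simple Monte Carlo sketch under the simplifying assumption that each error yields an answer uniformly distributed over one decade. This equiprobable-bin model does not reproduce the table's exact figures, but it shows the same trends: the coincidence probability grows with the number of possible errors and falls rapidly with reporting precision.

```python
import random

def coincidence_probability(n_errors, sig_figs, trials=20000, seed=1):
    """Estimate the probability that at least two of n_errors answers,
    drawn uniformly over one decade and rounded to sig_figs significant
    figures, coincide (a birthday-problem style simulation)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        answers = set()
        clash = False
        for _ in range(n_errors):
            x = rng.uniform(1.0, 10.0)
            r = round(x, sig_figs - 1)  # values in [1, 10): k sig figs
            if r in answers:
                clash = True
                break
            answers.add(r)
        hits += clash
    return hits / trials
```

As in Table 1, increasing the reported precision by one significant figure reduces the coincidence probability by roughly an order of magnitude, while increasing the number of possible errors drives it towards certainty.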

Another consideration is the amount of work involved in identifying the possible errors (this must be coded into the assignment submission software). In highly regular mathematical procedures (such as matrix manipulation) this would be straightforward, but in the assignment considered here the commonly-occurring errors were only identified through the manual marking of assignments. The commonest errors were incorrect signs.  Finally, error identification would be very difficult to implement in the presence of multiple errors.

 

Conclusions

Web-based assignment submission, combined with automated marking, has the ability to provide a quick turn-around. In addition, it also provides feedback to students (especially when their assignment contains errors) and to tutors (allowing trends and common sources of error to be readily identified). In at least 75% of cases, incorrect answers can be ‘reverse-engineered’ to determine the source of the error. However the implementation of a fully automated system faces many challenges.

 

References

  • Alessi, S. M., & Trollip, S. R. (1991). Computer-based instruction:  methods and development, Englewood Cliffs, NJ: Prentice-Hall.
  • de Jong, T., & van Joolingen, W. (1994). SMISLE: System for multimedia integrated simulation learning environments. In de Jong, T., & Sarti, L. (Eds.), Design and production of multimedia and simulation-based learning material (pp. 133-165).  Dordrecht: Kluwer.
  • Massey University Calendar (2001). Massey University, Palmerston North, New Zealand.
  • Peña C. M., & Alessi S. M. (1999). Promoting a Qualitative Understanding of Physics. Journal of Computers in Mathematics and Science Teaching, 18 (4), 439‑448.
