Educational Technology & Society 3(4) 2000
ISSN 1436-4522

A multi-institutional evaluation of Intelligent Tutoring Tools in Numeric Disciplines

Kinshuk
Information Systems Department
Massey University, Private Bag 11-222
Palmerston North, New Zealand
Tel: +64 6 350 5799 Ext 2090
Fax: +64 6 350 5725
kinshuk@massey.ac.nz

Ashok Patel
CAL Research & Software Engineering Centre
Bosworth House, De Montfort University
Leicester  LE1 9BH   ENGLAND
Tel/Fax: +44 116 257 7193
apatel@dmu.ac.uk

David Russell
Professional Doctorate Programmes Director
Faculty of Business & Law, De Montfort University
Leicester  LE1 9BH   ENGLAND
drussell@dmu.ac.uk

 

ABSTRACT

This paper presents a case study of evaluating intelligent tutoring modules for procedural knowledge acquisition in numeric disciplines. As Iqbal et al. (1999) have noted, the benefit of evaluating Intelligent Tutoring Systems (ITS) is to shift attention away from short-term delivery and to open a dialogue about issues of appropriateness, usability and quality in system design. The paper also discusses an independent evaluation whose findings emphasise the need to capture longer-term retention.

Keywords: Intelligent Tutoring Systems, Multi-institutional Evaluation, Quantitative Evaluation, Evaluation of Learning Technology


Introduction

A number of researchers have considered the benefits and limitations of Computer Aided Learning (CAL) and its effects on the educational community. CAL has been compared with traditional, human-teacher-led methods to investigate its effectiveness and has been observed to perform better in many applications (Kaplan & Rock, 1995). In other cases, the low performance of CAL can be attributed to poor interface design (Hazari & Reaves, 1994; Wong, 1994), less flexibility than human teachers, or poor and inappropriate evaluation of CAL packages (Murray, 1993; Shute & Regian, 1993; Duncan, 1993; Alexander & Hedberg, 1994). It is therefore important to develop benchmarks for assessing the suitability of CAL packages in the actual learning environment.

Heller (1991) noted that instructional software, like all other educational material, should be evaluated before it is used in the classroom or research laboratory. The challenge is to decide what to evaluate, who should carry out the process and how it should be carried out. The literature suggests that the evaluation of a tutoring system needs to be carried out in two stages (Wyatt & Spiegelhalter, 1990; Murray, 1993; Legree et al., 1995). Initially, the system should be evaluated for its overall effectiveness and usability. Such evaluations play an important role in informing subsequent modifications of procedures and interface design. When a system meets the objectives of the initial evaluation stage, the efficacy of its components should be determined in the real environment. This paper presents some of the analysis and findings of a multi-institutional evaluation study investigating the efficacy of intelligent tutoring systems designed for numeric disciplines. The study was conducted as part of the formative evaluations carried out towards the end of the software development cycle.

The evaluation of Intelligent Tutoring Systems in numeric disciplines has not received much attention in the literature. Although there are some instances of small-scale evaluations completed within a single institution, little work has been reported on large-scale evaluations conducted across several institutions. This paper is concerned with the findings of research involving a multi-institutional evaluation of the effectiveness of tutoring packages as an alternative to human-led tutorials. It employs a mainly quantitative approach, as favoured by various researchers for initial investigations (for example, Legree et al., 1993; Murray, 1993; Mark & Greer, 1993), although the subjective views of the students towards the functionality and effectiveness of these packages have also been recorded. The evaluation is based on three packages used for teaching different techniques in management accounting. Although the evaluation studies were conducted under laboratory-based control-testing conditions and may not provide a fully accurate picture of how students would behave in a real teaching environment, the multi-institutional nature of this study brings it close to a field trial, with a sample size exceeding that required for a power level of 0.95 (Altman, 1991). This is sufficient to enable firm conclusions to be drawn about the efficacy of the tutoring system, at least within the scope of the testing conditions. In addition, an independent study by Stoner & Harvey (1999), described later in this paper, validates the effectiveness of the tutoring packages in a real environment.

 

Byzantium model of CILE and Intelligent Tutoring Tools

Although computers are being used at all levels of the curriculum, introductory topics have become particularly popular targets for CAL. An explanation may be the simple and relatively discrete nature of the concepts acquired at the introductory level, which are inter-linked at later stages of study to solve more complex problems. Recognising that students construct knowledge of different degrees of complexity at different stages of their learning, a model of Computer Integrated Learning Environments (CILE) was formulated by a consortium of six universities under the Byzantium project, funded through the Teaching and Learning Technology Programme (TLTP) of the Higher Education Funding Councils of the United Kingdom. This model, which proposes that the level at which a discipline is taught and learnt provides a vital context for tutoring software design, divides the learning of a subject discipline into three distinct knowledge levels:

  1. At the introductory application level, a student forms mental maps of various conceptual objects, each consisting of a small network of interrelated conceptual atoms, and learns how to use the basic tools of a subject discipline. The basic tutoring package or an Intelligent Tutoring Tool (ITT) is designed to suit this level.
  2. At the advanced application level, the vertical and horizontal integration of conceptual objects takes place. Vertical integration involves a comparison of the results of multiple use of the same tool, e.g. by comparing the Net Present Value of three projects. Horizontal integration employs multiple tools to solve a given problem, e.g. using the Budgeting, Absorption Costing and Job Costing ITTs to calculate a job cost. The individual ITTs can be used for various sub-tasks but an intelligent application providing a suitable interface for (i) holding and comparing the results of multiple instances of an ITT and (ii) linking various ITTs will be able to guide a student through a more complex task.
  3. The actual application approximation level attempts to simulate simplified real-world problems. Here the students learn how to account for behavioural and environmental factors. Tutoring software at this level requires the ability to handle qualitative, probabilistic and imprecise data.

The current research output is focused on the development of the first-level packages and their evaluation. It is recognised, however, that on-going developments in the fields of the Internet, fuzzy logic and natural language processing may greatly assist subsequent development stages by respectively providing: (i) an infrastructure not only for distributing development efforts but also for linking the outputs of such distributed efforts (Patel & Kinshuk, 1997); (ii) the processing of imprecise and possibly qualitative data; and (iii) a more natural student-computer interaction interface that removes much of the effort in encoding data to suit computer processing and thus lifts current limitations on the range of activities that can be performed on a computer with ease.

The Intelligent Tutoring Tools (ITTs) are aimed at extending a lecturer's scope by horizontally partitioning some of the teaching activities, e.g. supervising the development of operational skills, and assigning them to a tutoring package. Although the accounting domain has been used to develop these ITTs, the structure of an ITT is considerably domain independent, and the same structure can be used for any numeric discipline. The structure and use of the ITTs have been discussed in Patel & Kinshuk (1996a and 1996b) and Patel, Kinshuk & Russell (2000) respectively.

 

Evaluation of ITTs

The evaluation stage of the ITT design commenced in May 1995, when students at one university in the United Kingdom studied Capital Investment Appraisal in a two-group parallel trial. The Control group had classroom-based tutorials led by an experienced teacher, whereas the CAL group used the CAL package in computer-laboratory-based tutorials. Group comparison with the help of pre and post tests provided the initial validation of the effectiveness of the ITT, whereas the observations and subjective questionnaire feedback from the CAL group validated the interface design adopted in the ITT. The study also provided a validation of the measurement techniques and questionnaire design adopted. A Phase II study was carried out after incorporating some design changes resulting from the Phase I study. It was conducted at six UK institutions and utilised three CAL packages: Capital Investment Appraisal, Absorption Costing and Marginal Costing, which were used by different groups of students. A two-group study was organised at two universities on the Capital Investment Appraisal package. At the other institutions, where it was not feasible to test all students, each ITT was tested on a random sample of about 40 students.

Since the aim of the evaluation in this study was to examine the overall effectiveness of the tutoring packages, mainly quantitative methods were used, as suggested by various researchers (Legree et al., 1993; Murray, 1993; Mark & Greer, 1993), although qualitative views were also obtained from comments recorded during student observation and through the subjective questionnaire. Subject-based evaluation methods were used in the study, as they are widely employed and favoured for the evaluation of CAL packages (Daroca, 1986; Simpson, 1986; Gallagher & Letza, 1991; Tonge et al., 1994; Iqbal et al., 1999). These are based directly on the user's judgement, and the process of data collection is facilitated under laboratory conditions with less chance of bias. Two-group trial studies, the most common technique for the evaluation of CAL packages (Webb & Cumming, 1991; Simons & De Jong, 1992; Wang & Sleeman, 1993; Ruf et al., 1994; Forrester, 1995; Magnuson-Martinson, 1995), were adopted for assessing the effectiveness of the packages.

Two types of subject-based evaluation techniques were used: questionnaires and observations. Since the questionnaires contained both structured and open-ended questions, it was possible to elicit a large amount of specific information quickly and easily, and users were free to provide detailed opinions about the packages in the open-ended questions. Students were also observed by one of the authors and a staff member at the various institutions. The information collected through both techniques provided valuable insight into the students' feelings towards the navigational procedures, screen layouts and other human-computer interaction related matters. Student observation was employed as a supplementary technique to augment the information obtained through the questionnaire. It was also used to capture the initial reactions of students that might not be conveyed in a questionnaire completed at the end of a session, when the initial problems may have been forgotten due to increased confidence in operating the software.

The main objective of the research was to determine if the Byzantium project ITTs are an effective alternative to the resource-intensive human-tutor-led tutorials for introductory numeric disciplines.

 

Research questions

The research questions addressed for statistical analysis in the study were as follows:

  1. Are the gains in students’ procedural knowledge of a numeric discipline, as obtained through the tutoring packages, comparable to human-led tutoring?
  2. Are the gains in students’ knowledge consistent across different packages?
  3. Are the gains in students’ knowledge consistent across different institutions?
  4. What are the views of the students towards the design of the interface, classified according to the following factors:
    1. Gender
    2. Previous computer training
    3. Confidence in operating computers
    4. Enjoyment in using computers
  5. Are there any differences in the performance of students who did not have any previous computer training, confidence in operating computers or enjoyment in using computers, compared with those who had these attributes?

 

Questionnaires

The questionnaires employed in the study consisted of: (i) Pre and Post Test Questionnaires for all three packages; (ii) a Learning Style Questionnaire; and (iii) the Subjective Questionnaire. Since the students had a mixed background of subjects studied at secondary school level, the Pre and Post Test Questionnaires were essential for eliminating any bias of previous exposure to the subject matter and were designed to assess the improvements in student knowledge following the use of each package. The subjective questionnaire was divided into three parts. The first part collected biographical data about the users and the second part related to their experience with general computing. The information obtained in these two parts provided the basis for dividing students into various subgroups according to their background for the purpose of analysis. The third part of the questionnaire related to the subjective assessment of the tutoring system. It contained 113 closed-ended statements and one three-part open-ended question. Of the closed-ended statements, 44% were worded in favour of the packages and 56% against them, so that the questionnaire was balanced and unbiased. All statements used a five-point agree-disagree Likert scale to facilitate easy and reliable analysis.
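Because roughly half of the statements are worded against the packages, the unfavourable items would typically be reverse-coded before any aggregate analysis so that a high score always indicates a positive view. The paper does not describe its coding step; the sketch below is purely illustrative, in Python, with entirely hypothetical file and column names.

```python
import pandas as pd

# Hypothetical layout: one row per student, columns item_1 .. item_113 holding
# 5-point Likert responses (1 = strongly disagree .. 5 = strongly agree).
responses = pd.read_csv("subjective_questionnaire.csv")  # hypothetical file

# Hypothetical list of the statements worded against the packages (the 56%).
unfavourable_items = ["item_2", "item_5", "item_9"]  # ... and so on

# Reverse-code unfavourable items so that 5 always means a positive view.
responses[unfavourable_items] = 6 - responses[unfavourable_items]

# A simple per-student attitude score: the mean over all 113 statements.
item_cols = [c for c in responses.columns if c.startswith("item_")]
responses["attitude_score"] = responses[item_cols].mean(axis=1)
print(responses["attitude_score"].describe())
```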

 

Sample size determination

The adequacy of the sample size is based on the standardised difference, which is the ratio of the difference of interest to the standard deviation of the observations. In this comparative study of the two teaching methods for introductory subjects, the difference in the means of the gains obtained by students was used as the basis for comparison. A real difference of 10% between the means of the gains was taken as representing an important difference between the performance of the two teaching methods. The standard deviations for the phase II study vary between 7.4 and 14.7. Taking the maximum standard deviation of 14.7, the standardised difference comes to 0.68. According to Altman (1991), a power level of 0.95 is achieved with a total sample size of 110 at a significance level of 0.05. This power is large enough to draw firm conclusions. The total sample sizes for all packages under study were well above 110 students.
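The same calculation can be checked with a standard power routine. The sketch below is not part of the original study (which used Altman's figures); it uses the statsmodels power module to solve for the per-group sample size of a two-group comparison.

```python
from statsmodels.stats.power import TTestIndPower

# Standardised difference: an important difference of 10 percentage points
# relative to the largest observed standard deviation of 14.7.
effect_size = 10 / 14.7            # ~0.68

# Per-group sample size for a two-sided test at significance level 0.05
# and power 0.95, assuming equal group sizes.
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05,
                                          power=0.95,
                                          ratio=1.0,
                                          alternative='two-sided')

# Roughly 56-58 students per group, i.e. about 110-115 in total, in line
# with the total of 110 read from Altman (1991).
print(round(n_per_group))
```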

 

Statistical analysis

Initially, gains were computed for the institutions where the two-group trial studies were carried out. A two-way ANOVA was applied to the data to investigate whether the gain in student knowledge was consistent across the two teaching methods. The interaction between modes of instruction and centres was also analysed, and the Least Significant Difference method was employed to investigate which centres had significantly different results (see Altman, 1991). The consistency of the gains obtained by the students across the different packages at the various centres was also investigated by two-way ANOVA. To ascertain the students' views about the packages, the subjective questionnaire data was analysed. The questionnaires were grouped according to centres and packages and, since the data was categorical, the Mantel-Haenszel chi-square test was used for the analysis.
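An equivalent two-way ANOVA with interaction can be sketched in Python with statsmodels; the data frame below is synthetic and the column names are illustrative, not those used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic illustration: one row per student with the knowledge gain, the
# teaching mode (human-led vs CAL) and the centre (university/phase).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gain":   rng.normal(loc=65, scale=8, size=240),
    "mode":   np.tile(["Human", "CAL"], 120),
    "centre": np.repeat(["A-phase1", "A-phase2", "B-phase2"], 80),
})

# Two-way ANOVA with interaction; typ=1 gives sequential sums of squares,
# comparable to the 'ANOVA SS' columns reported in the following sections.
model = smf.ols("gain ~ C(mode) * C(centre)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))
```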

 

Analysis of the evaluation data

The evaluation took place at six universities in the United Kingdom. Four of the six (universities A, B, D and E in Table 1) were new universities (formerly polytechnics). The other two (universities C and F) were traditional universities. One new university (university E) used the packages in its open learning programmes, whereas, at the other universities, the packages were used in general tutorial settings.

Table 1 lists the number of students who participated in the evaluation at various universities.

 

                    | Capital Investment Appraisal                    | Absorption Costing | Marginal Costing
Univ. A (Phase I)   | Human tutor's students = 40, CAL students = 40  |                    |
Univ. A (Phase II)  | Human tutor's students = 40, CAL students = 40  | CAL students = 38  | CAL students = 38
Univ. B (Phase II)  | Human tutor's students = 41, CAL students = 40  |                    | CAL students = 38
Univ. C (Phase II)  |                                                 |                    | CAL students = 39
Univ. D (Phase II)  |                                                 | CAL students = 38  | CAL students = 38
Univ. E (Phase II)  | CAL students = 41                               | CAL students = 40  |
Univ. F (Phase II)  | CAL students = 42                               |                    |

Table 1. Summary of sample sizes at various universities

 

Analysis of the gains obtained from pre and post test results

A two-way ANOVA was applied to the gains from the pre and post test results of the two-group parallel trial studies of the Capital Investment Appraisal package at two universities. At one university, the study was carried out in two phases; the phase I study was a pilot study.

Table 2 shows the means and standard deviations obtained at the different universities.

                            |     Human-led Tutoring     |        CAL Teaching
Universities                |  n  |  mean  |  Std. Dev.  |  n  |  mean  |  Std. Dev.
1. University A (Phase I)   |  40 |  52.1  |  23.7       |  40 |  55.3  |  20.8
2. University A (Phase II)  |  40 |  65.2  |   7.4       |  40 |  66.8  |   8.7
3. University B (Phase II)  |  41 |  65.0  |   8.0       |  40 |  66.1  |   8.1

Table 2. Means and standard deviations for the two-group parallel trial studies

The results of the ANOVA are as follows:

Source       |  DF  |  ANOVA SS       |  F Value  |  Pr > F
MODE         |  1   |  220.77802485   |  1.06     |  0.3045
CENTRE       |  2   |  7833.43015790  |  18.79    |  0.0001
MODE*CENTRE  |  2   |  55.23924312    |  0.13     |  0.8760

The above analysis can be summarised as follows:

  1. The difference between the gains of human tutor based teaching and CAL teaching is not significant.
  2. The difference between the gains of various centres is significant.
  3. There is no significant interaction between teaching modes and centres.

Since the two-group trial study was completed at two universities and one university conducted the study twice, the Least Significant Difference test was used to investigate whether the difference lay between the universities or between phases I and II, yielding the following (comparisons significant at the 0.05 level are indicated by '***'):

Comparison  |  Lower Confidence Limit  |  Difference Between Means  |  Upper Confidence Limit  |
2-3         |   -4.014                 |    0.470                   |    4.954                 |
2-1         |    7.838                 |   12.336                   |   16.834                 |  ***
3-2         |   -4.954                 |   -0.470                   |    4.014                 |
3-1         |    7.382                 |   11.867                   |   16.351                 |  ***
1-2         |  -16.834                 |  -12.336                   |   -7.838                 |  ***
1-3         |  -16.351                 |  -11.867                   |   -7.382                 |  ***
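The confidence limits above follow Fisher's Least Significant Difference procedure: each pairwise difference of centre means receives a confidence interval based on the pooled residual mean square from the ANOVA. A minimal sketch follows; the residual mean square, degrees of freedom and pooled means are illustrative values chosen to be broadly consistent with the tables above, not figures quoted in the original study.

```python
import math
from scipy import stats

def lsd_interval(mean_i, mean_j, n_i, n_j, ms_error, df_error, alpha=0.05):
    """Fisher LSD confidence interval for the difference between two group
    means, using the pooled residual mean square from the ANOVA."""
    diff = mean_i - mean_j
    se = math.sqrt(ms_error * (1.0 / n_i + 1.0 / n_j))
    t_crit = stats.t.ppf(1 - alpha / 2, df_error)
    return diff - t_crit * se, diff, diff + t_crit * se

# Illustrative comparison of centre 2 (Univ. A, phase II, pooled mean ~66.0,
# n = 80) with centre 1 (Univ. A, phase I, pooled mean ~53.7, n = 80), using
# an assumed residual mean square of about 208 on 235 degrees of freedom.
lower, diff, upper = lsd_interval(66.0, 53.7, 80, 80, ms_error=208.0, df_error=235)
print(lower, diff, upper)   # approximately (7.8, 12.3, 16.8), cf. the '2-1' row
```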

The results showed that the differences lie between the gains of the phase I and phase II evaluation studies. These differences can be attributed to the relatively minor but critical modifications made to the design following the phase I study. Once it was established that there was no significant difference between the phase II gains at the different universities, the analysis was extended to all the universities where the phase II CAL study took place. Table 3 shows the means and standard deviations obtained at the various universities.

              | Capital Investment Appraisal |     Absorption Costing     |      Marginal Costing
Universities  |  n  |  mean  |  Std. Dev.    |  n  |  mean  |  Std. Dev.  |  n  |  mean  |  Std. Dev.
1. Univ. A    |  40 |  66.8  |   8.7         |  38 |  64.2  |   8.3       |  38 |  81.1  |  11.7
2. Univ. B    |  40 |  66.1  |   8.1         |     |        |             |  38 |  84.6  |  14.7
3. Univ. C    |     |        |               |     |        |             |  39 |  69.2  |   9.0
4. Univ. D    |     |        |               |  38 |  69.6  |   7.8       |  38 |  79.8  |   9.6
5. Univ. E    |  41 |  67.2  |  11.0         |  40 |  65.6  |   8.5       |     |        |
6. Univ. F    |  42 |  68.0  |  10.1         |     |        |             |     |        |

Table 3. Phase II studies of CAL

The results of the two-way ANOVA are as follows:

Source          |  DF  |  ANOVA SS       |  F Value  |  Pr > F
PACKAGE         |  2   |  13897.2231881  |  69.96    |  0.0001
CENTRE          |  5   |  4628.5081648   |   9.32    |  0.0001
PACKAGE*CENTRE  |  3   |  1182.8153323   |   3.97    |  0.0083

The analysis showed that there were significant differences in the gains between different packages and between different centres, and that there was a significant interaction between packages and centres. To identify which gains differed significantly, the Least Significant Difference test was applied to the packages. The analysis showed that the Marginal Costing results are significantly different from those of the other packages, with a difference of about 10% in the gains between Marginal Costing and the other packages. Applying the Least Significant Difference test to the universities revealed that universities B and D are significantly different from universities A, C, E and F, and that university A is significantly different from university E. The differences at universities B and D can be attributed to the higher gains obtained by the students at these universities, and the difference between universities A and E can be attributed to the lower percentage gains obtained by the students of university E (a new university where the software was used in open learning programmes).

 

Subjective questionnaire

The analysis showed that the overall feelings of the students about the system were quite positive, and most of the students agreed that the packages do not require any prior knowledge of accounting or computing. Mantel-Haenszel chi-square tests were applied to the subjective questionnaire data for various parameters such as gender and the students' attitude towards computers. Though the details of this analysis are beyond the scope of this paper, it should be noted that the performance of students with or without previous computer training, confidence in operating computers and enjoyment in using computers was not significantly different at the 0.05 level. In response to the open-ended questions, a large number of students provided positive feedback on the user interface of the packages and appreciated the error messages and the ease of navigation. They found the packages easy to understand and use. Many students commented that the layout of the screens made the programs easy to follow. Some students wanted greater transparency in the saving routines of the packages and the ability to view some details of the examples saved for computer marking. A software utility program has subsequently been developed to fulfil this requirement.
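A minimal sketch of a Mantel-Haenszel analysis of this kind is shown below, using the statsmodels StratifiedTable class. The counts are entirely hypothetical; in the study, each stratum would correspond to a centre/package combination and the Likert responses would be collapsed into a binary outcome (for example, agree versus disagree).

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical 2x2 tables, one per stratum (e.g. per centre): counts of
# students agreeing vs disagreeing with a statement, split by gender.
tables = [
    np.array([[22, 14],     # female: agree, disagree
              [18, 17]]),   # male:   agree, disagree   (stratum 1)
    np.array([[19, 16],
              [21, 15]]),   #                            (stratum 2)
    np.array([[20, 18],
              [17, 19]]),   #                            (stratum 3)
]

# Cochran-Mantel-Haenszel test of association between gender and agreement,
# controlling for the stratifying factor.
result = StratifiedTable(tables).test_null_odds(correction=True)
print(result.statistic, result.pvalue)   # a p-value above 0.05 would indicate
                                         # no significant gender difference
```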

 

Discussion

The key issue in this study was to determine whether the Byzantium approach to CAL in the numeric disciplines, based on the cognitive apprenticeship model (Collins, Brown & Newman, 1989), provides an adequate means of tutoring in procedural skills that can be employed as an adjunct to traditional lectures and replace some of the human-led tutorials. The study revealed that the means of the gains obtained by the students in the traditional teaching group (mean = 60.8) and the CAL teaching group (mean = 62.7) are almost equal and that, statistically, the difference in performance is not significant. Since the sample size of this study is large enough to draw firm conclusions, it can be concluded that this CAL tutoring approach is a suitable alternative to human-led tutorials. As there was no significant difference in the performance gain between the students with and without the attributes of previous computer training, confidence in operating computers and enjoyment in using computers, it can be concluded that the Byzantium packages are suitable for all students learning numeric disciplines.

 

Feedback from independent evaluation in real environment

In an independent evaluation exercise carried out at the University of Glasgow, however, Stoner & Harvey (1999) found that students' performance had improved in a statistically significant way over the period since learning technology materials were introduced, and that this improvement appeared to be reflected mainly in the students' ability to complete numeric questions, the area addressed by the ITTs. Interestingly, their evaluation involved the Byzantium ITTs, another widely used traditional Computer Based Learning package, and human teachers, and was based on a comparison of examination performances over a period of three years. Considering the problems in maintaining control-group conditions over an extended period of time, the Stoner & Harvey approach is perhaps better able to capture the improved long-term retention enabled through the cognitive apprenticeship model of tutoring system design, and may therefore provide a better measure for summative evaluation.

 

Reflections and Recommendations

The control-group based, mainly formative evaluations described here, as with many such evaluations reporting ‘no significant gain’, may have suffered from the short-term ‘freshness’ of what is learnt and may have failed to evaluate improved retention and recall in the longer term.

Our formative evaluation activities, however, yielded some other interesting findings. The work also focused on the differences in the knowledge gained through the different packages. The means of the gains were found to be statistically significantly different for the different packages (Capital Investment Appraisal = 67.1, Absorption Costing = 66.4 and Marginal Costing = 78.6). The difference may be attributed to the 'problem span' and 'problem size', as opposed to the 'problem complexity', covered by these packages. In Marginal Costing, only 14 variables are involved and these are closely related to each other, although the relationships are more complex. The smaller number of variables allows the whole problem to be displayed on one screen, so the user visually maintains a full view of the problem.

Unlike Marginal Costing, Capital Investment Appraisal (48 variables under 4 different techniques) and Absorption Costing (114 variables) spread the processing of a given problem over a number of screens to prevent cluttered layouts and for ease of learning. However, this appears to increase the cognitive load on students, as they have to conceptually relate the variables on one screen with those on another. Though critical variables are reproduced on the current screen, a novice user still has to retain a mental map of the variables on the previous screen to maintain the semantic link, and may have to move between screens to refresh this link until the concepts and their relationships are fully grasped and internalised.

Another possible difference favouring Marginal Costing is the smaller amount of overall information students have to process in solving a given problem, compared to the other packages. This appears to have a psychological effect on student performance, as students perceive the relative size of the problem to be considerably larger in the other packages than in Marginal Costing. These reasons may explain the better performance of students in the Marginal Costing package compared to the other packages. Interestingly, these observations, when presented at conferences, attracted interest among multimedia researchers who have suspected a similar increase in cognitive load and drop in performance due to screen changes.

Since the comparative evaluations involving the laboratory-based CAL groups and classroom-based Control groups employed Capital Investment Appraisal and found similar knowledge gains with no significant difference between the two groups, the factors described above appear to affect traditional teaching and learning methods too. Some topics by their nature may offer greater ease of learning, while others may be found more difficult to learn by most students. The study suggests that two of the contributing factors may be (i) the problem size, in terms of the amount of data to be processed, and (ii) the problem span and the cognitive overhead involved in maintaining mental maps of the different parts of the problem at an initial stage of learning. It appears that while the partitioning of a larger problem provides ease of learning within the scope of an individual component, it reduces overall visibility, whether a computer screen or a paper-based interface is used.

There is a need for more research on these issues. However, it does seem that students need more time and practice for multi-part problems involving larger amounts of data and that, in the absence of adequate consideration of these factors, the traditional methods of teaching may not be allocating adequate time for learning a particular topic, disadvantaging the weaker students.

 

Conclusion

In conclusion, it is worth reiterating that evaluation has two dimensions. Summative evaluation in a real environment involving a longer time frame is perhaps a better way to evaluate any teaching and learning system. Shorter time frame formative evaluations may fail to fully capture all the dimensions of learning and retention (perhaps blurring the difference between concept acquisition and cognitive skill acquisition), returning verdicts of 'no significant difference'. The formative evaluations, however, are vital, not only for identifying key design issues but also for improving our understanding of the pedagogical issues.

 

References

  • Alexander, S. & Hedberg, J. G. (1994). Evaluating technology-based learning: Which model? In K. Beattie, C. McNaught & S. Wills (Eds.)  Interactive Multimedia in University Education: Designing for Change in Teaching and Learning (A-59), North-Holland: Elsevier Science B. V, 233-244.
  • Altman D. G. (1991). Practical Statistics for Medical Research, 1st ed., London: Chapman & Hall.
  • Collins, A., Brown, J. S. & Newman, S. E. (1989). Cognitive Apprenticeship : Teaching the crafts of reading, writing and mathematics. In Lauren B. Resnick (Ed.) Knowing, Learning and Instruction, Hillsdale, N. J.: Lawrence Erlbaum, 453-494.
  • Daroca, F. P. (1986). Introducing Microcomputers into the Classroom: Student Learning and Perceptions. Kent Bentley Review: Accounting and Computers, 2 (Fall), 67-78.
  • Duncan, N. C. (1993). Evaluation of instructional software: Design considerations and recommendations. Behavior Research Methods, Instruments & Computers, 25 (2), 223-227.
  • Forrester, M. (1995). Interpreting 'Hyper-Entrails' in a computer-based learning environment. TLTP Newsletter, Summer, 8-9.
  • Gallagher, I. D. & Letza, S. R. (1991). An evaluation of computer aided accounting learning - a student's perspective. Paper presented at the Reviewing Accounting Courseware conference, 26 March, Manchester University.
  • Hazari, S. I. & Reaves, R. R. (1994). Student preferences toward microcomputer user interfaces. Computers and Education, 22 (3), 225-229.
  • Heller, R. S. (1991). Evaluating software: A review of the options. Computers & Education, 17 (4), 285-291.
  • Iqbal, A., Oppermann, R., Patel, A. & Kinshuk (1999). A Classification of Evaluation Methods for Intelligent Tutoring Systems. In U. Arend, E. Eberleh & K. Pitschke (Eds.) Software Ergonomie '99 - Design von Informationswelten, Leipzig: B. G. Teubner Stuttgart, 169-181.
  • Kaplan, R. & Rock, D. (1995). New directions for intelligent tutoring. AI Expert, February, 31-40.
  • Kinshuk (1995). The influence of interface design upon the effectiveness of computer aided learning programs in entry level accounting subjects, MPhil-Phd transfer document, Leicester, UK: De Montfort University.
  • Legree, P. J., Gillis, P. D. & Orey, M. A. (1993). The quantitative evaluation of intelligent tutoring system applications: Product and process criteria. Journal of Artificial Intelligence and Education, 4 (2/3), 209-226.
  • Magnuson-Martinson, S. (1995). Classroom computerisation: Ambivalent attitudes and ambiguous outcomes. Teaching Sociology, 23, 1-7.
  • Mark, M. A. & Greer, J. E. (1993). Evaluation methodologies for intelligent tutoring systems. Journal of Artificial Intelligence and Education, 4 (2/3), 129-153.
  • Murray, T. (1993). Formative qualitative evaluation for "Exploratory" ITS research. Journal of Artificial Intelligence and Education, 4 (2/3), 179-207.
  • Patel, A. & Kinshuk (1996a). Applied Artificial Intelligence for Teaching Numeric Topics in Engineering Disciplines. Lecture Notes in Computer Science, 1108, 132-140.
  • Patel, A. & Kinshuk (1996b). Intelligent Tutoring Tools - A problem solving framework for learning and assessment. In M. F. Iskander, M. J. Gonzalez, G. L. Engel, C. K. Rushforth, M. A. Yoder, R. W. Grow & C. H. Durney (Eds.) Proceedings of 1996 Frontiers in Education Conference - Technology-Based Re-Engineering Engineering Education, 140-144.
  • Patel, A. & Kinshuk (1997). Configurable ITS on the Internet - Framework for Providing Intelligent Tutoring through Networking. In M. Chrzanowski & E. Nawarecki (Eds.) Proceedings of 4th International Conference on Computer Aided Engineering Education, Krakow: University of Mining and Metallurgy - Cracow University of Technology, 184-191.
  • Patel, A., Kinshuk & Russell, D. (2000). Intelligent Tutoring Tools for Cognitive Skills Acquisition in Life Long Learning. Educational Technology & Society, 3 (1), 32-40.
  • Ruf, B. M., Brown, R. M. & Crawford, H. J. (1994). Computer-assisted versus traditional instruction: Differential moderating influences of math, verbal and visual-spatial abilities on learning. British Accounting Review, 26, 197-210.
  • Shute, V. J. & Regian, J. W. (1993). Principles for evaluating intelligent tutoring systems. Journal of Artificial Intelligence and Education, 4 (2/3), 245-271.
  • Simons, P. R. & De Jong, F. P. C. M. (1992). Self-regulation and Computer Aided Instruction. Applied Psychology: An International Review, 41 (4), 333-346.
  • Simpson, S. M. (1986). Review of Computer Aided Instruction Ab-Initio Teaching in Accounting. M. Phil Dissertation, Hatfield Polytechnic.
  • Stoner, G. & Harvey, J. (1999). Integrating learning technology in a foundation level management accounting course: an e(in)volving evaluation. Paper presented at the CTI-AFM Annual Conference, April, Brighton, U.K.
  • Tonge, A., Beacham, N. & Gallagher, D. (1994). The Implementation and Evaluation of Computer Based Learning in Accounting Education. Paper presented at the International Symposium on Independent Learning and Flexible Study, September, Cambridge.
  • Wang, S. & Sleeman, P. J. (1993). A comparison of the relative effectiveness of computer assisted instruction and conventional methods for teaching an operations management course in a school of business. International Journal of Instructional Media, 20 (3), 225-234.
  • Webb, G. I. & Cumming, G. (1991). Educational evaluation of feature-based modelling in a problem solving domain. In R. Lewis & S. Otsuki (Eds.) Advanced Research on Computers in Education, North Holland: Elsevier Science, 101-108.
  • Wong, S. M. (1994). Lessons learned from authoring - Computer Assisted Instruction. Business Education Forum, 48 (4), 39-41.
  • Wyatt, J. & Spiegelhalter, D. (1990). Evaluating medical expert systems: what to test and how? Medical Informatics, 15 (3), 205-217.
