Educational Technology & Society 5 (3) 2002
ISSN 1436-4522

Evaluating an interactive learning environment in management education

Meliha Handzic
School of Information Systems, Technology and Management
The University of New South Wales
Sydney 2052, Australia
Tel: +61 2 9385 4935
Fax: +61 2 9662 4061
m.handzic@unsw.edu.au

Denise Tolhurst
School of Information Systems, Technology and Management
The University of New South Wales
Sydney 2052, Australia
Tel: +61 2 9385 6245
Fax: +61 2 9662 4061
d.tolhurst@unsw.edu.au

 

ABSTRACT

This paper reports the results, and the implications for management education, of an empirical study evaluating the impact of interaction among learners on their knowledge and performance in a judgemental decision making task. The results indicate that interaction had a significant positive effect on individual learning over time. Interactive learners made significantly smaller decision errors in the later stages of the task than in the earlier stages; this was not true of their non-interactive counterparts. The study also found a significant positive effect of interaction on learners’ overall decision accuracy: interactive learners tended to make smaller decision errors than their non-interactive counterparts irrespective of the stage of the decision making process. These results suggest that future management education needs to consider forms of interactive learning in response to environmental pressures for faster and more effective learning.

Keywords: Knowledge management, Interactive learning, Management education, Judgemental decision making


Introduction

Knowledge management literature suggests that to remain competitive, or even to survive, in today’s uncertain economy with shifting markets, proliferating technology, multiple competitors and shortening product life cycles, companies will have to invent new business processes, new businesses, new industries and new customers. This will require a major shift in focus from tangible resources such as financial capital to intangible resources such as intellectual capital (Davenport & Prusak, 1998; Davenport et al., 1998; Drucker, 1993; Grayson & Dell, 1998; Stewart, 1997). In general, there is widespread recognition in the literature of the importance of organisational learning (Garvin, 1998), knowledge-creating processes (Nonaka, 1998), and knowledgeable workers (Drucker, 1998) for a new age economy. However, the complexities of learning and the large number of interacting factors that affect individual and group learning present many challenges. Learning entities (individuals and organisations) are expected to be skilled at creating, acquiring and transferring knowledge and at modifying their behaviour accordingly (Garvin, 1998); to continually expand their capacity to create desired results, nurture new thinking patterns, set free collective aspirations and learn how to learn together (Senge, 1990). It is also suggested that inventing new knowledge should become a way of behaving, or indeed of being (Nonaka, 1998).

So far, these concerns have not been adequately addressed by management education (Seufert & Seufert, 1998). One major criticism is that a large amount of knowledge is imparted to the learner without any attempt to link it with reality. Another widespread weakness is the neglect of process-oriented learning, that is, making the learning and thought processes visible in order to develop learners’ metacognition (Joyce & Weil, 1986). Organisational demands for new skills and capabilities in future professional and managerial knowledge workers necessitate a corresponding change in education. This implies a need for a balance between the imparting of knowledge to the learner and the learner’s own construction of it. It has been suggested that the quantity of material to be learnt by telling should be reduced to a minimum, and that lesson time should instead be devoted to cultivating qualities such as problem solving, decision making and creativity through self-directed and collaborative learning.

The main purpose of this study is to examine the role of an interactive learning environment in promoting the learning of cognitive skills and performance in a decision making context. Decision making is regarded as a knowledge intensive activity. Decision makers often gain knowledge of a task, and of the strategies that work for it, from experience gained through task repetition and from feedback. Their work also involves a significant amount of social interaction with subordinates, peers and superiors. Given the growing importance of fostering learning and the sharing of personal knowledge in modern organisations, two questions are of particular interest: firstly, whether and how an interactive learning environment affects individual decision makers’ learning over time; and secondly, what impact it may have on the quality of their overall decision performance.

 

Literature review

Learning from Experience

Learning theorists suggest that people learn from experience through task repetition, adjusting their behaviour and improving their performance over time. In decision making, some authors (Payne et al., 1988) claim that adaptivity in decision behaviour may be important enough to individuals that they will move towards it without the need for external intervention.

Empirical studies from judgement research report mixed findings. A number of studies in which subjects had to learn to purchase information whose value outweighed its cost, given multiple pieces of information of varying accuracy, reported only modest learning over trials (Connolly & Gilani, 1982; Connolly & Serre, 1984; Connolly & Thorn, 1987). To explain how people adjusted their acquisition strategies over time, Connolly and his associates devised a simple hill-climbing algorithm (Connolly & Wholey, 1988); a computer simulation based on it reliably reproduced the information acquisition behaviour of real subjects in the earlier empirical studies. Although the improvement was real, a serious deviation from optimality remained. In contrast, a study using a similar multivariate judgement task reported no learning at all (Connolly & Wholey, 1988), while the results of a judgemental adjustment study (Lim & O'Connor, 1996) indicated that information acquisition strategies actually worsened, and performance consequently declined, over time. Many studies performed in the multiple cue probability learning (MCPL) paradigm have concluded that people cannot learn effectively when multiple cues are involved or non-linear cue-criterion relationships exist (for a review see Brehmer, 1980).

 

Learning from Feedback

Some authors (Hogarth, 1981) suggest that the availability of immediate feedback, in addition to the opportunity to take corrective action, is critical for effective learning. Adequate feedback is considered especially important for the correct assessment of previous responses in situations where the subject is unfamiliar with the task or topic (O'Connor, 1989). It has been suggested that feedback may enhance learning by providing information about the task, the task outcome, the individual's performance and/or the decision process. From this information, through task repetition, an individual may learn to adapt, i.e. maintain, modify or abandon a strategy in order to improve task performance.

Different dimensions of feedback have been shown to vary in how well they support learning and consequent decision making. In his review of a number of laboratory and field studies on the impact of feedback on task performance, Kopelman (1986) reported that objective feedback (defined as information about task behaviour or performance that is factual and incontrovertible) had a positive effect on performance, and that the effect was stronger and more sizeable in the field than in the laboratory. In a choice task setting, Creyer et al. (1990) likewise found that feedback on accuracy led to more normative-like processing of information and improved performance; the role of accuracy feedback was greater when the decision problem was more difficult. Explicit effort feedback, on the other hand, had no impact on processing or performance regardless of problem difficulty. Feedback was also found to induce learning in so-called cue discovery tasks (Klayman, 1988). People were found to perceive the existence and direction of a cue-criterion relationship, but to have difficulty learning its shape. A significant improvement in predictive success over time was attributed to cue discovery rather than to accurate weighting.

Most findings from MCPL studies with feedback reported by Klayman (1988) indicate that while people have difficulty learning from outcome feedback in these tasks, they can learn more effectively when provided with cognitive feedback, such as a summary analysis of how their past predictions differed from optimal ones. In a more recent study examining the impact of several types of feedback on the accuracy of forecasts of time series with structural instabilities, Remus et al. (1996) found that task information feedback (showing the underlying structure of the task), with or without cognitive feedback, gave significantly better forecasting performance than the baseline of simple outcome feedback. Adding cognitive information feedback to task information feedback did not improve forecasting accuracy. The results for task and cognitive feedback largely replicated those of Balzer et al. (1992, 1994).

Some studies have found a detrimental effect of feedback on performance. In a complex probabilistic task, a large error on a particular trial might imply a poor strategy, or might merely reflect the fact that occasional errors are to be expected in an uncertain task. As a consequence, outcome feedback may sometimes have a detrimental effect on strategy selection. Peterson and Pitz (1988) found that outcome feedback increased the amount by which decision makers' estimates deviated from the model. These findings were further reinforced by Arkes et al. (1986), who found that omitting feedback was effective in raising performance when a helpful classification rule was provided.

 

Learning from Others

People learn from each other through interaction. Oral and written communication are the two main competing knowledge sharing systems in use. The Internet, databases and web pages are tools that primarily support written communication. So far, western communities, through the availability and accessibility of communication media and large volumes of explicit knowledge, have promoted written over oral communication. In general, writing as a communication system is considered good for storing memory, and its asynchronous nature gives durability to ideas; pieces can be read or skipped, allowing individuals to work at a speed convenient to them. A negative aspect, however, is that it tends to encourage conservatism and standardisation of thinking, and so may act against creative problem solving (Metcalfe et al., 2000).

In contrast, oral communication is closely associated with tacit knowledge. The opportunity for tacit knowledge sharing is considered necessary for raising creative and innovative performance, and physical clustering appears to result in more innovation than virtual clustering. Oral communication is believed to be very good for the complex transfer of tacit knowledge. It requires extensive use of “immediate” thinking that is aggregative rather than analytical, participatory rather than objectified, and situational rather than abstract (Metcalfe et al., 2000). These qualities make it particularly suitable for encouraging learning in interactive decision making tasks.

The literature offers a number of different perspectives on social learning that are relevant to decision making. Handzic and Low (in press) identified three categories: information exchange (Devlin, 1999), persuasive arguments (Heath & Gonzalez, 1995), and group decision making (Marakas, 1999). The information exchange approach assumes that the aim of each participant in a conversation is to take new information about the focal object or situation into his or her own context. The persuasive arguments perspective assumes that individuals first come up with a few arguments of their own, then collect novel arguments during interaction, and as a result may shift their initial opinions; it also proposes that an individual’s position on any given issue will be a function of the number and persuasiveness of the available arguments. The group decision making approach recognises the collaborative nature of the interaction and suggests a potential synergy effect associated with collaborative activity.

Group decision making requires group members to reach a consensual decision. This often results in unfavourable outcomes due to the groupthink phenomenon (Janis, 1982): members of cohesive long-term groups strive for unanimity and consequently fail to appraise alternative courses of action realistically. Therefore, much of the earlier research into group interaction questions the relative value of collaborative over individual decision making. However, Handzic and Low (in press) argue that situations where individual decision makers interact in a social environment but make their own decisions should be free from groupthink-style outcomes. In such situations, interaction may allow individuals to assess their information and analysis more accurately, and so improve individual decision performance. More specifically, decision makers may collect information and opinions from their peers but make final decisions on their own. Because their final decisions are made individually, decision makers can use or ignore the information they collect during social interaction.

The few empirical studies that have investigated these issues report mixed and inconclusive findings. Two studies cited by Heath and Gonzalez (1995) found that interaction improved accuracy on general knowledge questions and predictions (Hastie, 1986). In contrast, Heath and Gonzalez (1995) found that individual performance did not improve much after interaction; instead, interaction produced robust and consistent increases in people’s confidence in their decisions. These findings were explained by a rationale construction account, which suggests that interaction forced people to explain their choices to others and that generating such explanations led to increased confidence. Other studies report mixed effects of interaction on individual performance. Within the judge-advisor system, Sniezek and Buckley (1995) found that the effect of advice depended upon advisor conflict: with no conflict, advice was generally beneficial, but when conflict existed it had either an adverse effect or no effect at all on the judge’s final accuracy. More recently, Handzic and Low (in press) reported that social interaction was beneficial for enhancing people’s knowledge and performance on complex tasks, but made no difference to their performance on simple ones.

 

Research Questions

In summary, findings concerning learning in multivariate judgement and choice tasks are mixed and inconclusive. They suggest that the quality of performance may be conditional upon the type of feedback, task difficulty, time period, and whether participants are allowed to experiment. The equivocal findings may also be attributed to differences in the interacting groups, in the expertise and status of the participants, or in the nature of the arguments involved. Some theorists have suggested that “learning histories” that capture past experiences may have a positive effect in terms of building a better understanding of what works and what does not (Kleiner & Roth, 1998). Others suggest that as the number of participants increases, the likelihood of discussing unshared information decreases (Stasser et al., 1989), and thus teams of two people (dyads) may be more successful than larger groups (Panko & Kinney, 1992; Schwartz, 1995). It has also been suggested that non-experts, who are less informed about a decision problem, should be more responsive to information collected and pooled through interaction (Heath & Gonzalez, 1995). Prior research further indicates that the usefulness of interaction may be limited to complex tasks (Handzic & Low, in press). In view of these propositions, this study examines whether, and how, the opportunity for pairwise interaction among novice decision makers, supported by a history of their decision errors, affects individual learning and subsequent decision performance in a complex judgemental decision making task.

 

Research methodology

The study of interactive learning in management education can be approached from many different viewpoints. We chose to investigate the issue from the knowledge management perspective. This emerging field of research and practice provides a novel theoretical approach to knowledge creation based on Nonaka’s spiral model (Nonaka & Takeuchi, 1995; Nonaka, 1998) and the concept of ‘ba’, or shared place (Nonaka & Konno, 1998). However, there is presently little empirical evidence to support the theoretical propositions suggested by existing knowledge management frameworks. The lack of empirical research that could bridge the gap between theory and practice prompted us to carry out the current experimental study. A quantitative approach was chosen to allow clear and strong statements to be made regarding the empirical findings (Judd et al., 1991) and to offer some more conclusive evidence to the body of literature in this area. We are aware, however, that qualitative methods would also support an understanding of the complex social and cultural aspects of learning environments. Supplementing the quantitative results to provide greater depth and dimension remains a goal for our future research.

 

Experimental Task

The experimental task in the current study was a simulated production planning activity in which participants made decisions about the daily production of fresh ice-cream. The task is a modified version of one originally developed by Handzic (1996) to provide a common basis for the author’s empirical research in decision making. Participants assumed the role of Production Manager for a fictitious dairy firm that sold ice-cream from its outlet at Bondi Beach in Sydney, Australia. The fictitious company incurred equally costly losses whether production was set too low (through loss of market to the competition) or too high (through spoilage of unsold product). The participants' goal was to minimise the costs incurred by incorrect production decisions. During the experiment, participants were asked at the end of each day to set the production quota for ice-cream to be sold the following day. Before commencing the task, participants had the opportunity to make five trial decisions for practice purposes only.
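The symmetric cost structure described above can be sketched as follows. This is a minimal illustration in Python; the per-unit penalty value, and the function and variable names, are our own assumptions and do not reproduce the parameters of the original instrument.

    # Illustrative sketch of the symmetric cost structure: under-production loses
    # sales to competitors, over-production spoils unsold stock, and both are
    # equally costly. The per-unit penalty is a hypothetical value for illustration.
    UNIT_PENALTY = 1.0  # assumed cost per hundred units of mismatch

    def daily_cost(produced: float, demanded: float) -> float:
        """Cost of one day's production decision, in arbitrary cost units."""
        return UNIT_PENALTY * abs(produced - demanded)

    print(daily_cost(12, 15))  # set too low: 3 hundred units of lost sales
    print(daily_cost(18, 15))  # set too high: 3 hundred units of spoilage, equally costly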

To aid their decision making, participants were provided with task-relevant variables, including actual local demand for the product and three contextual factors that emerged as important in determining demand levels: the ambient air temperature, the amount of sunshine and the number of visitors/tourists at the beach. All contextual factors were deemed roughly similar in importance. The task was challenging because it did not stipulate exactly how the information should be translated into a specific judgement. Participants were given a meaningful task context, sequential historical values of the task-relevant variables to provide some cues to the causal relationships, and forecast values of the contextual variables to suggest future behaviour. However, they were not given any explicit analysis of the quality of their information, or rules they could apply to integrate the available factual information.
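As an illustration of what such a non-deterministic, multi-cue task structure might look like, the sketch below generates a day's demand from the three contextual factors plus random noise. The functional form, weights and noise level are entirely hypothetical assumptions for illustration; the actual cue-criterion relationships built into the instrument are not reported here.

    import random

    # Purely illustrative demand model: demand rises with temperature, sunshine and
    # visitor numbers, plus random noise, so no deterministic rule maps cues onto
    # demand. All weights and the noise level are hypothetical.
    def simulated_demand(temperature_c: float, sunshine_hours: float, visitors: int) -> float:
        """Demand for ice-cream (in hundreds of sale units) for one simulated day."""
        base = 5.0
        signal = 0.3 * temperature_c + 0.5 * sunshine_hours + 0.002 * visitors
        noise = random.gauss(0, 2.0)  # irreducible uncertainty in the task
        return max(0.0, base + signal + noise)

    print(simulated_demand(temperature_c=28, sunshine_hours=9, visitors=3000))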

Instead, all participants had the opportunity to learn from their own experience through task repetition and from their performance history. Each participant was required to make thirty experimental production decisions over a period of thirty consecutive simulated days. While handling the task, participants had different opportunities for social interaction. One half of the participants made their decisions independently, without any interaction with others. The other half were encouraged to share their information and opinions: participants in this latter group were placed in pairs and instructed to discuss their information and opinions before making their final decisions. However, they were not required to reach a consensual decision.

At the beginning of the experiment, task descriptions were provided to inform participants about the task scenario and requirements. The text differed only with respect to the form of communication allowed. In addition, throughout the experiment each participant was given instructions, immediate performance feedback and a history of past errors, so that they could analyse earlier performance and adjust future strategies.

 

Experimental Design and Variables

A mixed factorial experimental design with one between-subjects and one within-subjects factor was used, since it made it possible to draw stronger inferences about causal relationships between the variables examined. The between-subjects factor was learning environment (non-interactive versus interactive) and the within-subjects factor was task phase (earlier versus later set of trials).

The learning environment was manipulated by either completely constraining or maximally encouraging (through dialogue) the sharing of ideas and information among learners during the decision making process. In order to explore learning over time, the experimental trials were divided into two blocks, referred to as task phases, each consisting of fifteen trials. Block 1 (the earlier phase) comprised each subject’s first 15 trials, while block 2 (the later phase) comprised their last 15 trials.

Decision performance was evaluated in terms of decision accuracy, operationalised as the symmetric absolute percentage error (SAPE). First, the absolute error was calculated as the absolute difference between the units produced and the units actually demanded (in hundreds of sale units). The symmetric absolute percentage error was then obtained by dividing the absolute error by the average of the units produced and demanded, and multiplying by 100%. This measure has been suggested in the forecasting literature (Makridakis & Wheelwright, 1989; Makridakis, 1993); percentage error is generally preferred to absolute error because it controls for scale.
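For concreteness, the error measure described above can be computed as in the following minimal sketch (the function and variable names are ours):

    def sape(produced: float, demanded: float) -> float:
        """Symmetric absolute percentage error (SAPE) of one production decision."""
        absolute_error = abs(produced - demanded)   # in hundreds of sale units
        average = (produced + demanded) / 2.0       # mean of units produced and demanded
        return 100.0 * absolute_error / average     # scale-free percentage error

    # Producing 12 (hundred units) against an actual demand of 15 (hundred units)
    # gives |12 - 15| / 13.5 * 100, i.e. an error of about 22.2%.
    print(sape(12, 15))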

 

Participants and Procedure

The participants were 28 graduate students enrolled in the Master of Commerce course at The University of New South Wales, Sydney. Students participated in the experiment on a voluntary basis and received no monetary incentives for their performance. Generally, graduate students are considered to be appropriate participants for this type of research (Ashton & Kramer, 1980; Remus, 1996; Whitecotton, 1996). The experiment was conducted in a microcomputer laboratory. On arrival, participants were assigned randomly to one of the treatment groups by picking up a diskette with an appropriate version of the research instrument. The instrument was specifically developed by one of the authors in Visual Basic. Students were briefed about the purpose of the study, read case descriptions and performed the task. The session lasted about one hour.

 

Results

A series of paired-comparison t-tests was performed to analyse the effects of the interactive environment on subjects' learning and overall performance. The results are presented graphically in Figure 1.

With respect to learning, the results indicate a positive impact of interaction. The analysis found significant learning and improvement in performance over time among subjects in the interactive learning environment: there was a significant decrease in participants’ decision errors from the earlier to the later phase of the task (13.56 vs 9.28, t = 4.45, p < .05). This was not so in the non-interactive learning environment. Although participants’ decision errors in this group tended to decrease slightly from the earlier to the later phase, the change was not statistically significant (19.25 vs 16.36, ns). These results indicate that participants tended to learn better and faster when encouraged to interact with others.

With respect to overall performance, the results indicate that interaction had a substantial beneficial impact on participants’ overall decision accuracy irrespective of the phase of the task. Participants in the interactive learning environment had significantly smaller percentage errors than their counterparts in the non-interactive environment, both in the earlier (13.56 vs 19.25, t = 2.99, p < .05) and in the later (9.28 vs 16.36, t = 4.44, p < .05) phases of the task.
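To illustrate the form of these analyses, a minimal sketch follows. The per-participant mean errors below are invented for illustration only and do not reproduce the study’s figures. Note that the paper describes the comparisons as paired t-tests; the between-group comparison is shown here with an independent-samples test, since no pairing of participants across groups is described.

    from scipy import stats

    # Hypothetical per-participant mean SAPE values for each phase (invented numbers).
    interactive_earlier     = [14.2, 12.8, 13.9, 13.1, 14.6, 12.5, 13.8, 14.0]
    interactive_later       = [ 9.1,  9.8,  8.7,  9.5, 10.0,  8.9,  9.3,  9.6]
    non_interactive_later   = [16.9, 15.8, 17.2, 16.1, 15.9, 16.7, 17.0, 15.7]

    # Within-subject learning effect: earlier versus later phase, interactive group.
    t_within, p_within = stats.ttest_rel(interactive_earlier, interactive_later)

    # Between-group accuracy effect: interactive versus non-interactive, later phase.
    t_between, p_between = stats.ttest_ind(interactive_later, non_interactive_later)

    print(f"within-subject (interactive): t = {t_within:.2f}, p = {p_within:.4f}")
    print(f"between-group (later phase):  t = {t_between:.2f}, p = {p_between:.4f}")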

 
Figure 1: Learners’ Performance by Task Phase and Learning Environment


Discussion

The main objective of this study was to investigate the effect of the learning environment on individual knowledge and performance in a specific decision making context. The findings indicate a beneficial effect of an interactive environment on learners’ decision accuracy: interaction was useful in both the earlier and later phases of the task and reduced errors in both. The findings also indicate that interaction led to faster and more effective learning, and so to improvement in performance over time. In contrast, the findings show a slight but non-significant change in performance over time in the non-interactive environment. These results suggest that future management education needs to consider forms of interactive learning in response to pressure from a rapidly changing environment and demands for accelerated and more effective knowledge creation and better performance. However, further research extending this study to other types of learners and cognitive tasks is required to ensure the generalisability of the present findings.

 

Main findings

The main findings of this study provide strong support for the proposition that interaction enhances individual learning and improves learners’ performance in complex, non-deterministic judgemental decision making tasks. This was demonstrated by the significant improvement in the quality of interacting subjects’ decisions from the earlier to the later phase of the task, as well as by the higher overall decision accuracy of interacting subjects compared to their non-interacting counterparts, irrespective of task phase.

Indeed, the current study demonstrated significant learning in the interactive decision making environment. Participants who interacted with others while handling the decision problem exhibited substantial improvement in the quality of their decisions over time, making significantly smaller percentage errors in the later period of the task. This is an important finding: it suggests that social interaction may enable more effective learning. Interacting participants appeared to learn faster and better how to use the available information to improve performance, and as a result tended to make more accurate decisions. The study also demonstrated that interaction had a beneficial overall effect on individual performance irrespective of task phase. More specifically, interaction led to enhanced decision accuracy in both the earlier and the later stages of the task. Participants who were allowed to interact and share ideas with others while handling the decision problem made smaller decision errors than participants who performed the same task without such interaction, and this was true irrespective of the phase of the task.

The beneficial effect of interaction among learners found in this study is consistent with theoretical expectations, as well as with anecdotal evidence from organisations (Garvin, 1998; Hewson, 1999; Nonaka & Takeuchi, 1995). According to Handzic and Low (in press), it is possible that the opportunity to discuss various aspects of the task with others helped participants to assess the quality of the available information better and to adjust their prediction strategies over time. It could also have helped them perceive the task as less complex. Participants might have brought their personal analysis and know-how to the task, acquired information about their partner’s ideas and arguments, and considered both in making their final decisions.

It is also possible that the current study’s design prevented the potential adverse effect of advisor conflict (Sniezek & Buckley, 1995), while promoting the benefits of small group size (Panko & Kinney, 1992; Schwartz, 1995) and performance feedback (Kopelman, 1986). More specifically, the design limited each individual’s social interaction to one other person only, and gave both partners dual judge-advisor roles. Because of their equal status in the interaction, it is possible that participants jointly generated new ideas previously non-existent in their individual contexts. Finally, unlike Heath and Gonzalez (1995), the current study provided participants with continuous performance feedback, which might have enabled them to evaluate their own ideas against those of their partners, or against jointly generated ones, and to adjust future strategies accordingly. Thus, generating and sharing personal tacit knowledge through interaction, coupled with the opportunity to test its contribution to performance over time, might have enhanced learning and resulted in greater overall accuracy.

 

Other findings

Consistent with most previous research, the current study found no significant improvement in individual performance over time in the non-interactive environment. Although participants in the non-interactive group tended to make slightly smaller decision errors in the later than in the earlier phase of the task, the change was not statistically significant. The lack of significant learning and performance improvement in the non-interactive decision making environment is not surprising: it is consistent with a large body of knowledge accumulated from judgement and decision making research involving non-deterministic multivariate tasks with or without feedback (for a review see Brehmer, 1980).

It appears that the history of past errors, which was expected to help learning (Kleiner & Roth, 1998), could not tell participants what would work and what would not the next time in the context of a non-deterministic task; the same decision error could be produced by many different misapprehensions about the relations between the task variables. According to Brehmer (1980), people cannot learn probabilistic relations because they assume that the variables are related by a deterministic rule. However, the fact that participants responded in the appropriate direction over time suggests that they could potentially achieve significant improvement if given more trials. Results from cue discovery tasks (Klayman, 1988) indicate that cue discovery can be accomplished gradually over a larger number of trials (ranging into the hundreds).

 

Implications for management education

The findings of the current study may have some important implications for management education. Creating learning environments that encourage communication and a culture of information sharing among learners may be necessary to speed up learning and enable higher levels of performance. Given the growing number of distance learning programmes offered by university colleges and business schools, the findings may also have important pedagogical implications for distance education. In particular, the evidence from this study suggests that support facilities such as groupware systems may be necessary to enable interaction among distance learners and so augment their individual experience. The demand for verbal communication may also require additional support for the visual, aural or kinaesthetic modes of informing upon which learners rely for knowledge sharing.

While the current study provides some interesting findings, some caution is necessary regarding their generalisability due to a number of limitations. One limitation is the use of a laboratory experiment, which may compromise the external validity of the research. Another relates to the artificial generation of information, which may not reflect the true nature of real business. No incentives were offered to participants for their effort in the study, potentially resulting in poor motivation during task performance. The findings may also be limited to probabilistic tasks. Furthermore, no data were gathered on the discussions between group members, making the explanatory discussion somewhat speculative. All this implies a need for further research addressing the current limitations.

 

References

  • Arkes, H. R., Dawes, R. M., & Christensen, C. (1986). Factors Influencing the Use of a Decision Rule in a Probabilistic Task. Organisational Behaviour and Human Decision Processes, 37, 93-110.
  • Ashton, R. H., & Kramer, S. S. (1980). Students as Surrogates in Behavioural Accounting Research: Some Evidence. Journal of Accounting Research, 18 (1), 1-15.
  • Balzer, W. K., Hammer, L. B., Sumner, K. E., Birchenough, T. R., Martens, S. P., & Raymark, P. H. (1994). Effects of Cognitive Feedback Components, Display Format, and Elaboration on Performance. Organisational Behaviour and Human Decision Processes, 58, 369-385.
  • Balzer, W. K., Sulsky, L. M., Hammer, L. B., & Sumner, K. E. (1992). Task Information, Cognitive Information, or Functional Validity Information: Which Components of Cognitive Feedback Affect Performance? Organisational Behaviour and Human Decision Processes, 53, 35-54.
  • Brehmer, B. (1980). In One Word: Not from Experience. Acta Psychologica, 45, 223-241.
  • Connolly, T., & Gilani, N. (1982). Information Search in Judgement Tasks: A Regression Model and Some Preliminary Findings. Organisational Behaviour and Human Performance, 30, 330-350.
  • Connolly, T., & Serre, P. (1984). Information Search in Judgement Tasks: The Effects of Unequal Cue Validity and Cost. Organisational Behaviour and Human Performance, 34, 387-401.
  • Connolly, T., & Thorn, B. K. (1987). Predecisional Information Acquisition: Effects of Task Variables on Suboptimal Search Strategies. Organisational Behaviour and Human Decision Processes, 39, 397-416.
  • Connolly, T., & Wholey, D. R. (1988). Information Mispurchase in Judgement Tasks: A Task Driven Causal Mechanism. Organisational Behaviour and Human Decision Processes, 42, 75-87.
  • Creyer, E. H., Bettman, J. R., & Payne, J. W. (1990). The Impact of Accuracy and Effort Feedback and Goals on Adaptive Decision Behaviour. Journal of Behavioural Decision Making, 3 (1), 1-16.
  • Davenport, T. H., & Prusak, L. (1998). Working Knowledge, Boston: Harvard Business School Press.
  • Davenport, T. H., DeLong, D. W., & Beers, M. C. (1998). Successful Knowledge Management Projects. Sloan Management Review, Winter, 43-57.
  • Devlin, K. (1999). Infosense: Turning Information into Knowledge, New York: W.H. Freeman and Company.
  • Drucker, P. F. (1993). Post-Capitalist Society, New York: Harper Business.
  • Drucker, P. F. (1998). The Coming of the New Organisation. Harvard Business Review on Knowledge Management, Boston: Harvard Business School Press.
  • Garvin, D. A. (1998). Building a Learning Organisation. Harvard Business Review on Knowledge Management, Boston: Harvard Business School Press.
  • Grayson, C. J., & Dell, C. O. (1998). Mining Your Hidden Resources. Across the Board, 35 (4), 23-28.
  • Hastie, R. (1986). Experimental Evidence on Group Accuracy. Decision Research, 2, 129-157.
  • Handzic, M. (1996). The Utilisation of Contextual Information in A Judgemental Decision Making Task. PhD Thesis, UNSW.
  • Handzic, M., & Low, G. (in press). The Impact of Social Interaction on Performance of Decision Tasks of Varying Complexity. OR Insight.
  • Heath, C., & Gonzalez, R. (1995). Interaction with Others Increases Decision Confidence but Not Decision Quality: Evidence against Information Collection Views of Interactive Decision Making. Organisational Behaviour and Human Decision Processes, 61 (3), 305-326.
  • Hewson, D. (1999). Start Talking and Get to Work. Business Life, November, 72-76.
  • Hogarth, R. M. (1981). Beyond Discrete Biases: Functional and Dysfunctional Aspects of Judgemental Heuristics. Psychological Bulletin, 90 (2), 197-217.
  • Janis, I. (1982). Groupthink: Psychological Studies of Policy Decisions and Fiascoes, Boston: Houghton Mifflin.
  • Judd, C. M., Smith, E. R., & Kidder, L. H. (1991). Research Methods in Social Relations, 6th ed., Orlando: Holt Rinehart and Winston, Inc.
  • Joyce, B., & Weil, M. (1986). Models of Teaching, Englewood-Cliffs, NJ: Prentice-Hall.
  • Klayman, J. (1988). Learning from Experience. In Brehmer, B., & Joyce, C. R. B. (Eds.) Human Judgement: The SJT View, Amsterdam: North-Holland.
  • Kleiner, A., & Roth, G. (1998). How to Make Experience Your Company’s Best Teacher. Harvard Business Review on Knowledge Management. Boston: Harvard Business School Press.
  • Kopelman, R. E. (1986). Objective Feedback. In Locke, E. A. (Ed) Generalising from Laboratory to Field Settings, Aldershot: Gower.
  • Lim, J. S., & O'Connor, M. J. (1996). Judgemental Forecasting with Interactive Forecasting Support Systems. Decision Support Systems, 16, 339-357.
  • Makridakis, S. (1993). Accuracy measures: theoretical and practical concerns. International Journal of Forecasting, 9, 527-529.
  • Makridakis, S., & Wheelwright, S. C. (1989). Forecasting Methods for Management, 5th ed., New York: John Wiley and Sons.
  • Marakas, G. M. (1999). Decision Support Systems in the 21st Century, New Jersey: Prentice Hall Inc.
  • Metcalfe, M., Neill, B., & Marriott, P. (2000). Appropriate Technology for Oral Knowledge Sharing. Paper presented at the Australian Conference for Knowledge Management and Intelligent Decision Support, December 4-5, 2000, Melbourne, Australia.
  • Nonaka, I. (1998). The Knowledge-Creating Company. Harvard Business Review on Knowledge Management, Boston: Harvard Business School Press.
  • Nonaka, I., & Takeuchi, H. (1995). The Knowledge Creating Company: How Japanese Companies Create the Dynamics of Innovation, New York: Oxford University Press.
  • Nonaka, I., & Konno, N. (1998). The Concept of Ba: Building a Foundation for Knowledge Creation. California Management Review, 40 (3), 40-54.
  • O'Connor, M. J. (1989). Models of Human Behaviour and Confidence in Judgement: A Review. International Journal of Forecasting, 5, 159-169.
  • Panko, R., & Kinney, S. (1992). Dyadic Organisation Communication: Is the Dyad Different? Proceedings of the 25th Hawaii International Conference on Systems Sciences, 244-253.
  • Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive Strategy Selection in Decision Making. Journal of Experimental Psychology: Learning, Memory and Cognition, 14 (3), 534-552.
  • Peterson, D. K., & Pitz, G. F. (1988). Confidence, Uncertainty and the Use of Information. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 85-92.
  • Remus, W. (1996). Will Behavioural Research on Managerial Decision Making Generalise to Managers? Managerial and Decision Economics, 17, 93-101.
  • Remus, W., O’Connor, M., & Griggs, K. (1996). Does Feedback Improve the Accuracy of Recurrent Judgemental Forecasts? Working paper, January, University of Hawaii.
  • Schwartz, D. (1995). The Emergence of abstract representation in dyad problem solving. The Journal of the Learning Sciences, 4 (3), 321-345.
  • Senge, P. (1990). The Fifth Discipline, New York: Doubleday.
  • Seufert, S., & Seufert, A. (1998). Collaborative Learning Environments for Management Education. Paper presented at the 13th Annual Conference of the International Academy for Information Management, December 11-13, 1998, Helsinki, Finland.
  • Sniezek, J. A., & Buckley, T. (1995). Cueing and Cognitive Conflict in Judge-Advisor Decision Making. Organisational Behaviour and Human Decision Processes, 62 (2), 159-174.
  • Stasser, G., Taylor, L. A., & Hanna, C. (1989). Information Sampling in Structured and Unstructured Discussions of Three- and Six-Person Groups. Journal of Personality and Social Psychology, 57, 67-78.
  • Stewart, T. A. (1997). Intellectual Capital: The New Wealth of Organisations, New York: Doubleday.
  • Swanson, E. B. (1988). Information Systems Implementation, Homewood: Irwin.
  • Whitecotton, S. M. (1996). The Effects of Experience and a Decision Aid on the Slope, Scatter, and Bias of Earnings Forecasts. Organisational Behaviour and Human Decision Processes, 66 (1), 111-121.



Copyright message

Copyright by the International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the authors of the articles you wish to copy or kinshuk@massey.ac.nz.