Educational Technology & Society 5 (1) 2002
ISSN 1436-4522

Assessing group learning and shared understanding in technology-mediated interaction

Ingrid Mulder and Janine Swaak
Telematica Instituut
P.O. Box 589
7500 AN Enschede, The Netherlands
Tel: +31 53 485 0 499
Fax: +31 53 485 0 400
ingrid.mulder@telin.nl
janine.swaak@telin.nl

Joseph Kessels
University of Twente
P.O. Box 217
7500 AE Enschede, The Netherlands
Tel: +31 53 489 3169
Fax: +31 53 489 3759
kessels@edte.utwente.nl

 

ABSTRACT

Without shared understanding, hardly any group learning takes place. Though much has been written about the essence of shared understanding, less is known about how to assess the process of reaching shared understanding. Therefore, this study focuses on ways of assessing shared understanding. A conceptual framework is described that makes a distinction between the process of reaching shared understanding and the resulting shared understanding. The conceptual ideas lead to a coding scheme for observing the processes of shared understanding and to the definition of product measures, among them a scale to assess perceived shared understanding. Then an empirical study is presented in which the model is applied. A major conclusion is that it was hard to find reflective utterances in the protocols, although the coding scheme provided for them.

Keywords: Collaborative technology, Global distributed engineering design team, Group learning, Problem-based projects, Shared understanding


Introduction

Understanding each other is not a trivial matter, whether in face-to-face situations or in technology-mediated situations. When group members are globally distributed and their interaction is completely mediated by technology, it is even more difficult. In the current work we investigate the question: how do globally-distributed group members reach shared understanding when they can only communicate using technology support?

Problem-based learning often goes hand in hand with ill-defined problems. Group members working and learning together in problem-based projects need to pay attention to a variety of issues including a shared definition of problem statements and project goals, the division of tasks and roles, and the co-ordination of activities. In order to work and learn together, group members need a shared understanding of what they are working on, how they are going to work together, and with whom they are working. Furthermore, in technology-mediated interaction they need to know what technology they should use, and how they should use this technology. In other words, we believe group members need a certain amount of shared understanding on the content, the procedure, each other, and on the use of communication technology (Mulder & Swaak, 2000).

In group-based working and learning, synchronous and asynchronous communication may alternate. Tasks are carried out groupwise, though this does not imply that group members are always working synchronously, as self-study and individual literature review as well as working asynchronously on a document may be part of a project. These communication modes have distinct goals: synchronous settings are more suited for reaching a shared understanding (convergence), whereas asynchronous settings are better for exchanging information (conveyance) (Dennis, Valacich, Speier & Morris, 1998; Swaak, Mulder, Van Houten & Ter Hofte, 2000).

Globally-distributed group members face problems such as long travel times and high travel costs when they want to meet face-to-face. Even though technology support can bridge this geographical distance, these globally-distributed teams might still find it difficult, due to time-zone differences, to meet synchronously.

Technology support for collaborative working and learning should support both synchronous and asynchronous interaction as both interaction modes have distinct purposes. Until recently most collaborative technology has supported either synchronous interaction or asynchronous interaction. Fortunately, developments in technology are moving in the direction of better integration of synchronous and asynchronous tools, trying to solve problems concerning the switching between these communication modes (Swaak et al., 2000).

To sum up, shared understanding is considered crucial for the quality of interaction (see, for instance, Clark & Brennan, 1991; Donnellon, Gray & Bougon, 1986). Though much has been written about the essence of shared understanding, less is known about its assessment. In the current work, therefore, we focus on how we can assess shared understanding. We study the concept of shared understanding in a virtual team that interacts both synchronously and asynchronously. As synchronous settings are more suited for reaching shared understanding, in this work we pay particular attention to the synchronous interaction in order to assess shared understanding and group learning. First, we present our conceptual framework on shared understanding and group learning, making the distinction between the process of reaching shared understanding and the level of shared understanding; this allows observation of both the process and the outcome of shared understanding. The framework leads to a coding scheme for observing the processes of shared understanding and to the definition of product measures, among them a scale to assess perceived shared understanding. Then, we apply our conceptual ideas on shared understanding in an empirical study and describe the main results. We conclude with a discussion of the limitations of our research and of future work.

 

Assessing shared understanding and group learning

Shared understanding refers to mutual knowledge, mutual beliefs, and mutual assumptions (Clark & Brennan, 1991). We view the process of reaching shared understanding as an important type of group learning. The outcome of these group processes is a certain level of shared understanding: in other words, the overlap of understanding and concepts among group members. When progress is made in the group, understanding changes, and, therefore, group members need to update their shared understanding. When we refer to the process of reaching shared understanding, we implicitly include the updating of shared understanding. Figure 1 represents our theoretical notions on reaching shared understanding. Conceptual learning, the use of feedback and expressions of motivation are part of the process of reaching and updating shared understanding.

 

Figure 1. Conceptual framework on reaching shared understanding

 

In the following we explain how conceptual learning, feedback, and motivation make up the group learning process for reaching shared understanding.

 

Conceptual learning

In this study, conceptual learning refers to the exchange, reflection and refinement of facts and concepts. Norman (1993) distinguished various modes of learning, which form the basis of our assessment of shared understanding. Whereas Norman emphasises skills, we place more emphasis on concepts and understanding. We adapt the modes distinguished by Norman (1993) and redefine them as follows. When concepts and facts are added, we use his term accretion. We use tuning for the fine-tuning of these concepts and facts (i.e., when utterances involve more specifics or more detail, or when utterances define more boundaries or make the scope explicit). We use restructuring when new relations among concepts or a new conceptual framework are created. Only after restructuring can understanding be updated. As our goal is to analyse continuously updated shared understanding among the group members (focusing on group learning instead of individual learning), we add co-construction of knowledge (Van der Meij, 2000) to Norman’s accretion, tuning and restructuring. The main difference between restructuring and co-construction is that restructuring involves individual reflection, whereas co-construction concerns restructuring by the whole group.

 

Feedback

Feedback mechanisms are used to structure the communication process and also to encourage reflection. The use of feedback contributes to reaching shared understanding, because listeners understand better when more feedback is provided (Krauss & Fussell, 1991; Schober & Clark, 1989). Moreover, some researchers view feedback as a specific type of learning (Argyris & Schön, 1978). Based on the functions of feedback in communication (Gramsbergen & Van der Molen, 1992), and our specific wish to measure understanding, we define the following distinct feedback mechanisms: confirm, paraphrase, summarise, explain, reflect, check understanding, and check action. We argue that the checking of actions and understanding is often difficult when working together in a distributed setting; therefore, we added these last two variables to the feedback mechanisms defined by Gramsbergen and Van der Molen (1992).

 

Motivation

Learning is also affected by motivation (Bandura, 1986). Therefore, motivation has been added to include evaluative expressions on the usefulness of acquired information. More specifically, we refer to the expression of certainty and uncertainty, and subjective expressions of the ‘value’ of the situation. We distinguish the expression of certainty and uncertainty, impasse, and evaluation.

 

Conceptual framework summarised

Conceptual learning, feedback, and motivation are complementary and closely related, yet they have distinct purposes: conceptual learning is more associated with cognition, whereas motivation involves the motivational and emotional side of learning. With respect to this distinction, Snow (1989) refers to cognitive and conative structures in learning. Finally, feedback focuses on the mechanisms that structure communication.

Based on these conceptual ideas we developed a coding scheme to assess group learning and shared understanding. Both the conceptual model and the coding scheme were employed in an empirical study. In addition to our notion that conceptual learning, the use of feedback and the expression of motivation enhance group learning, we believe that group members working and learning together with technology need a certain level of shared understanding about the content, the procedure, each other, and the technology. Therefore, we analyse the dynamic group processes and outcomes by means of these four interaction aspects. In addition, we assume that a good balance between the interaction aspects (content, procedure, relation, and technology) has a positive effect on the process of reaching and updating shared understanding. With respect to social interaction, Kreijns, Kirschner and Jochems (2002, in this issue of ET&S) argue that social interaction cannot be taken for granted, and therefore needs specific attention by means of communication technology and didactical support.

 

Empirical study

We explored group learning and shared understanding in a globally-distributed engineering team. This study was embedded in a larger project called International Networked Teams for Engineering Design (INTEnD), developed by an international consortium of universities and industry partners whose principal aim is to conduct research on the evaluation of geographically dispersed teams of engineers.

In our distributed team, the team members did not know each other beforehand; in other words, the team was a zero-history group. Before the team started, the members received instructions about the various communication tools they could use. The students were free to decide how often they met, which media they used, how they proceeded, and how they arrived at the final result. With respect to technology, they could ask for help from technical staff as well as from the researchers. It was not possible for the whole group to meet face-to-face, though each subteam had face-to-face contact. The students were recruited at their universities to participate in an international design team, and volunteered to work on the engineering design task in the context of an optional assignment. Since the language of the international team was English, the Dutch students were assessed on their proficiency in English.

 

Unit of analysis

The subject of this study was a group of seven mechanical engineering students collaborating for four months on an engineering design project. The students came from two different universities: Delft University of Technology (TUD) in the Netherlands and Michigan State University (MSU) in the United States. The Delft subteam consisted of two Dutch male students (aged 26 and 27) and one American male student (aged 21) who was spending the semester in Leuven, Belgium. The remote American subteam included two female and two male students (all aged 21). In the current study we examined the group communication process; the unit of analysis is, therefore, the whole group (N = 1) rather than the seven individual students.

 

Task

The industry partners provided realistic projects, which gave the students a sense that their efforts had a real client and a purpose. The collaborative design task given to this engineering team was to develop a design process, a tool design concept and a prototype to manually form the rear wheel well for a new car type. The students received background information about the automotive industry partner. Furthermore, they needed to follow some assumptions and criteria defined by the client. Thus, the students were performing a realistic task, with a problem statement and basic constraints, though the problem was ill-defined and various ways of tackling it were available.

 

Collaboration and communication tools

In order to fulfil their collaborative design task, the students could choose the media they preferred from the following:

TeamSCOPE, a web-based collaboration tool (Steinfield, Jang & Pfaff, 1999) which integrates several functions, such as sharing files, making comments and sending messages. It also includes awareness tools, a calendar and a chat function (see figure 2);

ISDN desktop videoconferencing (Vtel™);

Web-based desktop videoconferencing (NetMeeting™), which includes chat and whiteboard functionality;

(handsfree) telephone;

email; and,

fax.

 

The ISDN desktop videoconferencing has better image and sound quality than the web-based variant. In order to use the videoconferencing systems, each subteam had a webcam installed on top of their monitor. Figure 2 shows an example of TeamSCOPE, giving details of each team member’s activity (e.g., who has downloaded which files, and when). One screen shows a table of users, activities and files. A second screen shows a message board and calendar.

 

 

Figure 2. TeamSCOPE: activity list and team’s calendar and message board

 

The media set allowed the students to work together at the same time (synchronous communication), as well as to work individually on the engineering design task whenever they preferred (asynchronous communication).

 

Data collection

In order to assess group learning and shared understanding in the global design team, we used a variety of data sources. We employed several methods to gather both rich, qualitative data and more numerical, quantitative data, including: observations, transcripts, interviews, questionnaires, rating scales, weekly communication diaries, monitoring of system usage, and expert judgement of performance.

To get a rich picture of the actual process of team communication we performed observation studies. We observed the team, in a semi-structured way, during synchronous team communication (i.e., the video meetings) at both locations. The verbal communication during the ISDN desktop video meetings was recorded, and we fully transcribed the team communication. The main objective of these transcripts was to study the whole communication process in more detail. We coded the transcripts by means of a coding scheme (Mulder, 2000) described below. The coded transcripts gave us insight into the balance of the interaction modes and into the process of reaching shared understanding. We conducted semi-structured interviews (at the individual and at the group level), which we repeated three times (pre-term, mid-term and final). These interviews solicited team members’ feedback on their global team experience, in terms of issues such as media use, cultural differences, misunderstandings and lessons learned.

We also employed web-based pre-term, mid-term and final questionnaires. Pre-test questionnaires were used to compare levels of interest, experience and skills among team members, as well as to assess expectations about working with their remote counterparts. Post-test questionnaires assessed such measures as the degree to which students were satisfied with their experience, their comfort with the group’s communication, their trust in other team members, the usefulness of the communication and collaboration tools provided, and their assessment of the group’s output.

We added a self-scoring instrument (Mulder, 1999) to measure the perception of shared understanding (both process and product). With this instrument we measured how group members perceived their understanding concerning content, procedure and relation aspects. The students indicated their understanding on six-point and seven-point (Likert) rating scales, which refer respectively to their understanding of the several aspects and to how this understanding has evolved. We used a 6-point scale to measure what the (shared) understanding was at a certain moment (the now scale); an even number of points forces students to choose either a negative or a positive rating. For the evolving understanding we used a 7-point scale (the evolve scale), so that students could also indicate that their understanding had stayed the same (a score of 4), had changed positively (5, 6, or 7), or had changed negatively (1, 2, or 3). An example of the rating scale is shown in Box 1.

 

Box 1. Instrument for measuring the perception of understanding of the content

 

The team members completed communication diaries every week using web-based forms. On the form they reported whether they had communicated with their team, and, if so, via which medium. They also answered a few brief questions assessing aspects of team progress and communication for that week, and whether they had faced any problems. Furthermore, log files were kept to monitor system usage, including the frequency of use of TeamSCOPE and the number of electronic mail messages sent and received. The final source of collected information was the expert judgement of the team’s engineering work by engineering faculty.

 

Data analysis

Raw data were collected from the sources described above. We integrated all these data to analyse the results of the current study. Our richest sources were the observations and the transcripts of the video meetings. In order to obtain results that better allow comparison across studies, we developed a scheme to code the transcripts of team communication. As this coding scheme is new, we describe it in more detail.

 

Coding scheme

The purpose of a coding scheme is to acquire objective measures from the rich, qualitative transcriptions. This coding scheme is based on our theoretically founded conceptual ideas, in which four main categories were distinguished: task/domain (content), social interaction (relation), planning of activities (procedure), and technology. Because reaching understanding relates to the use of feedback and to the expression of motivation, certainty and uncertainty, the dimensions feedback and motivation were included as well. In order to obtain a complete scheme, a two-step procedure with segmentation preceding categorisation was followed (Van der Meij, 2000). First the transcripts were divided into segments (utterances). Then, each segment was categorised on a certain dimension. Table 1 displays the categories and dimensions used in the coding scheme.

 

| Linguistic expression | Kind of interaction | Conceptual learning | Feedback            | Motivation  |
|-----------------------|---------------------|---------------------|---------------------|-------------|
| Assertion             | Task/domain         | Accretion           | Confirm             | Uncertainty |
| Question              | Social              | Tuning              | Paraphrase          | Evaluation  |
| Reaction              | Procedure           | Restructuring       | Summarise           | Impasse     |
|                       | Technology          | Co-construction     | Explain             |             |
|                       |                     |                     | Check understanding |             |
|                       |                     |                     | Check action        |             |
|                       |                     |                     | Reflect             |             |

Table 1. Categories and dimensions used in the coding scheme

 

All segments were coded on linguistic expression, whether they contained an assertion, a question, or a reaction. Assertions are statements of facts, principles, choices and so on, whose main intent is to inform the other group members. Questions are explicit requests for information. Reactions are responses including answers to questions and responses to assertions. In email communication this distinction is clear. In transcripts of videoconferencing meetings it is not always apparent.

Then, each segment was coded according to the kind of interaction: whether it dealt with the task/domain, social interaction, the procedure, or technology. Utterances that involved the task or the project description were coded as task/domain. Utterances that did not involve the task, but were more personal or cultural in nature, were coded as social interaction. With procedure we refer to the planning of the next meeting and the structuring of the current one. Finally, utterances related to the use of communication technology or to media choices were placed in the (communication) technology category. More specifically, we used the technology category when students discussed how they should use the communication technology, how the communication technology should work, and when communication broke down due to the technology. Specific task-related technology, for instance AutoCAD, was coded as task/domain, because team members would have used this technology even in face-to-face project meetings.

In addition, each segment was coded on a specific category-related dimension: a type of conceptual learning, the use of feedback, and the expression of motivation. Conceptual learning was coded when the content of the information was being manipulated. With respect to conceptual learning we were interested in whether an utterance involved accretion, tuning, restructuring, or co-construction. We coded these learning types as follows:

  • Accretion: adding or repeating concepts and facts;
  • Tuning: fine-tuning of concepts (i.e., making them more specific, adding detail, adding boundaries or making the scope more explicit);
  • Restructuring: providing new relations between concepts or a new conceptual framework (i.e., reflecting on the individual level);
  • Co-construction: restructuring of the whole group.

 

To analyse the use of feedback, we used the following categories: confirm, paraphrase, summarise, explain, check understanding, check action, and reflect. In our coding manual (Mulder, 2000) we used the following definitions:

  • Confirm: Reaction that can be indicated as an agreement. The understanding is shared.
  • Paraphrase: Summarising using one’s own words. This is also a form of reflection.
  • Summarise: One of the group members summarises what has been told before.
  • Explain: Reaction to other utterances that provides new information or increases understanding.
  • Check understanding: Checking self-understanding or another group member’s understanding of a previous utterance.
  • Check action: Checking whether an action has been understood by another group member.
  • Reflect: A feedback mode indicating meta-communication, which is not necessarily related to procedure or technology; it is used as a kind of evaluative feedback.
  • Other: We added an extra category in case we did not capture all feedback categories; if an utterance involved feedback but was difficult to assign to one of the categories above, it was coded as other.

Finally, in order to indicate motivation, we distinguished evaluation, uncertainty, and impasse. Evaluation was chosen when an opinion was stated or when something was evaluated; uncertainty was related to the expression of confusion or doubt; and an impasse was indicated when the group expressed that they did not know how to go any further.

In figure 3 we illustrate how transcripts were coded. The end of a segment is indicated by ||.

 

Figure 3. Example of a coded transcript
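To complement the example in Figure 3, the sketch below shows one possible way of representing coded segments for further analysis. It is a minimal illustration in Python; the class names, field names, and the example utterance are our own assumptions and are not taken from the original coding manual (Mulder, 2000).

```python
# Illustrative representation of the coding scheme in Table 1.
# All names and the example utterance are assumptions for illustration only.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Expression(Enum):       # linguistic expression
    ASSERTION = "assertion"
    QUESTION = "question"
    REACTION = "reaction"

class Interaction(Enum):      # kind of interaction
    TASK_DOMAIN = "task/domain"
    SOCIAL = "social"
    PROCEDURE = "procedure"
    TECHNOLOGY = "technology"

class Learning(Enum):         # conceptual learning
    ACCRETION = "accretion"
    TUNING = "tuning"
    RESTRUCTURING = "restructuring"
    CO_CONSTRUCTION = "co-construction"

class Feedback(Enum):         # feedback mechanisms
    CONFIRM = "confirm"
    PARAPHRASE = "paraphrase"
    SUMMARISE = "summarise"
    EXPLAIN = "explain"
    CHECK_UNDERSTANDING = "check understanding"
    CHECK_ACTION = "check action"
    REFLECT = "reflect"
    OTHER = "other"

class Motivation(Enum):       # expression of motivation
    UNCERTAINTY = "uncertainty"
    EVALUATION = "evaluation"
    IMPASSE = "impasse"

@dataclass
class CodedSegment:
    """One utterance (segment); dimensions that do not apply remain None."""
    speaker: str
    text: str
    expression: Expression
    interaction: Interaction
    learning: Optional[Learning] = None
    feedback: Optional[Feedback] = None
    motivation: Optional[Motivation] = None

# Invented example: a task-related assertion coded as accretion.
segment = CodedSegment(
    speaker="student A",
    text="The wheel well has to be formed manually.",
    expression=Expression.ASSERTION,
    interaction=Interaction.TASK_DOMAIN,
    learning=Learning.ACCRETION,
)
```

A structure of this kind makes it straightforward to tally utterances per interaction category or learning type, as reported in the results below.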

 

In order to calculate the reliability of our coding scheme, a random sample of 128 segments (taken from every other meeting) was coded independently by two experienced raters who had been instructed using the coding manual (Mulder, 2000). These 128 segments were coded for all categories and represented 5% of the segments of the current study (2531 segments in total). To calculate the inter-rater reliability, i.e., the agreement between the coding of the two raters, we used the kappa coefficient, which indicates the amount of agreement corrected for the agreement expected by chance. The overall average inter-rater reliability was .839, which is considered ‘almost perfect’ by Landis and Koch (1977).
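As an illustration of this reliability calculation, the following minimal Python sketch computes a two-rater (Cohen's) kappa, assuming the two raters' codes for the sampled segments are available as parallel lists; the example data are invented and do not reproduce the actual 128-segment sample.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Observed agreement corrected for the agreement expected by chance."""
    assert rater1 and len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    expected = sum((freq1[c] / n) * (freq2[c] / n)
                   for c in set(freq1) | set(freq2))
    return (observed - expected) / (1 - expected)

# Invented example: two raters coding six segments on the interaction dimension.
r1 = ["task", "task", "social", "procedure", "task", "technology"]
r2 = ["task", "task", "social", "task",      "task", "technology"]
print(round(cohen_kappa(r1, r2), 3))
```

In practice a kappa value would be computed per coding dimension and then averaged, which is consistent with the overall average value reported above.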

 

Expected results

As we view conceptual learning, feedback, and motivation as part of the process of reaching shared understanding, we expect at the operational level that a higher number of utterances devoted to these three categories is related to a higher rating of perceived shared understanding. Concerning the outcome of shared understanding, we expect the perception of shared understanding to increase. In other words, we expect the students to indicate a score of 4 or higher on our rating scales measuring perceived shared understanding. More specifically, a score of 4 on the now scales means a positive rating of understanding (‘I understand completely well’), whereas a score of 4 on the evolve scales implies that the (shared) understanding has not changed; a score of 5, 6, or 7 refers to increased understanding.
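Purely to illustrate these interpretation rules, the short sketch below classifies now and evolve scores; the function names are our own and are not part of the original instrument.

```python
def interpret_now(score: float) -> str:
    """Six-point 'now' scale: 4 or higher counts as a positive rating of understanding."""
    return "positive" if score >= 4 else "negative"

def interpret_evolve(score: float) -> str:
    """Seven-point 'evolve' scale: 4 = unchanged, above 4 = increased, below 4 = decreased."""
    if score > 4:
        return "increased"
    if score < 4:
        return "decreased"
    return "unchanged"

# Example with the content scores reported after meeting 3 (see Table 2).
print(interpret_now(4.33), interpret_evolve(6))  # positive increased
```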

 

Results

In this section we describe the main results briefly (for more details, see Mulder & Swaak, 2000). We then display the results concerning our hypothesis that conceptual learning, feedback, and motivation are part of the process of reaching shared understanding. We conclude by discussing whether the perception of shared understanding increases during the project.

 

Main results

By combining the results from our different sources, the following main results were found. The data regarding task-related interaction seem to support the assumption that a certain level of shared understanding about the problem formulation is necessary in order to learn. In addition, we found that task-related utterances predominated during the project. Of these task-related utterances, a relatively high number were devoted to the use of feedback and to conceptual learning; most conceptual learning was coded as accretion and tuning. While task-related interaction prevailed during the project, we found little social interaction. Interestingly, when time was devoted to social aspects, this happened near the end of the video meetings. Utterances devoted to social interaction increased during the project, with the exception of the second meeting, which contained the highest number of social-related utterances. The total amount of conceptual learning and the expression of motivation were also highest in the second meeting.

 

Figure 4. Total number of utterances devoted to the four kinds of interaction

 

The number of utterances devoted to procedure-related interaction decreased during the project. In the first meeting, the participants learned most and provided the most feedback concerning the procedure. Surprisingly, the participants did not learn much regarding technology use. The team had defined their specific way of communicating early on, which can be seen as their shared understanding of how to use the technology. Later in the project they did, however, try to use other technologies. During those meetings they also expressed more motivation and provided more feedback than in the other meetings. The total number of utterances devoted to the four kinds of interaction is displayed in Figure 4.

A general question in our observations was whether the interaction modes were in balance. As the participants in the study employed all four interaction modes, the interaction was, in principle, not unbalanced. The lack of social interaction in the first meeting and the absence of technology-related utterances, however, seem to indicate an unbalanced start. Nevertheless, the high score on social interaction in the second meeting suggests a kind of re-balancing action.

Conceptual learning, feedback, and motivation. We hypothesised that conceptual learning, feedback, and motivation are part of the process of reaching shared understanding. The following findings seem to support this hypothesis. Though we did not find an increase in the utterances devoted to conceptual learning, feedback, and motivation, we did find evidence that the number of utterances devoted to these three categories supported the process of reaching understanding. The most common types of conceptual learning were accretion and tuning, especially at the beginning of the project. Interestingly, the highest amount of conceptual learning occurred in the fourth meeting (100 utterances: 49 accretion, 45 tuning, and 6 restructuring). This was the meeting directly after the participants had resolved a major misunderstanding and had reframed their problem statement (meeting 3). After this miscommunication a team member stated in the weekly diaries: “We talked to our team mates again this week and they now understand more about the project.” The amount of conceptual learning seems to go hand in hand with an increasing shared understanding, as the students gave their highest rating of shared understanding of the content on the rating scales. This tends to support the assumption that conceptual learning is part of the process of reaching shared understanding. In addition, these results also seem to support the assumption that a certain level of shared understanding, in this case enough shared understanding of the problem formulation, is necessary in order to learn.

We found that in their second meeting the students devoted most utterances to conceptual learning, feedback and motivation related to social aspects. After this meeting they all agreed that they felt they knew the other group members better (see rating scales, Table 3). After the first meeting the students also devoted more time to social interaction (raw transcripts). Again there seems to be evidence for the assumption that a certain level of shared understanding is first necessary in order to learn and update shared understanding. In particular, the use of feedback modes such as reflect, check understanding, paraphrase and explain seems to go hand in hand with the process of reaching shared understanding.

The perception of shared understanding. According to our conceptual framework, perceived shared understanding should increase during a project, at least if the team learns. However, it is an interesting question how we can assess this. The tables below display the results of the rating scales used to measure the perception of shared understanding. (We applied the rating scales after meetings 2, 3, 4, 5, and 10.) The ‘now’ questions are on a 6-point scale, the ‘evolve’ questions on a 7-point scale. A score of 4 or higher on the six-point scales indicates a positive rating of understanding, whereas on the seven-point scales a score of 4 means that nothing has changed. The instrument concerning content was shown in Box 1.

 

CONTENT

| At the end of | Now (6-point scale) | Evolve (7-point scale) | Shared now (6-point scale) | Shared evolve (7-point scale) |
|---------------|---------------------|------------------------|----------------------------|-------------------------------|
| Meeting 2     | 3                   | 4.5                    | 4.5                        | 4.5                           |
| Meeting 3     | 4.33                | 6                      | 4.67                       | 5                             |
| Meeting 4     | 5.5                 | 5                      | 5.5                        | 5.5                           |
| Meeting 5     | 4                   | 6                      | 5                          | 5                             |
| Meeting 10    | 4.5                 | 5                      | 5                          | 4.5                           |

Table 2. Scores of perceived understanding of content

 

Table 2 displays the results of the rating scales related to ‘content’. A score of 4 or higher on the now scales indicates a high level of shared understanding. A score higher than 4 on the evolve scales indicates an improved shared understanding. Only in the second meeting was a now score below 4, indicating that the understanding of the definition and requirements of the problem (content) was still low. Thus, the data show that the perceived understanding of content and the shared understanding of content increased during the project.

 

RELATION

| At the end of | Now (6-point scale) | Evolve (7-point scale) | Shared now (6-point scale) | Shared evolve (7-point scale) |
|---------------|---------------------|------------------------|----------------------------|-------------------------------|
| Meeting 2     | 5                   | 5                      | 4                          | 4                             |
| Meeting 3     | 3.33                | 5                      | 3.67                       | 4.67                          |
| Meeting 4     | 4.5                 | 5                      | 4.5                        | 5                             |
| Meeting 5     | 5                   | 4                      | 4                          | 4                             |
| Meeting 10    | 4.5                 | 4                      | 5                          | 4.5                           |

Table 3. Scores of perceived understanding of each other

 

After the third meeting the perceived understanding of the other group members was low compared with the other scores (now scores below 4, see Table 3). The perceived shared understanding of each other was also low after the third meeting. However, the students indicated that their understanding had improved since the previous meeting. Only after meetings 2 and 5 was their shared understanding unchanged (evolve score of 4). Individual understanding did not change in the last meetings (evolve score of 4). Another interesting observation is that the students scored their understanding of each other after the second meeting with a 5 (among the highest scores given). This is high compared with the scores on content (3) and procedure (4.5). In this second meeting the students devoted relatively more time to social interaction.

 

PROCEDURE

| At the end of | Now (6-point scale) | Evolve (7-point scale) | Shared now (6-point scale) | Shared evolve (7-point scale) |
|---------------|---------------------|------------------------|----------------------------|-------------------------------|
| Meeting 2     | 4.5                 | 5                      | 4.5                        | 5.5                           |
| Meeting 3     | 3.33                | 4.33                   | 4.33                       | 4.33                          |
| Meeting 4     | 5                   | 5.5                    | 5                          | 5                             |
| Meeting 5     | 4                   | 5                      | 5                          | 4                             |
| Meeting 10    | 4                   | 5                      | 4                          | 4.5                           |

Table 4. Scores of perceived understanding of the procedure

 

Table 4 shows the results of the rating scales related to understanding of the procedure. The lowest score here was given after the third meeting (3.33), while all the other measurements have a score of 4 or higher. Individual and shared understanding of the procedure improved during the project; on the evolve scales, the students scored higher than 4. Only after meeting 5 had the perceived shared understanding of the procedure not changed.

 

OVERALL

| At the end of | Shared evolve (7-point scale) |
|---------------|-------------------------------|
| Meeting 2     | 5                             |
| Meeting 3     | 5                             |
| Meeting 4     | 5                             |
| Meeting 5     | 5                             |
| Meeting 10    | 5.5                           |

Table 5. Scores of overall perceived understanding

 

Table 5 presents the scores for overall perceived understanding. As a score of 4 on the 7-point scale means that nothing had changed, we can see that overall perceived understanding increased during the project. The results of the rating scales confirmed our hypothesis that the level of perceived shared understanding increases during a project. And, of course, successful project completion can be seen as evidence for the increasing level of shared understanding. The students’ final result was considered successful by their faculties, and the client firm was satisfied as well.

 

Discussion and conclusions

In this study we gained insight into group learning and shared understanding by studying the group interaction process in detail. By applying our conceptual framework and the coding scheme we developed in an empirical study, we have increased our understanding of the role of conceptual learning, feedback, and motivation.

One major conclusion is that it was still difficult to find reflective utterances in the protocols, although the coding scheme provided for them. A possible explanation is that reflectivity is indeed difficult to assess. In particular, no co-construction was found in the transcripts. As co-construction involves restructuring by all group members, this may indicate that this learning mode needs to be coded across segments. In follow-up work this will be studied in more detail. Another explanation for not finding reflective utterances is that little reflective activity took place; it may be more difficult to express these kinds of utterances in technology-mediated interaction. If this is true, the importance of assessing group learning and shared understanding increases, and it emphasises the changing role of support in problem-based learning. By support we refer not only to technology support, but also to social support, such as the role of a tutor.

A main premise of the current study was that interaction should be balanced. Based on the analysis of the current study, we defined some guidelines for balancing interaction (see also Mulder & Swaak, 2000). Concerning task-related interaction, it seems fruitful to integrate questioning and listening functionalities into video meetings. With respect to social interaction, it is highly recommended to first meet face-to-face or to exchange background information on the team web site. To support the procedure, the use of an agenda and an electronic schedule are helpful. Finally, a general rule for technology-related interaction could be that the whole team receives an introduction to optimal media use at the start of the project. These guidelines could be helpful for a tutor when starting up distributed learning teams or when intervening in problem-based learning.

 

Limitations

We are aware that we conducted only one study with one group (N = 1). However, our main intent was to tackle the problem of assessing group learning and shared understanding in an empirical study. In order to make this assessment theory-driven, we developed a model to describe and explain group learning and shared understanding. As mentioned earlier, this study was part of a larger project called INTEnD, which makes comparative analysis possible. By participating in an international research project, we could profit from both the quality of an in-depth study and the possibility of comparison across studies. Between the fall of 1998 and the end of 2000, a total of nineteen teams involving participants in two or more countries were studied. Although comparative analyses have been made (Steinfield et al., 2001), these analyses were not at the same level of detail as the current study. Nevertheless, two general conclusions could be drawn from these comparative analyses: (1) little learning took place on technology use, the so-called media stickiness, and (2) limited social interaction took place in the international teams.

In the current analysis we focused on video meetings (i.e., synchronous interaction). In learning teams and project work continuous alternation between synchronous and asynchronous interaction is common. Therefore, we should also apply our research model (including the coding scheme) to asynchronous interactions like email exchange. Though we did pay attention to this kind of interaction and media use during our observations and in the weekly diaries, we did not analyse this to the same level of detail as we did the videoconferencing meetings.

Another characteristic of this study was that the students formed a zero-history group on engineering design. Questions arise as to whether the current students can be considered real project members, and to what extent results found in engineering design are applicable to other project work. Moreover, the students had hardly any experience with working in project teams and with technology use. Thus, in this case a zero-history group not only implied that the students did not know each other; the group also had zero history on the task, the procedure and, in this case, the technology support. All this makes the current project team a very specific case. Therefore, we must be careful in generalising our results.

 

Future work

In a forthcoming study, the guidelines formulated above will be tested in quasi-experimental set-ups. To do this we use a more comprehensive framework that involves at least the following ingredients: (1) the area of group-based learning, (2) a balance among task-related, procedural, technology-related and social interaction, and (3) both synchronous and asynchronous interaction. With respect to group-based learning, we address both project-based and problem-based learning (see also Van der Veen, 2001), applying our conceptual framework on group learning and shared understanding to problem-based learning and working in project teams. Second, since most project work has focused on the task, we see a need for balanced interaction, i.e., a balance among task-related interaction (content), social interaction (relation), planning of activities (procedure), and technology-related interaction. Third, we want to pay more attention to asynchronous interaction and to switching between synchronous and asynchronous interaction. In this way, our research results on and guidelines for group learning and shared understanding will not only apply to synchronous interaction, but will also address asynchronous interaction.

 

Acknowledgements

We would like to thank all INTEnD colleagues for enabling this rich experience on working and learning together in a globally-dispersed research team, our colleagues in the GigaCSCW project, in which this work has been carried out, and especially Jan Gerrit Schuurman for his helpful comments on our paper. Last but not least, we would like to thank the students of the current study for their willingness to participate in the international project, and for giving us such a rich set of data.

 

References

  • Argyris, C., & Schön, D. (1978). Organizational learning. Reading, MA: Addison-Wesley.
  • Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
  • Clark, H. H., & Brennan, S. E. (1991). Grounding in Communication. In Resnick, L. B., Levine, J. M., & Teasley, S. D. (Eds.) Perspectives on Socially Shared Cognition (pp. 127-149), Washington, DC: American Psychological Association.
  • Dennis, A. R., Valacich, J. S., Speier, C., & Morris, M. G. (1998). Beyond Media Richness: An Empirical Test of Media Synchronicity Theory. Proceedings of the 31st Hawaii International Conference on System Sciences (pp. 48-57). Los Alamitos, CA: IEEE Computer Society.
  • Donnellon, A., Gray, B., & Bougon, M. G. (1986). Communication, meaning, and organized action. Administrative Science Quarterly, 31, 43-55.
  • Gramsbergen-Hoogland, Y. H., & Molen, H. T. van der (1992). Gesprekken in organisaties [Conversations in organisations]. Groningen: Wolters-Noordhoff.
  • Krauss, R. M., & Fussell, S.R. (1991). Constructing shared communicative environments. In Resnick, L. B., Levine, J. M., & Teasley, S. D. (Eds.) Perspectives on Socially Shared Cognition (pp. 172-200). Washington, DC: American Psychological Association.
  • Kreijns, K., Kirschner, P. A., & Jochems, W. (2002). The Sociability of Computer-Supported Collaborative Learning Environments. In this issue of Educational Technology & Society.
  • Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
  • Meij, H. van der (2000). Personal Communication. 17 May 2000.
  • Mulder, I. (1999). Understanding technology-mediated interaction processes (Report No. TI/RS/99042). Enschede: Telematica Instituut.
  • Mulder, I. (2000). Coding scheme and manual: How to code content, relation, and process aspects of technology-mediated group interaction? (Report No. TI/RS/2000/099). Enschede: Telematica Instituut.
  • Mulder, I., & Swaak, J. (2001). A study on globally dispersed teamwork: coding technology-mediated interaction processes. In Taillieu, T. (Ed.) Collaborative Strategies and Multi-organizational Partnerships (pp. 235-243). Leuven: Garant.
  • Mulder, I., & Swaak, J. (2000). How do globally dispersed teams communicate? Results of a case study (TI/RS/2000/114). Enschede: Telematica Instituut.
  • Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley Publishing Company.
  • Schober, M. F., & Clark, H. H. (1989). Understanding by addressees and overhearers. Cognitive Psychology, 21, 211-232.
  • Snow, R.E. (1989). Toward Assessment of Cognitive and Conative Structures in Learning. The Educational Researcher, 18 (9), 8-14.
  • Steinfield, C., Huysman, M., David, K., Jang, C. Y., Poot, J., Huis in 't Veld, M., Mulder, I., Goodman, E., Lloyd, J., Hinds, T., Andriessen, E., Jarvis, K., Van der Werff, K., & Cabrera, A. (2001). New Methods for Studying Global Virtual Teams: Towards a Multi-Faceted Approach. Proceedings of the 34th Hawaii International Conference on Systems Sciences, January 3-6, 2001, Maui, Hawaii (CD-ROM).
  • Steinfield, C., Jang, C., & Pfaff, B. (1999). Supporting virtual team collaboration: The TeamSCOPE system. Proceedings of the Group99 Conference, Phoenix, Arizona, 81-90.
  • Swaak, J., Mulder, I., Houten, Y. van, & Hofte, H. ter (2000). Sluiten ICT-ontwikkelingen aan bij de behoeften van projectwerkers? Onderzoek naar synchrone en asynchrone interacties: redenen en problemen [Do ICT developments meet the needs of project workers? Research into synchronous and asynchronous interactions: reasons and problems]. Gedrag en Organisatie, 13 (6), 327-343.
  • Veen, J. van der (2001). Telematic support for group-based learning. Enschede, The Netherlands: Twente University Press.
