Educational Technology & Society 4(1) 2001
ISSN 1436-4522

Computer Modeling and Biological Learning

Wolff-Michael Roth
Lansdowne Professor, Applied Cognitive Science
MacLaurin Building A548, University of Victoria
Victoria, BC, V8W 3N4 Canada
Tel: +1 250 721 7885
Fax: +1 250 472 4616
mroth@uvic.ca


Daniel V. Lawless
Faculty of Education, University of Victoria
Victoria, BC, V8W 3N4 Canada
Tel: +1 250 721 7885
Fax: +1 250 472 4616
vlawless@mail.island.net

 

ABSTRACT

In this article, we argue that learning is more than the transfer of information from expert to novice; additionally, learning processes involve the learner’s body as a fundamental condition. We provide examples from a large database on student learning that show how gestures are an important means in the construction of perception and communication as students interact over and about a computer software environment. Robust learning, as we know it, is therefore a biological phenomenon rather than one that is independent of the machinery on which it is implemented. Based on this and other studies, we suggest that learning environments that do not support students’ use of body and gesture can limit what and how they learn.

Keywords: Cognition, Biological learning, Gesture, Perception, Human-computer interaction


Introduction

Although multi-media computer-based sources (CD-ROM, Internet) have supplemented the traditional fare of educational resources, they have not substantively changed existing assumptions about learning as information processing and knowledge acquisition. In this article, we take a different stance to learning and argue that it is inherently a biological phenomenon such that sensori-motor activities are central to the appropriation even of abstract (Newtonian) concepts such as “velocity” and “force.”

The purpose of this article is to show how one piece of computer-based modeling software allows students to draw on gestures (pointing and imaging) to identify, perceptually isolate, and symbolically represent objects and events. These gestures appear to support the emergence of an appropriate scientific language for describing and explaining objects and events. We argue that these results are consistent with models of learning as a biological rather than information-processing phenomenon.

 

Theoretical Framework

Neurophysiologists and neurobiologists alike emphasize that biological systems, while they receive stimuli at their periphery, are closed in informational terms (e.g., Maturana & Varela, 1987; G. Roth, 1992). Accordingly, meaning is generated within biological systems, each developing its own network of expectations, actions, and perceptions. Consequently, while some physical environments can be taken as constant across all individuals inhabiting them, the perceived significant entities and structural relations within the environment differ among individuals. Together, the ensemble of significant entities and structural relations constitutes an individual's umwelt (von Uexküll, 1973) or lifeworld (Wittgenstein, 1994). Following the lead of theoretical biologist Jacob von Uexküll (e.g., 1973, 1982), we view learning as the adaptation of each organism's umwelt, which expresses itself in changes of perception and action. As such, learning occurs as the individual picks up relevant information from its umwelt (but not from the physical environment as such). Accordingly, in the context of neurobiology, meaning is an emergent neurodynamical process; neuronal structures become support systems of meaning.

According to neurobiologist G. Roth (1996), the four main brain systems (sensory, learning and memory, limbic evaluation, and motor systems) are deeply integrated. Based on a review of neurobiological literature, he suggests that there is overwhelming evidence that these four subsystems are mutually constitutive, which is made possible by the brain's strong neuroanatomical interconnectivity. For example, one recent neuroscientific study proposed that objects are recognized by their "graspability" rather than by their pictorial aspects (Rizzolatti, Fadiga, Fogassi, & Gallese, 1997). This suggests that objects are (at least in part) meaningful elements of a lifeworld in terms of the actions that they afford. Accordingly, perception is shaped by affordances (Gibson, 1979), that is, contextual information generated within the individual and relevant to his or her actions. The same physical environment is perceived differently not only by different species but also by members of the same species. The ambient array includes invariants (shadows, texture, color, etc.) that play a crucial role in perception. Nevertheless, the perception of invariants is enhanced for those individuals that move their entire bodies (Merleau-Ponty, 1945) or even body parts (Quine, 1995) relative to the environment. Although one perceives through the eyes, those eyes are set in a head, which is attached to a body capable of locomotion.

This perspective on knowing and learning has significant implications for the design of learning environments. For example, computer-based simulation environments such as Interactive Physics™ (Roth, 1996) or the Envisioning Machine (Roschelle, 1992) afford a limited repertoire of movements and gestures involving fingers, hands, and arms. Because these simulation environments overlay natural phenomena (e.g., motion) with conceptual entities (e.g., force and velocity vectors), they allow students to integrate phenomenal and conceptual dimensions in ways that experiments with real objects and events do not support (e.g., Roth, Woszczyna, & Smith, 1996).

Previous research showed that there is considerable variation in the way students perceive physical phenomena; watching the same demonstration by the teacher, some students in a class perceived "considerable movement," others "a little movement," while yet others saw "no movement at all" (e.g., Roth, McRobbie, Lucas, & Boutonné, 1997). When students have direct access to the entities talked about (objects or visual representations), so that they can use gestures to point or move along relevant boundaries of these entities, they begin to detect mutually incongruent perceptions (Roth, 1999b).

Relevant to this article, researchers (linguists, psychologists) distinguish two types of gestures: deictic and iconic gestures (Kendon, 1997). Gestures are classified as deictic when hand movements, positions, and configurations are used to point to some entity or location in space. For example, a speaker may point her finger to an arrow on a computer screen and say, “pick this one.” Gestures are classified as iconic when the movement of the body part (finger, hand, or arm) produces a visual image of the idea to be communicated. For example, a speaker may produce a parabolic shape in the space before him (in a coordinated movement of finger, hand, and arm) while saying, “the ball went this way.” Here, the gesture iconically depicts the trajectory of the object.

Computer simulations of natural phenomena afford the use of gestures to support perceptions, which, when students work collectively, may change as a consequence of their negotiations and therefore lead to learning (Roth, in press-b). Pointing (deictic gestures) frequently does not identify the boundaries of the entities referred to; movement and iconic gestures have a greater capacity for making salient relevant perceptual cues in the environment. This is so because there is a mutually constitutive relationship between the object in the world and the structure of the gesture. From the viewpoint of speakers, concrete entities motivate the form of the iconic gestures used to make them salient. From the viewpoint of listeners, iconic gestures are approximations of the shape of the concrete entities to be looked for.

 

Purpose

The overarching purpose of our research agenda concerns the role of gestures as an intermediary stage between sensori-motor activity and the cognition of abstract entities. In this study, our purpose is to provide empirical evidence for the role of body movement and gesture in learning. This empirical evidence is drawn from a study in a computer-based learning environment. Our findings have considerable implications for computer-based learning environments that neglect the use of the body as a resource in learning.

 

Study Context

Over the past decade we have conducted studies on learning as students investigate biological and physical phenomena. Two of these studies involved learning through interaction with computer modeling programs. In each study, particular attention was given to the relationship between activity, gestures, and scientific language. In this paper, we report a study in which students in a high school physics class used a computer-based Newtonian microworld (Interactive Physics™) to learn the principles of the physics of motion. Over a period of four weeks, three groups of students were videotaped while they conducted a series of investigations.

 

Participants

Forty-six grade 11 students (41 males, 5 females) from three sections of a qualitative grade 12 physics course participated in this study (20, 15, and 11 students, respectively). The students attended a private school in Canada (grades 4-13), which was in its first year of transition from an all-boy to a coeducational institution. For about half of the students, this course was a precursor to the grade 13 advanced physics course. Most students were not science majors and later pursued careers in business, medicine, law, and politics. Roth taught all three sections of this physics course.

 

Interactive Physics™

Interactive Physics™ is a computer-based Newtonian microworld in which users conduct experiments related to motion (with or without friction, pendulums, spring oscillators, or collisions). The microworld allows for different representations of observable entities (measurable quantities). For example, force, velocity, or acceleration can be represented by means of instruments such as strip chart recorders and digital and analog meters. The computer activities were planned because Interactive Physics™ superposes conceptual representations of these quantities (vectors) onto the objects, creating hybrid objects that bridge phenomenal and conceptual worlds (Roth, Woszczyna, & Smith, 1996). Therefore, students have concurrent access to phenomenal and conceptual representations, which they do not have in real-world experiments. By alternating between real-world and computer activities, Roth (in his role as teacher) hoped to assist students in developing a stable discourse about motion phenomena in different situations.

All student activities in the present study included, at a minimum, one circular object (Figure 1). A force (open arrow) could be attached to this object by highlighting it and moving it with the mouse. The object's velocity was always displayed as a small arrow, and students could modify its initial value by highlighting the object, "grabbing" the tip of the small arrow, and manipulating its magnitude and direction. Students were instructed to find out more about the microworld, especially the meaning of the "arrows," that is, the vectors representing force and velocity. Although students concurrently conducted real-world experiments on motion in which they analyzed distance-time, velocity-time, and acceleration-time graphs, they were not told the scientific names of the "arrows." In part, Roth did this because as a teacher he believed in inquiry-based learning; he wanted to know whether students would eventually use these names without being instructed to do so. As a basic guideline, the design developed in a similar study by Roschelle (1992) was followed.

Some of the prepared activities displayed nothing more than the circular object (including its velocity) and a force. Others required students to manipulate the "arrows" (force and velocity) to hit a small rectangle and knock it off its pedestal. After setting the force and initial velocity, students could "run" the experiment (see button at top left of the interface). A tracking feature "froze" the motion as if recorded with flash photography. During a microworld experiment, the cursor takes the form of a stop sign, and a simple mouse click stops the motion. The replay feature (bottom left) allows the inspection of individual states in the motion of the sphere. (Clicking the left and right arrows replays the entire event forward and backward, respectively; clicking on the "1" moves the recorded strip one image at a time.)

 


Figure 1. The interface of Interactive Physics™
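To make concrete the kind of computation such a microworld performs, the following minimal sketch simulates a point mass under a constant force and records successive snapshots, analogous to the tracking feature described above. This is our illustration only, not the actual Interactive Physics™ code; the function name, parameters, and numerical values are assumptions chosen for readability.

```python
# A minimal sketch of the kind of computation the microworld performs.
# Our illustration, not Interactive Physics(TM) code; all names and
# numerical values are assumptions chosen for readability.

def run_experiment(position, velocity, force, mass=1.0, dt=0.05, steps=60):
    """Advance a point mass under a constant force, recording every state
    (analogous to the tracking feature that 'freezes' the motion)."""
    ax, ay = force[0] / mass, force[1] / mass   # constant acceleration, a = F/m
    (x, y), (vx, vy) = position, velocity
    frames = []
    for _ in range(steps):
        frames.append(((x, y), (vx, vy)))       # snapshot of one 'frozen' state
        vx, vy = vx + ax * dt, vy + ay * dt     # the force changes the velocity
        x, y = x + vx * dt, y + vy * dt         # the velocity changes the position
    return frames

# An upward initial velocity combined with a constant downward force:
# the object rises, slows, and descends along a parabolic trajectory.
frames = run_experiment(position=(0.0, 0.0), velocity=(1.0, 3.0), force=(0.0, -2.0))
for (x, y), (vx, vy) in frames[::3]:            # every third snapshot, as in Fig. 2
    print(f"x = {x:5.2f}   y = {y:5.2f}   vy = {vy:5.2f}")
```

Note that in such a loop the force vector never changes while the velocity vector does, which is exactly the relationship the students in this study struggle to perceive and articulate.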


Data Analysis

The conversations of four student groups over and about the Interactive Physics™ displays were recorded during three 1-hour periods separated by 2-week intervals. Our analytic methodology is grounded in micro-sociological research on human-machine interactions (Suchman, 1987) and interdisciplinary research on gestures (Goodwin, 1986; Kendon, 1997). Because of the nature of the tasks, the students talked with the teacher about the screen displays, the arrows displayed on the computer screen, and their theories about the microworld. Our videotapes therefore provided us with natural protocols of students' sense-making efforts. These protocols were available for analysis in the form of video recordings and their transcripts. We viewed the videotapes and read the transcripts repeatedly to formulate tentative understandings. During subsequent viewing and reading, we attempted to find evidence that confirmed or disconfirmed these tentative understandings of the phenomena on the tapes. Through this process of constant comparison (Strauss, 1987), we arrived at the set of claims reported below.

 

Findings

There is little fine-grained research on how computer-based learning environments change learning processes when compared to more traditional learning environments (e.g., Roschelle, 1992). There is even less work concerning the different roles of gestures and other body movements in computer-supported and other learning environments. This study was designed to provide detailed descriptions of student activities (talk, gesture) in the context of one specific computer-based modeling environment. Because this environment supports the use of gestures, we ultimately suggest that a biological model of learning is suitable to account for the empirical data.

We present our data in three sections. First, we provide examples that show how students often do not perceive events in standard ways and how their talk is best characterized as a "muddle" (a term proposed by Rorty, 1989). Second, gestures constitute an important resource for students in coordinating their perceptions and talk about the objects on the computer screen. Perceptions of, actions on, and talk about the objects on the computer screen adapt in an evolutionary way until they form a consistent system for describing and explaining phenomena. Third, as an expressive means, gestures appear prior to the associated and equivalent verbal means of representing observations and explanations.

 

Muddle and Conceptual Bedlam

When students begin their investigations, they frequently differ in what objects and events they see on the monitor. For example, whereas our video shows that an object moved upward before beginning its descent, one group of students observed that the object immediately descended. Students do not notice the parabolic shape of the trajectory of an accelerated object, or that the force on the object stays constant in the course of the event while the velocity changes. Furthermore, students not only differ in what they perceive but also do not notice that they are talking about different entities. This leads to a muddled form of talk such that, as the lessons unfold, a large number of different words are used for the same object and the same words are used to denote different objects.

In the following episode, the students observe a ball moving, one image at a time, in the way depicted in Fig. 2. (The students saw only one snapshot at a time, but in rapid succession. In Fig. 2, every third image the students saw is plotted to the right of the previous one.)

 

01  Glen:   It goes straight down.
02  Ryan:   Yeah, it would go downward.
03  Linda:  I think it went backward first though?
04  Ryan:   The initial velocity went the way the little arrow goes.
05  Linda:  Didn't it go backward first and then go forward?
06  Ryan:   I think so […]

 


Figure 2. Transcript and video display. Consecutive vertical positions of the object and the relative sizes of the arrows, as observed by the students prior to this episode.


From the analysts’ perspective, the ball first moved up before it descended. In contrast, both Glen and Ryan initially maintain that the ball moved directly downward (lines 01, 02). At the same time, Linda suggests that the ball first goes backwards and then moves forwards (line 03). It may appear strange that such blatant perceptual differences are possible. Nevertheless, our research shows that such differences are rather common among people who are unfamiliar with the events at hand (e.g., Roth, McRobbie, Lucas, & Boutonné, 1997).
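In standard Newtonian terms, the upward-then-downward motion the analysts see follows directly from constant acceleration (our gloss; the symbols $v_0$ for the initial upward speed, $F$ for the constant downward force, and $m$ for the mass are ours):

$$y(t) = y_0 + v_0 t - \frac{1}{2}\,\frac{F}{m}\,t^2, \qquad v_y(t) = v_0 - \frac{F}{m}\,t,$$

so the object rises until $t = m v_0 / F$, when its vertical velocity vanishes, and descends thereafter; together with the unchanging horizontal velocity component, this produces the parabolic trajectory plotted in Fig. 2.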

Students in the same group and class not only perceive events in different ways, they also use the same words in different ways. This leads to confusion, for in one conversation, the same words may refer to different objects on the computer screen and different words may be used for the same object. In the episode of Fig. 2, the students were actually not aware that the “little arrow” referred to different entities for different students. As our analysis of the episodes before and after revealed, Linda and Ryan meant the velocity (which they also described as “skinnier”). Glen, however, used the same descriptor to denote the force vector (open arrow), possibly because it was shorter. Because the relative lengths of the two arrows changed throughout the activity, what the “little arrow” (and, correspondingly, the “big arrow”) referred to changed constantly.

The different and changing referents of the same words were not the only ambiguity in the students’ conversation. As the following episode (Fig. 3) shows, the students used many other words to denote the two arrows; Glen even changes the word he uses in this brief excerpt.

 

01  Glen:   Oh yeah the big arrow 's time, ‘K (1.2) the big arrow 's time.
02  Ryan:   OK, we'll make it shorter.
03  Linda:  So then the little arrow is direction. (1.3)
04  Glen:   Yeah the big arrow is direction. No I mean the big arrow is velocity=
05  Linda:  =No, time=
06  Ryan:   =No, it's time but it also directs, though.
07  Glen:   We don't know yet. What did I do?

Figure 3. Transcript and arrow configuration. The figure shows the configuration that the students are looking at as their talk unfolds. “(1.2)” indicates time elapsed in seconds; “=” indicates latching, that is, speaking without leaving the normal pause between two speaking turns.

 

In all groups observed, there was considerable variation in the words used to denote a particular referent. Thus, in the group featured here, the students referred to the same vector using the following terms (in order of their appearance): little arrow, big arrow, initial speed, velocity, initial speed, velocity, force, effort, strength, speed, strength, speed, direction, speed and direction, and velocity. In a similar way, this and other groups used 10 to 15 different words to denote the force. The list of words also shows that students may use a scientifically correct term early on but continue to change their usage. Consequently, a teacher listening to students at such a point may assume that they already understand although they are still far from an understanding.

Such evidence makes it clear that we cannot take for granted that students actually receive information that some instructor intends to supply. If our observations are correct, then one of the important questions we need to answer is how students eventually break out of this seemingly hopeless situation.

 

Evolutionary Adaptation of Perception, Action, and Talk

In moving towards an answer, we suggest that the use of pointing (deictic gestures) and imagery gestures (iconic gestures) allows students to converge on a common perception of objects and events. Deictic gestures can be thought of as depicting lines that connect the narrative point of view and the targeted object (McNeill, 1992). Prior to the episode in Fig. 4, another student had suggested that grabbing it at the “black dot on the end” would turn the [force]. Now, Mike disagrees and places his index finger on the heel of the arrow (Fig. 4a); the black dot “handle,” normally visible to the user of Interactive Physics™, is not visible on the video image. Here, then, pointing is used to disambiguate the referent of “black dot [at the end].”

 


 


a. ‘No, this black dot.’    b. ‘This arrow…’

Figure 4. Transcript and video illustrating indexical (a) and iconic (b) gestures.

 

Iconic gestures resemble their referents in some way; that is, they often provide a visual image of an object or thought. In Fig. 4b, Mike holds up his right index finger and utters “this arrow.” At this moment, there are two arrows on the screen: the force (open) arrow points to the right and slightly downward, whereas the velocity (small) arrow points straight up. When Mike says “this arrow,” the two listeners (as well as the analyst of the video image) are most likely to pick the arrow that somehow resembles the shape of the gesture. In Fig. 4b, this is the velocity vector (hidden from the camera’s viewpoint). Gestures that depict the motion of entities are also called iconic because the shape of the hand’s trajectory visually resembles the trajectory of the object. Thus, iconic gestures are used to denote objects and their movements across the monitor. Iconic gestures also depict conceptual entities (velocity, force) that are said to explain the events perceived. Here, there is a close connection between the actions on objects (pointing, mouse manipulation) and gestures, and the nature of the objects perceived.

Deictic and iconic gestures, which pick out objects by touching them or by tracing their shapes, assist students in attuning each other to their respective experiences. Picking out objects and events and negotiating the referents of the descriptive and explanatory language are the starting points of the learning process in which students evolve a new language about moving objects.

The following episode allows readers to understand the salience and importance of deictic and iconic gestures (Fig. 5). Here, we can see the hands of three students, Edward, Jay, and Nick. Because of their initial difficulties in understanding each other and making sense of each other’s talk, the three began to enact many gestures. These allowed them to become attuned to the way they used particular words and to the objects these words were intended to refer to.

Jay, sitting to the far right, intends to suggest a new experiment. Because their changing ways of referring to the objects had become confusing, he actually moves his finger straight up from the object. Nick, who begins to speak before Jay has finished, moves his index finger and hand upward from the force. Both situations involve iconic gestures. In the second case, there are two iconic aspects: the hand (see last panel) is shaped in an arrow configuration (cf. Eco, 1984) and also moves upward. Potentially, the latter aspect already encodes a hypothesis about the outcome of the investigation to be conducted in the proposed configuration.

 


Jay:  Put the gravity straight up.
Nick: The gravity is going up now.

Figure 5. Transcript and associated images from the videotape.

 

From these initial, humble beginnings, students come to gesture events, often before they are able to express them in words. This phenomenon has consistently appeared across a variety of databases that we analyzed (e.g., Roth, 1999a, 1999b, in press-a). During the initial stages of learning, delays occur between gestures and the corresponding words, particularly when students have little experience with the phenomena and are therefore not familiar with them.

The ability to describe an event, that is, to make propositional statements that connect things and verbs, arises after the ability to pick out simple objects from the environment and represent them with some arbitrary sign (Quine, 1995). Our analyses show time and again that verbal forms of description arise after gestures have been used to depict the focal events. In the following episode (Fig. 6), a student attempts to describe what happens when the group runs the experiment involving [force] and [velocity] lined up horizontally.

 


‘Like when we are doing it… It goes in that straight direction.’

Figure 6. Transcript and video sequence.

 

The video images show that the verbal description of the object (“it”) as moving in a “straight direction” follows after the student has already gestured the trajectory once. The text coincides with a repetition of the gesture in which the right hand sweeps from the left (where the object is located) to the right until the hand moves out of the picture. There are, therefore, two forms of delay, which we describe in the next subsection.

 

The Delayed Appearance of Symbolic Communication

The development of an appropriate language for making observational and theoretical descriptions lags behind the gestural communicative forms. Thus, students are able to gesture the relationship between the instantaneous velocity of an object (denoted as the small arrow on the monitor), the force on the object (denoted as the open arrow), and the trajectory that the object follows, one or two lessons before evolving an equivalent form of talk. Our videotapes show how the lag between gesture and language decreases to the order of seconds and eventually disappears, so that gestures and corresponding speech occur simultaneously (within ± 200 milliseconds).

 


Glen: Wouldn’t the length of the arrows (1.60) Since that arrow 's longer the velocity is higher
      [markers: I 2.00   I 1.47   I 0.33   M 0.10]

Glen: that’s why:: it’s pushing it that’a way.
      [markers: I 0.20   I 0.53   M 0.83   I]

 

Figure 7. Transcript and associated images from the videotapes. The following transcription conventions are used: I = marker that aligns text and image above it; M = temporal marker; (1.60) = 1.60 seconds pause; italics for stress of syllable [e.g., that’s]; and 0.53 = 0.53 seconds between markers.

 

At the moment of the episode, the three students still use different names for the arrows and use them in inconsistent ways. As Fig. 3 shows, students previously associated them with “time,” “energy,” “time step,” and many other words. In Fig. 7, then, Glen attempts to describe and explain the events (traces are still visible in the top left of the first frame). His utterances (Fig. 7) are paralleled by the gestures of both hands. Glen holds his right hand with fingers parallel to the open arrow [force]. He then makes another brief circular gesture, which marks the transition between two iconic gestures and highlights the salience of the hand (e.g., McNeill, 1992), while uttering “that arrow,” which immediately precedes the causal meaning unit “that’s why it is pushing it….” Before he says “the velocity” (third frame in Fig. 7), his left hand appears, held parallel to [velocity]. In the next frame, both hands are visible: the right parallel to [force], the left parallel to [velocity]. Then, the right hand “pushes” against the left hand, which is moving to the left. This movement continues to the end of the sentence and out of the video frame. Here, the gesture of the right hand (Fig. 7, third frame) begins 0.83 seconds (i.e., 0.10 + 0.20 + 0.53) before the corresponding word “pushing” (Fig. 7, sixth frame). That is, the iconic gesture provides a visual description of the object trajectory (still visible in the first frame of Fig. 7) that Glen attempts to explain before he actually verbalizes the explanation.

At the time of this episode, Glen (along with his two peers) does not yet describe the arrows in scientific terms, that is, as force and velocity. As Fig. 8 shows, he and his two peers came to use the appropriate scientific (verbal) language only two weeks later, during a subsequent lesson with the microworld. However, although he has not yet developed an appropriate language at the time of Fig. 7, his gesture is already consistent with scientific practice when understood as a description of the relationship between the concepts of velocity and force. He characterizes the action of the outline arrow as “pushing,” which is a vernacular way of describing forces. Glen also associates the longer pushing arrow with a resulting higher velocity. Here, the referent of “velocity” is not completely clear, and two readings are possible. Because the utterance coincides with the positioning of the left hand, “velocity” can be heard as referring to the left hand: the longer right arrow (force) pushes more and therefore leads to a longer left arrow (velocity). But the fragment “Since that arrow ’s longer the velocity is higher” can also mean that the longer right arrow is equivalent to a higher velocity; here, “velocity” (incorrectly so, from a scientific perspective) would refer to the right arrow. However, the referents of the two hands are made clear by their positions in space in the course of the motion: the directional orientation of the right hand is constant and parallel to [force], while the left hand changes direction in the way [velocity] previously changed.

Our final episode (Fig. 8) was recorded two weeks after that represented in Fig. 7. Here, the students exhibit an appropriate scientific language for talking about the microworld.

 

01  Ryan:   Both of the forces have the same               [equal force.
02  Linda:                                                 [Equal forces.
03  Ryan:   The forces are equal, equal but opposite, and the resulting velocity is zero.
04  Glen:   Hey, do we need to have a velocity?
05  Linda:  We can't have a velocity.
06  Ryan:   No velocity or acceleration.
07  Glen:   It’s got a velocity of zero. What is its initial velocity? Lets put a…

 

Figure 8. Transcript and arrow configuration near the end of students’ development of an appropriate scientific language for talking about the events in the microworld. “[” indicates overlapping speech.

 

In turns 01 through 03, Ryan and Linda correctly use the term “force” for the open arrow and recognize that equal and opposite forces do not change the velocity, which is zero in this case. Glen suggests that they could have a velocity (turn 04), which he clarifies as “initial velocity” (turn 07) after Linda and Ryan have reiterated that, in this configuration, there could be neither a (non-zero) velocity nor an acceleration. However, at the end of, and going beyond, this episode, Glen pursues the idea that equal and opposite forces do not change an existing initial velocity.
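Ryan’s conclusion in turn 03 is, in effect, Newton’s first law (the notation is ours):

$$\vec{F}_{\mathrm{net}} = \vec{F}_1 + \vec{F}_2 = \vec{0} \;\Longrightarrow\; \vec{a} = \frac{\vec{F}_{\mathrm{net}}}{m} = \vec{0} \;\Longrightarrow\; \vec{v}(t) = \vec{v}_0 \ \text{for all}\ t.$$

With $\vec{v}_0 = \vec{0}$ the object remains at rest, the reading Linda and Ryan insist on; with $\vec{v}_0 \neq \vec{0}$ it keeps moving at its initial velocity, which is precisely the idea Glen goes on to pursue.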

 

Discussion

This study was designed to understand the function of gestures when high school students interact with a computer-based modeling program. Our data show that, initially, students may perceive what is displayed in different, often non-standard ways. In the early stages, they often use the same words to denote different entities or different words to denote the same entities. At first glance, the world that students perceive and describe appears chaotic and full of conceptual bedlam. Nevertheless, in this Babel-like situation, we show how deictic and iconic gestures allow students to bring order to their perceptions of, actions on, and talk about (descriptions and explanations of) entities and phenomena. In the process, iconic gestures often depict the standard scientific observations and explanations prior to the corresponding verbal concepts. Over time, perceptions, actions, and talk mutually constrain each other, and students converge on common, shared ways of talking about and gesturing the events.

After students have come to notice existing differences in the way they perceive and talk about objects and events, their gestures become central to reaching perceptual and discursive alignment with others. In an evolutionary process, two parallel developments occur. On the one hand, actions on, perceptions of, and talk about objects and events become increasingly consistent with one another. (This phenomenon has also been described among scientists who work at the frontiers of their field of study [e.g., Gooding, 1990; Pickering, 1995].) On the other hand, students who started out perceiving and describing entities in different ways evolve increasingly shared ways of looking at and talking about the focal events. (This phenomenon has also been described by sociologists, who therefore talk about the social construction of knowledge [e.g., Knorr-Cetina, 1981].) Gestures, however, arise from sensori-motor activity and therefore constitute an important bodily dimension of human knowing and learning.

Our observations of significant temporal delays between gestural and verbal communication are consistent with observations in studies of non-computer environments (e.g., Roth, 1999a, in press-a) and with observations reported by other researchers (e.g., Goldin-Meadow, Alibali, & Church, 1993). These shifts characterize transition stages where conceptual talk is at one level but gestures already depict knowledge at a higher level. Furthermore, it has been suggested that students in such a transition (as evidenced by the difference between gestural and verbal expressions) are more susceptible to successful teaching than students whose verbal and gestural expressions are consistent (e.g., Goldin-Meadow, Wein, & Chang, 1992). We suggest that learning environments that support the use of new expressive gestures, such as hands-on activities and the present computer-modeling environment, may therefore precipitate the learning of abstract concepts. Here, physical (gestural) movements encourage and support the emergence of abstract concepts.

We understand these findings within the theoretical framework proposed by von Uexküll and further developments in second-order cybernetics (e.g., Brier, 1998; Roth, 1999b) and biosemiotics (e.g., Brier, 1995; Sebeok, 1986; Sharov, 1992). Thus, as our students interact with a novel context, how they see and act in the umwelt changes; their perceptions and actions adapt through their physical interactions (gestures and manipulations of objects). These changes are first observable at a level that is not usually considered in theories of cognition and learning—bodily engagement with the world, manipulations, gestures, and perception. Biological learning is therefore central both to new ways of perceiving the physical world and to developing new conceptual frameworks that account for different perceptions.

Our work has significant implications for the design of educational technology and for theorizing learning from, and in the context of, educational technology. Elsewhere, we suggested that gestures (symbolic body movements) directly emerge from ergotic movements (actions on objects) and epistemic (sensory) movements (Roth, in press-c). Hands-on activities with real objects afford movements intended to collect data (sensory activities) and to manipulate objects (motor activities). Thus, students can feel shapes, sense temperatures, and get sensory feedback when they push objects; they can also manipulate their world in various ways. In computer-based learning environments, these movements with sensory and motor intentions are much more restricted. Many computer-based learning environments do not support sensori-motor actions and therefore cannot facilitate learning in the way we describe here. On the other hand, the recent development of virtual worlds (which simulate experiences in a three-dimensional world) and haptic devices (e.g., joysticks with feedback mechanisms indicating to the user the strength of resisting forces) affords new and different ways of learning that directly involve the human body. At this point, little is known about how these new technologies afford learning. Nevertheless, we believe that these new environments are not only interesting from a learning perspective, but that they also support new forms of experimental research for better understanding the differential roles of body and language in learning.

 

Acknowledgements

This study was made possible, in part, by grant 410-99-0021 from the Social Sciences and Humanities Research Council of Canada.

 

References

  • Brier, S. (1995). Cyber-Semiotics: On autopoiesis, code-duality and signgames in bio-semiotics. Cybernetics & Human Knowing, 3 (1), 3-14.
  • Brier, S. (1998). The cybersemiotic explanation of the emergence of cognition. The explanation of cognition, signification and communication in a non-Cartesian cognitive biology. Evolution and Cognition, 4, 90-102.
  • Eco, U. (1984). Semiotics and the philosophy of language, Bloomington: Indiana University Press.
  • Gibson, J. J. (1979). The ecological approach to visual perception, Boston: Houghton Mifflin.
  • Goldin-Meadow, S., Alibali, M. & Church, R. B. (1993). Transitions in concept acquisition: Using the hands to read the mind. Psychological Review, 100, 279-297.
  • Goldin-Meadow, S., Wein, D. & Chang, C. (1992). Assessing knowledge through gesture: Using children’s hands to read their minds. Cognition and Instruction, 9, 201-219.
  • Gooding, D. (1990). Experiment and the making of meaning: Human agency in scientific observation and experiment, Dordrecht: Kluwer Academic Publishers.
  • Goodwin, C. (1986). Gestures as a resource for the organization of mutual orientation. Semiotica, 62, 29-49.
  • Kendon, A. (1997). Gesture. Annual Review of Anthropology, 26, 109-128.
  • Knorr-Cetina, K. D. (1981). The manufacture of knowledge: An essay on the constructivist and contextual nature of science, Oxford: Pergamon Press.
  • Maturana, H. & Varela, F. J. (1987). The tree of knowledge: The biological roots of human understanding, Boston: New Science Library.
  • McNeill, D. (1992). Hand and mind: What gestures reveal about thought, Chicago: University of Chicago.
  • Merleau-Ponty, M. (1945). Phénoménologie de la perception [Phenomenology of perception], Paris: Gallimard.
  • Pickering, A. (1995). The mangle of practice: Time, agency, & science, Chicago, IL: University of Chicago.
  • Quine, W. V. (1995). From stimulus to science, Cambridge, Mass: Harvard University Press.
  • Rizzolatti, G., Fadiga, L., Fogassi, L. & Gallese, V. (1997). The space around us. Science, 277, 190-191.
  • Rorty, R. (1989). Contingency, irony, and solidarity, Cambridge: Cambridge University Press.
  • Roschelle, J. (1992). Learning by collaborating: Convergent conceptual change. The Journal of the Learning Sciences, 2, 235-276.
  • Roth, G. (1992). Das konstruktive Gehirn: Neurobiologische Grundlagen von Wahrnehmung und Erkenntnis [The constructivist brain: Neurobiological foundations of perception and cognition]. In S. J. Schmidt (Ed.) Kognition und Gesellschaft, Frankfurt: Suhrkamp, 277-336.
  • Roth, G. (1996). Limbisches und motorisches System: Die neurobiologischen Grundlagen von Bewerten und Handeln [Limbic and motor systems: The neurobiological basis of evaluation and action]. In Research Group “Interdisciplinary Cognitive Science” (Ed.) Representation and meaning III, Bremen, Germany: Center for Cognitive Science, 139-155.
  • Roth, W.-M. (1999a). Discourse and agency in school science laboratories. Discourse Processes, 28, 27-60.
  • Roth, W.-M. (1999b). The evolution of umwelt and communication. Cybernetics & Human Knowing, 6 (4), 5-23.
  • Roth, W.-M. (in press-a). From gesture to scientific language. Journal of Pragmatics.
  • Roth, W.-M. (in press-b). Situating cognition. The Journal of the Learning Sciences.
  • Roth, W.-M. (in press-c). From epistemic (ergotic) actions to scientific discourse: The bridging function of gestures. Pragmatics & Cognition.
  • Roth, W.-M., McRobbie, C., Lucas, K. B. & Boutonné, S. (1997). Why do students fail to learn from demonstrations? A social practice perspective on learning in physics. Journal of Research in Science Teaching, 34, 509-533.
  • Roth, W.-M., Woszczyna, C. & Smith, G. (1996). Affordances and constraints of computers in science education. Journal of Research in Science Teaching, 33, 995-1017.
  • Sebeok, T. A. (1986). “Talking” with animals: Zoosemiotics explained. In J. Deely, B. Williams, & F. E. Kruse (Eds.) Frontiers in semiotics, Bloomington: Indiana University Press, 76-82.
  • Sharov, A. (1992). Biosemiotics: Functional-evolutionary approach to the analysis of the sense of information. In T. A. Sebeok & J. Umiker-Sebeok (Eds.) Biosemiotics: The semiotic web, New York: Mouton de Gruyter, 345-373.
  • Strauss, A. L. (1987). Qualitative analysis for social scientists, New York, NY: Cambridge University Press.
  • Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication, Cambridge: Cambridge University Press.
  • Tobin, K. (1990). Research on science laboratory activities: In pursuit of better questions and answers to improve learning. School Science and Mathematics, 90, 403-418.
  • von Uexküll, J. (1973). Theoretische Biologie [Theoretical biology], Frankfurt: Suhrkamp (First published in 1928).
  • von Uexküll, J. (1982). The theory of meaning. Semiotica, 42 (1), 25-82.
  • Wittgenstein, L. (1994/1958). Philosophical investigations (3rd ed.), New York: Macmillan.
