Educational Technology & Society 3(3) 2000
ISSN 1436-4522

Using Case Method and Experts in Inter-University Electronic Learning Teams

Anne Hoag, Ph.D.
Assistant Professor of Communications
The Pennsylvania State University
University Park, Pennsylvania  16802 USA
Tel: +1 814 863 5678
Fax: +1 814 863 8161
amh13@psu.edu

Thomas F. Baldwin, Ph.D.
Professor of Telecommunications
Michigan State University
409 Communications Arts & Science Building
East Lansing, Michigan 48824 USA


ABSTRACT

This article describes the design, execution and outcomes of a Web-based course in cable telecommunications offered jointly at two U.S. universities. The goal was to create an online learning community in which inter-university student teams collaborate electronically to learn through case-based problem solving.  The main features were distributed team-centered learning, case method, online access to industry experts, a variety of synchronous and asynchronous IT and communication tools and electronic team teaching.  Outcomes indicate positive learning differences.  Satisfaction was high overall but lower than expected with the experts and with teammates at the distant university. Compared to the classroom model, students acquired greater experience in areas other than knowledge of course content: teamwork, communication, time management and technology use. Evidence emerged that satisfaction and learning outcomes may be more related to a student’s positive outlook than to factors such as grade point average, prior experience with teams or with technology.  Our recommendations are to retain student-centered team learning, inter-university cooperation, team teaching and case method features while restructuring others.  We further recommend adopting a course management system to streamline teaching-related tasks.

Keywords: Case method, Experts, Distributed teams, Learning community, Inter-university collaboration


Introduction

In higher education, teachers and administrators want it all for their students: models of learning that stress knowledge of course content and that also include learning objectives in teamwork, communication skills, time management and technology use. Moreover, educators now commonly employ active learning and experiential learning techniques to raise student learning from mere comprehension to higher order learning such as application and synthesis.  The challenge becomes how to do it all in the same number of weeks that has always defined a term or semester.

As teachers and researchers, we undertook this challenge in the context of our own teaching. This article describes the design, execution and outcomes of a Web-based course; its goal was an online learning community where knowledge of course content and several other learning objectives would be stressed equally and where an emphasis on experiential learning would bring students to higher order learning levels. The course was designed for and taught jointly to Telecommunications students at two U.S. universities.  In the course, inter-university student teams would collaborate electronically to solve real telecommunication management problems. The terms distributed teams and electronic teams are used interchangeably to refer to the inter-university, geographically distributed nature of the student teams. The main features were distributed team-centered learning, case method, online access to industry experts, a variety of synchronous and asynchronous IT and communication tools and electronic team teaching. The problem solving activities took form as electronic debates, collaboratively written reports and a business marketing plan.

In the following sections, we first describe the design of the online course, its features and their purposes.  Next, our methods and data for assessing outcomes are given.  Following this, analysis of both quantitative survey data and qualitative evidence from both the students and the instructors is presented.  After discussing the outcomes, we conclude with our plans for the future and recommendations to others pursuing similar teaching and learning goals.

 

Instructional Design and Pedagogy

We investigated a number of existing models and pedagogical innovations.  The concept of electronic learning communities as described by several scholars (Palloff & Pratt, 1999; McLellan, 1998; Gibbs, 1998) appealed to us.  Our experience as teachers agreed with ideas such as that teaching and learning are both student responsibilities and that learning communities include groups other than teachers and students.  This would manifest itself in our project as industry experts invited into the community.

 

Students and Teams

Preparing students for life beyond university is an underlying theme in our teaching.  To that end, we selected features for E-classroom, as our online course was called, that would build valuable and relevant skills in teamwork, communication, technology and time management.  Published research on distributed teams convinced us that an inter-university distributed team experience would have to be carefully designed and engendered.  The lessons of the extant scholarship in this arena are that if students are unprepared, if trust and membership in the community are not fostered and if technology complexity exceeds benefits, the probability of success declines (Wegner et al., 1999; LaRose et al., 1998; Gay et al., 1999; Webster, 1998; Brandon & Hollingshead, 1999; Goodman & Darr, 1999; Alavi, 1996).

E-classroom was designed to achieve several objectives related to cable telecommunications management: system franchising (the process of negotiating and maintaining a license to operate), programming and marketing, all of which were in the content descriptions of both the Penn State and Michigan State courses.  In addition, E-classroom was intended to build skill with collaborative work online.  The management of communication businesses, now organized in massive international conglomerates, requires collaboration with many people in many different locations.  We thought that creating such a learning environment -- bringing together students from two universities -- would simulate this type of collaboration and build these collaborative skills.  Based on our own teaching experience and that of others (Benbunan-Fich & Hiltz, 1999; Porter, 1993; Center for the Study of Work Teams, 1999), the case method and team learning were adapted to advance these objectives.

More generally, we believed that this exercise would allow the students to learn something about themselves: how they worked in a group, the contributions they could uniquely make, their leadership skills and the importance of taking responsibility under circumstances perhaps more critical than in an independent project.

In the Fall of 1999, 66 students participated in the first run of E-classroom:  43 in the Penn State class, 23 in the Michigan State class.  These students were placed in one of 12 teams with five or six people to a team.  Each team had members from both universities so that they were required to work together across the geographic separation.

The case and problems, access to the experts and to CourseTalk (the threaded discussion tool developed at Penn State) were all on the E-classroom web site.  A demonstration version of the site can be viewed at <www.courses.psu.edu/comm/comm488_amh13/ifets>. The site included the three-part problem and extensive background information, some of which was directly relevant and some only tangential to the problem, a common feature of case studies for learning.  A part of the exercise was to find and use the relevant information. 

 

Experts

The students could also access 18 industry experts.  These were real people, recognized in industry as experts, who agreed to participate (many were alumni of either Michigan State or Penn State).  The experts included system operators, equipment manufacturers, network vice presidents, a Washington, DC communications attorney and marketing specialists. They lived and worked throughout the U.S.; one traveled from Taiwan to the UK during the course and remained in contact throughout.

The students reached the experts via a GUI Web-based interview tool designed at Penn State. By clicking on the experts, students would get biographical information that told them the types of questions that could be addressed to each.  Because the experts were busy professionals, we screened questions so that none were overburdened.  Once a question was answered by an expert, that answer was posted so that all teams could read it.

 

Student Preparation and Communication Tools

The students were prepared for the work by two "e-tasks."  The first required team members to read the instructions for using CourseTalk and then make a posting to their team site.  CourseTalk worked essentially as a bulletin board.  Only the members of a team and the instructors could access the team site on CourseTalk so that the teams could not crib from each other. They were also required to practice sending and receiving email attachments.  Most teams organized IM or ICQ chats and set meeting times to address the issues synchronously and assign tasks.  Some of the teams also used the telephone.  Finally, the teams could use videoconferencing.  Their videoconference time was relatively short so they were instructed to use this resource for carefully planned tasks.

The second e-task required the students within each team to introduce themselves with some biographical information and then to agree on a "contract" covering their goals and outlining some rules for participation.

 

The Case and Problems

The E-classroom project was a major segment of the two courses, representing 50 percent of the grade.  Slightly more than one month of the semester was given to the exercise.  The E-classroom work was all associated with a fictional management case based on a real situation.  Because the case was so central to the activity, it will be described here.

The case profiles a cable telecommunications company operating in the U.S.  Greenville Broadband Inc. is a cable system facing competitive threats, rapidly changing technology, regulatory and marketing challenges.  The case provides background on the market and competition, the organizational structure, profiles of individual managers and other players and links to a wide variety of online resources.  Integrated into the case are three specific management issues or problems.

The first problem was designed to require online negotiation and debate. The system franchise is being renewed.  It required the company and the city to come to agreement on the renewal where the central issue was the technology.  The city desired an expensive plant upgrade.  The company wished to deploy resources and technology differently, upgrading to digital. For background, the students had the city assessment of community cable-related needs, previous franchise agreements, segments of the cable ordinance and sections of the Communications Act on franchising. Finally, six of the 18 experts were specialists in telecommunications law and practice in this area.  The 12 teams were paired for this part.  One of each pair was designated the city, the other the company.  The city and company worked independently to prepare their positions.  In the first round, each opposing side presented its position to the other. In a second round, they then were asked to come to an agreement on the terms of the renewal, specifically addressing the technology investment conflict. 

The second problem required online research, team brainstorming and decision-making.  Teams had to consider a variety of competing needs, opportunities and threats to design a digital television package. The report for this part was to present and defend a final decision and design for several analog and digital product lines.  The students had demographic and psychographic information on the market, programming and technology data, audience ratings for all of the networks on the analog service, the license fees for all of the analog networks and descriptions of each network (via links to the network web sites).  The experts for this problem were several network vice presidents of programming and affiliate relations, a chief operating officer and four engineers and marketing specialists in digital television technology. 

The final problem in the case was document-centered.  It required extensive cooperation to develop a marketing launch plan for the service packages created in the previous problem.  The plan had to include budgets, training and public relations plans and strategies for marketing trials, positioning and retention.  This was the most extensive of the three problems. 

 

Grading and Feedback

As instructors, we graded each of the three problems in the case.  For each part there was a numerical grade and a two- or three-page narrative critique.  Further, there was a peer grade for each student.  Each team member graded each of the others.  We could modify this result based on our evaluations of individual student performance on CourseTalk and the quality and volume of questions to the experts.  Therefore the students were not entirely at the mercy of their peers.  

 

Methods and Data

Before E-classroom began, measurements of student attitudes, expectations and technology skills were taken in two surveys: one using Likert scale items, the second a series of open-ended questions intended to measure student expectations.  After E-classroom concluded and all grading was given to the students, an extensive survey of both Likert scale items and open-ended questions was administered anonymously by a third party. It measured attitudes, satisfaction, technology skills and self-assessments of performance. These two rounds of data collection provide pre- and post-“treatment” data. The data were supplemented and triangulated by measures such as E-classroom grades and the instructors' assessments of CourseTalk postings, of postings to the experts site and of the process overall.

In constructing the instruments, several sources of existing reliable scales were consulted.  However, we found no instruments or scales that met the competing desires to measure such a great number of pedagogy and technology innovations yet keep the instrument a manageable length.  We decided to risk measurement reliability to obtain the needed breadth of measures.  Data analysis revealed some poor measures and they were eliminated from this analysis and discussion.  Where necessary, we substituted single face-valid items where a scale measure would have been preferable.
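The scale reliabilities reported below are Cronbach's alpha values, which can be computed directly from raw item responses. A minimal Python sketch illustrates the calculation; the item data here are hypothetical (our own analysis was done in SPSS):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses from 5 students on a 3-item scale
responses = np.array([
    [5, 6, 5],
    [3, 3, 4],
    [6, 7, 6],
    [2, 3, 2],
    [4, 5, 5],
])
alpha = cronbach_alpha(responses)
```

Values above roughly .7 are conventionally treated as acceptable reliability, which is the standard the multi-item measures in Tables 1 and 2 were judged against.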

Of the 66 students who completed the course, 64 completed the post-project survey (one missing at each campus). Quantitative analysis was performed in SPSS 9.0.  Qualitative data is analyzed and presented in the next section to triangulate and highlight findings in the numbers. 

The data were used to create two categories of variables.

  • Control and independent variables (Table 1):
    • Gender
    • overall grade point average (GPA)
    • team experience
    • team orientation
    • pre-E-classroom technology skill and habit levels
    • a single item trust orientation measure
    • perceived value of E-classroom
    • perceived value of experts
    • positive outlook toward the experience

 

  • Outcomes (Table 2):
    • Satisfaction with various aspects of the course
    • changes in technology skill levels
    • E-classroom grade

     

Where variables are composed of multiple items, scale reliability is given in parentheses.

Variable | N | Min. | Max. | Mean | Std. Dev.
Gender (0=M, 1=F) | 66 | 0 | 1 | .39 | .49
GPA, on a 4-point scale | 64 | 2 | 4 | 3.04 | .50
Proportion of My Courses w/ Teams | 64 | .00 | 1.00 | .5616 | .2421
Team Orientation (3 items; alpha=.66) | 65 | 7 | 43 | 13.03 | 4.22
I am not a trusting person (reversed) | 65 | 1 | 7 | 5.49 | 1.59
Perceived Value of Experts (4 items; alpha=.887) | 65 | 2 | 7 | 4.96 | 1.23
Perceived Value of EC (5 items; alpha=.892) | 65 | 2 | 7 | 5.48 | 1.22
I had fun | 65 | 1.00 | 7.00 | 4.9846 | 1.5860
I know more about real world now | 65 | 1.00 | 7.00 | 4.9231 | 1.5442
Learned a lot about myself | 65 | 1.00 | 7.00 | 4.5846 | 1.6478
Email use frequency – before | 62 | 2.00 | 5.00 | 4.0161 | .8197
Internet Sophistication – before | 62 | 1.00 | 9.00 | 4.6452 | 2.5031
Computer sophistication – before | 62 | 1.00 | 6.00 | 3.8065 | 1.4239
Valid N (listwise) | 58 | | | |

Table 1.  Descriptive Statistics of Independent Variables
(measured on a Likert scale; 1=strongly disagree, 7=strongly agree)

Variable | N | Min. | Max. | Mean | Std. Dev.
EC final grade, of 500 possible pts. (b) | 66 | 333 | 483 | 434.11 | 32.69
Change in computer skill | 60 | -3 | 7 | 4.28 | 2.03
Change in Email Frequency | 60 | 0 | 4 | 1.07 | .90
Change in Internet Skill | 60 | -3 | 14 | .32 | 2.73
Satisfaction... w/ team (4 items; alpha=.798) | 65 | 2 | 7 | 5.11 | 1.22
Satisfaction... w/ Learning Opportunity (4 items; alpha=.869) | 65 | 2.0 | 7.0 | 5.554 | .992
...w/ Quality of work we produced | 65 | 2.00 | 7.00 | 5.4462 | 1.3114
...w/ The experts | 64 | 2.00 | 7.00 | 4.9844 | 1.2785
...w/ Threaded Discussion Tool | 64 | 1.00 | 7.00 | 4.5000 | 1.7182
...w/ Videoconferencing | 59 | 1.00 | 7.00 | 4.6441 | 1.8172
...w/ Electronic system for communicating with experts | 62 | 2.00 | 7.00 | 4.9677 | 1.5254
...w/ EC Website | 65 | 1.00 | 7.00 | 5.7077 | 1.3196
...w/ Team teaching w/ 2 profs | 64 | 1.00 | 7.00 | 4.6406 | 1.4405
...w/ Content of case | 65 | 1.00 | 7.00 | 5.3385 | 1.2659
...w/ Overall learning experience | 65 | 2.00 | 7.00 | 5.6615 | 1.1629
Valid N (listwise) | 51 | | | |

Table 2. Descriptive Statistics of Outcome Variables
(Likert scale measures: 1=very dissatisfied, 7=very satisfied. (b): >450=A, 400-449=B, 350-399=C, 325-349=D, <325=F)

 

Both before and after E-classroom, students also responded to a series of open-ended questions on likes, dislikes and perceived benefits.  In the post-event survey, they also described changes in self-awareness.

 

Results and Analysis

In this section we present analysis and interpretation of both quantitative and qualitative data.  We were most interested in understanding how each of the many design features of E-classroom contributed to learning in five areas: content knowledge acquisition and skill improvement in teamwork, communication, technology and time management. We believed that individual student differences would account for learning differences as well, so we recorded student attributes: gender, grade point average (GPA), each student’s orientation to teamwork and trust, technology skill levels before E-classroom began and, finally, an attribute we called “positive outlook,” a construct we discuss in a later section of this article.  Figure 1 summarizes our intentions for the learning process.  In addition, satisfaction with various facets of the course design was measured.  Satisfaction measures were collected for two reasons: first, learning outcome measures in this study were primarily qualitative, resting on our assessment as the instructors; second, satisfaction outcomes are easier to obtain in quantitative self-reports and reveal much about the experience from the students’ perspective. 

 


Figure 1. Intended Learning Process in E-classroom

 

Learning Outcomes

Student learning of the course content was of primary importance to us.  As described earlier, the two courses are similar and concern themselves with the cable telecommunications industry in the U.S.  The courses both cover topics such as industry practices and trends, system management and regulation. In both courses, case method has been used in the past. Our assessment of E-classroom compared to our conventional classroom methods is that learning levels were similar in knowledge gains but higher in other skill areas.  

Work product. We were pleased with the final reports in each of the three parts of the case.  Furthermore, they were progressively better from the first to the third.  Most of the reports were highly professional.  We attribute this to the collaboration, each of us having seen lower professional quality in individual projects.  Additionally, we credit the active learning component -- the Web-based collaborative work replaced lectures and more passive learning techniques.  The student- and team-centered learning aspect put the onus on the student to learn from the materials on the Web site, the experts and one another.  Overall, we were very pleased with the students’ learning outcome in course content knowledge.  Beyond gaining knowledge, the students learned to apply and synthesize course content, an outcome we do not see commonly in the conventional classroom.

In quantitative analysis, we had one proxy for learning outcomes: the students’ final scores in E-classroom.  In regression analysis of the Figure 1 model, using students’ final scores as the dependent variable, the best-fitting model comprised GPA and “I had fun,” with an R² of .248 (F=9.08, p<.0001). When the full model was entered, the R² was .384 (F=2.11, p<.05) and “perceived value of experts” was a significant predictor in addition to GPA.  No other factors contributed significantly to predicting E-classroom final scores.
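The kind of regression reported here can be reproduced with ordinary least squares. The following Python sketch uses hypothetical data, not our actual dataset, to show how coefficients and an R² of this kind are obtained:

```python
import numpy as np

# Hypothetical data: final E-classroom score predicted from GPA and
# agreement with "I had fun" (7-point Likert), mirroring the best-fit model.
gpa     = np.array([3.2, 2.8, 3.6, 2.5, 3.9, 3.0, 3.4, 2.7])
had_fun = np.array([6.0, 4.0, 7.0, 3.0, 6.0, 5.0, 5.0, 4.0])
score   = np.array([455, 410, 470, 380, 465, 430, 445, 400])

X = np.column_stack([np.ones_like(gpa), gpa, had_fun])  # add intercept column
beta, *_ = np.linalg.lstsq(X, score, rcond=None)        # OLS coefficients

predicted = X @ beta
ss_res = ((score - predicted) ** 2).sum()               # residual sum of squares
ss_tot = ((score - score.mean()) ** 2).sum()            # total sum of squares
r_squared = 1 - ss_res / ss_tot
```

The R² reported in the text is simply the proportion of variance in final scores accounted for by the predictors, computed as above.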

If the students were having fun with the problems, it probably meant that they were continuously experiencing positive reinforcement from their own work product and the collaborative interaction.  Under this reinforcement, they would work harder and take more pride in the product.  This is reflected in the results of the survey item, “satisfaction with work quality,” with a mean rating of 5.44 on a range of 1 to 7, where “7” is “very satisfied.”  This rating was significantly higher than the satisfaction ratings for other outcomes.

If the students valued the "experts" they were likely to use the experts and take their advice.  This was reflected positively in the grade since the instructors respected the experts and generally thought like the experts.

Technology Skills.   In this area, we have objective measures of change.  For the Internet Sophistication measure, students checked off a list of tasks and applications related to the Internet and electronic communications, including email, IM or ICQ, chat rooms, newsgroups, videoconferencing, teleconferencing, web publishing, HTML, sending and receiving attachments, downloading software and FTP.  The range of responses was 1 to 9 before and 2 to 16 after.  The mean score was 4.65 before and 5.03 after, a positive change.  However, a means test shows this change was not statistically significant (see Table 3).  It is clear, however, that some students dramatically improved their skill levels in this area.

For Computer Application Skills students reported on their skill in word processing, spreadsheets, and various elementary tasks such as formatting a disk.  The range was 1 to 6 before and 1 to 12 after.  The means were 3.81 before and 7.92 after.  A means test confirms the significance of this positive change (Table 3).

Email use frequency was measured on a five-point scale: 1=once/month, 2=once/week, 3=3-6 times/week, 4=daily, 5=more than once per day.  The range and mean before were 2 to 5 and 4.02; after, they were 3 to 5 and 4.54.  Though small, this positive shift is statistically significant, as shown in Table 3.  In our observations, the positive changes in email habits were more dramatic than the rough scale would imply.

 

Pair | Mean Difference | Std. Dev. | Std. Error | t
Internet Sophistication: before & after | .317 | 2.734 | .353 | .897 (not significant)
Computer Application Skills: before & after | 4.283 | 2.026 | .2616 | 16.376 (p<.0001)
Email Use Frequency: before & after | .550 | .746 | .009 | 5.709 (p<.0001)

Table 3. T-tests of mean differences in technology skills: before & after
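The t statistics in Table 3 are paired-samples tests on within-student before/after differences. A short Python sketch with hypothetical paired skill scores shows the computation:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical before/after skill scores for the same 8 students (paired data)
before = [3, 4, 2, 5, 3, 4, 2, 3]
after  = [6, 7, 5, 9, 6, 8, 4, 7]

diffs = [a - b for a, b in zip(after, before)]  # within-student change
n = len(diffs)
se = stdev(diffs) / sqrt(n)   # standard error of the mean difference
t_stat = mean(diffs) / se     # paired-samples t statistic, df = n - 1
```

A t value this large at n - 1 degrees of freedom would, like the computer-skills and email rows of Table 3, be significant well beyond the .0001 level.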

 

Team, Communication and Time Management Skills

Most students adapted quickly to the course procedures and the communication techniques offered.  Only a small number were uncomfortable with online communication; for them, the one-month project moved too quickly to allow them to become comfortable.  Most students learned to work with each other at a distance.  They became task-oriented and efficient.  We do not believe that an excessive amount of time was spent on organization and other non-substantive effort. 

Of the 12 teams, one failed to work well.  The Michigan State and Penn State persons within the team became alienated to a degree, and in the end worked more or less independently with individuals submitting elements of the project.  We tried to correct the problems, but too much damage had been done in the earliest phases for us to have much impact.  From an optimistic perspective, perhaps it is remarkable that 11 of 12 teams, composed of six strangers, aggregated from two institutions for an ad hoc project, worked smoothly. 

There was a certain amount of start-up fumbling.  The e-tasks were designed to get the students familiar with procedures, but these tasks were not enough like the substantive parts of the problem.  We believe that allocating more time to each part of the project would allow the collaborators to design a system for attacking the tasks and to become comfortable with each other.  Further, we believe that asking the students to designate a chairperson, or at least task leaders, might have helped.  In some cases this occurred naturally, but the instructors made no systematic effort to teach efficient task management techniques.  These are probably learned best by experience, positive and negative, but nonetheless some guidance could help.

The survey results show students on average believe their skills improved in the areas of team, communication, technology and time management.  The survey asked students to describe the benefits of E-classroom after the experience.  Typical responses included “[the benefits were] improving communication skills, improving time management, working with technology,” “an introduction to the global office by way of telecommunications,” “it helps you get adjusted to work in a corporate setting with no face to face communication,” and “have experience of virtual cooperation.”

“I learned to objectively criticize myself, which is important.  I also learned that I have more good skills as a leader and shouldn't hold back from voicing my ideas.” “You get to learn how to deal with people a thousand miles away and meet deadlines.”  “I learned to clearly get my point across quickly and concisely.” “Learned a lot with my group and on my own without a lecture telling me,” “I learned that time management skills are very important and my computer is also very important.” “It enables you to draw on the knowledge of more than one person, other students from other schools, experts.” “You have to depend on others when working in teams.”

 

Satisfaction

The descriptive statistics given in the previous section (tables 1 and 2) show that on average student satisfaction with all features was above the midpoint (a rating of 4).  The range and standard deviations show further that satisfaction varied greatly.  A portion of the causal model in Figure 1 was testable with the quantitative data we collected.  A multiple regression was run to predict satisfaction from student attributes and student perceptions of E-classroom features.  Table 4 below summarizes the results.

 

Dependent Variable | Full Model R² & F | Best-Fit Model Predictors | R² (or Pearson r) | F
Satisfaction with Team Experience (4 items; alpha=.798) | .452 (F=2.79, p<.01) | Perceived value of EC, gender, email use before | .373 | 10.7 (p<.0001)
Satisfaction with Learning Opportunity (5 items; alpha=.869) | .528 (F=3.79, p<.0001) | Perceived value of EC, “I know more about the real world now” | .438 | 21.46 (p<.0001)
Satisfaction with Overall Learning Experience | .740 (F=9.61, p<.0001) | Perceived value of EC, “I know more about the real world now” | .654 | 51.97 (p<.0001)
Satisfaction with the Experts | .725 (F=8.7, p<.0001) | Perceived value of experts, trust orientation | .676 | 56.31 (p<.0001)
Satisfaction with Team Teaching | .396 (F=2.17, p<.05) | Perceived value of experts, “I had fun” | .271 | 10.1 (p<.0001)
Satisfaction with Case Content | .265 (F=1.21, not significant) | “I know more about the real world now” | .082 (Pearson r=.44, p<.0001) | 4.98 (p<.05)
Satisfaction with Web Site | .503 (F=3.42, p<.001) | Perceived value of EC, perceived value of experts, “I know more about the real world now,” “I had fun” | .423 | 9.72 (p<.0001)
Satisfaction with Threaded Discussion Tool | .363 (F=1.89, p<.06) | “I know more about the real world now,” “I learned a lot about myself” | .31 | 12.15 (p<.0001)
Satisfaction with Videoconferencing | .271 (F=1.12, not significant) | “I know more about the real world now” | .079 (Pearson r=.274, p<.05) | 2.1 (p<.05)

Table 4. Results of Regression Analysis on Satisfaction

 

E-classroom Features

Both the descriptive data in Tables 1 and 2 and the regression results in Table 4 tell a story of success and of a model in need of tweaking.  Each facet of satisfaction is discussed below.

Distributed Teams. This feature was a cornerstone of the pedagogical design.  We expected a degree of student discomfort and in the beginning we witnessed a fair amount.  As the adage goes, pain leads to gain and it seems even the students came to appreciate this.  Satisfaction with teams was related to the perceived value of the E-classroom experience indicating the students, too, viewed teamwork as integral to the experience. 

Student comments bear this out. “[It was good to] feel as though our group had decision-making power and the resources to back up our decisions.”  “I trusted those dudes to do their share of the work w/o even really knowing them.” “I learned about how to be an effective member of a team.” “I learned that I will be able to step up in a team situation when I enter the real world.” “It was a treat to team up with students from other universities.”

There was one problematic outcome of the team experience: satisfaction with teammates at the distant university.  As noted earlier, we had a reliable measure of “satisfaction with team experience” and its mean score was respectably high, 5.09.  Elsewhere we measured differences within teams.  In comparing responses to the individual items “my teammates at my university” and “my teammates at the other university,” there was a discrepancy.  The mean score for the former was 5.69 and 3.27 for the latter, a statistically significant difference (t=7.42, p<.0001).  However, these two items were not correlated -- there is no evidence that being satisfied with one’s close-by teammates has any relation to satisfaction with one’s distant teammates.  We interpret these findings as meaning that there were interpersonal or communication problems within one or two teams.  This was encouraging, since we had expected to find a systematic failure due to the lack of “richness” in the communication technologies or generalized discomfort with not being able to interact face-to-face.

This brought another problem to light.  The dissatisfaction with distant teammates is found more often in those teams where the Penn State students outnumbered the Michigan State students.  Based on our interaction with members of these lopsided teams, we believe that unequal team composition may have led to a detrimental bypassing of the online collaborative experience.  We know of a few instances where Penn State students would meet face-to-face and ignore their Michigan State mates. 

Case Content and Case method.  Satisfaction with the case was fairly high and may be related to the students’ appreciation for real world contact and experience.  Quotes given in the next section give form to this result.

Experts.  Satisfaction with experts was not quite as high as we had hoped.  We believe this is the result of course design. The parts of the project moved quickly, so that in some cases expert responses arrived after the due date.  The instructors constantly encouraged consultation with the experts, to the point that it became an additional burden in the work.  Students may have preferred to address questions to the instructors, who were more aware of the specifics of the problem and who, the students knew, were doing the grading.  In the future, the problem could be spread over a greater time period, enhancing the opportunity to use the experts and permitting instructors to refer students to experts instead of answering questions themselves.

Still, student comments from the survey support the statistical measures that many did value the experts: “Being able to talk to experts and work on 'real life' cases was an excellent way to learn the material.”  Students appreciated the opportunities to “be closer to industry experts and their experiences” and “mimic a corporate communication setting.”

Technology.   It was expected that each of the technologies would be adopted and used differently, depending on the task.  They were.  Initially, many students seemed eager to videoconference and most teams wanted more synchronous communication.  But over time, the initial enthusiasm for synchronous tools gave way to a preference for the more convenient, asynchronous collaboration afforded by CourseTalk.  The satisfaction ratings were collected at the end and therefore represent a snapshot in time; the ratings themselves reveal little (Table 2).  However, the predictors of satisfaction indicate to us that the students appreciated the chance to try out these tools before entering the workforce. As one student wrote, “[I learned] that given the challenge of working with technology, if I have the time I can figure anything out.”  Another stated, “[It was good to] use technology and its ups and downs before you are placed in a more 'unforgiving' environment.”

Team Teaching and Instructor Effort.  We expected the instructor effort to be extensive and our predictions were more than realized.  An enormous amount of communication was necessary--mostly by e-mail, some by telephone.  Much of this can be attributed to the novelty of the project.  We wrote the case ourselves, completing it just in time for the start of E-classroom.  If the case had already existed, some of this work would have been avoided.

The project was relatively advanced for the background of the students--a professional level problem in an area where the professionals themselves had little experience.  This generated many questions for us as the instructors; we believe the students may have felt more comfortable with us than the experts.  The size of the class, constituted from two institutions, contributed to the heavy workload.  Some of the grading was divided, but each of us felt obligated to review all of the work to sign off on the grade and thereby take responsibility for our home institution students.

For these reasons, we believe team teaching is a valuable component of the model.  However, the student data indicate there may have been some aversion to team teaching. In our experience this is always the case, even when the team is local, in person.  Students are wary of two people influencing the grade; they try to "psych out" one person to figure out what they have to do to achieve their grade goals; a second person complicates the situation.  In the team-teaching-at-a-distance situation, perhaps it is important to emphasize that the local, on site, instructor will take responsibility for the grades of her or his students, the distant instructor will provide feedback on the work, but will not give grades to students from the other university.

 

Student Attributes

GPA.  It is interesting to note that this measure of academic success predicted only E-classroom grade – not satisfaction or any other outcome.  Neither measure captures the complexity of the learning process, and this result reinforces our belief that E-classroom accommodates many learning styles.

Team Orientation.  There were two proxies to serve as controls.  On average, 56 percent of the students’ other courses had teams.  The team orientation scale had a mean of 13 within a range of one to 43.  Both indicators struck us as quite low.  Given this attribute of the population, we are surprised there was not more team dysfunction.  We credit the “e-tasks” with helping to avert problems.  More likely, this group of students, close to graduation, understood the importance of teamwork and rose to the occasion.

Language.  Approximately one-fifth of the students were non-native English speakers, which may confound some of our results.  For both native and non-native English speakers this meant an extra adjustment.  Since the case and all communication were conducted in English, it was difficult for non-native English speakers to keep up with and participate in synchronous chat-room discussions.  The native English speakers may have lost patience with the imperfect English of the others and with the need to make editorial corrections.  However, this was another learning experience that certainly has value in the evolving global economy.

Trust Orientation.  We had only a weak measure, a single item.  However, when trust emerged as a significant predictor of satisfaction with experts, it occurred to us that the experts feature may have had the unexpected effect of providing a confidence-builder or backup for team decisions.

Gender.  This appears to have mattered in the case of satisfaction with team experience, with men reporting higher satisfaction levels.  We believe this can be explained by the experience of the single dysfunctional team, which was four-fifths women.  It was the only team that failed to collaborate effectively on any activity.  To compound the impact, men outnumbered women in this quasi-experiment by approximately 50 percent.  We suspect the male-heavy population and this team’s satisfaction reports may have skewed this analysis.

“Positive Outlook” is an attribute for which we have little statistically significant evidence.  In the regression analysis, agreement with statements such as “I had fun,” “I know more about the real world now,” and “I learned a lot about myself” served as evidence of this underlying variable, which we believe existed in this group of students.  Our personal interactions with students demonstrated that a good-sized portion of these 66 students were open-minded, flexible and cheerful about the experience throughout, despite the many rough spots each team encountered.  Bolstering our belief that this attribute exists, we can each point to students who possessed this “positive outlook” but who neither scored particularly high in E-classroom nor had high GPAs.  Our interactions with these students also convince us that substantial learning occurred in this group.

Peer Evaluation.  The peer evaluation was designed to encourage full participation: a student who did not carry a fair share of the load could be downgraded by peers.  Scores for each student were consistent across all raters in a team, suggesting that they were an accurate representation of relative contribution.  Some students were anxious about the scores, somewhat unwilling to trust their peers.  We also had students grade themselves.  This was probably a mistake.  Some were very honest and were harder on themselves than their peers were.  Others inflated their own scores relative to their peers, whether through unrealistic self-assessment or in a deliberate attempt to raise the result.  One team apparently had an agreement to score each other the same on each part; presumably they had some internal means of discipline to assure participation.  Whatever the case, the instructors could modify outliers, if necessary, based on observations of performance in CourseTalk, transcripts of chats and contacts with experts.
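The outlier check described above can be mechanized.  The sketch below uses hypothetical names, scores and a hypothetical 1.5-point margin (not our actual instrument or threshold) to flag any self-score that substantially exceeds the mean of the peers' scores:

```python
from statistics import mean

def flag_inflated_self_scores(scores, margin=1.5):
    """scores: dict mapping student -> dict of rater -> score,
    where each student also rates themselves.  Returns the students
    whose self-score exceeds the mean of their peers' scores by more
    than `margin`.  The margin and scale are illustrative assumptions."""
    flagged = []
    for student, ratings in scores.items():
        peers = [s for rater, s in ratings.items() if rater != student]
        if ratings[student] - mean(peers) > margin:
            flagged.append(student)
    return flagged

# Hypothetical three-person team on a 10-point scale.
team = {
    "ann": {"ann": 9, "bo": 8, "cy": 9},  # self-score close to peers
    "bo":  {"ann": 5, "bo": 9, "cy": 6},  # self-score well above peers
    "cy":  {"ann": 8, "bo": 7, "cy": 8},
}
print(flag_inflated_self_scores(team))  # -> ['bo']
```

An instructor would still review any flagged score against independent evidence (CourseTalk activity, chat transcripts) before adjusting it, as we did.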

 

Conclusions and Recommendations

In the history of education, the classroom is a relatively recent teaching and learning innovation compared to the apprenticeship, the dominant model of the last millennium.  It therefore seems natural to expect that the brick-and-mortar university model will give way over time.  The story we share here adds to the growing argument for electronic learning models that emphasize experience, teams and an expanded learning community that includes others besides teachers, such as industry leaders and experts.

Learning models that work emerge from the shared experience of teachers and students, not from administrators.  But consider the financial, technological and social pressures in higher education – if teacher-scholars do not become more active in designing and testing new learning models, the administration function may be forced to make changes for us.  Aversion to change notwithstanding, our experience with E-classroom suggests that faculty leadership in innovation can be rewarding on many levels.

We would both do this again.  Certainly there was enough positive feedback on both method and substance.  Students learned about themselves, the "real world" (in their words), teamwork and communication technology in a way that seemed considerably more effective than through other instructional techniques.

We learned that in collaborative work projects it is important to allow enough time for the students to become familiar with the available communication technologies, so that they do not bypass collaboration opportunities in the effort to meet deadlines.  An "e-task" designed to get the teams started should have a substantive element, so that they begin by working on a problem.  A broader time frame also facilitates communication with experts.

It is probably a disadvantage in this kind of instruction to have unequal representation from two or more institutions.  In our case, with one of the universities outnumbered on teams two to one, it was possible for the larger group to meet in person and ignore the distant teammates. 

The plan is to continue this inter-university collaboration in the same courses.  The background of the Greenville case will be updated.  We will add more challenging next steps and more short assignments.  But we are satisfied that, independent of the specific content, this method is preparing students for the work environment in the global telecommunication economy.

We will keep the distributed team and experts features as well.  We would revise many areas, however.  First, we will keep student numbers from each institution equal to avoid the problem of one side’s students bypassing electronic collaboration.  We will extend the number of weeks allotted for E-classroom to nearly the entire semester.  This will give teams more time to gel, build trust and develop relationships with the experts.  We plan to adopt a course management system (such as blackboard.com) to streamline record-keeping.  Some of the Web GUIs will be redesigned.  Finally, we are seeking a document sharing and annotation tool to eliminate the volume and confusion of multiple versions of reports and memos from student teams.

A final note to those readers concerned about institutional-level barriers (Wheeler et al., 1996).  Our experience suggests they are small and easily overcome.  It is true that our home institutions are relatively resource-rich, but they are also, by virtue of their enormous sizes, perhaps more bureaucratic and geo-centric.  We found that at the course level, there were few barriers to collaboration on the design and execution of this project.  The only requirements for a project like ours are Internet access and some Web and technical support.

 

References

Alavi, M. (1996). Computer-mediated Collaborative Learning:  An Empirical Evaluation. MIS Quarterly, 18 (2), 159-174.

Benbunan-Fich, R. & Hiltz, S. R. (1999). Educational Applications of CMCS:  Solving Case Studies through Asynchronous Learning Networks. Journal of Computer Mediated Communication, 4 (3),
http://www.ascusc.org/jcmc/vol4/issue3/benbunan-fich.html.

Brandon, D. P. & Hollingshead, A. B. (1999). Collaborative Learning and Computer-Supported Groups. Communication Education, 48, 109-126.

Center for the Study of Work Teams (1999). Abstracts and Lessons Learned: Developing High-Performance Work Teams,
http://www.workteams.unt.edu/edu/cases2.htm.

Gay, G., Sturgill, A., & Martin, W. (1999). Document-centered Peer Collaborations:  An Exploration of the Educational Uses of Networked Communication Technologies. Journal of Computer Mediated Communication, 4 (3),
http://www.ascusc.org/jcmc/vol4/issue3/gay.html.

Gibbs, W. J. (1998). Implementing On-line Learning Environments. Journal of Computing in Higher Education, 10 (1), 16-37. 

Goodman, P. S. & Darr, E. D. (1999). Computer-Aided Systems and Communities:  Mechanisms for Organizational Learning in Distributed Environments. MIS Quarterly, 22 (4), 417-440.

LaRose, R., Gregg, J. & Eastin, M. (1999). Audiographic Telecourses for the Web: An Experiment. Journal of Computer Mediated Communication, 4 (3),
http://www.ascusc.org/jcmc/vol4/issue3/larose.html.

McLellan, H. (1998). The Internet as a Virtual Learning Community. Journal of Computing in Higher Education, 9 (2), 92-112.

Palloff, R. M. & Pratt, K. (1999). Building Learning Communities in Cyberspace, Jossey-Bass:  San Francisco.

Porter, G. (1993). Are We Teaching People Not to Work in Teams:  Reflections on Team Based Assignments in the College Classroom. CSWT Anniversary Proceedings,
http://www.workteams.unt.edu/proceed/porter.htm.

Webster, J. (1998). Desktop Videoconferencing: Experiences of Complete Users, Wary Users and Non-Users. MIS Quarterly, 22 (3), 257-286.

Wegner, S. B., Holloway, K. C., & Wegner, S. K. (1999). The Effects of a Computer-Based Instructional Management System on Student Communications in a Distance Learning Environment. Educational Technology & Society, 2 (4),
http://ifets.ieee.org/periodical/vol_4_99/wegner.html.

Wheeler, B. C., Valacich, J. S., Alavi, M. & Vogel, D. (1996). A Framework for Technology-mediated Inter-institutional Telelearning Relationships. Journal of Computer Mediated Communication, 1 (1),
http://www.ascusc.org/jcmc/vol1/issue1/wheeler/essay.html.
