Educational Technology & Society 4 (2) 2001
ISSN 1436-4522

Evaluation of Computer-Assisted Instruction in Principles of Economics

Dennis Coates
UMBC Department of Economics, 1000 Hilltop Circle
Baltimore, MD 21250 USA
coates@umbc.edu

Brad R. Humphreys
UMBC Department of Economics, 1000 Hilltop Circle
Baltimore, MD 21250 USA
humphrey@umbc.edu

 

ABSTRACT

Despite increasing use, little is known about the effectiveness of web-based instructional material. This study assesses the effectiveness of supplementary web-based materials and activities in introductory economics courses. We have collected data on 66 students from three principles sections describing demographic characteristics, use of web-based instructional resources, and performance on graded quizzes and examinations. We use these data to statistically assess the effectiveness of the web-based material.

Student utilization of web-based material was extensive. Students made frequent use of the on-line practice quizzes and often accessed the web-based content. A sizable fraction of the students actively posted to and read threaded discussions on the course bulletin board.

The statistical analysis shows that both on-line computer-graded practice quizzes and posting to the class bulletin board are positively correlated with student performance on the quizzes and exams, but use of web-based content and passive reading of bulletin board posts ("lurking") are not. These results suggest that faculty should focus more on developing self-test quizzes and effective bulletin board discussion projects and less on generating on-line content.

Keywords: Web-based instruction, Evaluation, Asynchronous communication, Introductory level courses


Introduction

The use of web-based instruction is increasingly common in many disciplines in higher education. Reserve materials are available on-line from libraries, class discussions are held via e-mail, and textbook publishers provide WWW sites for their products; software developers are making programs available to colleges and universities that greatly facilitate on-line instruction and testing. Although these materials are generally used as supplements in traditional lecture hall settings, they also serve as a substitute for class meetings in the rapidly growing area of distance education.

Little is known about the effectiveness of these web-based supplements to face-to-face instruction. How intensively will students utilize on-line course materials? Does access to on-line course materials increase comprehension and retention? Despite the paucity of answers to these and similar questions, the rush to make on-line technology an important component of higher education continues.

This study assesses the effectiveness of on-line materials in two principles of economics courses. These two introductory courses, principles of microeconomics and principles of macroeconomics, were offered as traditional face-to-face courses for several years by the instructors with no web-based component. In this case, supplemental web-based components were added to existing courses without a complete overhaul of the design and pedagogical approach.

There are many advantages to an analysis of the effectiveness of web-based instructional techniques in this particular setting. The adoption of web-based instructional techniques is frequently an incremental process, and many instructors augment face-to-face classes with some web-based components. These two introductory economics courses are offered by most institutions of higher education, from community colleges to small liberal arts colleges to comprehensive research universities. At many institutions of higher education, including our own, economics courses are taken by students from many different disciplines as part of the general distributional requirements. There will be, therefore, considerable heterogeneity among students participating in the study. Moreover, because similar courses are taught at a wide variety of schools, the evidence here will be of interest to a broad audience. Also, unlike many other disciplines, economics is not an important part of the secondary school curriculum; many students' first exposure to economics comes in college-level principles courses. This will tend to reduce the effects of prior academic experience on outcomes. The same may not be true of math courses, for example; the quality and quantity of previous math instruction may have a large effect on student outcomes in introductory-level math courses.

 

Literature Review

This paper makes a contribution to the small but growing literature on the quantitative evaluation of the effects of web-based instruction on student outcomes in higher education. As the use of computers and web-based instruction has grown over the last decade, so has interest in assessing the effectiveness of these tools and methods. Because many web-based instructional techniques use a relatively small set of highly adaptable tools, like synchronous and asynchronous computer-mediated communication and hyperlinked content, research on the effectiveness of such techniques can be applicable to many academic disciplines and settings. Consequently, our review of this literature is selective rather than comprehensive.

Within economics, several recent papers have focused on the evaluation of web-based instruction. Agarwal and Day (1998) examined the effect of web-based instructional techniques on student outcomes, as measured by course grades and results on the Test of Understanding College Economics (TUCE). This study employed a control group that did not have access to web-based materials and an experimental group that did, both taught by the same instructor using the same text, tests, and instructional style. The experimental group made use of e-mail and discussion lists for communication and the WWW for information retrieval and access. Students with access to web-based instruction performed better than those without, in the sense that their average score on the TUCE was a statistically significant 1.15 points higher.

In a similar study, Navarro and Shoemaker (2000) found that students in a principles of macroeconomics course who had access to a set of web-based instructional material (a CD-ROM with course content, class-related bulletin boards and chatrooms, and e-mail) performed better than students who did not have access to this material, in the sense that the students with access scored significantly higher on an 11-question final exam composed of essay questions.

A large literature on the evaluation of teaching and learning in economics courses also exists. For example, research on the teaching of college economics has addressed issues of student effort, study time and attendance, as well as the role of learning and teaching styles, gender, maturity, aptitude and preparation. John Siegfried and William Walstad (1998) summarized the extensive literature in this area. This literature is closely related to the extensive literature on education production functions; surveys of that literature include papers by Eric Hanushek (1986, 1996) and recent volumes by Helen Ladd (1996) and Gary Burtless (1996). Our research can be viewed as an extension of the methods and techniques used in these studies to the area of web-based instructional techniques.

Evaluations of the effectiveness of online learning are becoming more common in other disciplines. Kearsley, Lynch, and Wizer (1995); Bruce, Peyton, and Batson (1993); Berge and Collins (1995); Harasim (1989, 1993); Hiltz (1994); Mason and Kaye (1989); and Waggoner (1992) have all evaluated the role of online learning. The general tenor of these studies is that student satisfaction is increased, there is greater interaction between students and between students and instructors, and critical thinking and problem-solving skills are frequently reported as improved. Moreover, grade point average and other measures of student achievement are as high or higher under online teaching than in traditional classes. Beyond studies documenting student satisfaction with web-based instructional techniques, other studies make use of regression analysis to assess their effectiveness. In a recent special issue of the Journal of Universal Computer Science, Makrakis et al. (1998) use regression analysis to assess the effectiveness of a hypermedia system and courseware in computer science instruction. This study finds the design and presentation of instructional material and students' on-line interaction with instructors to be important explanatory variables.

 

Evaluation Framework

We employ a straightforward evaluation strategy. We want to understand how the intensity of utilization of web-based instructional techniques affects outcomes in introductory economics courses. We began by developing a set of web-based instructional materials, including interactive exercises and computer-graded quizzes, and made these available to students in three sections of introductory-level economics courses. These courses had previously been taught by the instructors as traditional face-to-face courses with no web-based instructional material.

The web-based instructional materials were essentially supplemental; they did not take the place of any classroom-based activity. Instead, the web-based material and activities were intended to increase the interaction of the students with the material and the instructors, as well as the interaction between students, beyond the level of interaction found in a typical lecture course with no web-based material or activity.

We use students' scores on various quizzes and the final exam as evaluation instruments. The final exams had similar formats and were administered in class; all students had the same amount of time to complete the final exams. The quizzes were administered on-line.

Our investigation focuses on determining how much of the variation in the quiz and examination scores can be explained by variation in the students' use of the web-based material and activities, after controlling for other observable factors that might affect those scores. Our prior belief was that the more a student utilized the web-based material and activities, the better that student would perform on these evaluation instruments, other things equal. We recognize that web-based instructional techniques represent complex systems with many inter-related components and that appropriate measurement of student utilization of these systems is a difficult problem. Because of this, we included a number of different measures of student use of the web-based material and activities in our empirical evaluation.

 

Data Description

The data used in this paper were collected from three principles-level economics classes at a mid-sized state university during the academic year 1998-1999. Two classes were principles of macroeconomics, taught by Coates, and one was principles of microeconomics, taught by Humphreys. The classes were taught in a traditional lecture setting and had been offered by the instructors in previous semesters without any web-based component.

For this study, the students in each class had password protected access to course-related material using the WebCT courseware program. The web-based material included course related content (including supplemental readings), practice quizzes that students could take up to five times, hyperlinks to course-related material on the internet, access to a threaded bulletin board for asynchronous discussion of course material, access to a chat room for synchronous discussion, e-mail and access to an on-line grade book where students could check their grades for the classes.

The pedagogical approach to integrating web-based activities assumed that a student's use of these resources would increase his or her exposure to the material. Additionally, this would encourage active learning by involving the students in synchronous and asynchronous interaction that they would not undertake in a traditional lecture-based course. Students were given an incentive to participate in the interactive exercises through a participation grade. The instructors also used techniques such as answering questions on the bulletin board that had been raised in class, and leading bulletin board discussions.

Sixty-six students enrolled in the three sections, 38 in the macroeconomics sections and 28 in the microeconomics section. Six students dropped, leaving 60 students who completed the courses through the final exam. Students had no indication prior to the first day of class that web-based material, including quizzing, would be used. Student participation in the asynchronous discussion that took place on the bulletin board and scores from the on-line quizzes determined approximately 10% of each student's final grade in the course.

 

Demographic Data

Demographic data on the students were collected using on-line surveys. Fourteen students (21%) did not complete these surveys, and the following statistics are based on data from the 52 students who did. The sample was 77% male and 96% white. 96% were full-time students, and 69% reported being involved in extracurricular activities. Additional sample information is presented in Table 1.

These students were fairly typical of the student body in most respects. The university is predominantly a commuter campus, where the majority of students do not live on campus, yet the information in Table 1 reveals a skew toward resident students in our data. This is probably because the classes were introductory level, and the students taking them are more likely than other students to live on campus.

 

Class Standing          %      Primary Internet Access    %
Freshman               21      Home/Dorm                 61
Sophomore              42      Library/Lab               35
Junior                 21      Other                      4
Senior                 15

Hours Spent Working     %      Commute Time               %
Did Not Work           38      Lived On Campus           63
< 10 hours per week    23      < 10 minute drive          8
10-20 hours per week   15      10-20 minute drive        12
20-30 hours per week   12      20-30 minute drive        10
> 30 hours per week    12      > 30 minute drive          8

Table 1. Demographic Data

 

Internet Use

How intensely do students utilize web-based instructional resources? Answering this question is an important step in evaluating the effectiveness of these resources. If students are reluctant to use web-based resources, then no matter how effective these materials are at enhancing comprehension, they will ultimately have little value.

Our personal teaching experience suggests that simply making supplemental material available to students does not guarantee that students will utilize these materials. Copies of past exams and solutions to problem sets placed on reserve at the library are often neglected by students. However, these materials may be neglected because the total cost of accessing them (including time, shoe leather and copying costs) exceeds the expected benefit. Proponents of computer-assisted instructional material often argue that these materials have a lower cost of access, which will lead to increased use and, consequently, comprehension and mastery of the material.

The computer-assisted instructional material used in this study can be grouped into two general categories: material that enhances the student's interaction with the course material, which includes the practice quizzes and the supplemental web-based content, and material that enhances the student's interaction with other students and the instructor, primarily through the course bulletin board and, to a lesser extent, through e-mail. We examine each in turn.

 

Utilization of Practice Quizzes

Students in all three classes had access to practice quizzes. These quizzes were composed of multiple choice, true-false and matching questions and organized by broad topic (markets, consumer theory, macroeconomic policy, etc.). Each quiz consisted of a small set of five to ten questions drawn randomly from a large pool of potential questions. Each quiz could be taken up to five times, and the pool of potential questions was large enough that the probability of drawing the same question in multiple quizzes was small. Each quiz was graded by the computer as soon as it was submitted, and students could immediately see their score and the correct answer to each question.

The two macroeconomics classes used five quizzes and an online portion of the final exam, and the microeconomics class used five quizzes. Because each quiz could be taken up to five times, there were a total of 1,840 student quiz-opportunities for the 66 students. There were 1,195 actual student quiz-attempts, a 65% utilization rate, suggesting that the students made considerable use of the practice quizzes. Table 2 shows the frequency distributions for utilization of the practice quizzes and the scores on the practice quizzes. The left panel of Table 2 shows the number of times a student attempted a practice quiz. The second row of this panel, for example, shows that in 29 instances a student took a particular practice quiz only one time, despite the potential for taking that quiz four additional times; this represented 8% of the practice quiz attempts in the sample. Clearly, from Table 2, a majority of the students who attempted any given practice quiz took that quiz the maximum number of times allowed (5), suggesting that students perceived some benefit from multiple attempts at the quizzes.

 

# of Attempts   Frequency    %      High Score   Frequency    %
0                   65      18      0-59             19       6
1                   29       8      60-69            13       4
2                   34       9      70-79            29      10
3                   32       9      80-89            64      21
4                   38      10      90-100          178      59
5                  170      46
Total              368              Total           303

Table 2. Frequency Distributions - Practice Quizzes

 

The right panel of Table 2 shows the frequency distribution of the highest score on a quiz for the 303 instances where a student took a quiz one or more times. In 80% of these cases the high score was a B (80 to 89% of the possible points) or an A (90 to 100% of the possible points). The modal percent of the possible points is 100%, which occurred in 103 cases. In other words, in 103 cases out of 303 observations, the student taking a quiz answered every question correctly. One possible explanation for this high frequency of perfect scores is that the quizzes were relatively short (5-10 questions each). Another possible explanation is the benefit students derived from taking the quizzes multiple times.
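The frequency distributions in Table 2 are straightforward to reproduce from a log of quiz attempts. The sketch below shows one way to do so in Python with pandas; the file name and column names are hypothetical, not the actual data set used in this study.

import pandas as pd

# Hypothetical attempt log: one row per quiz attempt, with columns
# "student", "quiz", and "score" (percent correct).
attempts = pd.read_csv("quiz_attempts.csv")

# Left panel of Table 2: how often each (student, quiz) pair was attempted.
# Pairs never attempted (the 65 zero-attempt cases) do not appear in the
# log; that row must come from the full student-quiz roster.
counts = attempts.groupby(["student", "quiz"]).size()
print(counts.value_counts().sort_index())

# Right panel of Table 2: the highest score across attempts at each quiz,
# binned into the grade ranges used above (0-59, 60-69, ..., 90-100).
high = attempts.groupby(["student", "quiz"])["score"].max()
print(pd.cut(high, bins=[0, 60, 70, 80, 90, 101], right=False)
        .value_counts().sort_index())

# Overall utilization: 1,195 attempts out of 368 x 5 = 1,840 opportunities.
print(len(attempts) / (368 * 5))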

A natural next step is to ask whether there is a statistical relationship between the high score on a quiz and the number of times that quiz was attempted. Since quizzes were taken outside of class, one could interpret more attempts as greater student effort or more time spent studying for the course. A statistically significant relationship between these variables would be evidence that some sort of learning took place when students exerted more effort by taking a quiz multiple times. If these variables were statistically independent, then no learning took place and performance is unrelated to outside effort. The Pearson χ² statistic for this sample was 41.25, which has a P-value of essentially zero. The null hypothesis of no relationship between high score and attempts per quiz is rejected, suggesting the presence of some relationship between these variables. All scores below 70 were placed in the same category for this test in order to obtain enough cells with an expected frequency of at least 5 to make the χ² test valid. A likelihood-ratio χ² test similarly suggested a relationship between the variables.
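The test described above is a standard χ² test of independence applied to a contingency table of high-score category by number of attempts. A minimal sketch with scipy follows; the interior cell counts below are illustrative (only their row and column totals match Table 2), so the printed statistic will not reproduce the 41.25 reported above.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: high score below 70, 70-79, 80-89, 90-100.
# Columns: 1, 2, 3, 4, 5 attempts. Interior counts are made up for
# illustration; only the margins match Table 2.
table = np.array([
    [ 8,  6,  5,  7,   6],
    [ 4,  5,  5,  6,   9],
    [ 7,  9, 10, 11,  27],
    [10, 14, 12, 14, 128],
])
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)                # reject independence when p is small
print((expected < 5).sum())   # validity check: cells with expected count < 5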

 

Utilization of Asynchronous Communication Tools

Students were also provided with other on-line resources. These additional resources were designed to increase student interaction with the material by providing web-based content or to increase student interaction with other students. The latter category included e-mail, a bulletin board, and chat rooms. Some measures of student use of these resources are summarized on Table 3.

The on-line content consists of original HTML pages that reinforce the course content by explaining material in different ways, some using animated graphics and other material uniquely suited to the web, as well as links to other material on the internet. This type of material is not available for every topic in the courses, but the major topics are covered.

The variable "Hits" on Table 3 is the total number of content pages accessed by each student over the course of the semester. This variable reflects general student use of the on-line material. In general, it is not a very good measure of intensity of use of the on-line content for two reasons. First, the internal hits counter is incremented every time a page is displayed in the student's web browser. Thus each page that the student must pass through before reaching a particular page of content is counted as a hit, even though the page may contain no course content. Second, this variable does not take into account how much time a student spends on a page or the intensity with which a student focuses on the content displayed on a page. Glancing at a graph for a few seconds and closely reading a passage are given the same weight in this metric.

Keeping these caveats in mind, the frequency distribution of "Hits" on Table 3 suggests that there was relatively little variation in the students' access of the on-line content. The total hits for a majority of the students falls in the 101-500 range. A likely explanation for this grouping is that navigating through the content to visit the last page in a particular "thread" of linked pages one or two times would generate a total number of hits in this range. A small group of about 10% of the students either utilized or surfed through this material much more frequently. The "Hits" measure of usage does not allow us to distinguish between these alternative uses of the material.

 

Page Hits   Observed    %     BB Posts   Observed    %     BB Posts Read  Observed    %
0-100           4      6.1    0-10          34     51.5    0-190             33     50.0
101-500        36     54.6    11-20          8     12.1    191-380           14     21.2
501-1000       11     16.7    21-30          7     10.6    381-570            2      3.0
1001-1500       7     10.6    31-40         10     15.1    571-760            3      4.6
1501-2000       7     10.6    41-50          3      4.6    761-950            4      6.1
2001-2527       1      1.5    51-60          4      6.1    951-1116          10     15.2
Total          66    100.0    Total         66    100.0    Total             66    100.0

Table 3. Frequency Distributions - Communication Tools

 

We did not have access to a summary statistic for the number of e-mails sent or for use of the chat rooms. We did have access to the total number of bulletin board messages posted and read by each student. In order to provide students with an incentive to use the bulletin board, a small part of each student's final grade depended on the number of postings read and written, but otherwise the grade determination process was not altered when the web-based components were added to the courses. The instructors also monitored the bulletin boards for the purpose of answering questions and, in some instances, initiating threads.

Like "Hits", posts and postings read are clearly imperfect measures of a student's use of this resource. A two word post ("Me too!") and a carefully thought out answer to a question posed by the instructor are both given the same weight in the "posts" variable. Careful reading of all the posts in a thread and skimming through 50 posts in five minutes are also indistinguishable. Still, these variables can provide an approximate indicator of student use of the bulletin board.

The middle and right panels of Table 3 show the frequency distributions of the total number of bulletin board articles posted and read by each student. The general pattern that emerges is one where a majority of students posted relatively infrequently (the modal number of posts in the sample was 1, the total posted by one in five students), but a smaller, important group of students (the slightly less than 40% in the next three groups) posted considerably more often. The frequency distribution on "Read" suggests that even those students who posted infrequently looked at a majority of the threads on the bulletin board.

The number of students with "Read" totals above 760 is interesting. Given that there were about 1,500 posts, these students read, or at least surfed through, roughly half to three-quarters of the total postings. At the other end of the distribution, the median and modal number of posts read is 190 or less, which translates into only about 13% of the postings. Combining the lowest two categories, over 70% of the students read a quarter or less of the postings. In other words, participation in the bulletin board discussions is characterized by great participation from a small number of students, moderate participation from another small group, and very disappointing participation from the vast majority of students.

Alternatively, the small number of very high "Read" totals could represent strategic behavior on the part of a few students trying to get extra points for bulletin board participation by rapidly surfing through a large number of posts in a short amount of time. However, in each class, students were explicitly told that the number of messages posted, not the number of messages read, would determine their grade.

We have decided not to use survey data on student attitudes about web-based material in this study. Several factors affected this choice. Student attitudes about web-based material are frequently analyzed in the distance education literature, and these studies often find that students feel that web-based material is useful and beneficial. See, for example, Agarwal and Day (1998); Kearsley, Lynch, and Wizer (1995); Bruce, Peyton, and Batson (1993); Berge and Collins (1995); Harasim (1989, 1993); Hiltz (1994); Mason and Kaye (1989); and Waggoner (1992). We chose instead to examine the relationship between use of web-based material and performance, and feel that, if done correctly, this analysis can increase our understanding of the appropriate role for these materials. It can also provide guidance for the development of on-line material by identifying the relative effectiveness of different techniques. We were also concerned that, in the case of principles-level students, the novelty of web-based material might lead students to report that this material was useful and beneficial no matter what the true effect.

 

Statistical Analysis

The assessment of the students' use of web-based material above is informative. However, learning takes place in a complex environment, and an examination of usage statistics may not tell the full story. In order to separately account for the different factors that affect student performance, statistical models must be used. In this section we describe our empirical models for estimating the effects of participation in online discussions and multiple attempts at practice and other quizzes on student performance. Performance is measured in several different ways, including scores on the quizzes, the mid-semester exams, and the final exam. We begin by describing the basic empirical model and then turn to a discussion of the results.

 

Statistical Model

Our empirical model relates student performance on quizzes and examinations to a variety of socio-demographic characteristics and measures of effort and background. The model addresses two basic questions:

1. Does the ability to take quizzes multiple times provide benefits to students as captured by higher scores on quizzes and examinations?

2. Does student participation in the online bulletin board discussions and access to the on-line material provide benefits to students as captured by higher scores on quizzes and examinations?

The basic model is:

$$Y_{i,t} = \alpha_0 + \alpha_1 W_{i,t} + \alpha_2 C_i + \alpha_3 Z_i + \epsilon_{i,t}$$

where $i$ indexes students ($i = 1, \ldots, N$), $t$ indexes the evaluation instruments (scores on quizzes, exams, and the overall course grade; $t = 1, \ldots, T$), and the $\alpha_j$ are vectors of parameters to be estimated. The variables are defined as:

$Y_{i,t}$ : Outcome for student $i$ on quiz or examination $t$

$Z_i$ : A course indicator variable

$W_{i,t}$ : List of variables reflecting student $i$'s use of web-based material prior to quiz or examination $t$

$C_i$ : List of variables reflecting measurable factors specific to student $i$

$\epsilon_{i,t}$ : Mean zero, normally distributed error term

Included in $W_{i,t}$ are variables measuring the number of attempts a student made at a given quiz as well as variables reflecting experience with the internet and participation in the online bulletin board discussions. $C_i$ contains attributes of the student such as race, gender, and involvement in extracurricular activities or work. These factors vary across students but do not vary from one quiz or exam to the next. $Z_i$ is a dummy variable distinguishing students enrolled in the microeconomics course from those enrolled in the macroeconomics course.
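As a concrete illustration, a pooled version of this specification can be estimated in a few lines. The sketch below is a minimal example, assuming a hypothetical stacked data file that uses the variable names defined in Table 4; it ignores the panel structure, which the random-effects estimator discussed in the next section addresses.

import pandas as pd
import statsmodels.api as sm

# Hypothetical stacked data set: one row per (student, quiz/exam) outcome.
panel = pd.read_csv("student_outcomes.csv")

y = panel["score"]                                    # Y_it
W = panel[["posted", "read", "hits"]]                 # web-use measures, W_it
C = panel[["male", "white", "job", "extra", "tran"]]  # student traits, C_i
Z = panel[["ec101"]]                                  # course indicator, Z_i

X = sm.add_constant(pd.concat([W, C, Z], axis=1))
print(sm.OLS(y, X).fit().summary())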

We have no particular expectations about gender and race, but we do have expectations about the other characteristics. We expect transfer students, those who work, and those involved in extracurricular activities to perform worse, on average, than non-transfer students and those who neither work nor participate in extracurricular activities. These hypotheses, of course, hold all other things constant. The hypothesis about transfer students bears more explanation. It is largely a function of the particular situation at this institution, which must accept anyone who has completed two years at a state-run community college. Such students enter without having to take a standardized entrance examination, the Scholastic Aptitude Test (SAT), and are generally thought by the faculty to be weaker students. Transfer students make up a subset of our sample.

We collected a large amount of data on students' use of the on-line quizzes, and these data provide a rich environment for investigating the effects of web-based instruction. Unlike quizzes administered in the classroom, each of the on-line quizzes could be taken multiple times, which provides an interesting setting for examining the effectiveness of on-line quizzes. We hypothesize that the more attempts a student makes at a given quiz, the more familiar the student becomes with the material and the better the student performs on the current quiz attempt. Taking the quizzes multiple times increases the student's interaction with the material. We also hypothesize that students who attempt more on-line quizzes will perform better on the final exam than students who make fewer attempts. Again, the mechanism that produces this increased performance is the increased interaction with the course material engendered by the multiple attempts at each quiz.

 

Empirical Results

We begin by looking at the factors that explain variation in scores on repeated attempts at a given on-line practice quiz. Table 4 describes the specific control variables and measures of students' use of the web-based instructional material. Note that these data form a panel, with observations on each student's attempts at each of five on-line quizzes. There were 838 usable student quiz attempts in our data.

 

Variable Name   Description
read            Bulletin board posts read
posted          Bulletin board posts
hits            Number of class web pages visited
male            Gender dummy = 1 if male
white           Race dummy = 1 if white
aid             Financial aid dummy = 1 if student received financial aid
ec101           = 1 if student was enrolled in ECON 101
noecon          = 1 if student had not taken a previous economics class
job             = 1 if student worked during the semester
athlete         = 1 if student was a scholarship athlete
extra           = 1 if student was involved in extracurricular activity
busy            job + athlete + extra
tran            = 1 if student transferred to UMBC
quizatt         Number of practice quizzes attempted
quiz1           Score on first graded on-line quiz
midexam         Score on midterm exam

Table 4. Variable Definitions

 

Table 5 shows the results of our analysis of the determinants of students' scores on repeated attempts at an on-line quiz. Earlier versions of this paper included an analysis of the effects of repeated quiz attempts on the average score on on-line quizzes. In that analysis, the attempts variable may be correlated with the error term, making the parameter estimates biased and inconsistent. We were unable to correct for these statistical problems and have dropped that analysis from the paper. Because of the panel nature of the data, we estimate the model using a "random effects" estimator that allows for unobserved student-specific factors that affect the dependent variable. These unobservable factors, which can be interpreted as interest in the subject or motivation, are modeled as random variables. See Greene (2000), chapter 14, for details on random effects estimators and panel data.
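A random-intercept mixed model is one common way to implement such an estimator. The sketch below uses MixedLM from statsmodels on a hypothetical attempt-level file with the variable names from Table 4; the file and column names are assumptions, and the random intercept for each student stands in for the unobserved student-specific factors described above.

import pandas as pd
import statsmodels.api as sm

# Hypothetical attempt-level panel: one row per quiz attempt. First
# attempts have no lagged score and are dropped in the lagged models.
panel = pd.read_csv("attempt_panel.csv").dropna(subset=["lagged_score"])

rhs = ["lagged_score", "attempt_number", "job", "aid", "transfer", "extra",
       "male", "white", "ec101", "athlete", "quiz2", "quiz3", "quiz4", "quiz5"]
X = sm.add_constant(panel[rhs])

# Random intercept per student captures unobserved factors such as
# motivation or interest in the subject.
model = sm.MixedLM(panel["score"], X, groups=panel["student"])
print(model.fit().summary())   # compare with Model 3 of Table 5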

The dependent variable is student i's score on on-line quiz t. The first explanatory variable, lagged score, is the student's score on the previous attempt at quiz t. The second explanatory variable, attempt number, reflects the number of times the student has attempted quiz t. Model 1 includes only the attempt number, Model 2 includes only the score on the previous quiz attempt, and Model 3 includes both variables. Note the strong positive correlation between the student's score on a quiz and the number of attempts. In Model 1 of Table 5, an additional attempt at a quiz raises the score on the quiz by 0.37 points; the effect is statistically significant with a p-value well below .01. Similarly, if one uses the score from the previous attempt at the quiz as a regressor, that variable is strongly statistically significant, with a coefficient of about 0.49. In other words, an additional point on the previous attempt translates into an additional half point on the current attempt. Including both the attempt number and the lagged score as explanatory variables results in both being positive and significant at the 5% level or better. The lesson from these results is that additional attempts at the quizzes translate into higher scores on the quizzes.

 

 

                    Model 1            Model 2            Model 3
Variable        Coef.  Std. Err.   Coef.  Std. Err.   Coef.  Std. Err.
Const.           4.96    0.44       3.52    0.30       3.17    0.33
lagged score      --      --        0.49    0.03       0.47    0.03
attempt number   0.37    0.04        --      --        0.13    0.05
job              0.41    0.29       0.22    0.15       0.24    0.15
aid             -0.15    0.29      -0.12    0.14      -0.13    0.14
transfer        -0.27    0.28      -0.23    0.14      -0.24    0.14
extra            0.39    0.29       0.16    0.14       0.18    0.14
male            -0.07    0.27      -0.05    0.13      -0.05    0.13
white            0.35    0.29       0.11    0.15       0.11    0.15
ec101           -2.07    0.29      -1.35    0.15      -1.37    0.15
athlete          0.09    0.36       0.002   0.18       0.005   0.18
quiz2           -0.23    0.16       0.04    0.18       0.04    0.18
quiz3           -0.30    0.17       0.12    0.18       0.12    0.18
quiz4           -0.23    0.17       0.01    0.18       0.01    0.18
quiz5            1.27    0.17       0.85    0.19       0.88    0.18
Wald χ²        285.1              712.1              722.7
R²               0.31               0.46               0.47

Table 5. Random Effects Estimation Results
Quiz Score on Each Attempt is Dependent Variable

 

For each of the model specifications reported in Table 5, both the microeconomics indicator and the quiz5 dummy are statistically significant. The results indicate that students in the microeconomics section scored lower than students in the macroeconomics sections, other things equal. This could be due to differences in the instructors or differences in the nature of the material. In Model 3, job and transfer are significant at the 10% level. The former indicates that students with jobs score about a quarter point higher than non-working students, while the latter indicates that transfer students score about a quarter point lower than non-transfer students.

The results on Table 5 are generally supportive of the idea that providing students access to on-line quizzes is an effective web-based instructional technique. Although many of the individual explanatory variables are not significant, the set of explanatory variables taken together is strongly significant based on the Wald χ² statistic, which has a 1% critical value of about 27 for these models. The models also explain a large amount of the observed variation in quiz scores - 47% in the case of Model 3. Most important is the parameter on the number of attempts at a quiz, which is positive and significant. This parameter suggests that additional attempts at on-line quizzes lead to higher scores on these quizzes, and hence to an increased understanding of the material.
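The quoted 1% critical value is easy to verify. A short check, under our assumption that the degrees of freedom equal the number of slope coefficients in each model (13 in Models 1 and 2, 14 in Model 3):

from scipy.stats import chi2

# 1% critical values of the chi-squared distribution.
for df in (13, 14):
    print(df, chi2.ppf(0.99, df))   # roughly 27.7 and 29.1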

Next, we turn to an analysis of the determinants of the score on the final examination. Again, we investigate the idea that if web-based instructional techniques are effective, then student use of the web-based material and activities should be positively correlated with scores on the evaluation instruments. Note that the sample size falls dramatically in this case, down to 40 observations. Nonetheless, there are some interesting results, which are shown on Table 6.

 

 

                   Model 1             Model 2
Variable        Coef.   t-stat.     Coef.   t-stat.
Const.         -9.870   -0.780    -19.230   -1.380
read           -0.004   -0.430     -0.007   -0.740
posted          0.379    1.610      0.443    1.920
male            0.050    0.020      1.020    0.310
white           5.380    1.270      6.640    1.580
ec101          14.840    1.530     17.620    1.840
busy             --       --        1.700    0.730
tran             --       --        6.920    1.950
quizatt         0.730    1.550      0.723    1.520
midexam         0.410    4.210      0.443    4.630
R²              0.76                0.79

Table 6. Ordinary Least Squares Estimation
Score on Final Exam is Dependent Variable

 

On this table, Model 1 contains only demographic controls and measures of the student's use of the web-based instructional materials. Model 2 adds a variable indicating the demands on a student's time outside the classroom, busy, which is a proxy for students who work, participate in intercollegiate athletics, or engage in other extracurricular activities. Because of concerns about the possibility that the score on the mid-term exam is correlated with the error term, we also estimated these models excluding this variable. This had no appreciable effect on the results.
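For completeness, a minimal sketch of these final-exam regressions, assuming a hypothetical one-row-per-student file that uses the Table 4 variable names plus a final exam score called "final":

import pandas as pd
import statsmodels.api as sm

# Hypothetical student-level data; rows with missing values are dropped,
# leaving the usable subsample (40 students in our case).
students = pd.read_csv("students.csv").dropna()

rhs1 = ["read", "posted", "male", "white", "ec101", "quizatt", "midexam"]
rhs2 = rhs1 + ["busy", "tran"]  # Model 2 adds time demands and transfer status

for rhs in (rhs1, rhs2):
    X = sm.add_constant(students[rhs])
    print(sm.OLS(students["final"], X).fit().summary())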

As expected, the student's score on the midterm is an important determinant of the score on the final exam: one additional point on the midterm raises the score on the final exam by 0.38 points. Race and whether or not the student has a job are also statistically significant at the 10% level. The average score on the final exam is 54; whites score 7.6 points higher than non-whites, about 14% of the mean, and those with jobs score about 7 points higher, 13% of the mean, than those without jobs. In addition, whether or not the student transferred carries a positive coefficient, with a t-statistic of about 1.6. Of the internet variables, only the number of postings to the bulletin board is statistically significant at the 5% level or better. This coefficient is 0.51, so an additional 15 postings to the bulletin board would have roughly the same effect on the final exam score as race. The number of attempts at quizzes over the course of the semester carries a positive coefficient with a t-statistic around 1.5; its P-value of 0.13 puts it just short of significance at the 10% level. The number of articles read is clearly not relevant, as its t-statistic is well below 1 in absolute value.

These results suggest that students' posting to the class bulletin board is strongly associated with higher scores on the final examination, and that a student's use of on-line practice quizzes is somewhat associated with higher scores on the final exam. Thus bulletin boards, to the extent that students can be induced to make posts, appear to be an effective web-based instructional technique.

Note that read, the number of bulletin board posts read by each student, is statistically insignificant in these results. This suggests that "lurking" - passively reading bulletin board posts and not actively participating in the asynchronous communication - does not have the same payoff, in terms of the performance on the final exam, as does active participation in bulletin board discussions. This may be due to the additional thought and interaction with the course material involved in composing bulletin board posts, relative to simply reading what others have written. This result suggests that designers of web-based instructional material should focus on activities that encourage active participation in asynchronous communication and discourage "lurking."

 

Conclusions

We set out to describe and analyze student use of web-based materials in principles of economics classes. Students in three sections of principles of macroeconomics and microeconomics were provided with an array of web-based material, including content, computer graded quizzes that could be taken multiple times, and synchronous and asynchronous communications tools.

Students made extensive use of the on-line quizzes, making about two-thirds of the available quiz attempts. The distribution of the total number of "hits" on pages in the course suggests that students did not ignore the web-based content, but that a majority of the students probably did not return to this material multiple times. A majority of students were somewhat reluctant to post to the class bulletin board, although a small but significant fraction posted frequently. A majority of students read the bulletin board postings.

Student utilization of the web-based material was significant, especially considering that these students were primarily freshmen and sophomores and campus residents. Much of the research on student use of web-based material comes from classes taught at a distance and composed of adult learners who work full time while attending school. These adult learners may have high opportunity costs of going to the library to access reserve materials, coming to office hours, or forming study groups, and thus would be expected to utilize web-based material more often. Younger campus residents, by contrast, have relatively lower opportunity costs of taking advantage of these traditional materials and resources and might therefore be expected to rely on them rather than on web-based alternatives. This is especially true if reserve materials, office hours and study groups are substitutes for web-based materials. The observed utilization in our sections suggests that web-based materials can be useful even for resident undergraduate economics students.

Our analysis suggests that on-line practice quizzes can be an effective tool. Taking multiple quizzes on a topic significantly increased the high score on that topic, and there is some evidence that more attempts at the practice quizzes were positively correlated with the student's score on the final exam. Posting to the class bulletin board also appears to be positively correlated with performance, although passive reading of posts made by others is not. Use of on-line content, as measured by the number of "hits" on class web pages, was not correlated with performance.

Posting to the bulletin board was a good predictor of performance, but reading and not posting (or "lurking") was not. If this correlation between posting and performance reflects learning, then instructors using bulletin boards to enhance their principles courses should focus on developing interesting discussion topics and designing discussion exercises that draw more students into the on-line discussion.

Many publishers are rushing to provide on-line content related to texts, and faculty often think that making their notes or lecture slides available will help students. But our data suggest that the on-line content was not used extensively, and our measure of use of on-line content was not a good predictor of students' performance. Before more effort goes into making on-line content available, we need to know more about the use and effectiveness of such material.

Finally, we note that these conclusions and observations are based on a relatively small sample of students. More data collection and analysis needs to be done in this area before definitive conclusions can be reached. We view this as an ongoing research project, and plan to continue to collect data. We strongly encourage other faculty who use web based material in their classes to take the time to undertake similar studies.

 

References

  • Agarwal, R. & Day, A. E. (1998). The impact of the internet on economic education. Journal of Economic Education, 29 (2), 99-110.
  • Berge, Z. L. & Collins, M. P. (1995). Computer-Mediated Communication and the Online Classroom, Creskill, NJ: Hampton Press.
  • Bruce, B., Peyton, J. K. & Batson, T. (1993). Network-based Classrooms: Promises and Realities, Cambridge, UK: Cambridge University Press.
  • Burtless, G. (1996). Does Money Matter?: The Effect of School Resources on Student Achievement and Adult Success, Washington, D. C.: Brookings Institution Press.
  • Greene, W. H. (2000). Econometric Analysis, Upper Saddle River, NJ: Prentice Hall.
  • Hanushek, E. A. (1986). The economics of schooling: Production and efficiency in public schools. Journal of Economic Literature, 24, 1141-1177.
  • Hanushek, E. A. (1996). School resources and student performance. In G. Burtless (Ed.) Does Money Matter?: The Effect of School Resources on Student Achievement and Adult Success, Washington, D.C.: Brookings Institution Press, 200-220.
  • Harasim, L. (1989). Online Education, New York, N.Y.: Praeger.
  • Harasim, L. (1993). Global Networks, Cambridge, MA: MIT Press.
  • Hiltz, S. R. (1994). The Virtual Classroom, Norwood, NJ: Ablex.
  • Kearsley, G., Lynch, W. & Wizer, D. (1995). The effectiveness and impact of online learning in graduate education. Educational Technology, 35, 37-42.
  • Ladd, H. F. (1996). Holding Schools Accountable: Performance-Based Reform in Education, Washington, D.C.: Brookings Institution Press.
  • Makrakis, V., Retalis, S., Koutoumanos, A., Papaspyrou, N. & Skordalakis, M. (1998). Evaluating the effectiveness of an ODL hypermedia system and courseware at the National Technical University of Athens: A case study. Journal of Universal Computer Science, 4 (3), 1-15.
  • Mason, R. & Kaye, A. (1989). Mindweave: Communications, Computers, And Distance Education, New York, N.Y.: Pergamon Press.
  • Navarro, P. & Shoemaker, J. (2000). Economics in the cyberspace: A comparison study, Unpublished manuscript.
  • Siegfried, J. J. & Walstad, W. B. (1998). Research on teaching college economics. In Siegfried, J. J. & Walstad, W. B. (Eds.) Teaching Undergraduate Economics, Boston, MA: Irwin McGraw-Hill, 141-166.
  • Waggoner, M. D. (1992). Empowering Networks: Computer Conferencing in Education, Englewood Cliffs, NJ: Educational Technology Publications.

 

