Evaluation of Computer-Assisted Instruction in Principles of Economics

Dennis Coates and Brad R. Humphreys
Introduction

The use of web-based instruction is increasingly common in many disciplines in higher education. Reserve materials are available online from libraries, class discussions are held via email, textbook publishers provide WWW sites for their products, and software developers are making programs available to colleges and universities that greatly facilitate online instruction and testing. Although these materials are generally used as supplements in traditional lecture hall settings, they also serve as a substitute for class meetings in the rapidly growing area of distance education. Little is known about the effectiveness of these web-based supplements to face-to-face instruction. How intensively will students utilize online course materials? Does access to online course materials increase comprehension and retention? Despite the paucity of answers to these and similar questions, the rush to make online technology an important component of higher education continues.

This study assesses the effectiveness of online materials in two principles of economics courses. These two introductory courses, principles of microeconomics and principles of macroeconomics, were offered as traditional face-to-face courses for several years by the instructors with no web-based component. In this case, supplemental web-based components were added to existing courses without a complete overhaul of the design and pedagogical approach. There are many advantages to an analysis of the effectiveness of web-based instructional techniques in this particular setting. The adoption of web-based instructional techniques is frequently an incremental process, and many instructors augment face-to-face classes with some web-based components. These two introductory economics courses are offered by most institutions of higher education, from community colleges to small liberal arts colleges to comprehensive research universities.
At many institutions of higher education, including our own, economics courses are taken by students from many different disciplines as part of the general distribution requirements. There will, therefore, be considerable heterogeneity among students participating in the study. Moreover, because similar courses are taught at a wide variety of schools, the evidence here will be of interest to a broad audience. Also, unlike many other disciplines, economics is not an important part of the secondary school curriculum; many students' first exposure to economics comes in college-level principles courses. This will tend to reduce the effects of prior academic experience on outcomes. The same may not be true of math courses, for example; the quality and quantity of previous math instruction may have a large effect on student outcomes in introductory-level math courses.
Literature Review

This paper contributes to the small but growing literature on the quantitative evaluation of the effects of web-based instruction on student outcomes in higher education. As the use of computers and web-based instruction has grown over the last decade, so has interest in assessing the effectiveness of these tools and methods. Because many web-based instructional techniques draw on a relatively small set of highly adaptable tools, such as synchronous and asynchronous computer-mediated communication and hyperlinked content, research on the effectiveness of such techniques can be applicable to many academic disciplines and settings. Consequently, our review of this literature is selective rather than comprehensive.

Within economics, several recent papers have focused on the evaluation of web-based instruction. Agarwal and Day (1998) examined the effect of web-based instructional techniques on student outcomes, as measured by course grades and results on the Test of Understanding College Economics (TUCE). This study employed a control group that did not have access to web-based materials and an experimental group that did, both taught by the same instructor using the same text, tests, and instructional style. The experimental group made use of email and discussion lists for communication and the WWW for information retrieval and access. In this study, students with access to web-based instruction performed better than those without, in the sense that their average score on the TUCE was a statistically significant 1.15 points higher.
In a similar study, Navarro and Shoemaker (2000) found that students in a principles of macroeconomics course who had access to a set of web-based instructional materials (a CD-ROM with course content, class-related bulletin boards and chat rooms, and email) performed better than students who did not, in the sense that the students with access scored significantly higher on an 11-question final exam composed of essay questions.

A large literature on the evaluation of teaching and learning in economics courses also exists. For example, research on the teaching of college economics has addressed issues of student effort, study time, and attendance, as well as the role of learning and teaching styles, gender, maturity, aptitude, and preparation. John Siegfried and William Walstad (1998) summarized the extensive literature in this area. This literature is closely related to the extensive literature on education production functions, surveys of which include papers by Eric Hanushek (1986, 1996) and recent volumes by Helen Ladd (1996) and Gary Burtless (1996). Our research can be viewed as an extension of the methods and techniques used in these studies to the area of web-based instructional techniques.

Evaluations of the effectiveness of online learning are becoming more common in other disciplines. Kearsley, Lynch, and Wizer (1995); Bruce, Peyton, and Batson (1993); Berge and Collins (1995); Harasim (1989, 1993); Hiltz (1994); Mason and Kaye (1989); and Waggoner (1992) have all evaluated the role of online learning. The general tenor of these studies is that student satisfaction increases, there is greater interaction between students and between students and instructors, and critical thinking and problem-solving skills are frequently reported as improved. Moreover, grade point averages and other measures of student achievement are as high or higher under online teaching than in traditional classes.
Beyond studies documenting student satisfaction with web-based instructional techniques, other studies use regression analysis to assess their effectiveness. In a recent special issue of the Journal of Universal Computer Science, Makrakis et al. (1998) use regression analysis to assess the effectiveness of a hypermedia system and courseware in computer science instruction. This study finds that the design and presentation of instructional material and students' online interaction with instructors are important explanatory variables.
Evaluation Framework

We employ a straightforward evaluation strategy. We want to understand how the intensity of utilization of web-based instructional techniques affects outcomes in introductory economics courses. We began by developing a set of web-based instructional materials, including interactive exercises and computer-graded quizzes, and made these available to students in three introductory-level economics courses. These courses had previously been taught by the instructors as traditional face-to-face courses with no web-based instructional material. The web-based instructional materials were essentially supplemental; they did not take the place of any classroom-based activity. Instead, the web-based material and activities were intended to increase the interaction of the students with the material and the instructors, as well as the interaction between students, beyond the level found in a typical lecture course with no web-based component.

We use students' scores on various quizzes and the final exam as evaluation instruments. The final exams had similar formats and were administered in class; all students had the same amount of time to complete them. The quizzes were administered online. Our investigation focuses on determining how much of the variation in the quiz and examination scores can be explained by variation in the students' use of the web-based material and activities, after controlling for other observable factors that might affect those scores. Our prior belief was that the more a student utilized the web-based material and activities, the better that student would perform on the post-test, other things equal. We recognize that web-based instructional techniques represent complex systems with many interrelated components and that appropriate measurement of student utilization of these systems is a difficult problem. Because of this, we included a number of different measures of student use of the web-based material and activities in our empirical evaluation.
Data Description

The data used in this paper were collected from three principles-level economics classes at a mid-sized state university during the 1998-1999 academic year. Two classes were principles of macroeconomics, taught by Coates, and one was principles of microeconomics, taught by Humphreys. The classes were taught in a traditional lecture setting and had been offered by the instructors in previous semesters without any web-based component. For this study, the students in each class had password-protected access to course-related material through the WebCT courseware program. The web-based material included course-related content (including supplemental readings), practice quizzes that students could take up to five times, hyperlinks to course-related material on the internet, access to a threaded bulletin board for asynchronous discussion of course material, access to a chat room for synchronous discussion, email, and access to an online grade book where students could check their grades for the classes.

The pedagogical approach to integrating web-based activities assumed that a student's use of these resources would increase his or her exposure to the material. Additionally, this would encourage active learning by involving the students in synchronous and asynchronous interaction that they would not undertake in a traditional lecture-based course. Students were given an incentive to participate in the interactive exercises through a participation grade. The instructors also used techniques such as answering questions asked in class on the bulletin board and leading bulletin board discussions.

A total of 66 students enrolled in the three sections: 38 in the macroeconomics sections and 28 in the microeconomics section. Six students dropped, leaving 60 students who completed the courses through the final exam. Prior to the first day of class, there was no indication that web-based material, including quizzing, would be used in the classes.
Student participation in the asynchronous discussion that took place on the bulletin board and scores from the online quizzes determined approximately 10% of each student's final grade in the course.
Demographic Data

Demographic data on the students were collected using online surveys. Fourteen students (21%) did not complete these surveys, and the following statistics are based on data from the 52 students who did. The sample was 77% male and 96% white; 96% were full-time students, and 69% reported being involved in extracurricular activities. Additional sample information is presented in Table 1. These students were fairly typical of the student body, except that the majority of students at this institution do not live on campus; that is, this is predominantly a commuter campus. The information in Table 1 reveals a skew toward resident students in our data. This is probably because the classes were introductory level, and the students taking them were more likely than other students to live on campus.
Internet Use

How intensively do students utilize web-based instructional resources? Answering this question is an important step in evaluating the effectiveness of these resources. If students are reluctant to use web-based resources, then no matter how effective these materials are at enhancing comprehension, they will ultimately have little value. Our personal teaching experience suggests that simply making supplemental material available to students does not guarantee that students will utilize it. Copies of past exams and solutions to problem sets placed on reserve at the library are often neglected by students. However, these materials may be neglected because the total cost of accessing them (including time, shoe leather, and copying costs) exceeds the expected benefit. Proponents of computer-assisted instructional material often argue that these materials have a lower cost of access, which will lead to increased use and, consequently, comprehension and mastery of the material. The computer-assisted instructional material used in this study can be grouped into two general categories: material that enhances the student's interaction with the course material, which includes the practice quizzes and the supplemental web-based content, and material that enhances the student's interaction with other students and the instructor, primarily through the course bulletin board and, to a lesser extent, through email. We examine each in turn.
Utilization of Practice Quizzes

Students in all three classes had access to practice quizzes. These quizzes were composed of multiple-choice, true-false, and matching questions and organized by broad topic (markets, consumer theory, macroeconomic policy, etc.). Each quiz consisted of a small set of five to ten questions drawn randomly from a large pool of potential questions. Each quiz could be taken up to five times, and the pool of potential questions was large enough that the probability of drawing the same question in multiple quizzes was small. The quizzes were graded by the computer as soon as they were submitted, and students could immediately see their score and the correct answer to each question. The two macroeconomics classes used five quizzes and an online portion of the final exam, and the microeconomics class used five quizzes. Because each quiz could be taken up to five times, there were a total of 1,840 student quiz-opportunities for the 66 students. There were 1,195 actual student quiz-attempts, a 65% utilization rate, suggesting that the students made considerable use of the practice quizzes.

Table 2 shows the frequency distributions for utilization of the practice quizzes and the scores on the practice quizzes. The left panel of Table 2 shows the number of times a student attempted a practice quiz. The second row of this panel shows that in 29 instances a student took a particular practice quiz only one time, despite the potential for taking that quiz four additional times; this represented 8% of the practice quiz attempts in the sample. Clearly, from Table 2, a majority of the students who attempted any given practice quiz took that quiz the maximum number of times allowed (five), suggesting that students perceived some benefit from multiple attempts at the quizzes.
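The quiz-opportunity totals can be reconstructed from the enrollment figures; the following minimal sketch assumes, consistent with the text, six online instruments for the macroeconomics students (five quizzes plus the online final portion) and five for the microeconomics students, each attemptable up to five times:

```python
# Reconstruction of the reported quiz-opportunity and utilization figures.
# The instrument counts per section are an assumption consistent with the text.
macro_students, macro_instruments = 38, 6
micro_students, micro_instruments = 28, 5
max_attempts = 5

quiz_instances = macro_students * macro_instruments + micro_students * micro_instruments
opportunities = quiz_instances * max_attempts
actual_attempts = 1195
utilization = actual_attempts / opportunities

print(opportunities)          # 1840 student quiz-opportunities
print(round(utilization, 2))  # 0.65, the 65% utilization rate
```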
The right panel of Table 2 shows the frequency distribution of the highest score on a quiz for the 303 instances where a student took a quiz one or more times. In 80% of these cases the high score was a B (80 to 89% of the possible points) or an A (90 to 100% of the possible points). The modal percentage of the possible points is 100%, which occurred in 103 cases. In other words, in 103 of the 303 observations, the student taking a quiz answered every question correctly. One possible explanation for this high frequency of perfect scores is that the quizzes were relatively short (5-10 questions each). Another possible explanation is the benefit students derived from taking the quizzes multiple times.

A natural next step is to examine the possibility of a statistical relationship between the high score on a quiz and the number of times that quiz was attempted. Since quizzes were taken outside of class, one could interpret more attempts as greater student effort or more time spent studying for the course. A statistically significant relationship between these variables would be evidence that some sort of learning took place when students exerted more effort by taking a quiz multiple times; if these variables were statistically independent, that would suggest that performance is unrelated to this outside effort. The Pearson χ² statistic for this sample was 41.25, which has a P-value of essentially zero. The null hypothesis of no relationship between high score and attempts per quiz is rejected, suggesting the presence of some relationship between these variables. All scores below 69 were placed in the same category for this test in order to obtain enough cells with an expected count of more than 5 to make the χ² test valid. A likelihood-ratio χ² test similarly suggested a relationship between the variables.
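The mechanics of this independence test can be sketched as follows. The contingency table below is purely hypothetical (the paper's actual cell counts are not reproduced), but the calculation, including the validity requirement that motivated pooling the low scores, is the same:

```python
import numpy as np

# Hypothetical 3x3 contingency table: rows = attempts per quiz (1-2, 3-4, 5),
# columns = binned high score (<=69, 70-89, 90-100). Illustrative counts only.
observed = np.array([
    [12, 10,  8],
    [ 8, 18, 14],
    [ 5, 22, 38],
])

# Expected counts under independence: (row total x column total) / grand total
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()

# Pearson chi-square statistic and degrees of freedom (r - 1)(c - 1)
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)

# The validity caveat from the text: expected cell counts should exceed 5,
# which is why the paper pooled all scores below 69 into one category.
print(expected.min() > 5)  # True
print(chi2 > 13.28)        # True: exceeds the 1% critical value for 4 df
```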
Utilization of Asynchronous Communication Tools

Students were also provided with other online resources. These additional resources were designed to increase student interaction with the material by providing web-based content or to increase student interaction with other students. The latter category included email, a bulletin board, and chat rooms. Some measures of student use of these resources are summarized in Table 3.

The online content consists of original HTML pages that reinforce the course content by explaining material in different ways, some using animated graphics and other material uniquely suited to the web, as well as links to other material on the internet. This type of material is not available for every topic in the courses, but the major topics are covered. The variable "Hits" in Table 3 is the total number of content pages accessed by each student over the course of the semester. This variable reflects general student use of the online material. It is not, however, a very good measure of the intensity of use of the online content, for two reasons. First, the internal hits counter is incremented every time a page is displayed in the student's web browser. Thus each page that the student must pass through before reaching a particular page of content is counted as a hit, even though that page may contain no course content. Second, this variable does not take into account how much time a student spends on a page or the intensity with which a student focuses on the content displayed there. Glancing at a graph for a few seconds and closely reading a passage are given the same weight in this metric. Keeping this caveat in mind, the frequency distribution of "Hits" in Table 3 suggests that there was relatively little variation in the students' access of the online content. The total hits for a majority of the students fall in the 101-500 range. A likely explanation for this grouping is that navigating through the content to visit the last page in a particular "thread" of linked pages one or two times would generate a total number of hits in this range. A small group of about 10% of the students either utilized or surfed through this material much more frequently. The "Hits" measure of usage does not allow us to distinguish between these alternative uses of the material.
We did not have access to a summary statistic for the number of emails sent or for use of the chat rooms. We did have access to the total number of bulletin board messages posted and read by each student. In order to provide students with an incentive to use the bulletin board, a small part of each student's final grade depended on the number of postings read and written, but otherwise the grade determination process was not altered when the web-based components were added to the courses. The instructors also monitored the bulletin boards for the purpose of answering questions and, in some instances, initiating threads.

Like "Hits", posts and postings read are clearly imperfect measures of a student's use of this resource. A two-word post ("Me too!") and a carefully thought out answer to a question posed by the instructor are given the same weight in the "posts" variable. Careful reading of all the posts in a thread and skimming through 50 posts in five minutes are also indistinguishable. Still, these variables can provide an approximate indicator of student use of the bulletin board. The right two columns of Table 3 show the frequency distributions of the total number of bulletin board articles posted and read by each student. The general pattern that emerges from these distributions is one in which a majority of students posted relatively infrequently (the modal number of posts in the sample was 1, the total posted by one in five students), while a smaller but important group of students (the slightly less than 40% in the next three groups) posted considerably more often. The frequency distribution of "Read" suggests that even those students who posted infrequently looked at a majority of the threads on the bulletin board. The number of students with "Read" totals above 761 is interesting. Given that there were about 1,500 posts, these students read, or at least surfed through, about half to three-quarters of the total postings.
At the other end of the distribution, the median and modal number of posts read is 190 or less. This translates into only about 13% of the postings. Combining the lowest two categories, over 75% of the students read a quarter or less of the postings. In other words, participation in the bulletin board discussions is characterized by great participation by a small number of students, moderate participation by another small number, and very disappointing participation by the vast majority. Alternatively, the small number of very high "Read" totals could represent strategic behavior on the part of a few students trying to get extra points for bulletin board participation by rapidly surfing through a large number of posts in a short amount of time. However, in each class, students were explicitly told that the number of messages posted, not the number of messages read, would determine their grade.

We have decided not to use survey data on student attitudes about web-based material in this study. Several factors affected this choice. Data on students' attitudes about web-based material are frequently analyzed in the distance education literature, and such analyses often find that students feel the material is useful and beneficial. See, for example, Agarwal and Day (1998); Kearsley, Lynch, and Wizer (1995); Bruce, Peyton, and Batson (1993); Berge and Collins (1995); Harasim (1989, 1993); Hiltz (1994); Mason and Kaye (1989); and Waggoner (1992). We chose instead to examine the relationship between use of web-based material and performance, and we feel that, if done correctly, this analysis can increase our understanding of the appropriate role for these materials. It can also provide guidance for the development of online material by identifying the relative effectiveness of different techniques. We were also concerned that, in the case of principles-level students, the novelty of web-based material might lead students to report that this material was useful and beneficial no matter what its true effect.
Statistical Analysis

The assessment of the students' use of web-based material above is informative. However, learning takes place in a complex environment, and an examination of usage statistics alone may not tell the full story. In order to separately account for the different factors that affect student performance, statistical models must be used. In this section we describe our empirical models for estimating the effects of participation in online discussions and of multiple attempts at practice and other quizzes on student performance. Performance is measured in several different ways, including scores on the quizzes, the mid-semester exams, and the final exam. We begin by describing the basic empirical model and then turn to a discussion of the results.
Statistical Model

Our empirical model relates student performance on quizzes and examinations to a variety of sociodemographic characteristics and measures of effort and background. The model addresses two basic questions:

1. Does the ability to take quizzes multiple times provide benefits to students, as captured by higher scores on quizzes and examinations?
2. Does student participation in the online bulletin board discussions and access to the online material provide benefits to students, as captured by higher scores on quizzes and examinations?

The basic model is:

    Y_{i,t} = a_0 + a_1 W_{i,t} + a_2 C_i + a_3 Z_i + e_{i,t}

where i indexes students (i = 1, ..., N), t indexes student performance in terms of scores on quizzes, exams, and the overall course grade (t = 1, ..., T), the a_j are vectors of parameters to be estimated, and the variables are defined as:

    Y_{i,t}  Outcome for student i on quiz or examination t
    Z_i      A course indicator variable
    W_{i,t}  Variables reflecting student i's use of web-based material prior to quiz or examination t
    C_i      Variables reflecting measurable factors specific to student i
    e_{i,t}  Mean-zero, normally distributed error term

Included in W_{i,t} are variables measuring the number of attempts a student made at a given quiz as well as variables reflecting experience with the internet and participation in the online bulletin board discussions. C_i contains attributes of the student such as race, gender, and involvement in extracurricular activities or work; these factors vary across students but do not vary from one quiz or exam to the next. Z_i is a dummy variable distinguishing students enrolled in the microeconomics course from those enrolled in the macroeconomics course. We have no particular expectations about gender and race, but we do have expectations about the other characteristics.
We expect transfer students and those who are working or involved in extracurricular activities to perform worse, on average, than non-transfer students and those who neither work nor participate in extracurricular activities. These hypotheses, of course, hold all other things constant. The hypothesis about transfer students bears more explanation. It is largely a function of the particular situation at this institution, which must accept anyone who has completed two years at a state-run community college. Such students enter without having to take a standardized entrance examination, the Scholastic Aptitude Test (SAT), and are generally thought by the faculty to be weaker students. Transfer students are a subset of the sample.

We collected a large amount of data on students' use of the online quizzes, and these data provide a rich environment for investigating the effects of web-based instruction. Unlike quizzes administered in the classroom, each of the online quizzes could be taken multiple times. This provides an interesting setting for examining the effectiveness of online quizzes. We hypothesize that the more attempts at a given quiz a student makes, the more familiar the student becomes with the material and the better the student performs on the current quiz attempt. Taking the quizzes multiple times increases the student's interaction with the material. We also hypothesize that students who attempt more online quizzes will perform better on the final exam than students who make fewer attempts. Again, the mechanism that produces this increased performance is the increased interaction with the course material engendered by the multiple attempts at each quiz.
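As a minimal illustration of how a model of this form can be estimated, the following sketch generates synthetic data with known coefficients and recovers them by pooled OLS. All variable names and coefficient values here are hypothetical and do not correspond to the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic student-quiz observations

# Hypothetical regressors mirroring W (attempts), Z (micro dummy), C (job dummy)
attempts = rng.integers(1, 6, n).astype(float)
micro = rng.integers(0, 2, n).astype(float)
job = rng.integers(0, 2, n).astype(float)

# Synthetic outcome with known coefficients, for illustration only
y = 5.0 + 0.4 * attempts - 0.3 * micro + 0.2 * job + rng.normal(0, 0.5, n)

# Pooled OLS: stack a constant and the regressors, solve the least-squares problem
X = np.column_stack([np.ones(n), attempts, micro, job])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # estimates close to the true values (5.0, 0.4, -0.3, 0.2)
```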
Empirical Results

We begin by looking at the factors that explain variation in the score on repeated attempts at a given online practice quiz. Table 4 describes the specific control variables and measures of students' use of the web-based instructional material. Note that these data form a panel, with observations on each student's attempts at each of five online quizzes. There were 838 usable student quiz attempts in our data.
Table 5 shows the results of our analysis of the determinants of students' scores on repeated attempts at an online quiz. Earlier versions of this paper included an analysis of the effects of repeated quiz attempts on the average score on the online quizzes. In that case, the attempts variable may be correlated with the error term, making the parameter estimates from those models biased and inconsistent. We were unable to correct for these statistical problems and have dropped that analysis from the paper.

Because of the panel nature of the data, we estimate the model using a "random effects" estimator that allows for unobserved student-specific factors that affect the dependent variable. These unobservable factors, which can be interpreted as interest in the subject or motivation, are modeled as random variables. See Greene (2000), chapter 14, for details on random effects estimators and panel data. The dependent variable is student i's score on online quiz t. The first explanatory variable, lagged score, is the student's score on the previous attempt at quiz t. The second explanatory variable, attempt number, reflects the number of times the student has attempted quiz t. Model 1 includes only attempts, Model 2 includes only the score on the previous quiz attempt, and Model 3 includes both variables.

Note the strong positive correlation between the student's score on a quiz and the number of attempts. An additional attempt at the quiz raises the score on the quiz by .37 points, in column 1 of Table 5. The effect is statistically significant, with a p-value well below .01. Similarly, if one uses the score from the previous attempt at the quiz as a regressor, that variable is strongly statistically significant, with a coefficient of about .49. In other words, an additional point on the previous attempt translates into an additional half point on the current attempt. Including both the attempt number and the lagged score as explanatory variables results in both being positive and significant at the 5% level or better. The lesson from these results is that additional attempts at the quizzes translate into higher scores on the quizzes.
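The random effects estimator can be sketched as a quasi-demeaning (GLS) transformation followed by OLS. The sketch below uses synthetic panel data with a known attempts effect and, to keep it short, treats the variance components as known, whereas in practice they must be estimated (see Greene 2000, chapter 14). All values are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_students, n_quizzes = 60, 5
N = n_students * n_quizzes

# Balanced panel: each student observed on each quiz
student = np.repeat(np.arange(n_students), n_quizzes)
attempts = rng.integers(1, 6, N).astype(float)

# Synthetic scores: a student-specific random intercept ("motivation")
# plus a known attempts effect of 0.4, purely for illustration
u = rng.normal(0.0, 1.0, n_students)
y = 5.0 + 0.4 * attempts + u[student] + rng.normal(0.0, 0.5, N)

# Random-effects GLS: subtract theta times the student mean from each variable,
# where theta depends on the variance components (assumed known here)
sigma_e2, sigma_u2 = 0.5 ** 2, 1.0 ** 2
theta = 1 - np.sqrt(sigma_e2 / (sigma_e2 + n_quizzes * sigma_u2))

def quasi_demean(v):
    group_means = np.bincount(student, weights=v) / n_quizzes
    return v - theta * group_means[student]

X = np.column_stack([np.ones(N), attempts])
Xt = np.column_stack([quasi_demean(col) for col in X.T])
beta, *_ = np.linalg.lstsq(Xt, quasi_demean(y), rcond=None)
print(round(beta[1], 2))  # close to the true attempts effect of 0.4
```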
For each of the model specifications reported in Table 5, both the microeconomics indicator and the quiz 5 dummy are statistically significant. The results indicate that students in the microeconomics section scored lower than students in the macroeconomics sections, other things equal. This could be due to differences in the instructors or differences in the nature of the material. In Model 3, job and transfer are significant at the 10% level. The former indicates that students with jobs score about a quarter point better than non-working students, while the latter indicates that transfer students score about a quarter point worse than non-transfer students.

The results in Table 5 are generally supportive of the idea that providing students access to online quizzes is an effective web-based instructional technique. Although many of the individual explanatory variables are not significant, the set of explanatory variables, taken together, is strongly significant based on the χ² statistic, which has a 1% critical value of about 27 for these models. The models also explain a large amount of the observed variation in quiz scores: 47% in the case of Model 3. Most important is the parameter on the number of attempts at a quiz, which is positive and significant. This parameter suggests that additional attempts at online quizzes lead to higher scores on these quizzes, and hence to an increased understanding of the material.

Next, we turn to an analysis of the determinants of the score on the final examination. Again, we investigate the idea that if web-based instructional techniques are effective, then student use of the web-based material and activities should be positively correlated with scores on the evaluation instruments. Note that the sample size falls dramatically in this case, down to 40 observations. Nonetheless, there are some interesting results, which are shown in Table 6.
In this table, Model 1 contains only demographic controls and measures of the student's use of the web-based instructional materials. Model 2 adds busy, a variable indicating the demands on students' time outside the classroom; it is a proxy for students who work, participate in intercollegiate athletics, or are involved in other extracurricular activities. Because of concerns that the score on the midterm exam might be correlated with the error term, we also estimated these models excluding that variable; doing so had no appreciable effect on the results.

As expected, the student's score on the midterm is an important determinant of the score on the final exam: one additional point on the midterm raises the score on the final exam by 0.38 points. Race and whether or not the student has a job are also statistically significant at the 10% level. The average score on the final exam is 54. Whites score 7.6 points higher than nonwhites, about 14% of the mean. Those with jobs score about 7 points higher, 13% of the mean, than those without jobs. In addition, whether or not the student transferred carries a positive coefficient, with a t-statistic of about 1.6.

Of the internet variables, only the number of postings to the bulletin board is statistically significant at the 5% level or better. This coefficient is 0.51, so an additional 15 postings to the bulletin board would have about the same effect on the final exam score as race. The number of attempts at quizzes over the course of the semester carries a positive coefficient with a t-statistic around 1.5; its P-value of 0.13 places it just short of significance at the 10% level. The number of articles read is clearly not relevant, as its t-statistic is well below 1 in absolute value.
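The back-of-envelope comparison in the text can be checked directly, using only the coefficients reported above:

```python
# The bulletin-board coefficient (0.51 points per post) times 15 posts
# is roughly the race coefficient (7.6 points), as the text claims.
post_coefficient = 0.51
race_coefficient = 7.6
effect_of_15_posts = 15 * post_coefficient
print(round(effect_of_15_posts, 2))  # 7.65, close to the 7.6-point race effect
```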
These results suggest that students' posting to the class bulletin board is strongly associated with higher scores on the final examination, and that a student's use of the online practice quizzes is somewhat associated with higher scores on the final exam. Bulletin boards, to the extent that students can be induced to make posts, thus appear to be an effective web-based instructional technique. Note that read, the number of bulletin board posts read by each student, is statistically insignificant in these results. This suggests that "lurking" (passively reading bulletin board posts without actively participating in the asynchronous communication) does not have the same payoff, in terms of performance on the final exam, as active participation in bulletin board discussions. This may be due to the additional thought and interaction with the course material involved in composing bulletin board posts, relative to simply reading what others have written. Designers of web-based instructional material should therefore focus on activities that encourage active participation in asynchronous communication and discourage "lurking."
Conclusions

We set out to describe and analyze student use of web-based materials in principles of economics classes. Students in three sections of principles of macroeconomics and microeconomics were provided with an array of web-based material, including content, computer-graded quizzes that could be taken multiple times, and synchronous and asynchronous communication tools. Students made extensive use of the online quizzes, completing about two-thirds of the total available quizzes. The distribution of the total number of "hits" on pages in the course suggests that students did not ignore the web-based content, but that a majority of students probably did not return to this material multiple times. A majority of students were somewhat reluctant to post to the class bulletin board, although a small but significant fraction posted frequently. A majority of students read the bulletin board postings.

Student utilization of the web-based material was significant, especially considering that these students were primarily freshmen and sophomores and campus residents. Much of the research on student use of web-based material comes from classes taught at a distance and composed of adult learners who work full time while attending school. These adult learners may face high opportunity costs of going to the library to access reserve materials, coming to office hours, or forming study groups, and thus would be expected to utilize web-based material more often. Younger campus residents, however, have relatively lower opportunity costs of taking advantage of these traditional materials and resources and might therefore be expected to make greater use of them. This is especially true if reserve materials, office hours, and study groups are substitutes for web-based materials. The observed utilization in our sections suggests that web-based materials can be useful even for resident undergraduate economics students.
Our analysis suggests that online practice quizzes can be an effective tool. Taking multiple quizzes on a topic significantly increased the high score on that topic, and there is some evidence that more attempts at the practice quizzes were positively correlated with the student's score on the final exam. Posting to the class bulletin board also appears to be positively correlated with performance, although passive reading of posts made by others is not. Use of online content, as measured by the number of "hits" on class web pages, was not correlated with performance.

Posting to the bulletin board was a good predictor of performance, but reading without posting ("lurking") was not. If this correlation between posting and performance reflects learning, then instructors using bulletin boards to enhance their principles courses should focus on developing interesting discussion topics and designing discussion exercises that draw more students into the online discussion.

Many publishers are rushing to provide online content related to texts, and faculty often think that making their notes or lecture slides available will help students. But our data suggest that the online content was not used extensively, and our measure of use of online content was not a good predictor of students' performance. Before more effort goes into making online content available, we need to know more about the use and effectiveness of such material.

Finally, we note that these conclusions and observations are based on a relatively small sample of students. More data collection and analysis need to be done in this area before definitive conclusions can be reached. We view this as an ongoing research project and plan to continue to collect data. We strongly encourage other faculty who use web-based material in their classes to take the time to undertake similar studies.