Web Browsing, Mobile Computing and Academic Performance
Michael Grace-Martin, M.A.
Geri Gay, Ph.D.
Laptops in Classrooms
A number of educational benefits have emerged from some of the “laptops in the classroom” research done to date; among them: increased student motivation (Gardner, 1994; Rockman, 1998), better developed professional/job skills (Dwyer, 1994; Rockman, 1998), increased collaboration among students (Dwyer, 1994; Rockman, 1998), better school attendance (Stevenson, 1998), better problem-solving (Dwyer, 1994; Rockman, 1998), better and/or more sustained academic achievement (e.g., Dwyer, 1994; Stevenson, 1998; Rockman, 1998; Fisher, 1998), better writing skills (Dwyer, 1994; Rockman, 1998), and extension of the school day—i.e., students tend to keep working on school-related assignments on their laptops after the regular school day is over, for example, in the evenings at home (e.g., Kiaer, 1998; Rockman, 1998). However, a few studies have acted to generate some skepticism about these benefits (e.g., Gardner, 1993; Fisher, 1998). Indeed, some education technologists have questioned the ostensible hegemony of the optimism regarding Internet technology and laptops in the classroom (e.g., Roschelle, 1999; Albion, 1999). It should be noted that a majority of these studies involved K-12 (versus higher education) students in classrooms in which students were provided with hard-wired connections to the Internet (versus wireless connections).
Web Browsing Behavior
Comparable, quantitative measures of browsing behavior are lacking in the research literature. There have been attempts at “characterizing” browsing behavior in the context of recommender/collaborative filtering systems (e.g., Fab [Balabanovic, 1997], Letizia [Lieberman, 1999], GroupLens [Resnick, 1994]). But these “characterizations” are content-based (i.e., characterizations of the actual content of the Internet resources the user has been viewing) rather than descriptive of the nature of the browsing activity itself (e.g., amount of browsing, number of different URLs viewed, etc.); moreover, they are typically complex, implicit mathematical constructs that have little correspondence to human behaviors or cognitive constructs—and weren’t meant to.
Peck et al. (1992) created a GOMS model—called “Browser-Soar”—meant to simulate a user’s perceptual, cognitive and motor operations as they use a Web browser. Their model was able to account for 90% of the browsing behavior observed in ten browsing sessions. Several browsing “operators” were identified, including high-level operators like define-search-criterion, evaluate-search-criterion, modify-search-criterion, and search-for-help, and low-level operators like scroll, page, and click-item. Although this model is quite useful as a means for breaking browsing behaviors into their simpler component operations, studying individual differences, and making predictions about users’ subsequent browsing behaviors, many of the higher-level operators are not observable behaviors (e.g., evaluate-search-criterion, generate-evaluation-criterion, evaluate-current-window, etc.); and many of the observable behaviors (e.g., scroll, page, drag, etc.) would be impossible to infer in a large, relatively long-term, naturalistic study from the data typically available (e.g., Web server logs, proxy server logs, class observer notes, network traffic reports, post hoc self-reports, periodic short-term user journals, etc.).
There has been some work on the types of browsing-related activities users engage in. A study by Byrne et al. (1999) provides a “taskonomy” of naturalistic (i.e., undirected) Web browsing. They observed six general categories of Web tasks, which they referred to as: “Use Information” (when users read, listened to, viewed/watched, duplicated [copy-and-paste], downloaded to local disk, displayed for others, or printed—information from a Web page), “Locate on Page” (locating specific information on a particular Web page), “Go To Page” (includes clicking the back/forward button, bookmarks, hyperlinks, typing in a URL, history menus, the Home button, and so on), “Provide Information” (user provision of product selection, authentication info, addresses, search criteria, and so on), “Configure Browser” (e.g., modifying aspects of the browser window, managing bookmarks, anything dealing with the browser itself rather than the content it is displaying), and “React to Environment” (browser requires user action, like a “reload”). They found the “Use Information” classification to be the dominant category (in terms of Total and Average Time), with “reading” the dominant sub-category within it (in terms of both Total Time and Number of Events). The “Locate on Page”, “Go To Page” and “Configure Browser” categories exceeded the “Use Information” category when evaluated in terms of Number of Events rather than Time.
The results of this study were utilized by the authors to make Web browser and Web page design recommendations. The categories lend themselves well to this purpose; and unlike some of the “Browser-Soar” operators, all of this study’s independent variable instantiations are observable behaviors. No attempt was made to look at individual differences among subjects, or to correlate these differences with something like performance on a browsing task.
Although these browsing activity categories could potentially be used to look at individual differences, they present a few problems. Logistically, a number of the activities are difficult to capture over a network because they occur only locally, within the user’s client application (e.g., scrolling, reconfiguring the browser, downloading to local disk, copying and pasting, and others). Even activities like “reading”, “listening” and “viewing/watching” can only be inferred from available network data—e.g., knowing that a user has accessed a particular Web page doesn’t mean they’re paying attention to it. Measures like total/average time and number of events would lend themselves to quantitative/statistical analyses; the problem is getting at these measures in a large-scale, naturalistic study from an unobtrusive distance.
Thury (1998) and others (e.g., Spool, 1999; Marchionini, 1988) have observed individuals navigating hypertext systems of information and have described various (sometimes repeatable) phenomena that some savvy Web designers have been able to leverage to create more “usable” Web pages. Often in these studies, subjects are given specific, directed tasks to complete as the experimenter observes and records their behaviors—while attempting to discover recurrent patterns in their behaviors related to either the nature of the task they are asked to perform and/or characteristics of the stimuli (i.e., the Web pages/interfaces) they must interact with to complete the task. The reports from most of these kinds of studies often emphasize the characteristics of the hypertext pages that either enhanced or hampered user performance for the types of tasks the users were given (e.g., when a hyperlink is not underlined, users often have difficulty determining that it is a “clickable” link when searching a Web site for specific information).
The Thury study differs from many of the others in that she observed her subjects’ (her students’) Web browsing behaviors in an undirected context—i.e., the Web browsing behavior she observed was not in response to specific, experimenter-provided questions or tasks. She argues that the tasks that evoke Web browsing in a classroom situation are much less “directing” than the tasks typically given to subjects in these “usability” studies, such that researchers’ emphasis on building information-retrieval models of Web browsing reflect the nature of the contrived tasks they are giving subjects rather than anything inherent about the activity of Web browsing itself.
Her point is a good one; however, for the purposes of the present paper, she doesn’t provide a very measurable/systematic means for characterizing students’ browsing behaviors. Some examples of her observations: “Students often misunderstand what they are looking at when they use the Web” and “It seemed as if the students kept on searching not until they found something worthwhile, but until an arbitrary trigger ended their searching phase”. She goes on to use these observations as the basis for some Web usability/design-oriented recommendations.
There are at least three important observations to note regarding these studies:
As the result of a gift by a generous corporate sponsor (Intel Corporation), we had the opportunity to supply laptop computers to students in two university courses (a Communication course and a Computer Science course) for use over a full semester and to study their use of these computers. The computers included wireless network cards that gave these students wireless access to the campus network from a number of strategic locations on campus: 1) in and around the two classrooms, 2) in and around the major campus libraries, 3) in and around a major campus cafeteria, 4) outdoors, near/between these locations. When students used their laptops to browse the Web, browsing (URL, time, date, etc.) was captured by a proxy server and recorded in an extensive log file. The laptop computers were distributed to the students during the second week of classes; they were returned during the last week of classes.
In this study, we correlated the amount (e.g., number of times, number of minutes) a laptop computer was used by a student for Web browsing with the student’s academic performance. To our knowledge, this is the first time a continuous measure of usage (instead of a discrete measure, like laptop user versus non-laptop user) has been correlated with performance in a classroom laptop study. Our intention was to evaluate some of the findings observed in previous studies—especially those regarding improved academic performance and extension of the school day. This study, then, is one of very few to correlate characteristics of a student’s actual behavior related to using a laptop computer inside and outside of the classroom with their resulting individual academic performance.
Though the two courses both related to new media issues, they were quite different in terms of format/pedagogy, student characteristics/demographics and nature of assignments.
Communication 440 Course Description
Students were encouraged to explore research and design issues relevant to the social uses of the Internet. Through readings and class exercises, students examined different computer-mediated communications and computer-supported cooperative work environments to understand issues affecting the design and use of information systems.
Computer Science 502 Course Description
CS 502 was designed to be a primer for the numerous technical, economic, social and legal challenges and questions that surround the design and implementation of digital libraries. The course highlighted contemporary issues and challenges in data conversion, storage and representation, information retrieval, usability issues, and the representation and negotiation of intellectual property and copyright concerns.
In this study, we were interested in whether the amount of use of mobile and/or wireless computing would have a positive or negative impact on students’ academic performances. We wanted to investigate the types of browsing behaviors related to superior academic performance versus sub-par performance. We wanted to see if the circumstances (e.g., class format, subject matter) mattered and how/if they interacted with attributes of browsing behavior.
The subjects consisted of students enrolled in CS 502: Computing Methods for Digital Libraries and Comm 440: Computer Mediated Communication—Explorations in Work, Learning and Play. Both were upper level courses—normally taken by upper-classmen—in their respective disciplines. 53 students completed CS 502 and 29 students completed Comm 440, yielding a total of 82 subjects for our experiment. The CS 502 students were composed primarily of computer science and engineering majors. The Comm 440 students were a more diverse group, composed of Communication majors (10), Art (5), Agriculture (4), Engineering (2), and a smattering of others.
The following quantitative independent variables were computed for each subject from the proxy server log:
NumURLs: the total number of URLs a subject viewed over the course of the semester (note: one Web page can contain references to multiple text and binary files—i.e., there’s a many-to-one relationship between URLs and a single Web page).
NumMins: this is an estimate of the total number of minutes a subject spent browsing Web content over the course of the semester. Because an HTTP connection isn’t normally a continuous connection (i.e., once a file has downloaded to the browser, the browser is no longer connected to the Internet until the next page/file request), there must be some degree of presumption in order to get some measure of the time the subject spent on the downloaded content. We chose to allot one minute of browsing time per Web page/file request (e.g., if a subject downloaded ten URLs at 1:02pm, these would be counted as one request [remember the many-to-one URL to Web page relationship] and counted as one minute of browsing). Now, if the subject went on to access another page/file within 10 minutes (“10” was an arbitrary cutoff we chose) of the previous download request (say, at 1:08pm), then the difference in minutes between the two times was added to the total number of minutes (in this example case, we would now have 1 + 6 for a total of 7 minutes of browsing). Then, if another download request is made within 10 minutes of the time of the last request (e.g., say, at 1:16pm), then the difference between this time and the last download time (1:08pm) is again added to obtain the total browsing minutes (i.e., 7 + 8 = 15 minutes).
NumDays: total number of different dates on which browsing activity for the subject was observed/recorded over the semester.
DaySpan: number of days between first and last days of observed/recorded browsing for a subject. Useful for “normalizing” NumDays variable for subjects who didn’t start browsing until later in the semester (for whatever reason), or who stopped browsing well before the end of the semester.
Sessions: total number of “browsing sessions” over the semester per subject. The 10-minute cutoff rule (as described in the explanation of the NumMins variable) was used to differentiate discrete browsing sessions. So, less than ten minutes between consecutive URL requests indicated the continuation of an ongoing session; more than ten minutes represented the beginning of new, distinct browsing session. Though the selection of “10” was mostly arbitrary (and the specific number chosen deemed not overly critical, as long as it was uniformly applied in the data analysis), it was thought that 10 minutes was enough time to read/attend to the full content of most Web pages before moving on.
Mins/Ses: this is the NumMins variable divided by the Sessions variable. It represents a subject’s “Average Session Length” in minutes.
Mins/Day: this is the NumMins variable divided by the NumDays variable. It represents the average number of minutes a subject spent browsing per day of actual browsing.
Sess/Day: this is the Sessions variable divided by the NumDays variable. It represents a subject’s average number of browsing sessions per days of actual browsing.
Sess/Span: this is the Sessions variable divided by the DaySpan variable. It represents a subject’s average number of browsing sessions over the entire period of days from first browsing session to last. This variable provides a measure of how ‘consistent’ or ‘sporadic’ a student’s browsing was—i.e., a higher number tends to indicate browsing was more consistent.
URLs/Min: this is the NumURLs variable divided by the NumMins variable. It represents the average number of URLs a subject viewed per minute of browsing (the inverse of the average time spent per URL).
URLs/Ses: this is the NumURLs variable divided by the Sessions variable. It represents the average number of URLs a subject browsed per session.
URLs/day: this is the NumURLs variable divided by the NumDays variable. It represents the average number of URLs a subject browsed per day of actual browsing.
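The session-segmentation and minute-allotment rules described above (for NumMins, Sessions, NumDays, and DaySpan) can be sketched as follows. This is an illustrative reconstruction, not the original analysis code; the function and variable names are our own.

```python
from datetime import datetime, timedelta

# 10-minute cutoff used to separate distinct browsing sessions
SESSION_CUTOFF = timedelta(minutes=10)

def browsing_metrics(request_times):
    """Compute NumMins, Sessions, NumDays, and DaySpan for one subject
    from a chronologically sorted list of request datetimes (one entry
    per page/file request batch, per the many-to-one URL rule)."""
    num_mins = 0.0
    sessions = 0
    prev = None
    for t in request_times:
        if prev is None or (t - prev) > SESSION_CUTOFF:
            sessions += 1        # gap over 10 minutes: a new session,
            num_mins += 1.0      # allotted one minute for its first request
        else:
            # within the cutoff: credit the elapsed minutes to the total
            num_mins += (t - prev).total_seconds() / 60.0
        prev = t
    num_days = len({t.date() for t in request_times})
    day_span = ((request_times[-1].date() - request_times[0].date()).days
                if request_times else 0)
    return {"NumMins": num_mins, "Sessions": sessions,
            "NumDays": num_days, "DaySpan": day_span}
```

Requests at 1:02, 1:08, and 1:16 pm thus count as one session and 1 + 6 + 8 = 15 browsing minutes, matching the worked example in the NumMins description.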
There were three nominal independent variables that figured into our analyses: course (Comm 440 versus CS 502), browsing context (During Class, Other Wireless, or At Home), and gender.
The dependent variable was Final Course Grade. The final course grade in CS 502 was based on recitation and lecture participation (33%), two short answer examinations (33%), and four assignments requiring searching/researching on the Web (33%). The final course grade in Comm 440 was based on participation in online discussions (15%), developing discussion questions for one class session and one lab session (20%), 3 paper submissions for final group project (45%) and 2 peer reviews related to final group project (20%).
We obtained permission from Cornell’s Subjects Review Board to capture the browsing behavior of these students via a proxy server. The browsing software clients (Internet Explorer and Netscape) were configured on the students’ laptops to go through the proxy server by default; students, however, were given—and made aware of—their option to bypass the proxy server as desired. (To bypass the proxy server, students were required to select the “proxy off” option made relatively conveniently available from their Windows 98 Start menu. They had to do this before each use of the browsing software, because upon being re-started, the browsing software would revert, by default, to going back through the proxy server.) Reports from students, the plentiful appearance in the log of URLs for socially stigmatized content (such as pornography), and the voluminous browsing data captured from nearly all of the students in both classes indicate that few students regularly exercised the option to bypass the proxy server.
Approximately three weeks into the semester, the decision was made to require authentication at the proxy server. Although this decision made students’ path through the proxy server a little less invisible (i.e., they had to supply a user name and password each time they started up a Web browser—however this user name and password could be saved by the browser so that they didn’t have to actually be typed in each time), it allowed us to identify (at the level of the individual) the browsing students did from home on their laptops. Prior to this, we were only capturing IP addresses, which uniquely identified laptops only while students were on the wireless network (i.e., students’ wireless IP addresses were static/fixed), not when they were at home on various remote access connections (e.g., modems, cable modems, ethernet connections).
Use of the laptop computers by students was not limited to the test course in which the student received the laptop; students were free to use the laptops as they wished—including: completing work for other courses, entertainment and/or recreation, employment-related tasks, social activities, etc.
By the end of the semester, the proxy log contained more than 1.7 million Web URLs representing the Web content accessed by the students in the two test classes, 24 hours/day, 7 days/week over the last 15 weeks of the 16-week semester.
These records were imported into an Access database so the data could be managed, manipulated, and extracted using structured query language (SQL) statements.
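As a rough illustration of this kind of SQL extraction, the following sketch uses Python’s built-in sqlite3 in place of Access; the three-column schema (student, url, ts) and the sample rows are hypothetical, not the study’s actual schema or data.

```python
import sqlite3

# Hypothetical schema standing in for the Access database: one row per
# logged request, with an authenticated student id, URL, and timestamp.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (student TEXT, url TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO log VALUES (?, ?, ?)",
    [("s1", "http://example.edu/a.html", "2000-03-01 13:02"),
     ("s1", "http://example.edu/b.gif",  "2000-03-01 13:02"),
     ("s2", "http://example.edu/c.html", "2000-03-01 14:10")],
)

# Per-subject NumURLs and NumDays, in the spirit of the variables above
rows = conn.execute(
    """SELECT student,
              COUNT(*)                 AS num_urls,
              COUNT(DISTINCT DATE(ts)) AS num_days
       FROM log
       GROUP BY student
       ORDER BY student"""
).fetchall()
```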
All statistical analyses were performed in Minitab version 12 for Windows. A Pearson product-moment correlation coefficient between Final Course Grade and each of the quantitative independent variables was computed. Correlations with p-values of .05 and below were considered statistically significant for the purposes of this study.
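For reference, the Pearson coefficient itself requires nothing beyond the standard library; the reported p-values come from referring the t statistic below to a t distribution with N - 2 degrees of freedom (a step we leave to a statistics package such as Minitab or SciPy). This is an illustrative sketch, not the study’s actual Minitab procedure.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    sequences (here, a browsing variable and final course grades)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t statistic with n - 2 degrees of freedom from which the
    two-tailed p-value for r is obtained."""
    return r * math.sqrt((n - 2) / (1 - r * r))
```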
Research Question #1: Is there evidence indicating academic performances are enhanced by students taking advantage of nearly ‘ubiquitous’ access to mobile, networked computers?
Statistically significant positive correlations between independent variables indicating “quantity” of browsing and final course grade will tend to support enhancement of academic performance as result of ubiquitous computing.
Research Question #2: Is there evidence indicating students’ academic performances are enhanced by having access to the laptop computers outside of the classroom—thereby “extending the school day”?
Statistically significant positive correlations between independent variables indicating “quantity” of browsing and final course grade—for browsing recorded between classes and/or from home—will tend to support the “extension of the school day” claim.
Research Question #3: Are there mediating factors affecting the valences of questions 1 and 2 that can be isolated?
Statistically significant correlations within one browsing context but not another, for students in one course but not the other, or for one gender group but not the other, will tend to implicate browsing context, gender and/or course (respectively) as significant factors mediating correlations between Web browsing and academic performance.
Table 1. Browsing Descriptive Statistics
As shown in Table 1, the mean final grade for the two classes was quite similar: 90.0 for Communication 440 (Comm 440) and 91.7 for Computer Science 502 (CS 502); in terms of letter grades, these indicate mean final grades of approximately B+ in both cases. The mean URLs/minute (note: We included all URLs, including those not referencing files ending in .htm or .html. Therefore, there is a one-to-many relationship between “a Web page” and the URLs associated with it.) was also quite similar: 11.29 for Comm 440 versus 11.21 for CS 502 (note: The similarity on this measure for the two groups suggests that the graphical and ad content of the Web pages visited by students in both classes was likely similar—assuming neither group of students browsed through individual Web pages consistently faster or slower than the other.). The two groups diverge when it comes to amount of browsing in terms of number of sessions, length of sessions, and total browsing minutes per day. The smaller Number of Sessions and Minutes/Day values for the CS 502 class may be partially due to the greater availability of computer lab facilities for CS students—i.e., they were more likely to substitute use of lab computers for use of their laptops while on campus before and after class. (Self-reports by CS 502 students during a focus group conducted by a member of our research group pointed to this latter practice.)
Table 2. Browsing Context (URLs:Mins)
For each pair of percentages in Table 2, the first number is the percentage in terms of total URLs for the column; the second number is the percentage in terms of total Minutes for the column.
The comparative usage context profiles shown provide further evidence that CS 502 students were more likely than Comm 440 students to use lab computers while on campus. Almost half of the Comm 440 students’ use of the laptops for Web browsing occurred on the campus wireless network outside of class; this compares to less than a quarter for the CS 502 students.
Correlations with Final Grade
Communication + Computer Science Classes
Due to the interactions we found with independent variables that differed between the two classes, there were relatively few statistically significant correlations between browsing behavior and final grade for the combination of students from both classes. However, a few statistically significant correlations did emerge from the During Class context. (Note: the full correlation results are included in the appendix of this paper.)
During Class Lecture
Longer browsing sessions were associated with lower academic performance (corr. coeff. = -0.284, p = .029), perhaps due to the prolonged inattention to the instructor and/or in-class activities extended browsing sessions may generate. A statistically significant positive correlation with the NumDays variable (corr. coeff. = +0.260, p = .047) may simply reflect a likely positive correlation between class attendance and final grade more generally. One other correlation from the During Class context that nears statistical significance is the one with Number of Sessions (corr. coeff. = +0.234, p = .074): more sessions over the course of the semester tended to accompany a higher final grade. So, longer browsing sessions during class tended to go with lower grades; but there’s a hint that a greater number of browsing sessions during class may actually go with higher grades.
Communication Class
Overall—i.e., across all browsing contexts—the more browsing Comm 440 students did, the worse their final grades. There were negative correlations with Number of Sessions (coeff=-0.434, p=.044), Minutes/Day (coeff=-0.395, p=.069), and Sessions/Day (coeff=-0.435, p=.043).
During Class Lecture
Contrary to the overall trend for Comm 440 students, but consistent with the trend for the During Class context for the two classes combined, there was a positive correlation between Sessions per Day/Class and final grade (corr.coeff.=+0.485, p=.03). Although final grade was negatively correlated with Session Length, it was not significant (corr.coeff.=-0.225, p=.34). Session Length was likely less of a liability in the Communication class because in-class browsing was actually encouraged during—and integrated into—numerous in-class activities.
Wireless Outside of Class Lecture
There were no statistically significant correlations at α = .05. The closest was the negative correlation with Sessions/Day (coeff=-0.330, p=.13).
At Home
Outside of the classroom and away from the ‘structure’ of the school day on campus, there’s robust evidence that Comm 440 students’ academic performances degraded with Web browsing. There were strong negative correlations with Number of URLs (coeff=-0.501, p=.029), Number of Minutes (coeff=-0.478, p=.039), and Number of Days (coeff=-0.495, p=.031). As in the case of overall browsing, there were negative correlations with Number of Sessions (coeff=-0.463, p=.046) and Sessions/Day (coeff=-0.460, p=.047).
Computer Science Class
There were positive correlations with Number of Sessions (coeff=+0.286, p=.051) and Sessions per Day Span (coeff=+0.388, p=.007). Overall, then, the more browsing—and the more “consistent” browsing—the Computer Science students did on their laptops, the higher the final grade they received in the course.
During Class Lecture
There were negative correlations with Session Length (coeff=-0.311, p=.054) and Minutes per Class (coeff=-0.358, p=.025). So, the more time computer science students spent continuously browsing during class lectures, the lower their final grades tended to be.
Wireless Outside of Class Lecture
There were no statistically significant results at α = .05, but there was a trend toward a negative correlation with Session Length (coeff=-0.387, p=.083). It should be noted that only a minority of CS students (about 39%) actually used their laptops on the wireless network outside of the class lecture, and much of that use was probably during the class discussion section.
At Home
There were no statistically significant correlations at α = .05. The one correlation that approached significance was the positive one between final course grade and Number of Sessions per Day Span (coeff=+0.329, p=.061).
Gender Differences
We wondered whether there might be any significant differences between male and female students regarding the relationships between browsing variables and final grades. Since the computer science class had only two female students, we confined this analysis to students from the communication class.
In the overall and “home” contexts, there were differences in terms of ‘degree’ (i.e., correlations in the same direction, but somewhat greater for one gender group versus the other), but nothing close to an interaction. For example, across all browsing contexts, the Sessions/Day variable was negatively correlated with grade for both male and female students. While the correlation was statistically significant for the female students at α = .05 (N=7; correlation coefficient=-0.754, p=.05), it approached significance for the male students (N=15; corr.coeff.=-0.479, p=.071).
On the wireless network, though—both during class and before/after class—the correlation between URLs/Minute and final grade was significantly stronger for female students (During Class: N=6, corr.coeff.=+0.833, p=.039; Other Wireless: N=7, corr.coeff.=+0.867, p=.011) than for male students (During Class: N=15, corr.coeff.=+0.034, p=.91; Other Wireless: N=15, corr.coeff.=+0.161, p=.565). This finding suggests that female students benefited grade-wise from browsing more quickly through URLs, whereas male students did not. (An interpretation of this result requires closer examination of the actual content of the URLs students browsed and is beyond the scope of this paper.)
Across both courses, the longer the average browsing sessions students engaged in during class, the lower the final grades they tended to receive (coeff=-0.284, p=.029). This suggests that longer browsing sessions during class tend to be a liability for students’ academic performances regardless of the nature of the students or the course. The other statistically significant correlation—the positive one between final grade and the number of different days browsing variable (NumDays: corr. coeff. = +0.260, p = .047)—is probably confounded with a likely positive correlation between final grade and class attendance, driven by a high positive correlation between the NumDays variable and class attendance.
Comm 440 students appeared to benefit much more from having the computers in the classroom than the CS 502 students did. The collaborative and interactive nature of the course, and possibly of the students that would take this type of course, may have turned the tide in favor of having the laptops when compared to their use within the more unidirectional lecture format of CS 502.
Outside of the classroom, however—particularly at home—CS 502 students were the ones who tended to benefit, while Comm 440 students suffered considerably from greater Web use.
The one gender-related finding that turned up was a significant difference on the wireless network in terms of the URLs/Minute variable: a greater URLs/Minute rate was correlated with higher grades for female students (During Class: N=6, corr.coeff.=+0.833, p=.039; Other Wireless: N=7, corr.coeff.=+0.867, p=.011), but not for male students (During Class: N=15, corr.coeff.=+0.034, p=.91; Other Wireless: N=15, corr.coeff.=+0.161, p=.565). At the very least, this result points out the differential impact of different browsing styles that vary with user characteristics.
What do these results say about the “extension of the school day” and “educational benefits of ubiquitous network access” claims? Simply that such benefits may exist for some populations in some contexts, but that characteristics of the user and his/her educational environment may limit or even reverse these benefits when measured in terms of academic performance. The advisability of introducing laptop computers into a curriculum or classroom, then, may hinge on characteristics of the students (e.g., major and gender), aspects of course content and/or class structure, and the availability of other computing facilities to students on campus.
It’s also clear that the existence or absence of ubiquitous network access may significantly alter a student’s use of a laptop computer. Another study based on data from these same subjects showed social computing (e.g., email and instant messaging) to be one of the primary uses of the wireless laptops by students (Gay, in press). When immoderate Web browsing leads to lower grades—for example, during lecture-driven classes or at home in a dormitory in some cases—student achievement and productivity may be boosted by limiting network access in certain contexts and forcing a focus on the content and applications (based on instructor input/recommendations) provided on the laptop itself. Some education technologists have even suggested the need for specific-purpose computing devices in the classroom (e.g., a PalmPilot outfitted only with a particular sensor probe [for data collection] and associated software for the day’s lesson) versus laptop computers running typical general purpose software, like word processors, spreadsheets, Web browsers, etc. for keeping students on task (Soloway, 1999).
For the study described in this paper, we purposely avoided examining the content of the URLs students were browsing. This was both to keep the study manageable and to see if something insightful could be gleaned from observing behavioral browsing characteristics alone. Although we believe this to have been a fruitful direction, we would not reject the possibility that integrating an investigation of the content of the URLs with these data may provide even more insight. However, including the URL content data introduces additional, potentially complex variables that may act to ‘muddy’ the picture. Furthermore, generalizing from specific interactions between content types, browsing contexts, browsing behavior, course characteristics and student demographics to other courses and learning environments may actually reduce predictive accuracy. Nevertheless, an examination of URL content—whether in conjunction with the quantitative browsing data or not—will follow.
The following tables show the correlations between the independent browsing variables (labeled at the top of each column) and the dependent variable, Final Grade. In each column, the upper number is the Pearson correlation coefficient and the lower number is the p-value.
Students=All (both classes), Context=All, Gender=All, N=69
Students=All (both classes), Context=During Class, Gender=All, N=59
Students=All (both classes), Context=Other Wireless, Gender=All, N=45
Students=All (both classes), Context=At Home, Gender=All, N=52
Students=Comm 440, Context=All, Gender=All, N=22
Students=CS 502, Context=All, Gender=All, N=47
Students=Comm 440, Context=During Class, Gender=All, N=20
Students=CS 502, Context=During Class, Gender=All, N=39
Students=Comm 440, Context=Other Wireless, Gender=All, N=25
Students=CS 502, Context=Other Wireless, Gender=All, N=20
Students=Comm 440, Context=At Home, Gender=All, N=19
Students=CS 502, Context=At Home, Gender=All, N=33
Students=Comm 440, Context=All, Gender=Female, N=7
Students=Comm 440, Context=All, Gender=Male, N=15
Students=Comm 440, Context=During Class, Gender=Female, N=6
Students=Comm 440, Context=During Class, Gender=Male, N=15
Students=Comm 440, Context=Other Wireless, Gender=Female, N=7
Students=Comm 440, Context=Other Wireless, Gender=Male, N=15
Students=Comm 440, Context=At Home, Gender=Female, N=5
Students=Comm 440, Context=At Home, Gender=Male, N=14