As a college parent, you probably have very little influence over the amount of time your college student spends studying. That is appropriate, as you begin to allow your student to gain independence and control over his choices and decisions. However, you might help your student understand the importance of investing enough time in his work in order to do well. As a parent, you may be able to help your student think through the realities of how he spends his time. Then, of course, it will be your job to step back and let him find his way.
The college experience is about more than just coursework. College is a time to meet new people, experience new things, and work at gaining independence. But college is also about classes, exams, studying, working with professors, and, hopefully, gaining a wealth of useful knowledge and new ways of thinking. In order for students to succeed, they need to put in the time. Unfortunately, many students either do not understand the amount of time necessary to do well in college, or they do not prioritize the amount of time they need to spend studying.
What is expected?
The long-standing rule of thumb for college studying is that, for each class, students should spend approximately 2-3 hours of study time for each hour that they spend in class. Many students carry a course load of 15 credits, or approximately 15 hours of class time each week. Some simple math indicates that your student should be spending roughly 30 hours studying on top of 15 hours in class. Those 45 hours are the equivalent of a full-time job – the reason that your student is called a full-time student. For many students, this number is a surprise.
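The rule-of-thumb arithmetic above can be sketched in a few lines of Python; the numbers simply restate the guideline from the text, not a fixed standard:

```python
# Rule-of-thumb weekly time budget for a full-time student.
# The 2x study multiplier is the conventional guideline cited above.
credit_hours = 15      # hours of class per week (a 15-credit load)
study_multiplier = 2   # study hours per class hour (rule of thumb)

study_hours = credit_hours * study_multiplier
total_hours = credit_hours + study_hours

print(study_hours)  # 30 hours of studying
print(total_hours)  # 45 hours -- roughly a full-time job
```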
For students who were able to get by in high school with very little study time, this is more of a shock than a surprise. Many students spent little more than 4-5 hours per week studying in high school. (Yes, there are students who spent significantly more than this studying in high school, but they are not the majority.) One study has suggested that many students in college study an average of 10-13 hours per week. This is the equivalent of less than 2 hours per day. Only approximately 11% of students spent more than 25 hours per week studying. Clearly there is a significant gap between the reality (10-13 hours) and the ideal (30+ hours).
Students come to college expecting it to be harder than high school, and expecting to spend more time studying. However, they may not realize the degree of difference with which they will be confronted. These students want to do well; they simply do not yet understand what is required from them to do well.
There are some additional factors that may affect the amount of time students spend studying.
- Expectations – Some researchers have suggested a correlation between the amount of time a student expects to study when she comes to college and the amount of time she actually spends. Students who arrive with lower expectations about the time required may spend less time studying.
- Attitude – Some students may not only have an unreasonable sense of the amount of time required, but they may feel that once they have spent what they consider a reasonable amount of time studying they “deserve” a good grade. These students equate amount of effort with good grades. (“I deserve an A because I worked really hard on this paper.”) Students who couple unrealistic expectations with a grade entitlement attitude are going to be disappointed, unhappy, and angry.
- Social media – One small study has suggested that students who spent significant amounts of time on Facebook spent less time studying: an average of 1-5 hours per week, compared with the 11-15 hours per week that non-users reported. This does not mean that college students should avoid Facebook or other social media, which are a way of life for many students. It does suggest, however, that students need to be aware of how they spend their time and exercise some caution. Certainly, much more research will be done in this area.
- Alcohol – Another interesting study was conducted in 2008 by NASPA – Student Affairs Administrators in Higher Education. This study surveyed 30,183 students who took the Alcohol.edu on-line alcohol education course. It suggested that first-year students who used alcohol spent approximately 10.2 hours per week drinking and 8.4 hours per week studying. Again, this study should be kept in perspective, but it reminds us of the obvious: students who spend significant amounts of time in college drinking spend less time studying.
Most of these factors are not surprises. Obviously, students who spend significant amounts of their time doing other things – whether that is spending time on-line, drinking, working, or simply socializing – spend less time studying. What is important, however, is that students may not realize how much time they should be studying and they may not realize how much time they are actually studying.
Parents may need to help their students think about expectations and habits. It might help a student to think about the 168 hours in a week and keep a log of how that time is actually spent. It might help a student to reframe a college education as a full-time job, requiring the approximately 40 hours per week that a full-time job would. It may help a student to plan a realistic study schedule to manage study time more efficiently.
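One way to make the 168-hour week concrete is a simple time budget. The sketch below is illustrative; apart from the 15 class hours and 30 study hours discussed earlier, the category hours are hypothetical examples, not figures from this article:

```python
# Illustrative weekly time budget out of the 168 hours in a week.
# Category values other than class/studying are hypothetical.
WEEK_HOURS = 24 * 7  # 168

budget = {
    "class": 15,
    "studying": 30,
    "sleep": 56,        # 8 hours per night
    "meals": 14,
    "job": 10,
    "activities": 10,
}

committed = sum(budget.values())
free_time = WEEK_HOURS - committed
print(committed, free_time)  # 135 hours committed, 33 left over
```

Even with a full study load and a part-time job, a log like this usually shows more discretionary time than students expect, which is the point of the exercise.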
Once you help your student consider his study time management, however, it is important that you, as a college parent, let him take the lead in actually putting a plan into action. Your student will need to make his own choices and decisions. Hopefully, he will use his time wisely; if not, he will learn important lessons from his choices.
A substantial body of research affirms the commonsense notion that involvement in academic work and quality of effort pay off: the more students engage in educationally purposeful activities, the more they learn (see comprehensive reviews in Kuh et al. 2007 and Pascarella and Terenzini 2005). An important element is how much time students invest in studying (Astin 1993). Yet while time is important, it is increasingly clear that how students spend their study time also matters. Spending many hours memorizing facts in order to perform well on an exam may earn a good grade, but it is not likely to result in long-term retention or the ability to apply what was learned in novel situations (see Bransford, Brown, and Cocking 2000). A recent longitudinal analysis of student performance on the open-ended performance task of the Collegiate Learning Assessment, administered to the same students at the beginning of the first year and at the end of the sophomore year, found that hours spent studying alone corresponded to improved performance, but hours spent studying with peers did not (Arum, Roksa, and Velez 2008).1 While we should not ignore the importance of how study time is used, this article focuses on the simple question of how much full-time college students study, whether study time has declined, and if so, what may account for the decline.
In higher education, a well-established rule of thumb holds that students should devote two hours of study time for every hour of class time. Assuming a full-time load of fifteen credit hours, students adhering to this standard should spend thirty hours per week studying. But since its first national administration in 2000, the National Survey of Student Engagement (NSSE) has found that the average full-time college student falls well short of that standard. NSSE asks students how many hours they spend “in a typical seven-day week” on a variety of activities, including “preparing for class (studying, reading, writing, doing homework or lab work, analyzing data, rehearsing, and other academic activities),” and the results indicate that, on average, full-time NSSE respondents only study about one hour for each hour of class. This figure has been relatively stable from 2000 through 2010. For example, among some 420,000 full-time first-year students and seniors attending 950 four-year institutions in the United States in 2009 and 2010, only 11 percent of first-years and 14 percent of seniors reported studying twenty-six or more hours per week. About three out of five (58 percent of first-years and 57 percent of seniors) said they study fifteen or fewer hours per week. On average, full-time 2009 and 2010 respondents at US institutions studied only 14.7 hours per week. The results were comparable for Canadian students in NSSE 2009 and 2010, who studied an average of 14.3 hours per week. (Students taking all of their classes online were excluded from these analyses.) These findings also track closely with time-use studies from the late 1980s and early 1990s using both survey and time-use diary approaches (see Gardiner 1994, 51–53).
The Faculty Survey of Student Engagement (FSSE, a companion survey to NSSE) includes questions about how much time faculty members expect students to spend preparing for class, and how much they believe students actually spend. Interestingly, faculty expectations for student preparation time are much closer to what students actually report than to the conventional standard. In 2010, the average faculty expectation for study time was 16.5 hours per week, only two hours higher than what students reported. But when asked how much time they believe students actually spend preparing for class, faculty provided a low estimate of nine hours per week, on average. So the faculty perception is that students are studying about 7.5 hours less per week than they should. But what do long-term trends in college students’ study time look like?
Economists Philip Babcock and Mindy Marks recently assembled time-series survey data on college student time use from a number of sources spanning four decades (see table 1). Their study, titled “The Falling Time Cost of College: Evidence from a Half Century of Time Use Data,” will appear in a forthcoming issue of the Review of Economics and Statistics. While the journal article discusses the implications of diminished study time for understanding trends in the economic return to baccalaureate education and in human capital investment, the authors summarized their findings in the more sensationally titled “Leisure College, USA: The Decline in Student Study Time” published by the American Enterprise Institute (AEI) (Babcock and Marks 2010). As both titles indicate, they found evidence of a pronounced decline in the number of hours that full-time college students say they study, from about twenty-four hours per week in 1961 to fourteen hours per week in 2003. Although Babcock and Marks examined change in study time over three time periods (1961 to 1981; 1987, 1988, and 1989 to 2003, 2004, and 2005; and 1961 to 2003), I focus attention in this article on the long-term change from 1961 to 2003, which is also the focus of the AEI article.
Table 1: Data Sources on Study Time Analyzed by Babcock and Marks

Project Talent (1961) – Nationally representative sample of 1960 high school graduates; number of hours per week “on the average” spent “Studying (Outside of class)”; direct entry of hours.

National Longitudinal Study of Youth, 1979 (NLSY79) – Nationally representative sample of undergraduates; number of hours spent studying; direct entry of hours.

Higher Education Research Institute College Student Survey (HERI)1,2 – Time spent “during a typical week” on “Studying/Homework”; selection from discrete ranges.

National Survey of Student Engagement (NSSE) – Hours spent “in a typical seven-day week” preparing for class; selection from discrete ranges.

1HERI data from each three-year period were pooled to increase the likelihood of institutional matches between periods.
2HERI surveys were administered locally, with random sampling recommended but not verified.
Babcock and Marks devote a portion of each article to identifying and addressing factors that might account for the apparent drop in study time. I will briefly summarize these and the evidence marshaled to dismiss them. Next, I consider some possible explanations for the decline advanced by the researchers, adding some of my own to the list. I conclude with a discussion of what we are to make of these findings.
Accounting for possible confounding factors
A dramatic difference between undergraduate education in 1961 and today involves technology. The mechanics of information search and retrieval, and of preparing and revising written assignments, have profoundly changed since 1961. Information that previously required a visit to one or more libraries, sometimes even at other locations, is often only a few mouse clicks away today. With regard to writing, most students now compose at the keyboard rather than writing longhand and transcribing. Sentences and whole paragraphs can be inserted, altered, moved, or removed in a matter of seconds, whereas in the past such editing often meant arduously rewriting or retyping pages. Given these changes, it seems plausible that some of the change in study time may reflect efficiency gains due to new technologies. But Babcock and Marks counter that the lion’s share of the drop in study time occurred between 1961 and 1981, predating the wide adoption of microcomputers, modern word processors, and easy electronic access to research sources over campus computer networks. So, new technology fails to explain most of the decline.
It is well established that subtle variations in survey design can affect responses. The several surveys examined each have their own idiosyncratic ways of both asking about time use and structuring the response (see table 1). Some surveys ask about a typical week, one asks about the last week, and one asks students to report an average. One explicitly defines “week” to mean seven days, while the others do not. Two surveys asked students to fill in an exact number, while two others asked students to select from different sets of discrete ranges. As a result of these differences, some of the observed decline in study time may be an artifact of the different survey questions and response frames. To test for such framing effects, the researchers administered the several question versions to randomly selected students in four large classes at a single public university. The observed differences were then used to adjust mean study hours from the National Longitudinal Study of Youth, 1979 (NLSY79), the Higher Education Research Institute’s College Student Survey (HERI), and NSSE surveys to be comparable to the 1961 baseline, Project Talent. (The adjustment reduced the means for NLSY79 and increased them for HERI and NSSE.) While this procedure is by no means conclusive—for example, it assumes that students in the four selected classes at a single university are sufficiently representative of the larger survey populations to provide a fair comparison, and also that framing effects are constant across historical eras—it is reasonable, and the use of adjusted means boosts confidence that distortions due to question wording and different response frames have been reduced, if not decisively eliminated.
The 1961 baseline data are for first-year students (plus perhaps a small number who may have had sophomore standing at the time of the survey), while the later comparisons include other classes. Babcock and Marks assert that because NLSY79 and NSSE data show first-year students study slightly less than seniors, any bias introduced by including the other classes would have the effect of increasing, rather than decreasing, the average study time in the later surveys.
Another set of questions involves the institutions in the different datasets. Recent decades have witnessed the emergence of new postsecondary providers, but this does not explain the change because the 1961 to 2003 comparison is limited to students at institutions represented in both data sets. Only the 1961 to 1981 comparison, involving nationally representative samples of students, did not match institutions. The study also shows that large declines in study time between 1961 and 2003 remain evident when the sample is disaggregated by broad institutional type (doctorate granting, master’s level, baccalaureate liberal arts, and other baccalaureate, identified hereafter as Carnegie groups). The drop in adjusted average study time ranged from nine hours at master’s institutions to 11.6 hours at baccalaureate liberal arts colleges (the group with the highest average study time in each period—nearly five hours per week above the overall mean in 1961, and about three hours above the mean in 2003).
Using matched sets of institutions raises the question of whether the students at those institutions are sufficiently representative of the US undergraduate population. Babcock and Marks show that selected background characteristics of students at the subset of Project Talent institutions matched to NSSE are very similar to those for the full Project Talent data set, both in the aggregate and when examined within Carnegie groups. They also contrast students at the matched NSSE institutions in 2003 against nationally representative data from the National Postsecondary Student Aid Study (NPSAS), again both in the aggregate and by Carnegie group. For the most part the two populations are similar, though NSSE shows an overrepresentation of women, students whose fathers have a bachelor’s degree, and students not working for pay. But they note that because each of these groups tends to report more study time, any bias introduced would increase, rather than decrease, the overall estimate of study time for 2003, and thus reduce the magnitude of the decline from 1961.
The college-going population today is itself considerably different from what it was in 1961—with more women, more students of color, more nontraditional-aged students, and a larger share of high school graduates who continue their education. To what extent do these changes in the composition of the college-going population account for changes in study time? Babcock and Marks show descriptive data that document a consistent decline in study hours across categories of gender, race, and parents’ education. They also employ a statistical technique to decompose the change in study time so as to isolate the amount of the observed change that is attributable to change in the underlying populations (using gender, age, race, and parents’ education to describe those populations). The general conclusion from these analyses was that changes in the student body explain only a trivial amount of the change in study time between 1961 and 1981 or 2003. But the analysis of the intermediate period—1987, 1988, and 1989 to 2003, 2004, and 2005—yielded somewhat different results. For these data, verbal SAT scores were available and included in the analysis, which found that changes in student composition accounted for nearly one-fifth of the total change in study time. To be sure, that leaves four-fifths unexplained, but it does suggest that some of the change in study time is related to differences in students’ preparation for college.
There is one other important point to make with regard to compositional differences in the student population between 1961 and 2003. More students are now working for pay, and the number of hours worked has risen as well. Comparing the 1961 and 2003 samples, the proportion of full-time students who work increased from about one-quarter to 55 percent. The share working more than twenty hours per week—whom I will call “heavy workers”—jumped from 5 to 17 percent.2 At the 1961 baseline, heavy workers studied seven hours per week less than those not working, and five and a half hours less than those working up to twenty hours per week. While all groups declined by 2003, the heavy workers started from a lower base and their drop in study hours was half that of the other groups. In a footnote, Babcock and Marks indicate that when hours worked and major were added to the analysis of compositional differences, the change in student population accounts for 18 percent of the drop in study time. Students working long hours and caring for dependents have competing claims on their time, and it is not surprising that an increase in the heavy-work population (21 percent of whom had dependents in 2003–4, according to NPSAS) accounts for an appreciable portion of the decline in study time. This finding raises questions about other characteristics not included in the composition analysis, such as age, hours spent working in the home, and residential versus commuter status—all of which are related to demographic changes in the undergraduate population during the period studied.
A final possible explanation for the change in study time involves the well-documented transformation in the distribution of undergraduate majors (Brint et al. 2005). But as with demographics and Carnegie groups, the descriptive data show a consistent pattern of decline within groups of related majors. And as noted above, a version of the decomposition analysis took major into account, and a large share of the decline remained unexplained.
In their efforts to identify and rule out possible explanations for the observed drop in study time, Babcock and Marks overlook changes in pedagogy. Recent decades have seen escalating criticism of the lecture method, accompanied by new approaches to engage students with learning inside and outside of class. Several of these new approaches can involve significant time commitments apart from “studying” as conventionally understood, but little is known about how students account for such activities when prompted to report on their time use. Consider service learning and various forms of field-based learning, such as co-op or internship programs and other field placements. If students take our questions literally, it’s doubtful that they consider time spent on those activities to qualify as “studying,” “homework,” or “preparing for class,” but the truth is we don’t know. Even NSSE’s parenthetical elaboration, “studying, reading, writing, doing homework or lab work, analyzing data, rehearsing, and other academic activities,” does not explicitly incorporate such activities. In NSSE 2010, 40 percent of first-years and 52 percent of seniors reported at least sometimes participating in service learning, and half of seniors reported having done a practicum, internship, field or co-op experience, or clinical assignment. If students exclude these activities when reporting how much they study, that could explain some of the decline in reported study time. This illustrates a difficulty with making long-term comparisons on how students spend their time when the activities that count as teaching and learning are themselves changing.
What should we make of reduced study time? Who or what is to blame?
While we might quibble over some of the details, Babcock and Marks make a fairly persuasive case that the amount of time that full-time college students devote to their studies on a weekly basis has dropped by about ten hours between 1961 and 2003, and the decline cannot be fully accounted for by changes in how study time was measured, in technology, in the college-going population, in the mix of college majors, or in the range of higher education providers. So what has changed? As suggested by the “Leisure College, USA” title, the researchers conclude that the decline in study time represents “increased demand for leisure,” which they attribute to two mechanisms. The first of these is student empowerment, largely linked to the wide institutionalization of student evaluations of teaching. The argument goes that institutions cater to students’ needs in a competitive market, and students can demand easier courses by rewarding some faculty and punishing others through their teaching evaluations. While this is hardly a new assertion, little evidence exists to back it up. The researchers also implicate faculty incentives and preferences, referencing Murray Sperber’s (2005) assertion that a “nonaggression pact” exists between students and faculty, in which each party agrees not to demand too much of the other. As Babcock and Marks put it, “we are hard-pressed to name any reliable, noninternal reward that instructors receive for maintaining high standards—and the penalties for doing so are clear” (2010, 5). This line of reasoning is consistent with the FSSE results reported earlier, which show that faculty expectations for study time are not too different from what students actually report. The evidence on incentives for faculty to invest effort in activities other than teaching is stronger than it is for pressure exerted by students through their evaluations. (More on this to follow.)
Second, Babcock and Marks propose that employers may be relying less on grades and more on educational pedigree, that students have recognized and responded to this preference, and that this has reduced achievement orientation in college: “students seem to be allocating more time toward distinguishing themselves from their competitors to get into a good college, but less time distinguishing themselves academically from their college classmates once they get there” (2010, 6; emphasis in the original). But widely expressed concerns with grade inflation suggest that there has been no observable decline in overall performance as measured by grades. Furthermore, this argument seems primarily applicable to students at the most selective institutions. If educational pedigree matters so much to students, we should expect students at less prestigious institutions to put in additional effort in the first year or two so as to improve the prospects of “trading up” through transfer, a pattern that is not evident in the analysis of study time.
A word about “leisure.” In both articles, Babcock and Marks define leisure as time spent neither working for pay nor engaged in academic pursuits (i.e., attending class or studying). This definition misclassifies certain nondiscretionary activities, most significantly work in the home, including dependent care, and time spent commuting to work or school—both activities that consume more time among older students, a subset of the college-going population that has grown considerably since 1961. We can examine the implications of these definitional choices by applying them to NSSE’s time-use questions. NSSE asks students how many hours they spend per week on seven activities: preparing for class, working for pay on campus, working for pay off campus, participating in cocurricular activities, relaxing and socializing, caring for dependents, and commuting to class. NSSE does not ask about time spent in class, nor does it ask about work in the home apart from dependent care. With those caveats, let’s compare the broad definition of leisure to a classification that distinguishes discretionary and nondiscretionary activities other than studying (table 2). Looking at the results for first-years and seniors combined, fully eight hours are reclassified from “leisure” to nondiscretionary activities, resulting in an approximate balance between discretionary and nondiscretionary activities, exclusive of academic commitments. (The differences by class level are interesting, as well, with seniors devoting more time to nondiscretionary activities than to either of the other categories.) This paints a very different picture from the depiction of twenty-five hours a week devoted to leisure. Definitions matter. We can have legitimate concerns about how much time students should be devoting to coursework, but it’s important to acknowledge the full range of students’ nonacademic commitments. 
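The definitional point can be made concrete with a short sketch that classifies NSSE’s nonacademic time-use categories both ways, following the definitions given above (the category strings are paraphrases of NSSE’s items, used here for illustration):

```python
# NSSE's nonacademic time-use categories (paraphrased for illustration).
ACTIVITIES = [
    "working for pay on campus",
    "working for pay off campus",
    "cocurricular activities",
    "relaxing and socializing",
    "caring for dependents",
    "commuting to class",
]

# Babcock and Marks' broad "leisure": anything that is neither
# academic nor work for pay -- note it sweeps in dependent care
# and commuting.
broad_leisure = [a for a in ACTIVITIES if "working for pay" not in a]

# Alternative: separate nondiscretionary commitments (work for pay,
# dependent care, commuting) from genuinely discretionary time.
nondiscretionary = [
    a for a in ACTIVITIES
    if "working for pay" in a or a in ("caring for dependents", "commuting to class")
]
discretionary = [a for a in ACTIVITIES if a not in nondiscretionary]

print(broad_leisure)   # includes dependent care and commuting
print(discretionary)   # only cocurriculars and relaxing/socializing
```

The reclassification is what moves roughly eight hours per week out of the “leisure” column in Table 2.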
“Leisure College” may be provocative, but it mischaracterizes the lived experience of a substantial portion of the college-going population that has grown markedly over the period studied.
Table 2: Alternate Classifications of Average Time Allocations1
[Table values not preserved; row labels included “Work for pay” and the groupings defined in notes 2-4 below.]
Source: National Survey of Student Engagement, combined 2009 and 2010 data. Results are unweighted. Average hours calculated by taking the midpoint from each range given on the survey, and assigning a value of 32 to the “more than 30 hours” category.
1Limited to full-time students at US institutions who are not taking all classes online.
2Work for pay, dependent care, and commuting.
3Cocurricular activities, relaxing and socializing, dependent care, and commuting.
4Cocurricular activities, relaxing and socializing.
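The midpoint method described in the source note for Table 2 can be sketched as follows. The specific response ranges below are assumed for illustration; only the rule of assigning a value of 32 to the open-ended “more than 30 hours” category comes from the note:

```python
# Estimate average weekly hours from range-style survey responses by
# taking the midpoint of each range; the open-ended top category is
# assigned 32 per the source note. The ranges themselves are assumed
# for illustration.
RANGE_MIDPOINTS = {
    "0": 0.0,
    "1-5": 3.0,
    "6-10": 8.0,
    "11-15": 13.0,
    "16-20": 18.0,
    "21-25": 23.0,
    "26-30": 28.0,
    "more than 30": 32.0,
}

def average_hours(responses):
    """Mean of the midpoint values for a list of range responses."""
    values = [RANGE_MIDPOINTS[r] for r in responses]
    return sum(values) / len(values)

print(average_hours(["6-10", "11-15", "more than 30"]))  # (8 + 13 + 32) / 3
```

Because the top category is open-ended, any fixed value assigned to it (here 32) slightly compresses the estimated average for heavy studiers, a limitation worth keeping in mind when comparing surveys.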
The contemporary discourse about declining standards in higher education conveys an image of steady, if not accelerating, erosion. Thus one of the most interesting findings from the Babcock and Marks study is that the bulk of the decline in study time—nearly eight out of ten hours—took place between 1961 and 1981. This is corroborated by their analysis of HERI data between 1987 and 2005 (table 3). Technological change may not account for the large initial drop, but it likely does account for the slight subsequent decline. Whatever happened to studying appears to have happened between 1961 and 1981.
Table 3: Summary of Three Study Time Comparisons
(Columns in the original: comparison period, number of matched institutions, class levels in comparison, and change in weekly study hours1; the institution counts were not preserved.)

1961 to 19812 – First-years & all years3 – 24.43 to 16.86
1987, 1988, and 1989 to 2003, 2004, and 2005 – 16.61 to 14.88
1961 to 2003 – First-years & all years4 – 24.43 to 14.40
1 Adjusted for framing differences between different surveys (except HERI).
2 Because Project Talent and NLSY79 involve nationally representative samples, the researchers considered it unnecessary to compare identical sets of institutions.
3 Project Talent surveyed 1960 high school graduates in 1961, thus the sample likely includes a small share with sophomore standing. The researchers report that first-year students studied less than seniors in NLSY and NSSE, and they conclude that restricting the comparison to first-year students would result in a larger decline in study time.
4 See note 3 above.
This was a time of profound change in US higher education. The higher education system grew by more than one thousand institutions between 1960 and 1980. Enrollments nearly doubled. Women’s participation increased dramatically: from 1961 to 1981, the share of female high school graduates who enrolled in college grew from 30 to 53 percent while the male participation rate stayed flat at 56 percent (National Center for Education Statistics 2010). The civil rights movement led colleges and universities to expand opportunities for ethnic minority students. By 1981, the last of the baby boomers had graduated from high school, colleges and universities were looking at smaller cohorts of prospective students, and serious doubts were voiced about the viability of many institutions. As institutions were increasingly concerned about maintaining enrollments, the students’ rights movement and the demise of in loco parentis had given students greater voice in campus affairs. And the research enterprise expanded between 1960 and 1980, as federal sponsorship of research and development activity expanded by $1.4 trillion in constant 2000 dollars (Thelin 2004).
At the same time, faculty attitudes and institutional priorities were changing. Between 1975 and 1984, the proportion of faculty at four-year institutions who reported a greater interest in teaching than in research dropped from 70 percent to 63 percent. Faculty agreement with the proposition that teaching effectiveness, not publication, should be the primary criterion for promotion dropped from 70 to 58 percent. And the share who agreed with the statement “In my department, it is very difficult to achieve tenure without publishing” rose from 54 to 69 percent (Boyer 1987). These comparisons use 1975 rather than 1961 as the baseline, so they likely understate the full extent of the change in faculty attitudes and departmental practices between 1961 and 1981. But it is clear that the sharp decline in study time roughly coincided with an increasing emphasis on scholarly productivity in faculty incentives and preferences, as well as increased federal R&D support.
Babcock and Marks attribute nearly all of the drop in study time to students’ “demand for leisure,” but this neglects the full range of factors that may be at work. Some are fairly speculative, others less so. The speculative accounts include: student pressure on faculty to cut back on out-of-class requirements, imposed through end-of-course teaching evaluations (the demand for leisure argument); diminished employer emphasis on academic performance in hiring decisions; and expansion in the range of out-of-class activities associated with students’ coursework, which students may not include when accounting for their study time. Plausible though these accounts may be, little evidence exists either to support or to refute them.
Two other explanations for the decline in study time, involving both students and faculty, have at least some supporting evidence. The composition of the student body has changed substantially since 1961, with more students working for pay, more hours worked, more students with responsibilities in the home, and more students who commute to school. Adding only the first of these to their statistical analysis, Babcock and Marks found an appreciable increase in the portion of the decline in study time that is attributable to changes in the student population. It seems likely that a more comprehensive analysis would account for still more of the decline. The other explanation involves erosion in the importance of teaching in both the faculty reward structure and faculty preferences, coinciding with expansion of the research enterprise. This is consistent with Sperber’s “nonaggression pact” account, as well as the fact that faculty expectations for study time are relatively close to how much time students actually report.
Between the 1960s and the early 1980s, higher education began to serve a more diverse population of students, with many students having greater work and family commitments. At the same time, faculty interest in teaching declined as colleges and universities increasingly emphasized their role in producing new knowledge through research and scholarship. We began asking less of our students during this period, and their performance fell to meet our expectations. The good news, such as it is, is that the steep decline arrested itself in the early 1980s.
Arum, Richard, Josipa Roksa, and Melissa Velez. 2008. Learning to Reason and Communicate in College: Initial Report of Findings from the CLA Longitudinal Study. New York: Social Science Research Council.
Astin, Alexander W. 1993. What Matters in College? Four Critical Years Revisited. San Francisco: Jossey-Bass.
Babcock, Philip, and Mindy Marks. 2010. “Leisure College USA: The Decline in Student Study Time.” Washington, DC: American Enterprise Institute.
———. Forthcoming. “The Falling Time Cost of College: Evidence from a Half Century of Time Use Data.” Review of Economics and Statistics.
Boyer, Ernest L. 1987. College: The Undergraduate Experience in America. New York: HarperCollins.
Bransford, John D., Ann L. Brown, and Rodney R. Cocking. 2000. How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.
Brint, Steven, Mark Riddle, Lori Turk-Bicakci, and Charles S. Levy. 2005. “From the Liberal to the Practical Arts in American Colleges and Universities: Organizational Analysis and Curricular Change.” Journal of Higher Education 76 (2): 151–80.
Gardiner, Lion F. 1994. Redesigning Higher Education: Producing Dramatic Gains in Student Learning. San Francisco: Jossey-Bass.
Kuh, George D., Jillian Kinzie, Jennifer A. Buckley, Brian K. Bridges, and John C. Hayek. 2007. Piecing Together the Student Success Puzzle: Research, Propositions, and Recommendations. San Francisco: Jossey-Bass.
National Center for Education Statistics. 2010. Digest of Education Statistics 2009. Washington, DC: US Department of Education.
Pascarella, Ernest T., and Patrick T. Terenzini. 2005. How College Affects Students: A Third Decade of Research. San Francisco: Jossey-Bass.
Sperber, Murray. 2005. “How Undergraduate Education Became College Lite—and a Personal Apology.” In Declining by Degrees: Higher Education at Risk, edited by Richard H. Hersh and John Merrow, 131–44. New York: Palgrave Macmillan.
Thelin, John R. 2004. A History of American Higher Education. Baltimore: Johns Hopkins University Press.
- As the authors acknowledge, a gross measure of “hours spent studying with peers” does not distinguish the different circumstances under which such study may take place. They leave open the possibility that differentiating the nature and organization of group study might reveal some forms to be effective, and others not.
- This underestimates the national percentage. Using a nationally representative sample of four-year institutions in 2003–4, NPSAS data show 34 percent of undergraduates working more than twenty hours per week.
Alexander C. McCormick is associate professor of education at Indiana University Bloomington and director of the National Survey of Student Engagement.