
September 6, 2006

The Truth About Homework

Needless Assignments Persist Because of Widespread Misconceptions About Learning

By Alfie Kohn

There’s something perversely fascinating about educational policies that are clearly at odds with the available data.  Huge schools are still being built even though we know that students tend to fare better in smaller places that lend themselves to the creation of democratic caring communities.  Many children who are failed by the academic status quo are forced to repeat a grade even though research shows that this is just about the worst course of action for them.  Homework continues to be assigned – in ever greater quantities – despite the absence of evidence that it’s necessary or even helpful in most cases.

The dimensions of that last disparity weren’t clear to me until I began sifting through the research for a new book.  To begin with, I discovered that decades of investigation have failed to turn up any evidence that homework is beneficial for students in elementary school.  Even if you regard standardized test results as a useful measure, homework (some versus none, or more versus less) isn’t even correlated with higher scores at these ages.  The only effect that does show up is more negative attitudes on the part of students who get more assignments.

In high school, some studies do find a correlation between homework and test scores (or grades), but it’s usually fairly small and it has a tendency to disappear when more sophisticated statistical controls are applied.  Moreover, there’s no evidence that higher achievement is due to the homework even when an association does appear.  It isn’t hard to think of other explanations for why successful students might be in classrooms where more homework is assigned – or why they might spend more time on it than their peers do.

The results of national and international exams raise further doubts.  One of many examples is an analysis of 1994 and 1999 data from the Trends in International Mathematics and Science Study (TIMSS), covering 50 countries.  Researchers David Baker and Gerald LeTendre were scarcely able to conceal their surprise when they published their results last year:  “Not only did we fail to find any positive relationships,” but “the overall correlations between national average student achievement and national averages in [amount of homework assigned] are all negative.”

Finally, there isn’t a shred of evidence to support the widely accepted assumption that homework yields nonacademic benefits  for students of any age.  The idea that homework teaches good work habits or develops positive character traits (such as self-discipline and independence) could be described as an urban myth except for the fact that it’s taken seriously in suburban and rural areas, too.

In short, regardless of one’s criteria, there is no reason to think that most students would be at any sort of disadvantage if homework were sharply reduced or even eliminated.  Nevertheless, the overwhelming majority of American schools – elementary and secondary, public and private – continue to require their students to work a second shift by bringing academic assignments home.  Not only is this requirement accepted uncritically, but the amount of homework is growing, particularly in the early grades.  A large, long-term national survey found that the proportion of six- to eight-year-old children who reported having homework on a given day had climbed from 34 percent in 1981 to 58 percent in 1997 – and the weekly time spent studying at home more than doubled.

Sandra Hofferth of the University of Maryland, one of the authors of that study, has just released an update based on 2002 data.  Now the proportion of young children who had homework on a specific day has jumped to 64 percent, and the amount of time they spend on it has climbed by another third.  The irony here is painful, because with younger children the evidence to justify homework isn’t merely dubious – it’s nonexistent.


So why do we do something where the cons (stress, frustration, family conflict, loss of time for other activities, a possible diminution of interest in learning) so clearly outweigh the pros?  Possible reasons include a lack of respect for research, a lack of respect for children (implicit in a determination to keep them busy after school), a reluctance to question existing practices, and the top-down pressures to teach more stuff faster in order to pump up test scores so we can chant “We’re number one!”

All these explanations are plausible, but I think there’s also something else responsible for our continuing to feed children this latter-day cod-liver oil.  Because many of us believe it’s just common sense that homework would provide academic benefits, we tend to shrug off the failure to find any such benefits.  In turn, our belief that homework ought to help is based on some fundamental misunderstandings about learning.

Consider the assumption that homework should be beneficial just because it gives students more time to master a topic or skill.  (Plenty of pundits rely on this premise when they call for extending the school day or year.  Indeed, homework can be seen as a way of prolonging the school day on the cheap.)  Unfortunately, this reasoning turns out to be woefully simplistic.  Back “when experimental psychologists mainly studied words and nonsense syllables, it was thought that learning inevitably depended upon time,” reading researcher Richard C. Anderson and his colleagues explain.  But “subsequent research suggests that this belief is false.”

The statement “People need time to learn things” is true, of course, but it doesn’t tell us much of practical value.  On the other hand, the assertion “More time usually leads to better learning” is considerably more interesting.  It’s also demonstrably untrue, however, because there are enough cases where more time doesn’t lead to better learning.

In fact, more hours are least likely to produce better outcomes when understanding or creativity is involved.  Anderson and his associates found that when children are taught to read by focusing on the meaning of the text (rather than primarily on phonetic skills), their learning does “not depend on amount of instructional time.”  In math, too, as another group of researchers discovered, time on task is directly correlated to achievement only if both the activity and the outcome measure are focused on rote recall as opposed to problem solving.

Carole Ames of Michigan State University points out that it isn’t “quantitative changes in behavior” – such as requiring students to spend more hours in front of books or worksheets – that help children learn better.  Rather, it’s “qualitative changes in the ways students view themselves in relation to the task, engage in the process of learning, and then respond to the learning activities and situation.”  In turn, these attitudes and responses emerge from the way teachers think about learning and, as a result, how they organize their classrooms.  Assigning homework is unlikely to have a positive effect on  any of these variables.  We might say that education is less about how much the teacher covers than about what students can be helped to discover – and more time won’t help to bring about that shift.

Alongside an overemphasis on time is the widely held belief that homework “reinforces” the skills that students have learned – or, rather, have been taught — in class.  But what exactly does this mean?  It wouldn’t make sense to say “Keep practicing until you understand” because practicing doesn’t create understanding – just as giving kids a deadline doesn’t teach time-management skills.  What might make sense is to say “Keep practicing until what you’re doing becomes automatic.”  But what kinds of proficiencies lend themselves to this sort of improvement?

The answer is behavioral responses.  Expertise in tennis requires lots of practice; it’s hard to improve your swing without spending a lot of time on the court.  But to cite an example like that to justify homework is an example of what philosophers call begging the question.  It assumes precisely what has to be proved, which is that intellectual pursuits are like tennis.

The assumption that they are analogous derives from behaviorism, which is the source of the verb “reinforce” as well as the basis of an attenuated view of learning.  In the 1920s and ‘30s, when John B. Watson was formulating his theory that would come to dominate education, a much less famous researcher named William Brownell was challenging the drill-and-practice approach to mathematics that had already taken root.  “If one is to be successful in quantitative thinking, one needs a fund of meanings, not a myriad of ‘automatic responses,’” he wrote.  “Drill does not develop meanings.  Repetition does not lead to understandings.”  In fact, if “arithmetic becomes meaningful, it becomes so in spite of drill.”

Brownell’s insights have been enriched by a long line of research demonstrating that the behaviorist model is, if you’ll excuse the expression, deeply superficial.  People spend their lives actively constructing theories about how the world works, and then reconstructing them in light of new evidence.  Lots of practice can help some students get better at remembering an answer, but not to get better at – or even accustomed to — thinking.  And even when they do acquire an academic skill through practice, the way they acquire it should give us pause.  As psychologist Ellen Langer has shown, “When we drill ourselves in a certain skill so that it becomes second nature,” we may come to perform that skill “mindlessly,”  locking us into patterns and procedures that are less than ideal.

But even if practice is sometimes useful, we’re not entitled to conclude that homework of this type works for most students.  It isn’t of any use for those who don’t understand what they’re doing.  Such homework makes them feel stupid; gets them accustomed to doing things the wrong way (because what’s really “reinforced” are mistaken assumptions); and teaches them to conceal what they don’t know.  At the same time, other students in the same class already have the skill down cold, so further practice for them is a waste of time.  You’ve got some kids, then, who don’t need the practice and others who can’t use it.

Furthermore, even if practice were helpful for most students, that wouldn’t mean they need to do it at home.  In my research I found a number of superb teachers (at different grade levels and with diverse instructional styles) who rarely, if ever, found it necessary to assign homework.  Some not only didn’t feel a need to make students read, write, or do math at home; they preferred to have students do these things during class, where it was possible to observe, guide, and discuss.

Finally, any theoretical benefit of practice homework must be weighed against the effect it has on students’ interest in learning.  If slogging through worksheets dampens one’s desire to read or think, surely that wouldn’t be worth an incremental improvement in skills.  And when an activity feels like drudgery, the quality of learning tends to suffer, too.  That so many children regard homework as something to finish as quickly as possible – or even as a significant source of stress — helps to explain why it appears not to offer any academic advantage even for those who obediently sit down and complete the tasks they’ve been assigned.  All that research showing little value to homework may not be so surprising after all.

Supporters of homework rarely look at things from the student’s point of view, though; instead, kids are regarded as inert objects to be acted on:  Make them practice and they’ll get better.  My argument isn’t just that this viewpoint is disrespectful, or that it’s a residue of an outdated stimulus-response psychology.  I’m also suggesting it’s counterproductive.  Children cannot be made to acquire skills.  They aren’t vending machines such that we put in more homework and get out more learning.

But just such misconceptions are pervasive in all sorts of neighborhoods, and they’re held by parents, teachers, and researchers alike.  It’s these beliefs that make it so hard even to question the policy of assigning regular homework.  We can be shown the paucity of supporting evidence and it won’t have any impact if we’re wedded to folk wisdom (“practice makes perfect”; more time equals better results).

On the other hand, the more we learn about learning, the more willing we may be to challenge the idea that homework has to be part of schooling.


Copyright © 2006 by Alfie Kohn. This article may be downloaded, reproduced, and distributed without permission as long as each copy includes this notice along with citation information (i.e., name of the periodical in which it originally appeared, date of publication, and author’s name). Permission must be obtained in order to reprint this article in a published work or in order to offer it for sale in any form. Please write to the address indicated on the Contact Us page.

From The Homework Myth (Da Capo Press, 2006)
Copyright © 2006 by Alfie Kohn

Does Homework Improve Learning?

By Alfie Kohn


Because the question that serves as the title of this chapter doesn’t seem all that complicated, you might think that after all this time we’d have a straightforward answer.  You might think that open-minded people who review the evidence should be able to agree on whether homework really does help.

If so, you’d be wrong.  “Researchers have been far from unanimous in their assessments of the strengths and weaknesses of homework as an instructional technique,” according to an article published in the Journal of Educational Psychology.  “The conclusions of more than a dozen reviews of the homework literature conducted between 1960 and 1989 varied greatly.  Their assessments ranged from homework having positive effects, no effects, or complex effects to the suggestion that the research was too sparse or poorly conducted to allow trustworthy conclusions.”[1]

When you think about it, any number of issues could complicate the picture and make it more or less likely that homework would appear to be beneficial in a given study:  What kind of homework are we talking about?  Fill-in-the-blank worksheets or extended projects?  In what school subject(s)?  How old are the students?  How able and interested are they?  Are we looking at how much the teacher assigned or at how much the kids actually did?  How careful was the study and how many students were investigated?

Even when you take account of all these variables, the bottom line remains that no definite conclusion can be reached, and that is itself a significant conclusion.  The fact that there isn’t anything close to unanimity among experts belies the widespread assumption that homework helps.  It demonstrates just how superficial and misleading are the countless declarations one hears to the effect that “studies find homework is an important contributor to academic achievement.”

Research casting doubt on that assumption goes back at least to 1897, when a study found that assigning spelling homework had no effect on how proficient children were at spelling later on.[2]  By 1960, a reviewer tracked down 17 experimental studies, most of which produced mixed results and some of which suggested that homework made no difference at all.[3]  In 1979, another reviewer found five more studies.  One found that homework helped, two found that it didn’t, and two found mixed results.[4]  Yet another review was published a few years later, this one of eight articles and seven dissertations that had appeared from the mid-1960s to the early 1980s.  The authors, who included a long-time advocate of traditional educational policies, claimed the results demonstrated that homework had “powerful effects on learning.”[5]  But another researcher looked more carefully and discovered that only four of those fifteen studies actually compared getting homework with getting no homework, and their results actually didn’t provide much reason to think it helped.[6]

“The literature reviews done over the past 60 years . . . report conflicting results,” one expert concluded in 1985.  “There is no good evidence that homework produces better academic achievement.”[7]  Four years later, Harris Cooper, an educational psychologist, attempted to sort things out by conducting the most exhaustive review of the research to date.  He performed a “meta-analysis,” which is a statistical technique for combining numerous studies into the equivalent of one giant study.[8]  Cooper included seventeen research reports that contained a total of 48 comparisons between students who did and did not receive homework.  About 70 percent of these found that homework was associated with higher achievement.  He also reviewed surveys that attempted to correlate students’ test scores with how much homework they did.  Forty-three of fifty correlations were positive, although the overall effect was not particularly large:  Homework accounted for less than 4 percent of the differences in students’ scores.[9]
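The relationship between Cooper’s correlations and the “less than 4 percent” figure is just the coefficient of determination: the share of variance statistically accounted for is the square of the correlation.  A minimal sketch, where the r value of 0.19 is an assumed illustration chosen to be consistent with the text’s figure (the correlations in Cooper’s review vary by study):

```python
# Sketch of how a correlation translates into "percent of differences
# accounted for" (the coefficient of determination, r squared).
# r = 0.19 is a hypothetical value consistent with "less than 4 percent".

def variance_explained(r: float) -> float:
    """Fraction of variance in one variable statistically accounted for
    by another, given their Pearson correlation r."""
    return r ** 2

r = 0.19  # assumed homework-achievement correlation, for illustration
print(f"r = {r}: homework accounts for {variance_explained(r):.1%} of score differences")
```

Squaring is why even a correlation that sounds respectable explains so little: a correlation must exceed 0.2 before it accounts for even 4 percent of the differences in scores.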

Cooper and his colleagues published a review of newer studies in 2006.  Those that compared students with and without homework found a stronger association with achievement than the earlier studies had, but these new experiments measured achievement by students’ scores on tests that had been designed to match the homework they had just done.  As for more recent studies looking for a relationship between achievement and time spent on homework, the overall correlation was about the same as the one found in 1989.[10]

Among the recent studies not included in Cooper’s new review:  One, using a methodology associated with economics, concluded that the amount of math homework given to teenagers was a very good predictor of these students’ standardized test scores in math.[11]  But another study – the same one that found younger students are spending a lot more time doing homework these days (see chapter 1) — discovered that the extent of that time commitment was “not associated with higher or lower scores on any [achievement] tests.”  (By contrast, the amount of time children spent reading for pleasure was strongly correlated with higher scores.)[12]

Taken as a whole, the available research might be summarized as inconclusive.  But if we look more closely, even that description turns out to be too generous.  The bottom line, I’ll argue in this chapter, is that a careful examination of the data raises serious doubts about whether meaningful learning is enhanced by homework for most students.  Of the eight reasons that follow, the first three identify important limitations of the existing research, the next three identify findings from these same studies that lead one to question homework’s effectiveness, and the last two introduce additional data that weaken the case even further.


Limitations of the Research

1.  At best, most homework studies show only an association, not a causal relationship.  Statistical principles don’t get much more basic than “correlation doesn’t prove causation.”  The number of umbrellas brought to a workplace on a given morning will be highly correlated with the probability of precipitation in the afternoon, but the presence of umbrellas didn’t make it rain.  Also, I’d be willing to bet that kids who ski are more likely to attend selective colleges than those who don’t ski, but that doesn’t mean they were accepted because they ski, or that arranging for a child to take skiing lessons will improve her chances of being admitted.   Nevertheless, most research purporting to show a positive effect of homework seems to be based on the assumption that when students who get (or do) more homework also score better on standardized tests, it follows that the higher scores were due to their having had more homework.

There are almost always other explanations for why successful students might be in classrooms where more homework is assigned – let alone why these students might take more time with their homework than their peers do.  Even Cooper, a proponent of homework, concedes that “it is equally plausible,” based on the correlational data that comprise most of the available research on the topic, “that teachers assign more homework to students who are achieving better . . . or that better students simply spend more time on home study.”[13]  In still other cases, a third variable – for example, being born into a more affluent and highly educated family – might be associated with getting higher test scores and with doing more homework (or attending the kind of school where more homework is assigned).  Again, it would be erroneous to conclude that homework is responsible for higher achievement.  Or that a complete absence of homework would have any detrimental effect at all.
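The third-variable problem described above can be made concrete with a toy simulation.  In this sketch (all coefficients invented for illustration), a background factor standing in for family affluence drives both homework time and test scores, while homework has zero causal effect on scores by construction – yet the two still come out positively correlated:

```python
# Toy simulation of a confounding variable. "affluence" influences both
# homework time and test scores; homework itself has NO causal effect on
# scores, yet the two end up positively correlated. All coefficients are
# made up for illustration.

import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
students = 5000
affluence = [random.gauss(0, 1) for _ in range(students)]
# Homework time and scores each depend on affluence plus independent noise;
# scores do not depend on homework at all.
homework = [a + random.gauss(0, 1) for a in affluence]
scores = [a + random.gauss(0, 1) for a in affluence]

print(f"correlation(homework, scores) = {pearson(homework, scores):.2f}")
```

The correlation comes out clearly positive even though assigning more homework to any student in this model would change nothing – exactly the inference error the correlational homework studies invite.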

Sometimes it’s not easy to spot those other variables that can separately affect achievement and time spent on homework, giving the impression that these two are causally related.  One of the most frequently cited studies in the field was published in the early 1980s by a researcher named Timothy Keith, who looked at survey results from tens of thousands of high school students and concluded that homework had a positive relationship to achievement, at least at that age.  But a funny thing happened ten years later when he and a colleague looked at homework alongside other possible influences on learning such as quality of instruction, motivation, and which classes the students took.  When all these variables were entered into the equation simultaneously, the result was “puzzling and surprising”:  homework no longer had any meaningful effect on achievement at all.[14]  In other words, a set of findings that served – and, given how often his original study continues to be cited, still serves – as a prominent basis for the claim that homework raises achievement turns out to be spurious.

Several studies have actually found a negative relationship between students’ achievement (or their academic performance as judged by teachers) and how much time they spend on homework (or how much help they receive from their parents).[15]  But researchers who report this counterintuitive finding generally take pains to explain that it “must not be interpreted as a causal pattern.”[16]  What’s really going on here, we’re assured, is just that kids with academic difficulties are taking more time with their homework in order to catch up.

That sounds plausible, but of course it’s just a theory.  One study found that children who were having academic difficulties actually didn’t get more homework from their teachers,[17] although it’s possible they spent longer hours working on the homework that they did get.  But even if we agreed that doing more homework probably isn’t responsible for lowering students’ achievement, the fact that there’s an inverse relationship seems to suggest that, at the very least, homework isn’t doing much to help kids who are struggling.  In any event, anyone who reads the research on this topic can’t help but notice how rare it is to find these same cautions about the misleading nature of correlational results when those results suggest a positive relationship between homework and achievement.  It’s only when the outcome doesn’t fit the expected pattern (and support the case for homework) that they’re carefully explained away.

In short, most of the research that’s cited to show that homework is academically beneficial really doesn’t prove any such thing.

2.  Do we really know how much homework kids do?  The studies claiming that homework helps are based on the assumption that we can accurately measure the number and length of assignments.  But many of these studies depend on students to tell us how much homework they get (or complete).  When Cooper and his associates looked at recent studies in which the time spent on homework was reported by students, and then compared them with studies in which that estimate was provided by their parents, the results were quite different.  In fact, the correlation between homework and achievement completely disappeared when parents’ estimates were used.[18]  This was also true in one of Cooper’s own studies:  “Parent reports of homework completion were . . . uncorrelated with the student report.”[19]   The same sort of discrepancy shows up again in cross-cultural research — parents and children provide very different accounts of how much help kids receive[20] — and also when students and teachers are asked to estimate how much homework was assigned.[21]  It’s not clear which source is most accurate, by the way – or, indeed, whether any of them is entirely reliable.

These first two flaws combine to cast doubt on much of the existing data, according to a damning summary that appears in the Encyclopedia of Educational Research:  “Research on homework continues to show the same fundamental weaknesses that have characterized it throughout the century:  an overdependence on self-report as the predominant method of data collection and on correlation as the principal method of data analysis.”[22]

3.  Homework studies confuse grades and test scores with learning.  Most researchers, like most reporters who write about education, talk about how this or that policy affects student “achievement” without questioning whether the way that word is defined in the studies makes any sense.  What exactly is this entity called achievement that’s said to go up or down?  It turns out that what’s actually being measured – at least in all the homework research I’ve seen — is one of three things:  scores on tests designed by teachers, grades given by teachers, or scores on standardized exams.  About the best thing you can say for these numbers is that they’re easy for researchers to collect and report.  Each is seriously flawed in its own way.

In studies that involve in-class tests, some students are given homework – which usually consists of reviewing a batch of facts about some topic – and then they, along with their peers who didn’t get the homework, take a quiz on that very material.  The outcome measure, in other words, is precisely aligned to the homework that some students did and others didn’t do — or that they did in varying amounts.  It’s as if you were told to spend time in the evening learning the names of all the vice presidents of the United States and were then tested only on those names.   If you remembered more of them after cramming, the researcher would then conclude that “learning in the evening” is effective.

In the second kind of study, course grades are used to determine whether homework made a difference.  The problem here is that a grade, as one writer put it long ago, is “an inadequate report of an inaccurate judgment by a biased and variable judge of the extent to which a student has attained an undefined level of mastery of an unknown proportion of an indefinite amount of material.”[23]  Quite apart from the destructive effects that grades have on students’ interest in learning, their depth of understanding, and their preference for challenging tasks, the basis for a grade is typically as subjective as the result is uninformative.  Any given assignment may well be given two different grades by two equally qualified teachers – and may even be given two different grades by a single teacher who reads it at two different times.  The final course grade, moreover, is based on a combination of these individual marks, along with other, even less well defined considerations.[24]

As bad as grades are in general, they are particularly inappropriate for judging the effectiveness of homework for one simple reason:  The same teacher who handed out the assignments then turns around and evaluates the students who completed them.  The final grade a teacher chooses for a student will often be based at least partly on whether, and to what extent, that student did the homework.  Thus, to say that more homework is associated with better school performance (as measured by grades) is to provide no useful information about whether homework is intrinsically valuable.  Yet grades are the basis for a good number of the studies that are cited to defend that very conclusion.  The studies that use grades as the outcome measure, not surprisingly, tend to show a much stronger effect for homework than studies that use standardized test scores.[25]

Here’s one example.  Cooper and his colleagues conducted a study in 1998 with both younger and older students (from grades 2 through 12), using both grades and standardized test scores to measure achievement.  They also looked at how much homework was assigned by the teacher as well as at how much time students spent on their homework.  Thus, there were eight separate results to be reported.  Here’s how they came out:

Younger students

*  Effect on grades of amount of homework assigned:  no significant relationship

*  Effect on test scores of amount of homework assigned:  no significant relationship

*  Effect on grades of amount of homework done:  negative relationship

*  Effect on test scores of amount of homework done:  no significant relationship

Older students

*  Effect on grades of amount of homework assigned:  no significant relationship

*  Effect on test scores of amount of homework assigned:  no significant relationship

*  Effect on grades of amount of homework done:  positive relationship

*  Effect on test scores of amount of homework done:  no significant relationship


Of these eight comparisons, then, the only positive correlation – and it wasn’t a large one – was between how much homework older students did and their achievement as measured by grades.[26]  If that measure is viewed as dubious, if not downright silly, then one of the more recent studies conducted by the country’s best-known homework researcher fails to support the idea of assigning homework at any age.

The last, and most common, way of measuring achievement is to use standardized test scores.  Purely because they’re standardized, these tests are widely regarded as objective instruments for assessing children’s academic performance.  But as I’ve argued elsewhere at some length,[27] there is considerable reason to believe that standardized tests are a poor measure of intellectual proficiency.  They are, however, excellent indicators of two things.  The first is affluence:  Up to 90 percent of the difference in scores among schools, communities, or even states can be accounted for, statistically speaking, without knowing anything about what happened inside the classrooms.  All you need are some facts about the average income and education levels of the students’ parents.  The second phenomenon that standardized tests measure is how skillful a particular group of students is at taking standardized tests – and, increasingly, how much class time has been given over to preparing them to do just that.

In my experience, teachers can almost always identify several students who do poorly on standardized tests even though, by more authentic and meaningful indicators, they are extremely talented thinkers.  Other students, meanwhile, ace these tests even though their thinking isn’t particularly impressive; they’re just good test-takers.  These anecdotal reports have been corroborated by research that finds a statistically significant positive relationship between a shallow or superficial approach to learning, on the one hand, and high scores on various standardized tests, on the other.  What’s more, this association has been documented at the elementary, middle, and high school level.

Standardized tests are even less useful when they include any of these features:

*  If most of the questions are multiple-choice, then students are unable to generate, or even justify, their responses.  To that extent, students cannot really demonstrate what they know or what they can do with what they know.  Multiple-choice tests are basically designed so that many kids who understand a given idea will be tricked into picking the wrong answer.

*  If the test is timed, then it places a premium not on thoughtfulness but on speed.

* If the test is focused on “basic skills,” then doing well is more a function of cramming forgettable facts into short-term memory than of really understanding ideas, making connections and distinctions, knowing how to read or write or analyze problems in a sophisticated way, thinking like a scientist or historian, being able to use knowledge in unfamiliar situations, and so on.

*  If the test is given to younger children, then, according to an overwhelming consensus on the part of early-education specialists, it is a poor indicator of academic skills.  Many children under the age of eight or nine are unable to demonstrate their proficiency on a standardized test just because they’re tripped up by the format.

*  If the test is “norm-referenced” (like the Iowa Test of Basic Skills, Terra Nova, Stanford Achievement Test, and others used widely in classrooms and also by researchers), then it was never designed to evaluate whether students know what they should.  Instead, its primary purpose is to artificially spread out the scores in order to facilitate ranking students against each other.  The question these tests are intended to answer is not “How well are our kids – or our schools – doing?” but “Who’s beating whom?”  We know nothing about academic competence in absolute terms just from knowing what percentage of other test-takers a given child has bested.  Moreover, the selection of questions for these tests is informed by this imperative to rank.  Thus, items that a lot of students answer correctly (or incorrectly) are typically eliminated – regardless of whether the content is important – and replaced with questions that about half the kids will get right.  This is done in order to make it easier to compare students to one another.

My purpose in these few paragraphs has been to offer only a very brief summary of the reasons that informed educators and parents would never regard a standardized test score as meaningful information about the quality of a student’s thinking – or about the quality of a school.  (In the latter case, a high or rising average test score may actually be a reason to worry.  Every hour that teachers spend preparing kids to succeed on standardized tests, even if that investment pays off, is an hour not spent helping kids to become critical, curious, creative thinkers.)  The limitations of these tests are so numerous and so serious that studies showing an association between homework and higher scores are highly misleading.  Because that’s also true of studies that use grades as a stand-in for achievement, it should be obvious that combining two flawed measures does nothing to improve the situation.[28]

I’m unaware of any studies that have even addressed the question of whether homework enhances the depth of students’ understanding of ideas or their passion for learning.  The fact that more meaningful outcomes are hard to quantify does not make test scores or grades any more valid, reliable, or useful as measures.  To use them anyway calls to mind the story of the man who looked for his lost keys near a streetlight one night not because that was where he dropped them but just because the light was better there.

If our children’s ability to understand ideas from the inside out is what matters to us, and if we don’t have any evidence that giving them homework helps them to acquire this proficiency, then all the research in the world showing that test scores rise when you make kids do more schoolwork at home doesn’t mean very much.  That’s particularly true if the homework was designed specifically to improve the limited band of skills that appear on these tests.  It’s probably not a coincidence that, even within the existing test-based research, homework appears to work better when the assignments involve rote learning and repetition rather than real thinking.[29]  After all, “works better” just means “produces higher scores on exams that measure these low-level capabilities.”

Overall, the available homework research defines “beneficial” in terms of achievement, and it defines achievement as better grades or standardized test scores.  It allows us to conclude nothing about whether children’s learning improves.


Cautionary Findings


Assume for the moment that we weren’t concerned about basing our conclusions on studies that merely show homework is associated with (as opposed to responsible for) achievement, or studies that depend on questionable estimates of how much is actually completed, or studies that use deeply problematic outcome measures.  Even taken on its own terms, the research turns up some findings that must give pause to anyone who thinks homework is valuable.

4.  Homework matters less the longer you look.  The longer the duration of a homework study, the less of an effect the homework is shown to have.[30]  Cooper, who pointed this out almost in passing, speculated that less homework may have been assigned during any given week in the longer-lasting studies, but he offered no evidence that this actually happened.  So here’s another theory:  The studies finding the greatest effect were those that captured less of what goes on in the real world by virtue of being so brief.  View a small, unrepresentative slice of a child’s life and it may appear that homework makes a contribution to achievement; keep watching and that contribution is eventually revealed to be illusory.

5.  Even where they do exist, positive effects are often quite small.  In Cooper’s review, as I’ve already pointed out, homework could explain only a tiny proportion of the differences in achievement scores.  The same was true of a large-scale high school study from the 1960s.[31]  And in a more recent investigation of British secondary schools, “the payoff for working several more hours per week per subject would appear to be slight, and those classes where there was more homework were not always those classes which obtained better results.”[32]  As one scholar remarked, “If research tells us anything” about homework, it’s that “even when achievement gains have been found, they have been minimal, especially in comparison to the amount of work expended by teachers and students.”[33]

6.   There is no evidence of any academic benefit from homework in elementary school.  Even if you were untroubled by the methodological concerns I’ve been describing, the fact is that after decades of research on the topic, there is no overall positive correlation between homework and achievement (by any measure) for students before middle school – or, in many cases, before high school.  More precisely, there’s virtually no research at all on the impact of homework in the primary grades – and therefore no data to support its use with young children – whereas research has been done with students in the upper elementary grades and it generally fails to find any benefit.

The absence of evidence supporting the value of homework before high school is generally acknowledged by experts in the field – even those who are far less critical of the research literature (and less troubled by the negative effects of homework) than I am.  But this remarkable fact is rarely communicated to the general public.  In fact, it’s with younger children, where the benefits are most questionable, if not altogether absent, that there has been the greatest increase in the quantity of homework!

In 1989, Cooper summarized the available research with a sentence that ought to be e-mailed to every parent, teacher, and administrator in the country:  “There is no evidence that any amount of homework improves the academic performance of elementary students.”[34]  In revisiting his review a decade later, he mentioned another large study he had come across.  It, too, found minuscule correlations between the amount of homework done by sixth graders, on the one hand, and their grades and test scores, on the other.  For third graders, the correlations were negative:  more homework was associated with lower achievement.[35]

In 2005, I asked Cooper if he knew of any newer studies with elementary school students, and he said he had come across exactly four, all small and all unpublished.  He was kind enough to offer the citations, and I managed to track them down.

The first was a college student’s term paper that described an experiment with 39 second graders in one school.  The point was to see whether children who did math homework would perform better on a quiz taken immediately afterward that covered exactly the same content as the homework.  The second study, a Master’s thesis, involved 40 third graders, again in a single school and again with performance measured on a follow-up quiz dealing with the homework material, this time featuring vocabulary skills.  The third study tested 64 fifth graders on social studies facts.

All three of these experiments found exactly what you would expect:  The kids who had drilled on the material – a process that happened to take place at home – did better on their respective class tests.  The final study, a dissertation project, involved teaching a lesson contained in a language arts textbook.  The fourth graders who had been assigned homework on this material performed better on the textbook’s unit test, but did not do any better on a standardized test.  And the third graders who hadn’t done any homework wound up with higher scores on the standardized test.[36]  As in the other three studies, the measure of success basically involved memorizing and regurgitating facts.

It seems safe to say that these latest four studies offer no reason to revise the earlier summary statement that no meaningful evidence exists of an academic advantage for children in elementary school who do homework.[37]  And the news isn’t much better for children in middle school or junior high school.  If the raw correlation between achievement (test scores or grades) and time spent on homework in Cooper’s initial research review is “nearly nonexistent” for grades 3 through 5, it remains extremely low for grades 6 through 9.  The correlation only spikes at or above grade 10.[38]

Such a correlation would be a prerequisite for assuming that homework provides academic benefits, but I want to repeat that it isn’t enough to justify that conclusion.  A large correlation is necessary, in other words, but not sufficient.  Indeed, I believe it would be a mistake to conclude that homework is a meaningful contributor to learning even in high school.  Remember that Cooper and his colleagues found a positive effect only when they looked at how much homework high school students actually did (as opposed to how much the teacher assigned) and only when achievement was measured by the grades given to them by those same teachers.  Also recall that Keith’s earlier positive finding with respect to homework in high school evaporated once he used a more sophisticated statistical technique to analyze the data.

All of the cautions, qualifications, and criticisms in this chapter, for that matter, are relevant to students of all ages.  But it’s worth pointing out separately that absolutely no evidence exists to support the practice of assigning homework to children of elementary-school age – a fact that Cooper himself rather oddly seems to overlook (see chapter 4).  No wonder “many Japanese elementary schools in the late 1990s issued ‘no homework’ policies.”[39]  That development may strike us as surprising – particularly in light of how Japan’s educational system has long been held out as a model, notably by writers trying to justify their support for homework.[40]  But it’s a development that seems entirely rational in light of what the evidence shows right here in the United States.


Additional Research

7.  The results of national and international exams raise further doubts about homework’s role.  The National Assessment of Educational Progress (NAEP) is often called the nation’s report card.  Students who take this test also answer a series of questions about themselves, sometimes including how much time they spend on homework.  For any number of reasons, one might expect to find a reasonably strong association between time spent on homework and test scores.  Yet the most striking result, particularly for elementary students, is precisely the absence of such an association.  Even students who reported having been assigned no homework at all didn’t fare badly on the test.

Consider the results of the 2000 math exam.  Fourth graders who did no homework got roughly the same score as those who did 30 minutes a night.  Remarkably, the scores then declined for those who did 45 minutes, then declined again for those who did an hour or more!  In eighth grade, the scores were higher for those who did between 15 and 45 minutes a night than for those who did no homework, but the results were worse for those who did an hour’s worth, and worse still for those who did more than an hour.  In twelfth grade, the scores were about the same regardless of whether students did only 15 minutes or more than an hour.[41]  Results on the reading test, too, provided no compelling case that homework helped.[42]

International comparisons allow us to look for correlations between homework and test scores within each country and also for correlations across countries.  Let’s begin with the former.  In the 1980s, 13-year-olds in a dozen nations were tested and also queried about how much they studied.  “In some countries more time spent on homework was associated with higher scores; in others, it was not.”[43]  In the 1990s, the Trends in International Mathematics and Science Study (TIMSS) became the most popular way of assessing what was going on around the world, although of course its conclusions can’t necessarily be generalized to other subjects.  Again, the results were not the same in all countries, even when the focus was limited to the final years of high school (where the contribution of homework is thought to be strongest).  Usually it turned out that doing some homework had a stronger relationship with achievement than doing none at all, but doing a little homework was also better than doing a lot.[44]  This is known as a “curvilinear” relationship; on a graph it looks sort of like an upside-down U.

But even that relationship didn’t show up in a separate series of studies involving elementary school students in China, Japan, and two U.S. cities:  “There was no consistent linear or curvilinear relation between the amount of time spent on homework and the child’s level of academic achievement.”  These researchers even checked to see if homework in first grade was related to achievement in fifth grade, the theory being that homework might provide gradual, long-term benefits to younger children.  Again they came up empty handed.[45]

What about correlations across cultures?  Here we find people playing what I’ll later argue is a pointless game in which countries’ education systems are ranked against one another on the basis of their students’ test scores.  Pointless or not, “a common explanation of the poor performance of American children in cross-cultural comparisons of academic achievement is that American children spend little time in study.”[46]  The reasoning, in other words, goes something like this:

Premise 1:  Our students get significantly less homework than their counterparts across the globe.

Premise 2:   Other countries whup the pants off us in international exams.

Conclusion:  Premise 1 explains Premise 2.

Additional conclusion:  If U.S. teachers assigned more homework, our students would perform better.

Every step of this syllogism is either flawed or simply false.  We’ve already seen that Premise 1 is no longer true, if indeed it ever was (see chapter 1).  Premise 2 has been debunked by a number of analysts and for a number of different reasons.[47]  Even if both premises were accurate, however, the conclusions don’t necessarily follow; this is another example of confusing correlation with causation.

But in fact there is now empirical evidence, not just logic, to challenge the conclusions.  Two researchers looked at TIMSS data from both 1994 and 1999 in order to be able to compare practices in 50 countries.  When they published their findings in 2005, they could scarcely conceal their surprise:

Not only did we fail to find any positive relationships, [but] the overall correlations between national average student achievement and national averages in the frequency, total amount, and percentage of teachers who used homework in grading are all negative!  If these data can be extrapolated to other subjects – a research topic that warrants immediate study, in our opinion – then countries that try to improve their standing in the world rankings of student achievement by raising the amount of homework might actually be undermining their own success. . . . More homework may actually undermine national achievement.[48]

In a separate analysis of the 1999 TIMSS results that looked at 27 U.S. states or districts as well as 37 other countries, meanwhile, “there was little relationship between the amount of homework assigned and students’ performance.”[49]  And the overall conclusion was also supported by TIMSS data showing that “Japanese junior high school students performed at the top but did not study as much as their peers in other countries.”[50]

8.  Incidental research raises further doubts about homework.  Reviews of homework studies tend to overlook investigations that are primarily focused on other topics but just happen to look at homework, among several other variables.  Here are two examples:

First, a pair of Harvard scientists queried almost 2,000 students enrolled in college physics courses in order to figure out whether any features of their high school physics courses were now of use to them.  At first they found a very small relationship between the amount of homework that students had had in high school and how well they were currently doing.  Once the researchers controlled for other variables, such as the type of courses kids had taken, that relationship disappeared.  The same researchers then embarked on a similar study of a much larger population of students in college science classes – and found the same thing:  Homework simply didn’t help.[51]

Second, back in the late 1970s, New Jersey educator Ruth Tschudin identified about three hundred “A+ teachers” on the basis of recommendations, awards, or media coverage.  She then set out to compare their classroom practices to those of a matched group of other teachers.  Among her findings:  the exceptional teachers not only tended to give less homework but also were likely to give students more choices about their assignments.

It’s interesting to speculate on why this might be true.  Are better teachers more apt to question the conventional wisdom in general?  More likely to notice that homework isn’t really doing much good?  More responsive to its negative effects on children and families?  More likely to summon the gumption to act on what they’ve noticed?  Or perhaps the researchers who reviewed the TIMSS data put their finger on it when they wrote, “It may be the poorest teachers who assign the most homework [because] effective teachers may cover all the material in class.”[52]  (Imagine that quotation enlarged and posted in a school’s main office.)

This analysis rings true for Steve Phelps, who teaches math at a high school near Cincinnati.  “In all honesty,” he says, “the students are compelled to be in my class for 48 minutes a day.  If I can’t get done in 48 minutes what I need to get done, then I really have no business intruding on their family time.”[53]  But figuring out how to get it done isn’t always easy.  It certainly took time for Phil Lyons, the social studies teacher I mentioned earlier who figured out that homework was making students less interested in learning for its own sake – and who then watched as many of them began to “seek out more knowledge” once he stopped giving them homework.  At the beginning of Lyons’s teaching career, he assigned a lot of homework “as a crutch, to compensate for poor lessons. . . . But as I mastered the material, homework ceased to be necessary.  A no-homework policy is a challenge to me,” he adds.  “I am forced to create lessons that are so good that no further drilling is required when the lessons are completed.”

Lyons has also conducted an informal investigation to gauge the impact of this shift.  He gave less and less homework each year before finally eliminating it completely.  And he reports that

each year my students have performed better on the AP Economics test.  The empirical data from my class combined with studies I’ve read convinced me.  Homework is an obvious burden to students, but assigning, collecting, grading, and recording homework creates a tremendous amount of work for me as well.  I would feel justified encroaching on students’ free time and I’d be willing to do the grading if I saw tangible returns, but with no quantifiable benefit it makes no sense to impose on them or me.[54]

The results observed by a single teacher in an uncontrolled experiment are obviously not conclusive.  Nor is the Harvard physics study.  Nor is Tschudin’s survey of terrific teachers.  But when all these observations are combined with the surprising results of national and international exams, and when these, in turn, are viewed in the context of a research literature that makes a weak, correlational case for homework in high school – and offers absolutely no support for homework in elementary school – it gradually becomes clear that we’ve been sold a bill of goods.

People who never bought it will not be surprised, of course.  “I have a good education and a decent job despite the fact that I didn’t spend half my adolescence doing homework,” said a mother of four children whose concern about excessive homework eventually led to her becoming an activist on the issue.[55]  On the other hand, some will find these results not only unexpected but hard to believe, if only because common sense tells them that homework should help.  But just as a careful look at the research overturns the canard that “studies show homework raises achievement,” so a careful look at popular beliefs about learning will challenge the reasons that lead us to expect we will find unequivocal research support in the first place.  The absence of supporting data actually makes sense in retrospect, as we’ll see in chapter 6 when we examine the idea that homework “reinforces” what was learned in class, along with other declarations that are too readily accepted on faith.

It’s true that we don’t have clear evidence to prove beyond a reasonable doubt that homework doesn’t help students to learn.  Indeed, it’s hard to imagine what that evidence might look like – beyond repeated findings that homework often isn’t even associated with higher achievement.   To borrow a concept from the law, however, the burden of proof here doesn’t rest with critics to demonstrate that homework doesn’t help.  It rests with supporters to show that it does, and specifically to show that its advantages are sufficiently powerful and pervasive to justify taking up children’s (and parents’ and teachers’) time, and to compensate for the distinct disadvantages discussed in the last chapter.  When a principal admits that homework is “taking away some of the years of adolescence and childhood” but then says that requiring it from the earliest grades “give[s] us an edge in standardized testing,” we have to wonder what kind of educator – indeed, what kind of human being – is willing to accept that trade-off even if the latter premise were true.[56]

Most proponents, of course, aren’t saying that all homework is always good in all respects for all kids – just as critics couldn’t defend the proposition that no homework is ever good in any way for any child.  The prevailing view — which, even if not stated explicitly, seems to be the premise lurking behind our willingness to accept the practice of assigning homework to students on a regular basis — might be summarized as “Most homework is probably good for most kids.”  I’ve been arguing, in effect, that even that relatively moderate position is not supported by the evidence.  I’ve been arguing that any gains we might conceivably identify are both minimal and far from universal, limited to certain ages and to certain (dubious) outcome measures.  What’s more, even studies that seem to show an overall benefit don’t prove that more homework – or any homework, for that matter — has such an effect for most students.  Put differently, the research offers no reason to believe that students in high-quality classrooms whose teachers give little or no homework would be at a disadvantage as regards any meaningful kind of learning.

But is there some other benefit, something other than academic learning, that might be cited in homework’s defense?  That will be the subject of the following chapter…



For full citations, please see the reference section of The Homework Myth.

1. Cooper et al., p. 70.

2. This early study by Joseph Mayer Rice is cited in Gill and Schlossman 2004, p. 175.

3. Goldstein.

4. Austin.

5. Paschal et al.; Walberg et al.

6. Barber, p. 56.  Two of the four studies reviewed by Paschal et al. found no benefit to homework at all.  The third found benefits at two of three grade levels, but all of the students in this study who were assigned homework also received parental help.  The last study found that students who were given math puzzles (unrelated to what was being taught in class) did as well as those who got traditional math homework.

7. Jongsma, p. 703.

8. There is reason to question whether this technique is really appropriate for a topic like homework, and thus whether the conclusions drawn from it would be valid.  Meta-analyses may be useful for combining multiple studies of, say, the efficacy of a blood pressure medication, but not necessarily studies dealing with different aspects of complex human behavior.  Mark Lepper (1995), a research psychologist at Stanford University, has argued that “the purely statistical effect sizes used to compare studies in a meta-analysis completely and inappropriately ignore the crucial social context in which the conduct and interpretation of research in psychology takes place.”  The real-world significance of certain studies is lost, he maintains, when they are reduced to a common denominator.  “The use of purely statistical measures of effect size” – overlooking what he calls the “psychological size of effects” – “promotes a[n] illusion of comparability and quantitative precision that is subtly but deeply at odds with the values that define what makes a study or a finding interesting or important.”  This concern would seem to apply in the case of distinctive investigations of homework.  (Quotations from pp. 414, 415, 420.)

9. Cooper 1999a, 2001.   The proportion of variance that can be attributed to homework is derived by squaring the average correlation found in the studies, which Cooper reports as +.19.

10. Cooper et al. 2006.

11. Betts.

12. Hofferth and Sandberg, p. 306.

13. Cooper 1999a, p. 100.  It’s also theoretically possible that the relationship is reciprocal:  Homework contributes to higher achievement, which then, in turn, predisposes those students to spend more time on it.  But correlations between the two leave us unable to disentangle the two effects and determine which is stronger.

14. Cool and Keith.  Interestingly, Herbert Walberg, an avid proponent of homework, discovered that claims of private school superiority over public schools proved similarly groundless once other variables were controlled in a reanalysis of the same “High School and Beyond” data set (Walberg and Shanahan).

15. For example, see Chen and Stevenson; Epstein; Georgiou; Gorges and Elliott.

16. Epstein and Van Voorhis, pp. 183-84.  Also see Walberg et al., pp. 76-77.

17. Muhlenbruck et al.  In Cooper et al. 1998, “there was some evidence that teachers in Grades 2 and 4 reported assigning more homework to classes with lower achievement, but students and parents reported that teachers assigned more homework to higher achieving students, especially when grades were the measure of achievement” (p. 80).

18. Cooper et al. 2006, p. 44.

19. Cooper et al. 2001, pp. 190-91.

20. Chen and Stevenson, p. 558.

21.  “Several surveys have found that students consistently report their homework time to be higher than teachers’ estimates” (Ziegler 1986, p. 21).

22. Ziegler 1992, p. 602.  Cooper (1989a, p. 161), too, describes the quality of homework research as “far from ideal” for a number of reasons, including the relative rarity of random-assignment studies.

23. Dressel, p. 6.

24. For a more detailed discussion about (and review of research regarding) the effects of grades, see Kohn 1999a, 1999b.

25. Cooper 1999a, p. 72.  That difference shrank in the latest batch of studies (Cooper et al. 2006), but still trended in the same direction.

26. Cooper et al. 1998.  The correlation was .17.

27. See Kohn 1999b, 2000, which includes analysis and research to support the claims made in the following paragraphs.

28. Nevertheless, Cooper criticizes studies that use only one of these measures and argues in favor of those, like his own, that make use of both (see Cooper et al. 1978, p. 71).  The problems with tests and grades may be different, but they don’t cancel each other out when the two variables are used at the same time.

29. Cooper 1989a, p. 99.  On the other hand, a study reporting a modest correlation between achievement test scores and the amount of math homework assigned also found that “repetitive exercises” of the type intended to help students practice skills actually “had detrimental effects on learning” (Trautwein et al., p. 41).

30. Cooper 1999a, p. 72; 2001, p. 16.  The studies he reviewed lasted anywhere from two to thirty weeks.

31. Natriello and McDill.  “An additional hour of homework each night results in an increase in English [grade point average] of 0.130” (p. 27).

32. Tymms and Fitz-Gibbon.  Quotation appears on p. 8.  If anything, this summary understates the actual findings.  When individual students’ scores on the English A-level exams were examined, those who worked for more than seven hours a week in a particular subject “tended to get a third of a grade better than students of the same gender and ability who worked less than [two hours] a week, and if students with similar prior achievement are considered, the advantage only amounted to about a fifth of a grade.”  When the researchers compared classes rather than individuals – which is probably the more appropriate unit of analysis for a homework study — the average A-level grades in heavy-homework classes were no different than those in light-homework classes, once other variables were held constant (pp. 7-8).

33. Barber, p. 55.

34. Cooper 1989a, p. 109.  Why this might be true is open to interpretation.  Cooper (2001, p. 20) speculates that it’s because younger children have limited attention spans and poor study skills, but this explanation proceeds from – and seems designed to rescue — the premise that the problem is not with the homework itself.  Rather, it’s the “cognitive limitations” of children that prevent them from taking advantage of the value that’s assumed to inhere in homework.  While it wouldn’t be sufficient to substantiate this account, it would certainly be necessary to show that homework usually is valuable for older students.  If there’s any reason to doubt that claim, then we’d have to revisit some of our more fundamental assumptions about how and why students learn.

35. The unpublished study by C. Bents-Hill et al. is described in Cooper 2001, p. 26.

36. The four, in order, are Finstad; Townsend; Foyle; and Meloy.

37. When Cooper and his colleagues reviewed a new batch of studies in 2006, they once again found that “the mean correlation between time spent on homework and achievement was not significantly different from zero for elementary school students” (Cooper et al. 2006, p. 43).

38. Cooper 1989a, p. 100.  The correlations were .02, .07, and .25, respectively.

39. Baker and Letendre, p. 118.

40. For example, see any number of writings by Herbert Walberg.  Another possible reason that “elementary achievement is high” in Japan:  teachers there “are free from the pressure to teach to standardized tests” (Lewis, p. 201).  Until they get to high school, there are no such tests in Japan.

41. See the table called “Average Mathematics Scores by Students’ Report on Time Spent Daily on Mathematics Homework at Grades 4, 8, and 12: 2000,” available from the National Center for Education Statistics at: homework.asp.  As far as I can tell, no data on how 2004 NAEP math scores varied by homework completion have been published for nine- and thirteen-year-olds.  Seventeen-year-olds were not asked to quantify the number of hours devoted to homework in 2004, but were asked whether they did homework “often,” “sometimes,” or “never” – and here more homework was correlated with higher scores (U.S. Department of Education 2005, p. 63).

42. In 2000, fourth graders who reported doing more than an hour of homework a night got exactly the same score as those whose teachers assigned no homework at all.  Those in the middle, who said they did 30-60 minutes a night, got slightly higher scores. (See nationsreportcard/reading/results/homework.asp).  In 2004, those who weren’t assigned any homework did about as well as those who got either less than one hour or one to two hours; students who were assigned more than two hours a night did worse than any of the other three groups.  For older students, more homework was correlated with higher reading scores (U.S. Department of Education 2005, p. 50).

43. Ziegler 1992, p. 604.

44. Mullis et al. 1998, p. 114.

45. Chen and Stevenson, pp. 556-57.

46. Ibid., p. 551.

47. Even at a first pass, TIMSS results suggest that the U.S. does poorly in relative terms only at the high school level, not with respect to the performance of younger students.  But TIMSS results really don’t support the proposition that our seniors are inferior.  That’s true, first, because, at least on the science test, the scores among most of the countries are actually pretty similar in absolute terms (Gibbs and Fox, p. 87).  Second, the participating countries “had such different patterns of participation and exclusion rates, school and student characteristics, and societal contexts that test score rankings are meaningless as an indicator of the quality of education” (Rotberg, p. 1031).  Specifically, the students taking the test in many of the countries were older, richer, and drawn from a more selective pool than those in the U.S.  Third, when one pair of researchers carefully reviewed half a dozen different international achievement surveys conducted from 1991 to 2001, they found that “U.S. students have generally performed above average in comparisons with students in other industrialized nations” (Boe and Shin; quotation appears on p. 694).  Also see the many publications on this subject by Gerald Bracey.

48. Baker and Letendre, pp. 127-28, 130.  Emphasis in original.

49. Mullis et al. 2001, chap. 6.

50. Tsuneyoshi, p. 375.

51. Sadler and Tai; personal communication with Phil Sadler, August 2005.  The larger study also found that students who took Advanced Placement science courses – and did well on the test – didn’t fare much better in college science courses than those who didn’t take the A.P. classes at all.

52. Baker and Letendre, p. 126.

53. Phelps, personal communication, March 2006.

54. Lyons, personal communication, December 2005.

55. Quoted in Lambert.

56. This New Jersey principal is quoted in Winerip, p. 28.

Copyright © 2006 by Alfie Kohn. Permission must be obtained in order to reprint this chapter in a published work or in order to offer it for sale in any form. Please write to the address indicated on the Contact Us page.
