How Steve Levitt convinced me to give my son extra screen time right now

Alternative title – Quick take: “The Behavioralist Goes to School,” by Levitt et al.

This evening my middle-school son was negotiating for additional screen time, and I proposed that he receive that additional time based on incremental improvement in his grades. But over dinner we had just discussed Levitt et al.’s recent paper testing loss aversion with student incentives, so he opted to start with the extra screen time and then lose it if his grades failed to improve. (His grades are pretty good anyway; just sayin’.) And I couldn’t not go with the evidence, so this had better work.

Back to the paper: Do students respond to incentives if the reward comes right away? Do they respond to non-monetary incentives? Does offering those incentives once crowd out intrinsic motivation?

If those are your questions, then Levitt et al. have some answers in “The Behavioralist Goes to School: Leveraging Behavioral Economics to Improve Educational Performance.” Here’s the abstract:

We explore the power of behavioral economics to influence the level of effort exerted by students in a low stakes testing environment. We find a substantial impact on test scores from incentives when the rewards are delivered immediately. There is suggestive evidence that rewards framed as losses outperform those framed as gains. Nonfinancial incentives can be considerably more cost-effective than financial incentives for younger students, but are less effective with older students. All motivating power of incentives vanishes when rewards are handed out with a delay. Our results suggest that the current set of incentives may lead to underinvestment.

Here’s some detail on the motivation from the authors: “One of the biggest puzzles in education is why investment among many students is so low given the high returns. One explanation is that the current set of long-run returns does not sufficiently motivate some students to invest effort in school.”

And here’s a little more detail on the study and results.

They focus on three features of past incentive programs:

  1. There is a time gap between when students exert effort and when they receive the reward.
  2. Rewards are offered as gains (not losses).
  3. Rewards are monetary.

After some proof-of-concept testing, they ultimately run their field experiments with 5,000+ students in Chicago public schools.

On point 1 (the time gap), they use a test where the possibility of rewards is announced to students immediately before the test (so it’s a test of immediate effort, not preparation) and the rewards are given immediately after, compared with a treatment where the reward is delivered a month later. Receipt of the reward is determined by improvement relative to a baseline test several months before. Depending on the setting, the test is a low-stakes diagnostic reading or math assessment.

“We find that large incentives delivered immediately, whether financial [$20 cash] or nonfinancial [trophy worth $3], have a significant impact on test performance of about a tenth of a standard deviation. In stark contrast, rewards delivered with a one month delay have no impact, nor do small financial rewards [$10 cash].” “As far as we know, ours is the first study to demonstrate that student responsiveness to incentives is sensitive to the size of the reward.”
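For intuition on what “about a tenth of a standard deviation” means, here’s a minimal sketch with made-up numbers (not the study’s data or the authors’ code): the standardized effect is just the treatment–control gap in mean scores divided by the control group’s standard deviation.

```python
# Illustrative only: hypothetical test scores, not data from Levitt et al.
import statistics

def standardized_effect(control, treatment):
    """Difference in means divided by the control-group standard deviation."""
    gap = statistics.mean(treatment) - statistics.mean(control)
    return gap / statistics.stdev(control)

# Hypothetical scores: control mean 50, SD roughly 10.
control = [35, 42, 48, 50, 52, 58, 65]
# A 1-point bump on a test with a ~10-point SD is a ~0.1 SD effect.
treatment = [s + 1 for s in control]

print(round(standardized_effect(control, treatment), 2))  # prints 0.1
```

So the headline result amounts to shifting the average student’s score by roughly one point on a test where scores typically spread over ten.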

To test point 2 (gains versus losses), they vary whether students receive the reward before the test and then have to return it immediately after testing or they receive the reward after the test. “In the pooled estimates, the coefficients on losses are roughly twice the magnitude of the analogous ‘gain’ treatments, but are not statistically different from those treatments.”

On point 3 (monetary rewards), they test non-monetary rewards – a trophy worth $3 – against the large and small monetary rewards. “In the pooled results, the point estimates for non-pecuniary rewards (framed either as a gain or a loss) are somewhat smaller than those for the $20 treatment and much larger than those from the $10 treatment.”

Gender: “Our findings with respect to gender are consistent with a wealth of prior research that shows boys tend to be more sensitive to short-term incentives than girls, which may be due in part to gender differences in time preferences.”

Age: “In general, we see similar results across young and old students, with the exception of nonfinancial incentives framed as losses, where we find large positive effects on young students and small negative impacts on older students.”

Do these incentives affect subsequent test performance? The small financial rewards (which had no impact in the short run) lead to negative impacts on tests a few months later. The other incentives have no statistically significant impact and show a mix of positive and negative point estimates.

My short takeaway: Nuance, nuance, nuance. Student motivation is probably largely overlooked, and offering incentives can have a positive effect. But if you’re doing student incentives, test them out before committing at scale. Although Levitt et al. don’t find pervasive evidence of problems after the incentives are removed, they do find a little, and a couple of other studies (here and here) have as well.

Bonus reading – a few other papers on financial incentives for students

