How I used behavioral science to run a marathon

I recently took a few days off between jobs, and I thought, “Hey, it would be fun to run a marathon while I have some time on my hands, just to see if I can!” I haven’t been training for a marathon, but I have been running, and I’ve run long distances in the past.

On the first day of vacation, I jogged from my house to a nearby lake that is about five miles around, figuring I’d do laps until I got to my 26.2 miles. But just after I passed 13 miles, I was out of energy and walked home with only a half-marathon to show for it.

On the last day of vacation, I decided to give it one more try. Now, one of the principles that I’ve learned from behavioral science is the value of commitment mechanisms, whether it’s a savings account that restricts access once you make a deposit (which increased savings in the Philippines), committing in advance to a financial loss if you don’t quit smoking (which decreased smoking in the Philippines), or letting farmers pre-commit to purchasing fertilizer (which boosted fertilizer use in Kenya).

So I found an 18-mile trail near my house, parked my car at one end, and ran 13.5 miles in one direction. At that point, I had few alternatives to running the 13.5 miles back to my car. It’s true, I could have run the remaining 4.5 miles to the other end of the trail, but it would have been a pain to get back to my car. I also could have walked, but that just would have meant hours of walking in the cold with a dying phone and few supplies. (This is what Bryan, Karlan, and Nelson call a “soft commitment,” where the consequences are principally psychological rather than economic.) So I jogged back. Slowly, but jogging all the way. I made it back to my car just as my phone told me I’d clocked 27 miles.


What we learn from two new studies of patient satisfaction surveys

Over at Development Impact, I blog about two recent publications of mine on how to better measure the patient experience in Nigeria.

Pitfalls of Patient Satisfaction Surveys and How to Avoid Them

A child has a fever. Her father rushes to his community’s clinic, his daughter in his arms. He waits. A nurse asks him questions and examines his child. She gives him advice and perhaps a prescription to get filled at a pharmacy. He leaves.

How do we measure the quality of care that this father and his daughter received? There are many ingredients: Was the clinic open? Was a nurse present? Was the patient attended to swiftly? Did the nurse know what she was talking about? Did she have access to needed equipment and supplies?

Both health systems and researchers have made efforts to measure the quality of each of these ingredients, with a range of tools. Interviewers pose hypothetical situations to doctors and nurses to test their knowledge. Inspectors examine the cleanliness and organization of the facility, or they make surprise visits to measure health worker attendance. Actors posing as patients test both the knowledge and the effort of health workers.

Read more…

How do researchers estimate regressions with patient satisfaction as the outcome? A brief review of practice

Recently, Anna Welander Tärneberg and I were doing research with patient satisfaction as the outcome, and we checked how other researchers had estimated these equations in the past. Here is what we found, as documented in the appendix of our recently published paper in the journal Health Economics.


People use a lot of different methods, and many authors use multiple methods. But there is a rich history of using Ordinary Least Squares regressions to estimate impacts on patient satisfaction. In our paper, we used OLS but verified all the results with Probit and Logit regressions. To add to this list, Dunsch et al. (including me) published a new paper last week on patient satisfaction in Nigeria, also using OLS as the main estimation method.