What I’ve been producing

I’ve been away for a while: here’s a little bit of what I’ve been up to. Last week, I gave a talk at Stanford University, entitled “The Global Landscape of In-Service Teacher Professional Development Programs,” which you can watch below.

I’ve put up a few new blog posts on other blogs, in case you missed them:

 

And a couple of my blog posts have made it into other languages:

 


How to use out-of-school time, the power of pen pals, and new research in South Africa

I have three recent posts over on World Bank blogs. Check them out!

 

Identifying great teachers and communicating with policymakers

I wrote a couple of items this week around the blogosphere:

Looking for a shortcut to identifying great teachers? You may be out of luck. On new evidence about the relationship between teacher performance on tests and student learning.

“The right data at the right time”: How to effectively communicate research to policy makers. A policymaker from Jamaica’s Ministry of Education shares insights on how to communicate your research.

If you want School Report Cards to improve learning, try sharing results on the whole local education market

Over at Let’s Talk Development, I give my take on an interesting new study using school report cards.

Better information to improve service delivery: New evidence

Countries around the world have experimented with “school report cards”: providing parents with information about the quality of their school so that they can demand higher quality service for their children. The results have been mixed. Andrabi, Das, and Khwaja bring a significant contribution to that literature in last month’s American Economic Review with their article, “Report Cards: The Impact of Providing School and Child Test Scores on Educational Markets.”

Here’s the abstract: “We study the impact of providing school report cards with test scores on subsequent test scores, prices, and enrollment in markets with multiple public and private providers. A randomly selected half of our sample villages (markets) received report cards. This increased test scores by 0.11 standard deviations, decreased private school fees by 17 percent, and increased primary enrollment by 4.5 percent. Heterogeneity in the treatment impact by initial school test scores is consistent with canonical models of asymmetric information. Information provision facilitates better comparisons across providers, and improves market efficiency and child welfare through higher test scores, higher enrollment, and lower fees.”

Read my take at the original post!

A few self-links

  1. This morning I posted “What a new preschool study tells us about early child education – and about impact evaluation” over at Development Impact, about an interesting study, “Cognitive science in the field: A preschool intervention durably enhances intuitive but not formal mathematics,” a randomized controlled trial in Delhi, India.
  2. You can also just watch the researchers explain that paper below.
  3. The French version of my post “A Framework for Taking Evidence from One Location to Another,” based on the work of Mary Ann Bates and Rachel Glennerster, is now available: Comment déterminer si un projet avec de bons résultats dans un pays fonctionnera ailleurs ?
  4. The Portuguese version of my post “Are good school principals born or can they be made?,” based on the work of Roland Fryer and others, is now available: Os bons diretores da escola nascem ou podem ser criados?

Even if technology improves literacy, is it worth the cost?

Ben Piper reports on insightful work that he and co-authors have done comparing various education technology interventions in Kenya in terms of both effectiveness (do they improve reading ability?) and cost-effectiveness (what’s the cost per unit of reading gain?).

I recommend his full post (or the research paper it’s based on). Here are a couple of highlights:

When compared to traditional literacy programs, the more intensive ICT interventions did not produce large enough gains in learning outcomes to justify the cost. This is not to say that each of the ICT interventions did not produce improvements in students’ reading ability…. [But] the cost-effectiveness of all of these programs might still be significantly lower than a clear investment in high quality literacy programs…. In addition to monetary cost, an opportunity cost existed…. Many of the teachers, tutors, and students lacked exposure to technology and the time and energy spent on learning how to use the technology reduced the amount of time for instructional improvement activities.

When costs are considered, there are non-ICT interventions that could have larger impacts on learning outcomes with reduced costs; one such option could include assigning the best teachers to the first grade when children are learning how to read, rather than to the end of primary school as many schools do.

If I understand the specification right, economists will take issue with the standard errors: randomization is at the district level, and I don’t believe the authors cluster the standard errors at that level.

But I don’t think that will change the fundamental message here: Even if there are some gains from education technology, we have to ask when they will be most likely to be worth the cost.