I’ve been away for a while: here’s a little bit of what I’ve been up to. Last week, I gave a talk at Stanford University, entitled “The Global Landscape of In-Service Teacher Professional Development Programs,” which you can watch below.
I’ve put up a few new blog posts on other blogs, in case you missed them:
And a couple of my blog posts have made it into other languages:
I have three recent posts over on World Bank blogs. Check them out!
Over at Let’s Talk Development, I write about an experiment that showed an inspirational movie to Ugandan high school students and led many of them to pass their math exams:
How going to the movies helped Ugandan high schoolers pass their tests
I wrote a couple of items this week around the blogosphere:
Looking for a shortcut to identifying great teachers? You may be out of luck. On new evidence about the relationship between teacher performance on tests and student learning.
“The right data at the right time”: How to effectively communicate research to policymakers. A policymaker from Jamaica’s Ministry of Education shares insights on how to communicate your research.
Over at Let’s Talk Development, I give my take on an interesting new study using school report cards.
Better information to improve service delivery: New evidence
Countries around the world have experimented with “school report cards”: providing parents with information about the quality of their school so that they can demand higher quality service for their children. The results have been mixed. Andrabi, Das, and Khwaja bring a significant contribution to that literature in last month’s American Economic Review with their article, “Report Cards: The Impact of Providing School and Child Test Scores on Educational Markets.”
Here’s the abstract: “We study the impact of providing school report cards with test scores on subsequent test scores, prices, and enrollment in markets with multiple public and private providers. A randomly selected half of our sample villages (markets) received report cards. This increased test scores by 0.11 standard deviations, decreased private school fees by 17 percent, and increased primary enrollment by 4.5 percent. Heterogeneity in the treatment impact by initial school test scores is consistent with canonical models of asymmetric information. Information provision facilitates better comparisons across providers, and improves market efficiency and child welfare through higher test scores, higher enrollment, and lower fees.”
Read my take at the original post!
Ben Piper reports on insightful work that he and co-authors have done comparing various education technology interventions in Kenya in terms of both effectiveness (do they improve reading ability?) and cost-effectiveness (what’s the cost per reading gain?).
I recommend his full post (or the research paper it’s based on). Here are a couple of highlights:
When compared to traditional literacy programs, the more intensive ICT interventions did not produce large enough gains in learning outcomes to justify the cost. This is not to say that each of the ICT interventions did not produce improvements in students’ reading ability…. [But] the cost-effectiveness of all of these programs might still be significantly lower than a clear investment in high quality literacy programs…. In addition to monetary cost, an opportunity cost existed…. Many of the teachers, tutors, and students lacked exposure to technology and the time and energy spent on learning how to use the technology reduced the amount of time for instructional improvement activities.
When costs are considered, there are non-ICT interventions that could have larger impacts on learning outcomes with reduced costs; one such option could include assigning the best teachers to the first grade when children are learning how to read, rather than to the end of primary school as many schools do.
Economists may take issue with the standard errors, if I understand the specification correctly: randomization is at the district level, and I don’t believe the authors cluster the standard errors at that level.
But I don’t think that will change the fundamental message here: Even if there are some gains from education technology, we have to ask when they will be most likely to be worth the cost.
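To see why the clustering point matters, here’s a minimal sketch in Python using entirely synthetic data (none of the numbers or variable names come from the paper): when treatment is assigned at the district level and outcomes share a district-level shock, naive standard errors overstate precision relative to errors clustered by district.

```python
# Illustrative only: synthetic data, assumed variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_districts, pupils_per = 20, 50

# Treatment assigned at the district level, as in a district-randomized trial.
district = np.repeat(np.arange(n_districts), pupils_per)
treated = np.repeat(rng.integers(0, 2, size=n_districts), pupils_per)

# A shared district-level shock plus individual noise.
district_shock = np.repeat(rng.normal(0, 0.5, size=n_districts), pupils_per)
score = 0.2 * treated + district_shock + rng.normal(0, 1, size=district.size)

df = pd.DataFrame({"score": score, "treated": treated, "district": district})

# Naive OLS treats every pupil as an independent observation.
naive = smf.ols("score ~ treated", data=df).fit()

# Clustering by district accounts for the shared within-district shocks.
clustered = smf.ols("score ~ treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["district"]}
)

print("naive SE:    ", naive.bse["treated"])
print("clustered SE:", clustered.bse["treated"])
```

With shared district shocks, the clustered standard error comes out substantially larger than the naive one, which is exactly why unclustered inference in a district-randomized design can make estimates look more precise than they are.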