Friday, March 12, 2010

RCTs and transparency

A hot topic in development economics right now is the use of randomized controlled trials in impact evaluation. The basic idea is to assess the impact of development projects by using a methodology similar to clinical trials of pharmaceutical drugs. So, for instance, say your project involves building 1,000 schools. The process would be to choose 2,000 suitable locations for the schools, and then randomly divide them into treatment and control groups, so that you can compare the outcomes in the locations that got schools with those that did not.
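For readers who like to see the mechanics, here is a minimal sketch in Python of what that comparison boils down to: random assignment followed by a simple difference in mean outcomes. The data, the 2,000 sites, and the 5-point effect of a school are all made up purely for illustration.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: 2,000 candidate school sites, randomly split
# into a treatment group (gets a school) and a control group (does not).
sites = list(range(2000))
random.shuffle(sites)
treatment_sites = sites[:1000]
control_sites = sites[1000:]

def simulated_outcome(treated):
    # Made-up outcome (e.g., a test score): baseline around 50,
    # plus an assumed 5-point boost if the site got a school.
    baseline = random.gauss(50, 10)
    return baseline + (5 if treated else 0)

treated_scores = [simulated_outcome(True) for _ in treatment_sites]
control_scores = [simulated_outcome(False) for _ in control_sites]

# Because assignment was random, the estimated impact is just the
# difference in average outcomes between the two groups.
impact = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated impact: {impact:.2f} points")
```

The point of the sketch is that the estimate requires nothing beyond comparing two group averages, which is exactly the transparency advantage discussed below.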

While pretty much all development economists agree that this is a good way to do impact evaluation, opinions vary dramatically on just how good it is: some go so far as to argue that RCTs are the only way we really know anything about development, while others see them as very limited and appropriate only in certain cases. I won't rehash the whole debate over the pros and cons, but a good overview is here.

One argument that I haven't heard raised before in favor of RCTs relates to transparency. When I used to work in development, I went to a presentation of some non-RCT research results at the World Bank with a relatively high-up practitioner colleague who was bright, but not quantitatively minded. The discussion at the seminar inevitably revolved around the technical details, with some people questioning the validity of the presenter's instrumental variables and the interpretation of the results, the presenter defending them, and so on.

After the seminar, my colleague explained that what we had seen was the reason why he didn't pay much attention to development economics research. He understood that the results can depend on econometric assumptions and choice of techniques in important ways, but without being able even to begin to understand how, he felt it was safer to ignore the research than to let it influence any decisions he made.

I think he was exactly right, and a major advantage of RCTs is that they avoid this problem. My colleague, and others like him, are in no position to have an opinion about, say, the validity of the instruments used in a particular study or the debate over how IV results should be interpreted. You might argue that this means he should just listen to an economist about this stuff. But he knows that different economists are going to tell him different things, and without some basis for wrapping his head around the underlying issues, it's a pretty risky move to just blindly follow their advice. By contrast, my colleague could certainly grasp a typical RCT analysis: how to interpret the results, how to evaluate external validity concerns, and so on.

Obviously, research results that are easy for policymakers and donors to understand are going to have more influence over policy than those that aren't. But perhaps less obviously, the spread of RCTs could actually expand the influence of quantitative analysis in a more indirect way. People like my colleague don't pay much attention to development economics research, and with good reason. But if more of the research used a comprehensible methodology like RCTs, I think people like my colleague might start paying a lot more attention.
