Different disciplines hold themselves to different standards when it comes to how strong your empirical evidence has to be before it constitutes something you can claim as a "result." By result I mean something you can start telling everyone else about, publish, etc. One such distinction that intrigues me is the one between economics and public health. Relative to economics, public health has very low standards. This is particularly the case when it comes to dealing with what public health people call "confounding factors," which is part of what economists would call "endogeneity." Put simply, this is the idea that if you observe some correlation between A and B, there may be a C you didn't account for that is driving your results. For example, I remember reading a study one time that linked cigar smoking to cirrhosis of the liver. Someone pointed out that a much more likely explanation was that alcoholism causes cirrhosis of the liver with high probability, and that cigar smokers are marginally more likely to be alcoholics than the rest of the population- not that cigar smoking actually causes cirrhosis of the liver. But this finding is good enough for a result that you can publish in public health. And it's not just the journals- public health results that seem very likely to have been driven by spurious correlation frequently make it into the newspaper: I read recently that coffee might prolong my life. You'd never get away with that in economics. In order to publish anything, you need to go through pretty intense statistical machinations to deal with even the merest hint of confounding factors.
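The confounding story above can be made concrete with a toy simulation. In this sketch, every name and probability is invented purely for illustration: a confounder C (alcoholism) raises the probability of both A (cigar smoking) and B (cirrhosis), while A has no causal effect on B at all- yet a naive comparison still shows a higher cirrhosis rate among cigar smokers.

```python
import random

random.seed(0)

# Toy data-generating process: the confounder C drives both A and B.
# A (cigar smoking) has NO causal effect on B (cirrhosis); all of the
# probabilities here are made up for illustration.
n = 100_000
rows = []
for _ in range(n):
    c = random.random() < 0.10                    # alcoholic?
    a = random.random() < (0.30 if c else 0.15)   # alcoholics smoke cigars a bit more
    b = random.random() < (0.40 if c else 0.02)   # cirrhosis depends only on C
    rows.append((c, a, b))

def cirrhosis_rate(subset):
    return sum(b for _, _, b in subset) / len(subset)

smokers = [r for r in rows if r[1]]
nonsmokers = [r for r in rows if not r[1]]

# Naive comparison: smokers show a markedly higher cirrhosis rate,
# even though smoking never causes cirrhosis in this process.
print(f"smokers:     {cirrhosis_rate(smokers):.3f}")
print(f"non-smokers: {cirrhosis_rate(nonsmokers):.3f}")

# Conditioning on the confounder makes the "effect" vanish: within each
# stratum of C, smokers and non-smokers have about the same cirrhosis rate.
for c_val in (True, False):
    stratum = [r for r in rows if r[0] == c_val]
    s = cirrhosis_rate([r for r in stratum if r[1]])
    ns = cirrhosis_rate([r for r in stratum if not r[1]])
    print(f"alcoholic={c_val}: smokers {s:.3f} vs non-smokers {ns:.3f}")
```

Stratifying on C is the simplest version of what economists spend all that econometric machinery on: trying to rule out the unobserved C's before claiming that A causes B.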
This is not a knock on public health, as there is no independent reason why a higher standard is better. You can never account for all of the potential confounding factors (or other problems); no empirical research finding is 100% ironclad. So the question is simply what constitutes "good enough," in the sense that your result is worth being considered part of the literature and included in further discussions of the topic. Too low a standard and the debate will be polluted by evidence that isn't really evidence; too high a standard and you will exclude important and relevant research from the discussion.
What I find strange about this is that there is no reason why public health should have a lower standard than economics. In fact, if anything it should be the other way around. The data in public health studies are often as good as data can get; you can hold those data to a higher standard. Not to mention, it seems to me that before we start telling people what's going to kill them and what's going to prolong their lives, we ought to be really sure we know what we're talking about. A spurious correlation between coffee drinking and longevity is not something we should be putting in the newspaper.
Conversely, economics is by nature a less exact science. We know a lot less about what we're talking about than public health people do, so we need to be more open to the possibility that we're wrong in any given case. The data we work with are also much worse (particularly in development economics). It would seem to follow that a greater degree of statistical dubiousness should be acceptable, but it's not. For example, suppose an economist came up with a finding along the lines of, "A looks like it leads to B, but my data aren't very good and there are plenty of reasons to doubt the relationship. But if A did in fact lead to B, that would have important implications for Theory X." That would never make it into a good economics journal, let alone the newspaper. Yet it seems to me something along those lines could be very relevant when it comes to contributing to economists' understanding of the world.
So what explains the discrepancy? The only explanation I can think of is political economy- i.e., what's in the interests of the practitioners of the discipline? I know less about public health, but it seems that public health studies tend to have lots of money and big funders behind them. The funders want results, and they want those results publicized. A lower statistical standard gives rise to more tangible results that you can claim in order to show the funders that their money is being put to good use. The practitioners of the discipline benefit on the whole from a lower standard because happier funders mean more grant money in the long run.
In economics, by contrast, grants are not what makes the world go round; different incentives are at work. Over time, the discipline has become more and more technical and mathematically sophisticated, which functions as a barrier to entry- a higher degree of mathematical sophistication means fewer people are willing or able to get a PhD in the subject. This barrier to entry has been very effective: unlike in almost all other academic disciplines, the supply of PhD economists does not far exceed the demand for them. The unemployment rate among PhD economists is close to zero, and economics professors get paid in the neighborhood of 30-40% more than people in other social science disciplines. Maintaining higher standards of statistical rigor reinforces this barrier to entry- producing a result that economists find worthwhile requires you to put your years of difficult and/or painful econometrics training to use. Hence, economists as a whole benefit from maintaining the higher standard because it helps keep the labor market tight.
Can anyone think of another explanation?