I'm a huge fan of Bayesian learning. I don't try to figure out the right answer; I try to figure out what all the possible answers are and assign a probability to each one. When I get new information, I update my probabilities based on how credible the information is and how strong my prior beliefs were. For example, a while ago I did some reading about how laws that allow people to carry concealed handguns affect violent crime. Beforehand, I would have said there was about a 5% chance that concealed carry laws reduce violent crime, a 60% chance that they increase it, and a 35% chance that they make no difference.
The stuff I was reading was at Econ Journal Watch, which was neat because it allows the authors of studies that contradict one another to go back and forth; I looked at some other things as well. In any case, I found one (gated, unfortunately) paper that presented what it claimed was strong evidence that concealed carry laws reduce crime, and then a bunch of other papers that poked holes in it. Much of the hole-poking was legitimate, but it was the sort that suggests the coefficients were smaller or less significant, rather than the sort that suggests fatal flaws. It would be very difficult to convince me that concealed carry laws really do reduce crime, but what I read was enough for me to update my beliefs substantially. I'd now say there's about a 10% chance that concealed carry laws reduce crime, a 15% chance that they increase it, and a 75% chance that they make no difference. My priors updated and my Bayesian learning for the day complete, I happily resumed my regularly scheduled activities.
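The mechanics of that update can be sketched numerically. The priors and posteriors below are the numbers from this post; the likelihoods (how probable the evidence I read would be under each hypothesis) are invented purely to illustrate how Bayes' rule moves the probabilities:

```python
# A minimal sketch of the Bayesian update described above.
# Priors and posteriors are the numbers from the post; the likelihoods
# are hypothetical values chosen only to illustrate the arithmetic.
priors = {"reduce": 0.05, "increase": 0.60, "no_difference": 0.35}

# Hypothetical: how likely the evidence I read would be under each hypothesis.
likelihoods = {"reduce": 0.80, "increase": 0.10, "no_difference": 6 / 7}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

print(posteriors)  # roughly 10% reduce, 15% increase, 75% no difference
```

Note that no hypothesis ever gets set to 0 or 1; the evidence just shifts weight among all three, which is the whole point of the approach.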
But some people would approach this kind of thing in a very different way, from what we could call a "Working Hypothesis" perspective. Someone who started with the same underlying views I held at the beginning would instead take as their starting point, "I believe concealed carry laws increase crime," and then act as though that hypothesis were true. When presented with new evidence, they would either a) judge the evidence as not strong enough to overturn their working hypothesis and reject it, or b) judge the evidence as strong enough to overturn their working hypothesis, in which case they would switch their belief to "I believe concealed carry laws reduce crime" and proceed through life as though that hypothesis were true.
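The contrast with the Bayesian picture is that this procedure is all-or-nothing. A sketch of it, where the evidence representation and the switching threshold are both hypothetical inventions for illustration:

```python
# Hypothetical sketch of the "Working Hypothesis" procedure described above.
# evidence_strength and the threshold are invented for illustration; the
# point is that belief is binary: keep the hypothesis or switch wholesale.
def working_hypothesis_update(current: str, opposing: str,
                              evidence_strength: float,
                              threshold: float = 0.9) -> str:
    if evidence_strength > threshold:
        return opposing   # (b) overturned: adopt the opposing belief outright
    return current        # (a) not strong enough: reject the evidence

# Weak evidence is discarded entirely; only strong evidence flips the belief.
belief = working_hypothesis_update("H", "not H", 0.5)   # stays "H"
belief = working_hypothesis_update("H", "not H", 0.95)  # becomes "not H"
```

Unlike the Bayesian version, nothing accumulates: ten pieces of moderately persuasive evidence, each below the threshold, leave the belief exactly where it started.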
It's fairly obvious that in many instances a Bayesian Learning perspective would lead to better decisions than a Working Hypothesis perspective. But my question is: is there any circumstance under which a Working Hypothesis perspective might actually be preferable? Or is it just plain worse?