Following up his article on mathiness, Paul Romer argues in a blog post that:
1. Science has its own set of norms.
2. These norms are in danger from people with different norms, such as those of politics.
3. We need to exclude people with the norms of politics from scientific debate.
Points 1 and 2 are right, though “norms of politics” is a poor phrase for argumentative bad faith: political debate at its best is far more enlightening than that, and ‘anti-politics’ populism ought not to be indulged. But science does depend on norms, and those norms must always be defended – particularly when a lot of money is being thrown at a discipline, creating incentives for shoddy work.
From a psychologically informed perspective, point 3 seems very risky. Humans are prone to many self-serving biases, which lead us to discount evidence against our views. If we truly cannot assume that reasonable scientists may differ, then we are already in the bad equilibrium. Until we are sure of that, it is better to try hard to consider the other side’s point of view, and, as Oliver Cromwell once begged of some argumentative theological scholars, “to think it possible you may be mistaken.”
Professor Romer’s targets in the mathiness article are people with whom he has substantive disagreements. (He fairly admits that this may be bias on his part.) His blog post also caricatures Milton Friedman’s famous article on scientific methodology, which has certainly been criticized before, but which is still worth reading and thinking hard about. I am not persuaded that these are good targets for exclusion and shunning.
Scientific norms, like any norms, do not usually fall apart because bad guys sneak in and undermine the honest people. The process is more insidious. We are all subject to the pressure to publish and the desire for fame and status; and most standards involve judgment, and grey areas. (Example: if we run many regressions and report only some of them, we risk biasing our p-values towards significance. But this does not mean we should always report every single regression we ran on some data.) These grey areas create “moral wiggle room” for us to weaken our own standards. We should all take care to avoid that, but conversely, very few of us can afford to be smug about our standards, because the pressures we face are so universal.
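The selective-reporting problem in the parenthetical example is easy to see in a small simulation. Here is a minimal sketch (my own illustration, with made-up parameters, not anything from Romer's post): we regress a pure-noise outcome on twenty pure-noise predictors, many times over, and count how often at least one predictor comes out “significant” at the 5% level – i.e., how often a researcher who reports only the significant regression would find something to report.

```python
# Illustration only: selective reporting of null results inflates significance.
# All effects are truly zero; the only question is how often noise looks real.
import math
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_predictors, n_studies = 100, 20, 2000

def two_sided_p(r, n):
    """Approximate two-sided p-value for a Pearson correlation r with n
    observations (normal approximation to the t statistic; fine for n=100)."""
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    return math.erfc(abs(t) / math.sqrt(2))

studies_with_a_hit = 0
for _ in range(n_studies):
    y = rng.standard_normal(n_obs)                    # noise outcome
    X = rng.standard_normal((n_obs, n_predictors))    # noise predictors
    # Run one simple regression (correlation test) per predictor.
    ps = [two_sided_p(np.corrcoef(X[:, j], y)[0, 1], n_obs)
          for j in range(n_predictors)]
    if min(ps) < 0.05:   # "report only the significant regression"
        studies_with_a_hit += 1

frac = studies_with_a_hit / n_studies
print(f"Studies with at least one 'significant' predictor: {frac:.0%}")
```

With twenty independent null tests, roughly 1 − 0.95²⁰ ≈ 64% of such studies will find something to report, even though every true effect is zero – which is why reporting only the regressions that “worked” biases p-values, and why the grey area is genuinely grey: nobody wants twenty regression tables in every paper either.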
Frauds and cheats should certainly be excluded from science. But “mathiness” seems to be more of a grey area. All theoretical papers use simplification; all of them, I suspect, have some degree of imprecision in the map from real-world concepts to parts of the formal theory. (And imprecise theories are not always bad, but that is another story.) If we shun people on this basis, we risk preferentially shunning our intellectual opponents. This itself can turn science into a dialogue of the deaf, of communities who only cite each other’s papers – not because they are “cults” but because it is easier for all of us to stay in our intellectual comfort zones and receive the praise of those with whom we agree (in my experience, every scientific community believes six impossible things before breakfast).
The best place to start is by ruthlessly excluding mathiness and other dubious practices from our own work. Computer programmers have a maxim for this, Postel’s law: “Be conservative in what you do, be liberal in what you accept from others.”