During situations of crisis, such as the one we are going through, we rely on a tiny subset of our population to make timely and important decisions. The decisions that they make affect us now and will continue to affect us for a long time to come. I recently wrote a book on decision-making during these strange times, which is aimed at all of us, but as I wrote it, I began to realize that the highest impact decisions come from this tiny subset. These decisions include how to implement a lockdown, whether to support struggling citizens (and who constitutes “struggling”), whether to increase funding for the health sector (and how much), and so on.
At a time of crisis, we look to our leaders to take decisive action on our behalf. Those leaders are our elected politicians, their policy advisors, experts, academics and so on. This is all as it should be. Whilst we live in a democratic society, we all accept that at certain times the executive has to have the power to act on our behalf without us having a direct say. I mean, imagine holding a referendum on whether to implement a lockdown upon the outbreak of a virus. I think we can all agree that this would be a bad idea.
That’s all well and good if this tiny group of people makes good decisions. But how can we be sure that these key decision-makers are making the right decisions, at the right times, and in our best interests? It’s a tough ask, to be sure. There are plenty of voices and opinions arguing in favour of, or against, any particular course of action. We want these decision-makers, to whom we have delegated power, to listen to all of these voices and then make decisions on our behalf in an objective and unbiased way. But do they?
Alongside a team of researchers, I explored this in a recent paper. We looked into whether the people who undertake and implement policy decisions on our behalf are likely to be objective when evaluating data, or whether they rely on their pre-existing beliefs. Put simply, do people do what they feel is right, or what the data tells them is right?
To do so, we took a sample of individuals who routinely implement or advise on matters of policy, and divided them into two groups. To the first group, we gave a table containing data on the effectiveness of a skin cream on a rash, and we asked a simple question: Was the skin cream effective? To the second group, we gave a table with exactly the same data but with the labels changed, so that it now described the effectiveness of minimum wage laws on poverty. We asked this group: Were the minimum wage laws effective?
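To see what the task actually involves, here is a small sketch with invented numbers; these are not the figures from our paper, just an illustration of the kind of two-by-two table participants faced. Answering correctly means comparing the rate of improvement in each condition, not the raw counts, and that is precisely the step intuition likes to skip.

```python
# Hypothetical 2x2 outcome table in the spirit of the task described above.
# The counts are invented for illustration; they are not the study's data.

# condition -> (number improved, number worsened)
table = {
    "used skin cream":        (223, 75),
    "did not use skin cream": (107, 21),
}

for condition, (improved, worsened) in table.items():
    rate = improved / (improved + worsened)
    print(f"{condition}: {improved} improved, {worsened} worsened "
          f"-> improvement rate {rate:.0%}")

# The intuitive answer compares raw counts (223 vs 107) and concludes the
# cream worked. The correct answer compares rates: roughly 75% improved with
# the cream versus roughly 84% without it, so in this invented example the
# cream was NOT effective. Relabel the same numbers as "cities that raised
# the minimum wage" and "cities that did not", and the arithmetic is
# identical -- only the beliefs attached to the labels change.
```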
Why would this matter? Our study built on a previous study exploring confirmation bias in a general sample of the US population. It turns out that the policy professionals in our sample are fairly agnostic about the effectiveness of a particular skin cream on a rash, but they have strong opinions about the effectiveness of minimum wage laws: some believe they are ineffective, some believe they are very effective, and others believe they actively hurt the economy. Since the data was exactly the same, the accuracy of responses should be exactly the same across the two groups, right?
The results were stunning.
We found that, when asked to evaluate data they had strong beliefs about (minimum wage laws), policy professionals were much more likely to get the answer wrong than when they were asked to evaluate data they did not have strong beliefs about (skin cream). The implication is that if, for instance, one of our policymakers strongly believes that coronavirus is not a big deal, they will search for information that confirms that belief. In other words, they have a bias towards information that confirms what they already believe. You may have heard of confirmation bias. Policymakers are as susceptible to it as everybody else, except that their errors can have far graver consequences.
If this gives you chills, don’t worry, you are not alone. In the same paper, we also asked whether a simple intervention could help reduce the extent of the bias. We took another sample (this time of just economists working in policy positions) and asked them to answer the question on the effectiveness of minimum wage laws, just as before. A few hours later, we came back, randomly paired them up, and asked them to answer the question again. The second time around they had to deliberate with a peer, and hence were held to account by that peer.
We found that, when it came to confirmation bias, the pairing up worked a treat: accuracy increased substantially. The results point to accountability as one of the main ways of reducing bias-driven errors, and now, perhaps more than ever, it is critical to hold our decision-makers accountable.
This is no panacea, by the way. Procedures that force deliberation and consideration take time, and time may not be readily available given the urgency of this crisis. That is, after all, why we transfer power to politicians and policymakers in the first place. Nevertheless, taking, and then sticking to, an incorrect decision can have far graver consequences than losing a little time in order to increase the probability of making the correct one.
How can we increase the probability of finding a vaccine? How can we divert resources to tackle the pandemic better? How can we protect livelihoods? These are questions that need to be asked every day. In times of emergency, delegating decisions to experts is important. They have better information than we do, and their judgements can be extraordinarily important in helping us through the situation. They have been studying problems like these for far longer than you and I, so it makes sense to delegate.
But they are human, and they make mistakes. That means the onus is on the public to do what we can to hold them accountable.
In my book, I discuss how accountability and trust are important for your own decisions in crisis situations. But similar rules apply when we hold policymakers accountable. We need to trust our policymakers. We need to believe that they have our best interests at heart. But it is our responsibility to verify that they are doing the right thing. Constantly.
With any luck, working together in this way, we will all get through this.
Banner Photo by Nicolas Dmítrichev on Unsplash