I’ve posted here before about why measuring belief in conspiracy theories can be tricky. Recently I was invited to visit University of Cambridge’s Conspiracy and Democracy project and the issue of measuring belief came up again, particularly the question of what it means when somebody indicates on a survey that they “believe” (or don’t “believe”) a conspiracist claim. I wrote a blog post for the Conspiracy and Democracy project about the idea, which I’m reposting in full below. You can also watch the full public talk I gave at Cambridge at the bottom of this post.
Psychologists love to measure things, and psychologists who study conspiracy theories are no exception. To understand where conspiracy theories come from, we need to be able to measure the extent to which people believe them. But measuring things is often trickier than it first appears.
At first glance, it seems pretty straightforward. Pick a few theories and ask some people how strongly they believe the claims to be true or false. But the trouble is that different teams of researchers often ask about different theories, or ask about the same theories in different ways, so their scales might not be directly comparable. A scale item referring to the London 7/7 bombings, say, might not be ideal for non-British samples. And conspiracy theories go in and out of fashion, potentially leaving a scale that no longer works very well. An item from a 1994 study carried out in the States, for example, asks whether “The Japanese are deliberately conspiring to destroy the American economy.” (At the time, almost half of the sample, 46%, said the theory was either definitely or probably true.)
In an effort to overcome these problems, colleagues at Goldsmiths and I created a measure of generic conspiracism. Our scale doesn’t refer to any specific conspiracist claim (like the allegation that the 9/11 attacks were an inside job), but instead asks about the generic assumptions that underlie the theories (like the idea that governments routinely harm and deceive their own citizens).
This is an improvement, but we’re still left with another problem, and this one is trickier to solve. We’re asking people how much they believe (or disbelieve) conspiracist claims, but what exactly does it mean when someone says they believe something like that? Does it mean they literally believe it to be true? Or might they believe it in a metaphorical sense? Might they not believe it, but tell us they do because they want to appear funny or ironic or unconventional? Might they be making a political statement? Or might they not be concerned about the truth at all? Measures of conspiracism might mean different things to different people.
This issue was pointed out by Conspiracy and Democracy project-member Alfred Moore in his response to my public talk for the project, and I agree wholeheartedly that it is an important issue for future psychological research to deal with.
It is also an issue that highlights the need for an interdisciplinary approach to conspiracy theories. History and political science, two of the major approaches of the Conspiracy and Democracy project, can be especially valuable here.
A historical approach can give us a fine-grained understanding of what people actually believe, at least on the scale of individuals. If a historian comes across a letter written by a well-known conspiracist in which they admit that they don’t actually believe a word of their own claims, that’s a good indication that the person’s publicly stated views do not match their privately held beliefs.
Political scientists, like psychologists, are usually more interested in broad trends than in specific individuals, and they often rely on survey data. In an effort to make sure people’s responses reflect their actual beliefs, some political scientists have found a way to make people put their money where their mouth is. People’s answers to factual political questions, such as the size of the federal deficit, are often biased by their partisan affiliation, but offering people cash for accurate answers can significantly reduce that bias.
Of course, while monetary incentives might make people more accurate when answering questions that have a known answer, conspiracy theories don’t always have an objectively correct answer. The challenge will be figuring out whether a similar honesty incentive can be put to use in the context of conspiracy theories.