Model shows why debunking election misinformation often doesn’t work | MIT News

When an election result is disputed, people who are skeptical of the outcome may be swayed by authority figures who come down on one side or the other. Those authorities may be independent monitors, political figures, or news organizations. However, these “debunking” efforts do not always have the desired effect, and in some cases they can lead people to cling more tightly to their original position.

Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help determine whether debunking efforts persuade people to change their beliefs about the legitimacy of an election. Their results suggest that while debunking fails in most cases, it can be successful under the right conditions.

For example, the model showed that successful debunking is more likely when people are less certain of their original beliefs and when they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority endorses a result that runs counter to its perceived bias: for example, Fox News’ call that Joseph R. Biden had won Arizona in the 2020 U.S. presidential election.

“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions in general, that is, as something someone did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and senior author of the study. “We used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”

The results could have implications as the United States prepares for the Nov. 5 presidential election, because they help reveal the conditions most likely to lead people to accept the election outcome.

MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California, Berkeley, is also an author of the study.

Modeling motivation

In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work on “theory of mind,” that is, how people think about other people’s thoughts and motivations.

As part of her doctoral research, Radkani developed a computational model of the cognitive processes that occur when people see others being punished by an authority. People do not all interpret punitive actions the same way, depending on their prior beliefs about the action and the authority: some may see the authority as legitimately punishing a wrongful act, while others may see the authority as overreaching and imposing an unjust punishment.

After attending an MIT workshop on polarization in societies last year, Saxe and Radkani hit on the idea of applying the model to how people respond to an authority that tries to influence their political beliefs. They enlisted Landau-Wells, who earned her doctorate in political science before working as a postdoc in Saxe’s lab, to join the effort, and she suggested using the model to examine beliefs about the legitimacy of an election result.

The computational model that Radkani created is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person takes for their own reasons. People who observe the authority’s statement then make their own interpretation of why the authority said what it did. Based on that interpretation, people may or may not change their beliefs about the election outcome.
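One way to picture this mechanism is with a toy Bayesian observer. The sketch below is not the authors’ code, and every prior and likelihood in it is an illustrative assumption; it simply shows how an observer who treats a debunking statement as a motivated action can jointly update beliefs about the election outcome and the authority’s motives.

```python
# Minimal sketch (illustrative, not the study's model): a Bayesian observer
# jointly updates beliefs about the election outcome and the authority's
# motive after hearing the authority declare the election "legitimate".

def likelihood(statement, outcome, motive):
    """Assumed P(authority says `statement` | true outcome, authority motive)."""
    if motive == "accuracy":                      # truth-driven authority
        return 0.9 if statement == outcome else 0.1
    return 0.8 if statement == "legitimate" else 0.2  # biased toward "legitimate"

def update(belief, statement):
    """One Bayes-rule update of the joint belief over (outcome, motive)."""
    posterior = {h: p * likelihood(statement, *h) for h, p in belief.items()}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# Assumed prior for a skeptical observer: leans "stolen", suspects bias.
belief = {("legitimate", "accuracy"): 0.10, ("legitimate", "biased"): 0.15,
          ("stolen", "accuracy"): 0.15, ("stolen", "biased"): 0.60}

for _ in range(5):                                # five debunking statements
    belief = update(belief, "legitimate")

p_legit = sum(p for (outcome, _), p in belief.items() if outcome == "legitimate")
print(f"P(election legitimate) after debunking: {p_legit:.2f}")  # ~0.35
```

In this toy setup, five repetitions of the statement move the skeptical observer’s belief that the election was legitimate only from 0.25 to about 0.35: because the observer partly attributes the statements to bias, the debunking is discounted rather than taken at face value.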

Importantly, the model does not assume that either group’s beliefs are false, or that either group is acting irrationally.

“The only assumption we made is that there are two groups in society that differ in their views on an issue: one group believes the election was stolen, the other group doesn’t,” says Radkani. “Otherwise these groups are similar. They share their beliefs about the authority: what the authority’s possible motives are, and how strongly the authority is driven by each of those motives.”

The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a group’s belief about the legitimacy of an election result.

Each time they ran the model, the researchers varied how certain each group was of its initial beliefs, as well as the groups’ perceptions of the authority’s motivations. In some cases, the groups believed the authority was motivated by a desire for accuracy; in others, they did not. The researchers also varied the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups held those perceptions.
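That kind of sweep can be mimicked with the same toy model from the sketch above. The grid below is hypothetical (the paper’s actual scenarios and parameters differ); it varies a group’s initial certainty that the election was stolen and its perceived probability that the authority is biased.

```python
# Hypothetical scenario sweep (illustrative parameters, not the study's grid):
# vary a group's initial certainty and its perceived chance that the authority
# is biased, then apply five Bayesian updates to "the election was legitimate".

def posterior_legit(p_stolen, p_biased, n_statements=5):
    """Final P(legitimate) for a group with the given (assumed) priors."""
    # Joint prior over (outcome, motive), assuming the two are independent.
    belief = {("legitimate", "accuracy"): (1 - p_stolen) * (1 - p_biased),
              ("legitimate", "biased"):   (1 - p_stolen) * p_biased,
              ("stolen", "accuracy"):     p_stolen * (1 - p_biased),
              ("stolen", "biased"):       p_stolen * p_biased}
    for _ in range(n_statements):
        # Assumed likelihood of hearing "legitimate": an accuracy-driven
        # authority mostly tracks the truth; a biased one says it regardless.
        lik = {h: (0.9 if h[0] == "legitimate" else 0.1) if h[1] == "accuracy"
               else 0.8 for h in belief}
        belief = {h: belief[h] * lik[h] for h in belief}
        z = sum(belief.values())
        belief = {h: p / z for h, p in belief.items()}
    return sum(p for (outcome, _), p in belief.items() if outcome == "legitimate")

for p_stolen in (0.6, 0.8, 0.95):          # initial certainty the vote was stolen
    for p_biased in (0.2, 0.5, 0.8):       # perceived chance the authority is biased
        print(f"certainty={p_stolen:.2f} bias={p_biased:.1f} "
              f"-> P(legit)={posterior_legit(p_stolen, p_biased):.2f}")
```

Under these illustrative numbers, a mildly skeptical group that sees the authority as probably unbiased converges on accepting the result (P ≈ 0.85), while a highly certain group that suspects bias barely moves (P ≈ 0.07), echoing the pattern the researchers report below.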

Building consensus

In each scenario, the researchers used the model to predict how each group would respond to a series of five statements from an authority trying to convince them that the election was legitimate. They found that in most of the scenarios they examined, beliefs remained polarized, and in some cases became even more polarized. This polarization could also extend to new topics unrelated to the original context of the election.

However, under certain circumstances the debunking did succeed, and beliefs converged on an accepted outcome. This was more likely when people were initially less certain of their original beliefs.

“When people are very, very certain, they become hard to move. So in many ways, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons of this paper is that we’re in a space where the model says you can affect people’s beliefs and move them toward true things.”

Another factor that can lead to belief convergence is when people believe the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that runs counter to its perceived bias, for instance, when Republican governors state that elections in their states were fair even though the Democratic candidate won.

As the 2024 presidential election approaches, grassroots efforts have been underway to train impartial election observers who can vouch for whether an election was legitimate. These types of organizations could be well positioned to persuade people who may have doubts about the legitimacy of the election, the researchers say.

“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want, and we want them to succeed at being seen as independent. We want them to succeed at being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” says Landau-Wells.

The research was funded in part by the Patrick J. McGovern Foundation and the Guggenheim Foundation.
