
Late last year, in the days before the Slovakian parliamentary elections, two viral audio clips threatened to derail the campaign of a pro-Western, liberal party leader named Michal Šimečka. The first was a clip of Šimečka announcing he wanted to double the price of beer, which, in a nation known for its love of lagers and pilsners, is not exactly a popular policy position.

In the second clip, Šimečka can be heard telling a journalist about his intention to commit fraud and rig the election. Talk about career suicide, especially for someone known as a champion of liberal democracy.

There was, however, just one issue with these audio clips: They were completely fake.

The International Press Institute has called this episode in Slovakia the first time that AI deepfakes — fake audio clips, images, or videos generated by artificial intelligence — have played a prominent role in a national election. While it’s unclear whether these bogus audio clips were decisive in Slovakia’s electoral contest, the fact is Šimečka’s party lost the election, and a pro-Kremlin populist now leads Slovakia.

In January, a report from the World Economic Forum found that over 1,400 security experts consider misinformation and disinformation (misinformation created with the intention to mislead) the biggest global risk in the next two years — more dangerous than war, extreme weather events, inflation, and everything else that’s scary. There are a bevy of new books and a constant stream of articles that wrestle with this issue. Now even economists are working to figure out how to fight misinformation.

In a new study, “Toward an Understanding of the Economics of Misinformation: Evidence from a Demand Side Field Experiment on Critical Thinking,” economists John A. List, Lina M. Ramírez, Julia Seither, Jaime Unda, and Beatriz Vallejo conduct a real-world experiment to see whether simple, low-cost nudges can help consumers reject misinformation. (Side note: List is a groundbreaking empirical economist at the University of Chicago, and he’s a longtime friend of the show and this newsletter.)

While most studies have focused on the supply side of misinformation — social media platforms, nefarious suppliers of lies and hoaxes, and so on — these authors say much less attention has been paid to the demand side: increasing our capacity, as individuals, to identify and think critically about the bogus information that we may encounter in our daily lives.

A Real-Life Experiment To Fight Misinformation

The economists conducted their field experiment in the run-up to the 2022 presidential election in Colombia. Like the United States, Colombia is grappling with political polarization. Within a context of extreme tribalism, the authors suggest, truth becomes more disposable and the demand for misinformation rises. People become willing to believe and share anything in their quest for their political tribe to win.

To figure out effective ways to lower the demand for misinformation, the economists recruited over 2,000 Colombians for an online experiment. The participants were randomly assigned to one of four groups.

One group was shown a video demonstrating “how automatic thinking and misperceptions can affect our everyday lives.” The video depicts an interaction between two people from politically antagonistic social groups who, before meeting, express negative stereotypes about each other’s group. Over the course of the video, the two overcome their differences and ultimately express regret over unthinkingly using stereotypes to dehumanize one another. The video ends by encouraging viewers to question their own biases by “slowing down” and thinking more critically.

Another group completed “a personality test that shows them their cognitive traits and how this makes them prone to behavioral biases.” The basic idea is that they see their biases in action and become more self-aware and critical of them, thereby decreasing their demand for misinformation.

A third group both watched the video and took the personality test.

Finally, there was a control group, which neither watched the video nor took the personality test.

To gauge whether these nudges made participants more critical of misinformation, each group was shown a series of headlines, some completely fake and some real. Some of these headlines leaned left, others leaned right, and some were politically neutral. The participants were asked to judge whether each headline was fake. In addition, participants were shown two untrue tweets, one political and one not, and were asked whether the tweets were truthful and whether they would report either one to social media moderators as misinformation.
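For readers who think in code, here is a minimal sketch of the design’s logic: random assignment into four arms, followed by a comparison of outcome rates across arms. Everything here (the group labels, the rates, even the name of the outcome variable) is an illustrative assumption, not the paper’s actual code or data.

```python
import random

random.seed(42)  # reproducible illustration

N = 2000  # roughly the study's sample size; the exact figure is an assumption
ARMS = ["control", "video", "test", "video+test"]  # the four experimental groups

# Randomly assign each participant to one of the four arms,
# mirroring the study's random distribution of subjects.
participants = [{"id": i, "arm": random.choice(ARMS)} for i in range(N)]

# Simulate the headline-rating outcome at arm-specific rates chosen
# purely for illustration; the real rates come from the study's data.
FAKE_NEWS_BELIEF_RATE = {
    "control": 0.40,      # hypothetical baseline
    "video": 0.27,        # roughly 30% below control, echoing the headline result
    "test": 0.39,         # little to no effect, per the paper
    "video+test": 0.27,
}

for p in participants:
    p["rated_fake_reliable"] = random.random() < FAKE_NEWS_BELIEF_RATE[p["arm"]]

# Tabulate the share of each arm that judged fake headlines reliable.
for arm in ARMS:
    group = [p for p in participants if p["arm"] == arm]
    share = sum(p["rated_fake_reliable"] for p in group) / len(group)
    print(f"{arm:>10}: {share:.1%} rated fake news reliable (n={len(group)})")
```

The power of this kind of design comes from the randomization: because assignment is random, differences in the arm-level shares can be attributed to the treatments rather than to preexisting differences between the groups.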

What They Found

The economists find that the simple intervention of showing a short video of people from politically antagonistic backgrounds getting along inspires viewers to be more skeptical of and less susceptible to misinformation. They find that participants who watch the video are over 30 percent less likely to “consider fake news reliable.” At the same time, the video does little to encourage viewers to report fake tweets as misinformation.

Meanwhile, the researchers find that the personality test, which forces participants to confront their own biases, has little or no effect on their propensity to believe or reject fake news. It turns out being called out on our lizard brain tribalism and other biases doesn’t necessarily improve our thinking.

In a concerning twist, the economists found that participants who both took the test and watched the video became so skeptical that they were about 31 percent less likely to view true headlines as reliable. In other words, they became so distrustful that even the truth became suspect. As has become increasingly clear, this is a danger in the new world of deepfakes: not only do they make people believe untrue things, they may also leave people so disoriented that they don’t believe true things.
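One note on reading those percentages: “31 percent less likely” is a relative change, not a percentage-point drop. A quick sketch of the arithmetic, using made-up baseline rates rather than the paper’s actual numbers:

```python
def relative_change(p_treated: float, p_control: float) -> float:
    """Change in a rate, expressed relative to the control group's rate."""
    return (p_treated - p_control) / p_control

# Hypothetical rates chosen only to illustrate the arithmetic: if 58%
# of the control group rated true headlines reliable, a -31% relative
# effect would put the video+test group at about 0.58 * 0.69, i.e. ~40%.
print(f"{relative_change(0.40, 0.58):+.1%}")  # -31.0%
```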

As for why the video is effective in helping to fight misinformation, the researchers suggest it’s because it encourages people to stop dehumanizing their political opponents, to think more critically, and to be less willing to accept bogus narratives even when those narratives bolster their political beliefs or goals. Often — in a sort of kumbaya way — centrist political leaders encourage us to recognize our commonalities as fellow countrymen and to work together across partisan lines. It turns out that may also help us sharpen our thinking skills and improve our ability to recognize and reject misinformation.

Critical Thinking In The Age Of AI

Of course, this study was conducted back in 2022, when misinformation was, for the most part, pretty low-tech. Misinformation may now be getting turbocharged by the rapid proliferation and advancement of artificial intelligence.

List and his colleagues are far from the first scholars to suggest that helping us become more critical thinkers is an effective way to combat misinformation. University of Cambridge psychologist Sander van der Linden has done a lot of work in the realm of what’s known as “psychological inoculation”: getting people to recognize how and why we’re susceptible to misinformation so that we’re less likely to believe it when we encounter it. He’s the author of a new book called Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity. Drawing an analogy to how vaccinations work, Van der Linden advocates exposing people to misinformation and showing how it’s false as a way to help them spot and reject it in the wild. He calls it “prebunking” (as in, debunking misinformation before people encounter it).

Of course, especially with the advent of AI deepfakes, misinformation can’t be combated on the demand side alone. Social media platforms, AI companies, and the government will all likely have to play an important role. There’s clearly a long way to go in overcoming this problem, but we have recently seen some progress. For example, OpenAI recently began “watermarking” the AI-generated images its software produces to help people spot pictures that aren’t real. And the federal government recently encouraged four companies to create new technologies to help people distinguish between authentic human speech and AI deepfakes.
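To give a feel for the idea (and only the idea): one simple form of “watermarking” is attaching provenance metadata to an image file. The sketch below uses Python’s Pillow library to stamp a made-up provenance tag into a PNG and read it back. This is a toy illustration of the concept, not OpenAI’s method; production schemes such as C2PA content credentials cryptographically sign this kind of provenance data so it can’t simply be edited out.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a placeholder image standing in for an AI-generated picture.
img = Image.new("RGB", (256, 256), color="gray")

# Attach a provenance tag as PNG text metadata. The key and value
# here are invented for illustration; real content credentials are
# cryptographically signed rather than stored as a bare text chunk.
meta = PngInfo()
meta.add_text("provenance", "generated-by: example-image-model")
img.save("generated.png", pnginfo=meta)

# A viewer or platform could then check the tag before trusting the image.
reloaded = Image.open("generated.png")
print(reloaded.text.get("provenance", "no provenance tag found"))
```

The obvious weakness of the toy version (anyone can delete or rewrite a plain text chunk) is exactly why real provenance schemes rely on digital signatures rather than bare tags.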

This new world where the truth is harder to believe may be pretty scary. But, as this new study suggests, nudges and incentives to get us to slow our thinking, think more critically, and be less tribal could be an important part of the solution.
