
Technology is inherently neutral. Whether it is used for good or evil depends upon whose hands it lands in—and what they do with it. At least, so goes the argument that most of us have come to accept as a framework for assessing—and potentially regulating—the role of artificial intelligence.

We have all read about AI’s benefits as celebrated by techno-optimists, and about the risks warned of by techno-dystopians. Those dangers include the technology’s ability to spread misinformation and conspiracy theories, including easily created deepfakes.

As the CEO of the Center of Science and Industry—one of America’s leading science museums and educational institutions—I am a close observer of social media’s ability to feed conspiracy theories through misinformation. Examples abound. Posts on social media still claim that vaccines cause autism, even though the theory is based on a study that was retracted 14 years ago. Nonetheless, this debunked “science” feeds into the social media misinformation machine, and extends to the alleged dangers of the COVID vaccines.

For all these reasons I was thrilled to read the results of a recent, brilliantly designed study conducted by researchers from MIT and Cornell. It demonstrated that generative AI, in this case GPT-4 Turbo, is capable of encouraging people to reexamine and change firmly held conspiracy-related beliefs.

It worked like this: First, more than 2,000 Americans “articulated, in their own words, a conspiracy theory in which they believe, along with the evidence they think supports this theory.”

After that, they were asked to participate in a three-round conversation with the chatbot, which was prompted to respond accurately to the evidence the subjects had cited to justify their beliefs.

The results were deeply encouraging for those of us committed to creating a world safe for the truth. In fact, given the conventional wisdom in behavioral psychology that changing people’s minds is nearly impossible, the results are nothing short of astounding.

The study found that conversing with the chatbot reduced participants’ belief in their chosen conspiracy theory by about 20%. This is a dramatically large effect, given how deeply held the views were, and it lasted for at least two months.

Even the researchers were surprised. Gordon Pennycook, an associate professor at Cornell, noted, “The research upended the authors’ preconceived notions about how receptive people were to solid evidence debunking not only conspiracy theories, but also other beliefs that are not rooted in good-quality information.”

It is hard to move minds because belief in conspiracies makes people feel good. It satisfies unmet needs for security and recognition—whether those beliefs are related to science or politics. We support a candidate or a theory because of how it makes us feel. 

Thus, when we argue with another human, it is a battle of feelings versus feelings, which is why those debates are often unproductive. But a calm and reasonable conversation with a chatbot, which marshals evidence without emotion, demonstrated the power of perceived objectivity.

Conversation with an AI creates a healthy dissociation from another human being. I suspect that separation is what enabled the subjects to rethink their feelings. It gave them emotional space. They did not become defensive because their feelings were not hurt, nor their intelligence demeaned. That was all washed away, so the subjects were able to actually “hear” the data and let it in, triggering reconsideration.

Interestingly, the non-feeling chatbot allowed them to feel heard. And guess what? This is exactly how the best scientific educators work their magic. They meet the audience where they are. They do not shame or demean anyone for holding inaccurate beliefs or not understanding the basic science. Instead, they listen humbly, work to unpack what they are hearing, and then—with sensitivity—respond and share information in a nonauthoritarian exchange.

The best educators also do not flaunt their expertise; they “own” it with confidence and communicate with authority but not arrogance. Surprisingly, the same proved true for AI; in the study it provided citations and backup, but never elevated its own stature. There was no intellectual bullying.

Another powerful attribute of chatbot learning is that it replicates what happens when someone does the research themselves. The conversation made subjects more inclined to agree with the conclusions because they “got there” on their own. In behavioral psychology, that is known as the IKEA effect: something has more value when you participate in its creation.

I am also excited by the study because—again, like the best educators—the chatbot was accurate. Of the claims it made, “99.2% were found to be true, while 0.8% were deemed misleading. None were found to be completely false.”

Like yours, I am sure, my brain is spinning with the possibilities this work opens up. For example, the study’s authors imagine that “social media could be hooked up to LLMs to post corrective responses to people sharing conspiracy theories, or we could buy Google search ads against conspiracy-related search terms.”

Science denialism has been with us for millennia. Internet technology married to social media has made it even more dangerous. Wouldn’t it be a delicious irony if, thanks to AI technology, misinformation finally meets its match?
