AI Framework Aims to Disrupt Echo Chambers and Curb Misinformation Spread on Social Media Platforms

With false narratives spreading faster than ever, researchers are using AI to fight fire with fire by exposing and dismantling digital echo chambers.

Research: Echoes amplified: a study of AI-generated content and digital echo chambers. Image Credit: Tero Vesalainen / Shutterstock


Falling for clickbait is easy these days, especially for those who mainly get their news through social media. Have you ever noticed your feed littered with articles that look alike?

Thanks to artificial intelligence (AI) technologies, the spread of mass-produced contextually relevant articles and comment-laden social media posts has become so commonplace that it can appear as though it's coming from different information sources. The resulting "echo chamber" effect could reinforce a person's existing perspectives, regardless of whether that information is accurate.

A new study involving Binghamton University, State University of New York researchers offers a promising solution: developing an AI system to map out interactions between content and algorithms on digital platforms to reduce the spread of potentially harmful or misleading content. That content can be amplified through engagement-focused algorithms, the study noted, and enable conspiracy theories to spread, especially if the content is emotionally charged or polarizing.

Researchers believe their proposed AI framework would counter this by allowing users and social media platform operators, such as Meta or X, to pinpoint sources of potential misinformation and remove them if necessary. More importantly, it would make it easier for their platforms to promote diverse information sources to audiences.

"The online/social media environment provides ideal conditions for that echo chamber effect to be triggered because of how quickly we share information," said study co-author Thi Tran, assistant professor of management information systems at the Binghamton University School of Management. "People create AI, and just as people can be good or bad, the same applies to AI. Because of that, if you see something online, whether it is something generated by humans or AI, you need to question whether it's correct or credible."

Researchers noted that digital platforms facilitate echo chamber dynamics by optimizing content delivery based on engagement metrics and behavioral patterns. Close interactions with like-minded people on social media can amplify a person's tendency to cherry-pick which messages they react to, so that diverse perspectives get filtered out.
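The paper does not publish code, but the dynamic described here, where ranking purely by engagement lets a few like-minded sources dominate a feed, can be illustrated with a minimal sketch. The post fields, scores, and the `diversity_rerank` function below are illustrative assumptions, not the authors' actual framework; they simply contrast engagement-only ranking with a re-ranking pass that caps how many top slots any single source can occupy.

```python
from collections import Counter

def engagement_rank(posts):
    # Rank purely by raw engagement (likes + shares + comments),
    # mimicking the engagement-optimized delivery the study describes.
    return sorted(
        posts,
        key=lambda p: p["likes"] + p["shares"] + p["comments"],
        reverse=True,
    )

def diversity_rerank(posts, max_per_source=1):
    # Hypothetical counter-measure: keep the engagement ordering,
    # but let each source fill at most `max_per_source` of the top
    # slots before its remaining posts are pushed down the feed.
    ranked = engagement_rank(posts)
    seen = Counter()
    promoted, demoted = [], []
    for post in ranked:
        if seen[post["source"]] < max_per_source:
            promoted.append(post)
            seen[post["source"]] += 1
        else:
            demoted.append(post)
    return promoted + demoted

if __name__ == "__main__":
    feed = [
        {"source": "OutletA", "likes": 90, "shares": 10, "comments": 5},
        {"source": "OutletA", "likes": 80, "shares": 5, "comments": 2},
        {"source": "OutletB", "likes": 50, "shares": 1, "comments": 0},
    ]
    # Engagement-only ranking puts both OutletA posts first;
    # the diversity pass surfaces OutletB into the second slot.
    print([p["source"] for p in diversity_rerank(feed)])
```

In this toy feed, engagement-only ranking yields `OutletA, OutletA, OutletB`, while the re-ranked feed becomes `OutletA, OutletB, OutletA`, a crude stand-in for the framework's stated goal of promoting diverse information sources.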

The study tested this theory with a survey of 50 randomly selected college students, each reacting to five misinformation claims about the COVID-19 vaccine:

  • Vaccines are used to implant barcodes in the population.
  • COVID-19 variants are becoming less lethal.
  • COVID-19 vaccines pose greater risks to children than the virus itself.
  • Natural remedies and alternative medicines can replace COVID-19 vaccines.
  • The COVID-19 vaccine was developed as a tool for global population control.

Here is how the survey's participants responded:

  • 90% stated they would still get the COVID-19 vaccine after hearing the misinformation claims.
  • 70% indicated they would share the information on social media, more so with friends or family than with strangers.
  • 60% identified the claims as false information.
  • 70% said they would need to do more research to verify whether the claims were false.

According to the study, these responses highlighted a critical aspect of the dynamics of misinformation: many people could recognize false claims but also felt compelled to seek more evidence before dismissing them outright.

"We all want information transparency, but the more you are exposed to certain information, the more you're going to believe it's true, even if it's inaccurate," Tran said. "With this research, instead of asking a fact-checker to verify each piece of content, we can use the same generative AI that the 'bad guys' are using to spread misinformation on a larger scale to reinforce the type of content people can rely on."

The research paper, "Echoes Amplified: A Study of AI-Generated Content and Digital Echo Chambers," was presented at a conference organized by the Society of Photo-Optical Instrumentation Engineers (SPIE). It was also authored by Binghamton's Seden Akcinaroglu, a professor of political science; Nihal Poredi, a PhD student in the Thomas J. Watson College of Engineering and Applied Science; and Ashley Kearney from Virginia State University.


