AI Chatbots May Standardize Human Thinking And Threaten Cognitive Diversity, Researchers Warn

A new analysis warns that as billions of people rely on the same AI chatbots for writing and reasoning, human expression and problem-solving may become increasingly uniform, unless developers build far greater diversity into the next generation of AI systems.

Opinion: The homogenizing effect of large language models on human expression and thought. Image Credit: iQoncept / Shutterstock


AI chatbots are standardizing how people speak, write, and think. If this homogenization continues unchecked, it risks reducing humanity's collective wisdom and ability to adapt, computer scientists and psychologists argue in an opinion paper published March 11 in the Cell Press journal Trends in Cognitive Sciences. They say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets, not only to help preserve human cognitive diversity, but also to improve chatbots' reasoning abilities.

Researchers highlight risks of homogenized language and reasoning

"Individuals differ in how they write, reason, and view the world," says first author and computer scientist Zhivar Sourati of the University of Southern California. "When these differences are mediated by the same LLMs, their distinct linguistic style, perspective, and reasoning strategies become homogenized, producing standardized expressions and thoughts across users."

Cognitive diversity supports creativity and problem-solving

Within groups and societies, cognitive diversity bolsters creativity and problem-solving, the researchers say. Yet that diversity is shrinking worldwide as billions of people turn to the same handful of AI chatbots for an increasing number of tasks. When people use chatbots to help them polish their writing, for example, their writing loses its stylistic individuality, and they feel less creative ownership over what they produce.

"The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," says Sourati.

Training data bias may narrow perspectives reflected in AI outputs

The team cites multiple studies showing that LLM outputs are less varied than human-generated writing and that they tend to reflect the language, values, and reasoning styles of Western, educated, industrialized, rich, and democratic societies.

"Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," says Sourati.

Studies show group creativity may decline when relying on LLMs

Though studies show that individuals often generate more ideas, in greater detail, when they use LLMs, groups produce fewer and less creative ideas with LLM assistance than when they pool their members' own thinking, the researchers note.

"Even if people are not the first-hand users of LLMs, LLMs are still going to affect them indirectly," says Sourati. "If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas."

LLM interactions may subtly influence opinions and reasoning styles

Beyond language, studies have shown that after interacting with biased LLMs, people's opinions become more similar to those of the LLM they used. LLMs also favor linear modes of reasoning, such as "chain-of-thought reasoning," which requires models to show step-by-step reasoning. This emphasis reduces the use of intuitive or abstract reasoning, which the researchers say is sometimes more efficient than linear reasoning. They also note that LLMs can alter people's expectations, subtly shifting the direction of their work.

"Rather than actively steering generation, users often defer to model-suggested continuations, selecting options that seem 'good enough' instead of crafting their own, which gradually shifts agency from the user to the model," says Sourati.

Researchers call for more diverse training data and interaction approaches

The researchers say that AI developers should intentionally incorporate diversity in language, perspectives, and reasoning into their models. They emphasize that this diversity should be grounded in the diversity that exists within humans globally, rather than introducing random variation.

"If LLMs had more diverse ways of approaching ideas and problems, they would better support the collective intelligence and problem-solving capabilities of our societies," said Sourati. "We need to diversify the AI models themselves while also adjusting how we interact with them, especially given their widespread use across tasks and contexts, to protect the cognitive diversity and ideation potential of future generations."

Funding

This research was supported by funding from the Air Force Office of Scientific Research.

