Evaluating User Perceptions of Generative AI Advice on Challenges

In an article published in the Nature Portfolio journal Communications Psychology, researchers explored generative artificial intelligence (AI), exemplified by tools such as ChatGPT (Chat Generative Pre-trained Transformer). The study, comprising four preregistered experiments with 3,308 US participants, examined how people perceive AI-generated advice on societal and personal challenges and how awareness of AI authorship shaped their evaluations and acceptance.

Study: Evaluating User Perceptions of Generative AI Advice on Challenges. Image credit: KT Stock photos/Shutterstock

Background

Integrating AI into daily life creates opportunities to address societal and personal challenges through intelligent suggestions. While AI's potential to offer recommendations on complex issues is recognized, user acceptance remains the crucial bottleneck. Previous research has highlighted how difficult it is to distinguish AI- from human-generated content, and algorithm aversion has been observed in various contexts. This study contributed by systematically investigating how individuals evaluate and accept advice from generative AI across both societal and personal challenges, and how awareness of AI authorship shapes these evaluations.

The present study addressed gaps in previous research by examining the interplay between AI-generated advice, user awareness, and the identity relevance of the context. By assessing evaluations of the author and the content, as well as downstream consequences such as the preference for AI- versus human-generated advice, the research aimed to deepen understanding of how generative AI is adopted for real-world challenges.

Methods

This research comprised four experimental studies with 3,308 US participants, exploring perceptions of generative AI advice on societal and personal challenges. Ethical approval was obtained, and all participants provided informed consent. The studies used between-participants designs and manipulated factors such as author identity, author transparency, and challenge context.

Study 1 focused on societal challenges, manipulating author identity (AI vs. human), author transparency, and contexts (fake news, migration, global warming, pandemic preparedness, and future workforce).

Studies 2a and 2b centered on personal challenges, testing whether AI-generated responses were devalued when the author's identity was transparent. In Study 2a, participants self-selected their challenge, while Study 2b assigned challenges randomly.

Study 3 investigated how transparency about the authorship of previously received advice influenced participants' subsequent choice between human and AI advisors across three societal challenges.

Participants: Recruited through Prolific, participants received financial remuneration, and the final samples varied across studies (Study 1: N = 1,003; Study 2a: N = 501; Study 2b: N = 800; Study 3: N = 1,004).

Measures: Main outcomes included evaluations of author competence and content. Studies 2a and 2b assessed behavioral intentions, and Study 3 measured participant choice of advisor.

Generative AI Content: ChatGPT (GPT-3.5) was used to generate the AI content from predefined prompts about societal and personal challenges; a minimal generation sketch follows this list.

Statistics and Reproducibility: Hypotheses and analyses were preregistered, and the studies combined confirmatory and exploratory analyses. Null findings were subjected to equivalence tests.
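To make the generation step concrete, the snippet below is a minimal sketch of producing advice text programmatically, assuming the current OpenAI Python client (openai >= 1.0). The prompt, temperature, and model string are illustrative assumptions, not the study's actual materials.

```python
# Minimal sketch of programmatic advice generation, assuming the
# current OpenAI Python client. The prompt is illustrative only;
# the study's exact prompts are documented in its materials.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Suggest a practical strategy that societies could adopt "
    "to reduce the spread of fake news."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # same model family as ChatGPT (GPT-3.5)
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,        # assumed setting, not reported here
)

print(response.choices[0].message.content)
```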

Results

In Study 1, the researchers investigated how people evaluate solutions to societal challenges proposed by either human experts or generative AI (ChatGPT). They found that when the author's identity (AI or human) was transparent, AI authors were perceived as less competent than human authors. However, this aversion towards AI did not extend to the evaluation of content quality or sharing intentions.
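The Study 1 pattern (a significant competence gap alongside null effects on content ratings) maps onto the analysis strategy noted in the Methods: a standard group comparison for the difference, and an equivalence test (TOST) for the null. The sketch below uses simulated ratings and assumed equivalence bounds, not the study's data or code.

```python
# Illustrative sketch with simulated data, not the study's analysis:
# a two-sample t-test for the competence gap, then a TOST equivalence
# test for the (null) content-rating difference. Bounds are assumed.
import numpy as np
from scipy import stats
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(7)

# Simulated 7-point ratings under transparent authorship.
competence_ai = rng.normal(4.6, 1.2, 500)
competence_human = rng.normal(5.0, 1.2, 500)
content_ai = rng.normal(5.20, 1.1, 500)
content_human = rng.normal(5.25, 1.1, 500)

# Difference test: is the AI author rated less competent?
t, p = stats.ttest_ind(competence_ai, competence_human)
print(f"Competence: t = {t:.2f}, p = {p:.4f}")

# Equivalence test: do content ratings differ by less than 0.5 points?
p_equiv, _, _ = ttost_ind(content_ai, content_human, low=-0.5, upp=0.5)
print(f"Content equivalence (TOST): p = {p_equiv:.4f}")
```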

Study 2a aimed to replicate these findings in the context of personal challenges, with similar results: AI authors were perceived as less competent when their identity was transparent, but this did not significantly impact content evaluation or sharing intentions.

Study 2b replicated these results with random context assignment, reinforcing the conclusion that AI authors face competence devaluation.

In Study 3, the researchers explored the consequences of the lower perceived competence of AI authors by examining people's choices between human and AI advisors for societal challenges. They found that, overall, people preferred human advisors. However, when individuals had positive prior experiences with AI-generated content, their willingness to choose AI advice increased. This suggested that positive exposure to AI-generated content can mitigate the bias against AI advisors.
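Because Study 3's outcome is a binary choice, the relationship between prior experience and advisor choice is the kind typically examined with logistic regression. The sketch below is a hypothetical illustration with simulated data and invented variable names; the study's actual model specification may differ.

```python
# Hypothetical sketch: logistic regression of advisor choice on prior
# positive experience with AI-generated content. Simulated data and
# invented variable names; not the study's actual model or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
prior_positive = rng.integers(0, 2, n)
# Assume choosing the AI advisor is more likely after positive exposure.
p_choose_ai = np.where(prior_positive == 1, 0.55, 0.35)
chose_ai = rng.binomial(1, p_choose_ai)

df = pd.DataFrame({"chose_ai": chose_ai, "prior_positive": prior_positive})
model = smf.logit("chose_ai ~ prior_positive", data=df).fit(disp=False)
print(model.params)  # a positive coefficient indicates increased AI uptake
```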

Discussion

The studies demonstrated that when the identity of an advisor is revealed, people tend to perceive AI advisors as less competent compared to human experts. This AI aversion, however, does not translate into a devaluation of the advice itself, and individuals remain open to implementing or sharing AI-generated recommendations. Notably, positive experiences with AI-generated advice mitigate this aversion, suggesting that increasing exposure to beneficial AI interactions may enhance overall acceptance.

The findings also highlighted that the lower perceived competence of AI advisors did not carry over to evaluations of the advice or to willingness to implement it. The studies contributed to the ongoing discourse on algorithm aversion, indicating that easily understandable AI recommendations, such as those generated by ChatGPT, may reduce skepticism.

The results held even in contexts of high personal relevance, suggesting that AI aversion may be less pronounced for societal and personal challenges than previously assumed. However, limitations such as the specific contexts examined and possible variations in content quality warrant further investigation. Future research could explore the links between perceived competence, content evaluation, and accountability for outcomes, as well as the effectiveness of labeling human- and AI-generated content.

Conclusion

In conclusion, this research enhanced the understanding of AI aversion, revealing that people are not averse to AI-generated advice itself but to the AI advisor. Positive experiences with generative AI mitigated this aversion, offering a more optimistic outlook on accepting and utilizing the new generation of AI tools. The study suggested that clear, easily understandable recommendations from generative AI could be valuable to human decision-makers addressing societal and personal challenges.

Journal reference:

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in computer science engineering, with a specialization in artificial intelligence and machine learning. He has extensive experience in data analytics, machine learning, and Python, and has worked on group projects involving computer vision, image classification, and app development.

