Playful AI Chatbots Lower Privacy Defenses and Raise Engagement Risks

Interactive mobile apps and AI chatbots may lull users into playful engagement, inadvertently weakening their privacy defenses. Penn State researchers call for smarter design strategies to keep users aware of what they’re sharing.

Research: Are you fooled by interactivity? The effects of interactivity on privacy disclosure.

The more interactive a mobile app or artificial intelligence (AI) chatbot is, the more playful it is perceived to be, leading users to let their guard down and put their privacy at risk, according to a team led by researchers at Penn State.

The researchers investigated the impact of mobile app interactivity on users' vigilance toward privacy risks during the sign-up process, and how this influences their attitudes toward the app and their willingness to continue using it. The team found that interactivity motivates users to engage with the app by fostering a heightened sense of playfulness and lowering their privacy concerns. The findings, published in the journal Behaviour & Information Technology, have implications for user privacy in an era increasingly dominated by mobile apps and AI chatbots that are designed to be fun and engaging, according to senior author S. Shyam Sundar, Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects at Penn State.

How Interactivity Affects User Privacy Concerns

"I think, in general, there's been an increase in the extent to which apps and AI tools pry into user data - ostensibly to better serve users and to personalize information for them," Sundar said. "In this study, we found that interactivity does not make users pause and think, as we would expect, but rather makes them feel more immersed in the playful aspect of the app and be less concerned about privacy. Companies could exploit this vulnerability to extract private information without users being totally aware of it."

The Experiment: Message vs. Modality Interactivity

In an online experiment, researchers asked 216 participants to complete the sign-up process for a simulated fitness app. Participants were randomly assigned to different versions of the app with varying levels of two different types of interactivity: "message interactivity," ranging from simple questions and answers to highly interconnected chats, where the app's messaging builds on the user's previous responses; and "modality interactivity," referring to options such as clicking and zooming in on images.

Then the participants answered questions about their experience with the app's sign-up process by rating perceived playfulness and privacy concerns on seven-point scales to indicate how strongly they agree or disagree with specific statements, such as "I felt using the app is fun" and "I would be concerned that the information I submitted to the app could be misused." The researchers examined the responses to identify the effect of both the type and extent of interactivity on user perceptions of the app.
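The scoring approach described above can be illustrated with a short sketch. The function and item values below are hypothetical and are not taken from the study's instrument; they simply show how seven-point item ratings are typically averaged into a single scale score per construct:

```python
# Illustrative scoring of seven-point (Likert-type) scale items, as is
# common for constructs like perceived playfulness and privacy concerns.
# Item values here are hypothetical.

def scale_score(ratings):
    """Average one participant's 1-7 item ratings into a scale score."""
    if not all(1 <= r <= 7 for r in ratings):
        raise ValueError("ratings must be on a 1-7 scale")
    return sum(ratings) / len(ratings)

# One hypothetical participant's responses:
playfulness_items = [6, 7, 5]      # e.g., "I felt using the app is fun"
privacy_concern_items = [3, 2, 4]  # e.g., "...information could be misused"

print(scale_score(playfulness_items))      # 6.0
print(scale_score(privacy_concern_items))  # 3.0
```

Comparing such scores across the randomly assigned app versions is what lets the researchers attribute differences in playfulness and privacy concern to the type and level of interactivity.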

Key Findings and Surprising Results

They found that interactivity enhanced perceived playfulness and users' intention to engage with an app, which was accompanied by a decrease in privacy concerns. Surprisingly, Sundar said, message interactivity, which the researchers thought would increase user vigilance, instead distracted users from thinking about the personal information they may be sharing with the system. That is, the way AI chatbots operate today – building responses based on a user's prior inputs – makes individuals less likely to think about the sensitive information they may be sharing, according to the researchers.

Designing for Privacy Awareness

"Nowadays, when users engage with AI agents, there's a lot of back-and-forth conversation, and because the experience is so engaging, they forget that they need to be vigilant about the information they share with these systems," said lead author Jiaqi Agnes Bao, assistant professor of strategic communication at the University of South Dakota who completed the research during her doctoral work at Penn State. "We wanted to understand how to better design an interface to make sure users are aware of their information disclosure."

While user vigilance plays a significant role in preventing the unintended disclosure of personal information, app and AI developers can balance playfulness and privacy concerns through design choices that result in win-win situations for both individuals and companies, Bao said.

"We found that if both message interactivity and modality interactivity are designed to operate in tandem, it could cause users to pause and reflect," she said. "So, when a user converses with an AI chatbot, a pop-up button asking the user to rate their experience or leave comments on how to improve their tailored responses can give users a pause to think about the kind of information they share with the system and help the company provide a better customized experience."
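Bao's suggestion amounts to interrupting a message-interactive flow with a modality-interactive element at regular intervals. A minimal sketch of that pattern follows; all names and the turn threshold are illustrative assumptions, not code or parameters from the study:

```python
# Hedged sketch of the design idea above: periodically pausing a
# conversational (message-interactive) exchange with a rating/feedback
# pop-up (modality interactivity). Names and values are illustrative.

POPUP_EVERY_N_TURNS = 5  # assumed interval, not from the study

def feedback_popup():
    """Stand-in for a UI pop-up asking the user to rate the experience."""
    return "Pop-up: Rate your experience so far (1-7) or leave a comment."

def chatbot_turn(turn_number, user_message):
    reply = f"Bot reply to: {user_message}"  # placeholder for model output
    if turn_number % POPUP_EVERY_N_TURNS == 0:
        # Breaking the conversational flow gives users a moment to
        # reflect on the information they have been sharing.
        reply += "\n" + feedback_popup()
    return reply
```

The design intent is that the pop-up both prompts reflection on disclosure and feeds back data the company can use to improve the tailored experience.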

Ethical Responsibility of AI Platforms

AI platforms extend beyond simply offering users the option to share or not share personal information during conversations, according to study co-author Yongnam Jung, a doctoral candidate at Penn State.

"It's not just about notifying users, but about helping them make informed choices, which is the responsible way to build trust between platforms and users," she added.

Reinforcing the Research

The study builds on the team's earlier research, which revealed similar patterns, according to the researchers. Together, they said, the two studies underscore a critical trade-off: while interactivity enhances the user experience, it highlights the benefits of the app and draws attention away from potential privacy risks.

A Cautionary Tale for Generative AI

Generative AI, for the most part and in most application domains, is based on message interactivity, which is conversational in nature, said Sundar, who is also the director of Penn State's Center for Socially Responsible Artificial Intelligence (CSRAI). He added that this study's findings challenge the current thinking among designers, who believe that, unlike clicking and swiping tools, conversation-based tools make people more cognitively aware of negative aspects, such as privacy concerns.

"In reality, conversation-based tools are turning out to be a playful exercise, and we're seeing this reflected in the larger discourse on generative AI where there are all kinds of stories about people getting so drawn into conversations that they do things that seem illogical," he said. "They are following the advice of generative AI tools for very high-stakes decision making. In some ways, our study is a cautionary tale for this newer suite of generative AI tools. Perhaps inserting a pop-up or other modality interactivity tools in the middle of a conversation may stem the flow of this mesmerizing, playful interaction and jerk users into awareness now and then."
