VizTrust Reveals How Users Build And Lose Trust In AI Chatbots In Real Time

A new analytics tool shines a light on the hidden signals that drive, or erode, our trust in AI chatbots, providing developers with a roadmap to build more human-centered and trustworthy digital assistants.

Research: VizTrust: A Visual Analytics Tool for Capturing User Trust Dynamics in Human-AI Communication. Image Credit: N Universe / Shutterstock

As artificial intelligence tools like ChatGPT are integrated into our everyday lives, our online interactions with AI chatbots are becoming more frequent. Are we welcoming them, or are we trying to push them away?

New research from Binghamton University, State University of New York, aims to answer those questions through VizTrust. This analytics tool makes user trust dynamics in human-AI communication visible and understandable. 

Xin "Vision" Wang, a PhD student at the Thomas J. Watson College of Engineering and Applied Science's School of Systems Science and Industrial Engineering, is developing VizTrust as part of her dissertation. She presented her current work and findings in April at the Association for Computing Machinery (ACM) CHI 2025 conference in Yokohama, Japan.

VizTrust was born out of a pressing challenge: User trust in AI agents is highly dynamic, context-dependent, and difficult to quantify using traditional methods.

"Most studies rely on post-conversation surveys, but they only can capture trust state before and after the human-AI interaction," Wang said. "They miss the detailed, moment-by-moment signals that show why a user's trust may rise or fall during an interaction."

To address this, VizTrust evaluates user trust based on four dimensions grounded in social psychology: competence, benevolence, integrity, and predictability. Additionally, VizTrust analyzes trust-relevant cues from user messages, such as emotional tone, engagement level, and politeness strategies, using machine learning and natural language processing techniques to visualize changes in trust throughout a conversation.
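To make the idea concrete, below is a minimal, self-contained Python sketch of per-turn trust-signal scoring. It is not the authors' implementation: the keyword heuristics, weights, and function names are illustrative stand-ins for the machine learning and natural language processing models VizTrust actually uses; only the four trust dimensions come from the paper.

```python
# Illustrative sketch of per-turn trust-signal scoring (NOT the VizTrust code).
# The four dimensions come from the paper; the cue lexicons and weights below
# are simplified, hypothetical stand-ins for trained NLP/ML components.
from dataclasses import dataclass

TRUST_DIMENSIONS = ("competence", "benevolence", "integrity", "predictability")

# Toy cue lexicons (hypothetical): a real system would use trained classifiers.
POSITIVE_CUES = {"thanks", "great", "makes sense", "that works"}
NEGATIVE_CUES = {"wrong", "useless", "already said", "not helpful"}
POLITENESS_CUES = {"please", "could you", "would you mind"}

@dataclass
class TurnSignals:
    turn: int
    emotional_tone: float   # -1 (negative) .. +1 (positive)
    politeness: float       # 0 .. 1
    engagement: float       # proxy: message length relative to a baseline

def score_turn(turn_index: int, user_message: str, baseline_len: int = 40) -> TurnSignals:
    """Extract simple trust-relevant cues from one user message."""
    text = user_message.lower()
    pos = sum(cue in text for cue in POSITIVE_CUES)
    neg = sum(cue in text for cue in NEGATIVE_CUES)
    tone = (pos - neg) / max(pos + neg, 1)
    politeness = min(sum(cue in text for cue in POLITENESS_CUES), 1)
    engagement = min(len(text) / baseline_len, 2.0) / 2.0
    return TurnSignals(turn_index, tone, politeness, engagement)

def trust_estimate(signals: TurnSignals) -> dict:
    """Map turn-level cues onto the four dimensions (illustrative weights only)."""
    return {
        "competence":     0.6 * signals.emotional_tone + 0.4 * signals.engagement,
        "benevolence":    0.5 * signals.emotional_tone + 0.5 * signals.politeness,
        "integrity":      signals.emotional_tone,
        "predictability": 0.7 * signals.engagement + 0.3 * signals.politeness,
    }

if __name__ == "__main__":
    conversation = [
        "Could you please suggest how to handle my workload?",
        "You already said that, this is not helpful.",
    ]
    for i, msg in enumerate(conversation, start=1):
        print(i, trust_estimate(score_turn(i, msg)))
```

Plotting these per-turn estimates over the course of a conversation is, in spirit, what produces the trust trajectories that VizTrust visualizes.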

"The power of large language models and generative AI is rising, but we need to find out the user experience when people use different conversational applications," Wang said. "Without diving in to see what exactly happened that influenced a bad experience, we can never really find out the best solution to improve the AI model."

The research paper illustrates the functionality of VizTrust through a use case involving a software engineer stressed out by his job and a therapy chatbot designed to support workers. The two discuss his work-related stress, and the chatbot offers him advice on how to manage it.

By analyzing subtle linguistic and behavioral shifts in user language and interaction, VizTrust pinpoints moments when trust is built or eroded. For example, VizTrust highlights one instance where the trust level decreases due to repeated suggestions that the user dislikes. This type of insight is vital not only for academic understanding but also for practical improvements to the design of conversational AI systems.
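As a rough illustration of how such erosion moments might be surfaced from a per-turn trust series, consider the small helper below; the aggregation into a single score and the drop threshold are assumptions for the sketch, not the paper's method.

```python
# Hedged sketch: flag turns where an aggregate trust score falls sharply.
# The 0.2 threshold is an arbitrary illustrative choice.
def flag_trust_drops(trust_series, drop_threshold=0.2):
    """Return the indices of turns where aggregate trust drops sharply."""
    return [
        i for i in range(1, len(trust_series))
        if trust_series[i - 1] - trust_series[i] >= drop_threshold
    ]

# Example: trust dips at turn index 3, e.g. after a repeated, unwanted suggestion.
print(flag_trust_drops([0.55, 0.60, 0.62, 0.35, 0.40]))  # -> [3]
```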

"Trust is not just a user issue – it's a system issue," Wang said, "With VizTrust, we're giving developers, researchers and designers a new lens to see exactly where trust falters, so they can make meaningful upgrades to their AI system."

VizTrust has already gained recognition by being accepted as a late-breaking work at CHI 2025, the most prestigious conference in the field of human-computer interaction. VizTrust stood out among more than 3,000 late-breaking submissions from around the world, with a competitive acceptance rate of just under 33%.

Co-authors on the project include SSIE Assistant Professors Sadamori Kojaku and Stephanie Tulk Jesso, as well as Associate Professor David M. Neyens from Clemson University and Professor Min-Sun Kim from the University of Hawaiʻi at Mānoa.

Wang is moving VizTrust to the next stage of development and will increase its adaptability to individual differences.

"When people interact with AI agents, they may have very different attitudes," she said. "We may need to take a specific, individual perspective to understand their trust - for example, their personal characteristics, their implicit trust level, even their previous interactions with AI systems can influence their attitudes."

Looking ahead, Wang envisions deploying VizTrust as a publicly available tool online to support broader research and development.

"By making VizTrust accessible," she said, "we can begin to bridge the gap between technical performance and human experience and make AI system more human-centered and responsible."

Journal reference:
  • Xin Wang, Stephanie Tulk Jesso, Sadamori Kojaku, David M. Neyens, and Min-Sun Kim. 2025. VizTrust: A Visual Analytics Tool for Capturing User Trust Dynamics in Human-AI Communication. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '25). Association for Computing Machinery, New York, NY, USA, Article 581, 1–10. DOI: 10.1145/3706599.3719798, https://dl.acm.org/doi/10.1145/3706599.3719798
