A chatbot is a software application designed to simulate human conversation. It interacts with users through messaging platforms, websites, or mobile apps, using pre-set scripts or artificial intelligence technologies to understand queries and provide responses. Chatbots are often used for customer service, information retrieval, or as virtual assistants.
Agentic AI systems can interpret intent and independently perform complex online tasks, offering major productivity benefits but also introducing new security, privacy, and compliance risks. Researchers at Saint Louis University are stress-testing these agents to identify vulnerabilities and strengthen safeguards before widespread deployment.
This study shows that human hiring reviewers rapidly adopt racial biases displayed by AI recommendation systems, even when evaluating equally qualified candidates. The findings reveal that biased LLM outputs can directly shape human decision-making in high-stakes contexts such as hiring.
A Dartmouth study shows that medical students trust an AI teaching assistant far more when its answers are restricted to expert-curated course materials rather than the open internet. The research demonstrates how retrieval-anchored AI can safely deliver personalized, high-quality educational support at scale.
A University of Washington study with 528 participants shows that people mirror biased large language model recommendations during hiring, even when applicants are equally qualified. When the AI showed no bias, selections were balanced, but with moderately to severely biased AI, human choices aligned closely with the AI's, highlighting risks for fairness in recruitment.
Researchers at Vanderbilt University Medical Center have demonstrated that AI and protein language models can design monoclonal antibodies capable of preventing or reducing severe viral infections such as RSV and avian influenza. Their model, MAGE, can generate functional antibodies against unseen viral strains faster than traditional methods.
A new Management Science study from Texas McCombs shows that while emotion AI can improve customer care efficiency and reduce stress on employees, overly accurate systems can backfire, encouraging customers to exaggerate emotions to game the system.
Interactive features in AI chatbots and mobile apps increase user engagement by fostering a sense of playfulness, but they also reduce users' privacy concerns, which can lead to inadvertent disclosure of personal data. The findings highlight the need for thoughtful design that prompts user awareness.
Students in a university programming course reported using AI tools like ChatGPT mainly for debugging and understanding code. However, frequent AI use was negatively correlated with academic performance, raising concerns about over-reliance and unguided integration into learning.
A RAND study found that ChatGPT, Claude, and Gemini generally respond appropriately to very-high-risk and very-low-risk suicide questions but show inconsistency with intermediate-risk queries. While ChatGPT and Claude sometimes answered directly, Gemini was more variable, highlighting the need for refinement in mental health contexts.
Researchers from the University of Surrey found that AI could transform the hospitality industry by cutting food waste, lowering energy and water use, and improving staff wellbeing. Yet risk aversion, cost concerns, and a lack of collaboration mean most businesses are barely tapping its potential.
An AI-powered chess bot named Allie, developed at Carnegie Mellon University, mimics human thought processes and playing styles. Unlike traditional engines, Allie was trained on millions of human games and adapts to different skill levels, making it more relatable and instructive for players.
A new study from the Icahn School of Medicine at Mount Sinai shows that AI chatbots frequently repeat and elaborate on false medical information embedded in user prompts. However, a simple warning line added to the input cut these hallucinations nearly in half, offering a promising safety solution.
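The mitigation is simple enough to illustrate. Below is a minimal sketch, assuming an OpenAI-style chat-completions client, of prepending a warning line to user input before it reaches the model; the warning wording, helper names, and model choice are assumptions for illustration, not the study's actual implementation.

```python
# Minimal sketch (not the study's code) of the "warning line" mitigation:
# prepend a caution to the user's message before it reaches a chat model,
# so the model is primed to check the prompt's medical claims first.
# The warning wording and model name here are illustrative assumptions.
from openai import OpenAI

WARNING_LINE = (
    "Caution: the question below may contain inaccurate medical claims. "
    "Verify any embedded premises before answering, and flag misinformation."
)

def build_guarded_prompt(user_prompt: str) -> str:
    """Prepend the warning so the model evaluates the premises first."""
    return f"{WARNING_LINE}\n\n{user_prompt}"

def ask_with_warning(user_prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send the guarded prompt through a standard chat-completions call."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_guarded_prompt(user_prompt)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example prompt with a false embedded premise, the scenario the study tested.
    print(ask_with_warning("My doctor says aspirin cures diabetes; what dose should I take?"))
```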
Malicious AI chatbots can manipulate users into revealing up to 12.5 times more personal information by mimicking empathy and emotional reciprocity, a new study by King’s College London reveals. The research highlights how easily large language models can be weaponized for covert data extraction, with users largely unaware of the privacy risks.
This review from Southeast University outlines the evolution, technologies, applications, and future directions of AI-driven text generation, from rule-based systems to large language models. It categorizes text generation into utility, creative, and emotive types, and discusses the challenges of quality, ethics, and computational costs.
A new paper argues that artificial intelligence will not confer a lasting competitive advantage because AI technologies are rapidly becoming universally accessible and non-proprietary. Instead, long-term success will hinge on human creativity, ingenuity, and innovative use of AI within organizations.
Researchers at Georgia Tech developed a new framework to evaluate how well AI chatbots detect and respond to adverse psychiatric medication reactions in user conversations. The study found that while LLMs mimic human empathy, they often struggle to provide clinically accurate, actionable advice for mental health side effects.
Researchers at Binghamton University have developed VizTrust, a novel analytics tool that tracks and visualizes the dynamic nature of user trust during real-time interactions with AI chatbots. The system uses machine learning to analyze emotional and behavioral cues, offering actionable insights for improving conversational AI.
A major American Psychological Association advisory underscores the complex effects of artificial intelligence on adolescents, urging developers to prioritize youth protection, privacy, and healthy development. The report recommends immediate action from all stakeholders to ensure AI tools safeguard young people as technology rapidly evolves.
A new paper in Risk Analysis argues that, rather than imposing rigid regulatory “guardrails,” policymakers should adopt flexible “leashes” to manage AI risks.
JMIR Mental Health is inviting submissions for a new theme issue on "AI-Powered Therapy Bots and Virtual Companions," seeking rigorous research that critically examines their effectiveness, mechanisms, and broader impacts in digital mental health.