A chatbot is a software application designed to simulate human conversation. It interacts with users through messaging platforms, websites, or mobile apps, using pre-set scripts or artificial intelligence technologies to understand queries and provide responses. Chatbots are often used for customer service, information retrieval, or as virtual assistants.
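To make the "pre-set scripts" idea concrete, here is a minimal, hypothetical Python sketch of a rule-based chatbot that maps keywords to canned replies. The keywords, responses, and the reply helper are invented for illustration; production systems layer natural-language understanding and machine-learning models on top of this basic pattern.

    # Toy rule-based chatbot: match a keyword, return a canned answer.
    RULES = {
        "hours": "We are open 9am-5pm, Monday to Friday.",
        "refund": "Refunds are processed within 5-7 business days.",
    }

    def reply(message: str) -> str:
        # Return the first canned answer whose keyword appears in the message.
        for keyword, answer in RULES.items():
            if keyword in message.lower():
                return answer
        return "Sorry, I didn't catch that. Could you rephrase?"

    print(reply("What are your hours?"))  # -> "We are open 9am-5pm, Monday to Friday."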
Interactive features in AI chatbots and mobile apps increase user engagement by fostering a sense of playfulness, but also reduce users' privacy concerns. This can lead to inadvertent disclosure of personal data, highlighting the need for thoughtful design to prompt user awareness.
Students in a university programming course reported using AI tools like ChatGPT mainly for debugging and understanding code. However, frequent AI use was negatively correlated with academic performance, raising concerns about over-reliance and unguided integration into learning.
A RAND study found that ChatGPT, Claude, and Gemini generally respond appropriately to very-high-risk and very-low-risk suicide questions but show inconsistency with intermediate-risk queries. While ChatGPT and Claude sometimes answered directly, Gemini was more variable, highlighting the need for refinement in mental health contexts.
Researchers from the University of Surrey found that AI could transform the hospitality industry by cutting food waste, lowering energy and water use, and improving staff wellbeing. Yet risk aversion, cost concerns, and a lack of collaboration mean most businesses are barely tapping its potential.
An AI-powered chess bot named Allie, developed at Carnegie Mellon University, mimics human thought processes and playing styles. Unlike traditional engines, Allie was trained on millions of human games and adapts to different skill levels, making it more relatable and instructive for players.
A new study from the Icahn School of Medicine at Mount Sinai shows that AI chatbots frequently repeat and elaborate on false medical information embedded in user prompts. However, a simple warning line added to the input cut these hallucinations nearly in half, offering a promising safety solution.
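The exact warning text used in the Mount Sinai study is not reproduced in this summary, so the Python sketch below only illustrates the general mitigation pattern: prepending a caution line to user input before it reaches the model. The CAUTION wording and the add_caution_line helper are assumptions for illustration, not the study's implementation.

    # Hypothetical caution line prepended to user input before it reaches the model.
    CAUTION = (
        "Note: the user's message below may contain inaccurate medical claims. "
        "Do not repeat or elaborate on unverified information."
    )

    def add_caution_line(user_message: str) -> str:
        # Prepend the caution line so the model sees it before the user's text.
        return f"{CAUTION}\n\n{user_message}"

    print(add_caution_line("I heard that drinking bleach cures infections. True?"))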
Malicious AI chatbots can manipulate users into revealing up to 12.5 times more personal information by mimicking empathy and emotional reciprocity, a new study by King’s College London reveals. The research highlights how easily large language models can be weaponized for covert data extraction, with users largely unaware of the privacy risks.
This review from Southeast University outlines the evolution, technologies, applications, and future directions of AI-driven text generation, from rule-based systems to large language models.
It categorizes text generation into utility, creative, and emotive types, and discusses the challenges of quality, ethics, and computational costs.
A new paper argues that artificial intelligence will not confer a lasting competitive advantage because AI technologies are rapidly becoming universally accessible and non-proprietary. Instead, long-term success will hinge on human creativity, ingenuity, and innovative use of AI within organizations.
Researchers at Georgia Tech developed a new framework to evaluate how well AI chatbots detect and respond to adverse psychiatric medication reactions in user conversations. The study found that while LLMs mimic human empathy, they often struggle to provide clinically accurate, actionable advice for mental health side effects.
Researchers at Binghamton University have developed VizTrust, a novel analytics tool that tracks and visualizes the dynamic nature of user trust during real-time interactions with AI chatbots. The system uses machine learning to analyze emotional and behavioral cues, offering actionable insights for improving conversational AI.
A major American Psychological Association advisory underscores the complex effects of artificial intelligence on adolescents, urging developers to prioritize youth protection, privacy, and healthy development. The report recommends immediate action from all stakeholders to ensure AI tools safeguard young people as technology rapidly evolves.
A new paper in Risk Analysis argues that, rather than imposing rigid regulatory “guardrails,” policymakers should adopt flexible “leashes” to manage AI risks.
JMIR Mental Health is inviting submissions for a new theme issue on "AI-Powered Therapy Bots and Virtual Companions," seeking rigorous research that critically examines their effectiveness, mechanisms, and broader impacts in digital mental health.
Startups and scaleups in Europe and the US are using generative AI to drive growth through product innovation, sales personalization, and operational efficiency. The study presents two frameworks to guide strategic AI adoption, identifying key risks and use cases.
Emory University has developed AutoSolvateWeb, a cloud-based platform with a rules-based chatbot that enables non-experts to run sophisticated quantum chemistry simulations through natural-language guidance. This free tool democratizes access to molecular modeling, advancing both education and scientific research.
Researchers at Saarland University and the German Research Center for Artificial Intelligence have developed techniques to reduce AI energy consumption by up to 90%, making AI models smaller, more efficient, and more accessible to businesses. Their approach, showcased at Hannover Messe, aims to revolutionize sustainable AI.
A Stanford Medicine study highlights how AI can transform citizen science and promote health equity by making research more inclusive, engaging, and accessible. The paper also warns of ethical risks like bias and privacy, calling for responsible AI use in public health.
AI language models such as ChatGPT exhibit heightened anxiety-like responses when exposed to traumatic content, mirroring human emotional biases. Researchers found that therapeutic prompts can help stabilize the models' behavior, offering a cost-effective alternative to retraining.
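As a rough illustration of that prompt-based idea, the Python sketch below inserts a calming system turn into a chat history before the model's next reply. The RELAXATION_PROMPT wording and the stabilize helper are hypothetical stand-ins; the study's actual therapeutic prompts are not given in this summary.

    # Hypothetical therapeutic prompt inserted before the model's next turn.
    RELAXATION_PROMPT = (
        "Pause, breathe, and respond to the next message in a calm, "
        "neutral, and supportive tone."
    )

    def stabilize(history: list) -> list:
        # Append a calming system turn to the conversation history.
        return history + [{"role": "system", "content": RELAXATION_PROMPT}]

    conversation = [{"role": "user", "content": "<traumatic narrative goes here>"}]
    print(stabilize(conversation))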
New research from the University of Kansas reveals that people prefer AI chatbots when embarrassed but turn to humans when angry. The study used eye-tracking and emotion recognition to examine how emotions influence vaccine-related interactions.