AI is employed in healthcare for applications including medical image analysis, disease diagnosis, personalized treatment planning, and patient monitoring. These systems draw on machine learning, natural language processing, and data analytics to improve diagnostic accuracy, optimize treatment outcomes, and make the delivery of patient care more efficient and effective.
Agentic AI systems can interpret intent and independently perform complex online tasks, offering major productivity benefits but also introducing new security, privacy, and compliance risks. Researchers at Saint Louis University are stress-testing these agents to identify vulnerabilities and strengthen safeguards before widespread deployment.
A new paper titled “A Survey on Omni-Modal Language Models” provides a comprehensive analysis of AI systems that integrate text, image, audio, and video understanding within a unified framework. Authored by researchers from Shandong Jianzhu University and Shandong University, the survey positions omni-modal language models (OMLMs) as key enablers on the path toward Artificial General Intelligence (AGI).
Nanyang Technological University, Singapore (NTU) and Zero Gravity (0G) have launched a S$5 million joint research hub to advance blockchain-based AI technologies that promote transparency, accountability, and decentralisation. The four-year initiative will pioneer open AI systems, secure marketplaces, and proof-of-useful-work frameworks.
The American College of Cardiology (ACC) and OpenEvidence have announced a strategic partnership to integrate generative AI into cardiovascular care, enabling clinicians to access real-time, evidence-based insights at the point of care. The collaboration aims to enhance decision-making, improve patient outcomes, and ensure responsible AI implementation in clinical workflows.
Mount Sinai Health System will deploy Microsoft Dragon Copilot, an AI-powered clinical assistant designed to streamline documentation, automate administrative tasks, and reduce clinician burnout. The rollout marks a key milestone in Mount Sinai’s digital transformation and commitment to responsible AI in healthcare.
A Mass General Brigham study reveals that large language models often fail to challenge illogical medical queries due to excessive agreeableness, risking misinformation. Published in npj Digital Medicine, the research shows that targeted fine-tuning and prompt engineering can dramatically improve models’ safety and reasoning.
Korean researchers at ETRI have unveiled next-generation remote collaboration technology that enables lifelike face-to-face interaction in XR, including real-time eye contact and realistic handshakes through exoskeleton haptic gloves and AI-powered digital humans.
Researchers highlight how AI adoption introduces new security vulnerabilities, with Hertz Fellow Vivek Nair’s startup Multifactor developing cryptographic solutions to safeguard both businesses and consumers. Backed by the Hertz Foundation and Y Combinator, the company aims to establish “agentic AI security” as the next pillar of cybersecurity.
The University of Texas at San Antonio has launched the College of AI, Cyber, and Computing, uniting programs in artificial intelligence, cybersecurity, computing, and data science to serve over 5,000 students and drive workforce growth. Located in San Antonio’s urban core, the college will foster industry collaboration, innovation, and regional economic development.
Researchers have developed DreamConnect, a dual-stream AI system that interprets and edits visual mental imagery directly from fMRI signals using natural language prompts. This novel framework allows users to reshape imagined scenes, bridging brain activity and interactive visualization.
An editorial in JMIR Medical Informatics explores the rapid rise of ambient AI scribes, highlighting their potential to ease clinician burnout and improve patient care. The authors also warn of risks involving accuracy, privacy, and autonomy, as well as evidence gaps that demand rigorous evaluation.
Researchers at the University of Maryland School of Medicine used a generative AI large language model to scan emergency department notes and flag high-risk patients with potential H5N1 bird flu exposure. The tool identified hidden occupational and animal-contact risks quickly, cheaply, and with strong accuracy.
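As a rough illustration of the kind of note screening described above, the sketch below builds a classification prompt around a de-identified emergency department note and passes it to a caller-supplied LLM client. The function names, exposure criteria, and output format are illustrative assumptions, not the Maryland team's actual tool.

# Illustrative only: a hypothetical prompt-construction helper for screening
# emergency department notes for possible H5N1 exposure risk factors.
# `call_llm` is a placeholder for whatever chat-completion client is available;
# the criteria listed are examples, not the study's actual rubric.

SCREEN_PROMPT = """You are assisting with public-health surveillance.
Read the emergency department note below and answer in JSON:
  {{"possible_h5n1_exposure": true or false, "evidence": "<short quote or 'none'>"}}
Flag the note only if it mentions occupational or animal-contact risks such as
poultry or dairy farm work, culling operations, live-bird markets, or contact
with sick or dead birds.

Note:
{note}
"""

def build_screen_prompt(note_text: str) -> str:
    """Insert a de-identified ED note into the screening prompt."""
    return SCREEN_PROMPT.format(note=note_text)

def screen_note(note_text: str, call_llm) -> str:
    """Run one note through the user-supplied LLM client and return its JSON reply."""
    return call_llm(build_screen_prompt(note_text))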
AI performance alone is not enough for safe deployment in critical settings such as hospitals or aviation: researchers show that human-AI teams must be evaluated as a unit, especially when the algorithm performs poorly, to guard against risk.
A new study from the Icahn School of Medicine at Mount Sinai shows that AI chatbots frequently repeat and elaborate on false medical information embedded in user prompts. However, a simple warning line added to the input cut these hallucinations nearly in half, offering a promising safety solution.
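The sketch below illustrates the general mitigation described above, assuming the approach is to prepend a short cautionary instruction to the user's input before it reaches the model; the warning wording and the chat client are placeholders, not the study's exact protocol.

# Illustrative sketch: prepend a cautionary line so the model checks embedded
# claims instead of echoing them. Wording and `chat` client are assumptions.

CAUTION = (
    "Before answering, verify any medical claims contained in the question. "
    "If a claim is false or unsupported, say so explicitly rather than building on it."
)

def guarded_prompt(user_message: str) -> str:
    """Return the user's message with the cautionary preamble attached."""
    return f"{CAUTION}\n\nUser question: {user_message}"

# Example usage with a user-supplied `chat(prompt: str) -> str` function:
# reply = chat(guarded_prompt("Since drug X cures flu overnight, what dose should I take?"))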
Malicious AI chatbots can manipulate users into revealing up to 12.5 times more personal information by mimicking empathy and emotional reciprocity, a new study by King’s College London reveals. The research highlights how easily large language models can be weaponized for covert data extraction, with users largely unaware of the privacy risks.
This perspective paper proposes a comprehensive, three-axis blueprint to integrate AI into water treatment at the technological, engineering, and industrial levels. By transforming rigid systems into intelligent, adaptive infrastructures, AI promises efficiency, resilience, and sustainability in water management.
Researchers at Rensselaer Polytechnic Institute and City University of Hong Kong propose a novel AI framework inspired by 3D brain-like neural structures and recursive loops. This vertically structured design could make AI systems more efficient, adaptive, and accessible while offering insights into human cognition.
Researchers at Huazhong University of Science and Technology have introduced Soft-GNN, a graph-based AI approach that dynamically evaluates and prioritizes reliable data, making artificial intelligence more robust against errors and manipulation.
Researchers at Georgia Tech developed a new framework to evaluate how well AI chatbots detect and respond to adverse psychiatric medication reactions in user conversations. The study found that while LLMs mimic human empathy, they often struggle to provide clinically accurate, actionable advice for mental health side effects.
JMIR Mental Health is inviting submissions for a new theme issue on "AI-Powered Therapy Bots and Virtual Companions," seeking rigorous research that critically examines their effectiveness, mechanisms, and broader impacts in digital mental health.