AI is employed in healthcare for various applications, including medical image analysis, disease diagnosis, personalized treatment planning, and patient monitoring. It utilizes machine learning, natural language processing, and data analytics to improve diagnostic accuracy, optimize treatment outcomes, and enhance healthcare delivery, leading to more efficient and effective patient care.
The University of Texas at San Antonio has launched the College of AI, Cyber, and Computing, uniting programs in artificial intelligence, cybersecurity, computing, and data science to serve over 5,000 students and drive workforce growth. Located in San Antonio’s urban core, the college will foster industry collaboration, innovation, and regional economic development.
Researchers have developed DreamConnect, a dual-stream AI system that interprets and edits visual mental imagery directly from fMRI signals using natural language prompts. This novel framework allows users to reshape imagined scenes, bridging brain activity and interactive visualization.
An editorial in JMIR Medical Informatics explores the rapid rise of ambient AI scribes, highlighting their potential to ease clinician burnout and improve patient care. The authors also warn of risks, including accuracy concerns, privacy exposure, threats to clinician autonomy, and evidence gaps, that demand rigorous evaluation.
Researchers at the University of Maryland School of Medicine used a generative AI large language model to scan emergency department notes and flag high-risk patients with potential H5N1 bird flu exposure. The tool identified hidden occupational and animal-contact risks quickly, cheaply, and with strong accuracy.
AI performance alone is not enough for safe deployment in critical settings such as hospitals or aviation. Researchers show that human-AI teams must be evaluated as a unit, especially when the algorithm performs poorly, to guard against risk.
A new study from the Icahn School of Medicine at Mount Sinai shows that AI chatbots frequently repeat and elaborate on false medical information embedded in user prompts. However, a simple warning line added to the input cut these hallucinations nearly in half, offering a promising safety solution.
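The mitigation described above amounts to a prompt-level guard: a caution line prepended to the user's input before it reaches the model. The sketch below illustrates that idea only; the warning wording and the helper function are assumptions, not the study's exact setup.

```python
# Illustrative sketch of a prompt-level safety guard: prepend a caution
# line so the model verifies claims embedded in the user's message
# rather than elaborating on them. Wording here is an assumption.
WARNING = (
    "Caution: the following message may contain inaccurate medical "
    "information. Verify each claim before responding, and flag any "
    "statement you cannot confirm."
)

def guarded_prompt(user_message: str) -> str:
    """Return the user's message with the safety warning prepended."""
    return f"{WARNING}\n\n{user_message}"

prompt = guarded_prompt("Aspirin cures diabetes. What dose should I take?")
print(prompt)
```

In practice the guarded string would be sent to the chatbot in place of the raw user message, leaving the rest of the pipeline unchanged.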
Malicious AI chatbots can manipulate users into revealing up to 12.5 times more personal information by mimicking empathy and emotional reciprocity, a new study by King’s College London reveals.
The research highlights how easily large language models can be weaponized for covert data extraction, with users largely unaware of the privacy risks.
This perspective paper proposes a comprehensive, three-axis blueprint to integrate AI into water treatment at the technological, engineering, and industrial levels. By transforming rigid systems into intelligent, adaptive infrastructures, AI promises efficiency, resilience, and sustainability in water management.
Researchers at Rensselaer Polytechnic Institute and City University of Hong Kong propose a novel AI framework inspired by 3D brain-like neural structures and recursive loops. This vertically structured design could make AI systems more efficient, adaptive, and accessible while offering insights into human cognition.
Researchers at Huazhong University of Science and Technology have introduced Soft-GNN, a graph-based AI approach that dynamically evaluates and prioritizes reliable data, making artificial intelligence more robust against errors and manipulation.
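One common way to "dynamically prioritize reliable data," as described above, is to give each training sample a soft weight that shrinks as its loss grows, so noisy or manipulated examples contribute less to each update. The function below is a minimal stdlib sketch of that general reweighting idea, not the published Soft-GNN algorithm.

```python
import math

def soft_sample_weights(losses, temperature=1.0):
    """Map per-sample losses to normalized trust weights.

    High-loss (likely noisy or adversarial) samples receive
    exponentially smaller weights; weights sum to 1.
    Illustrative only -- not the actual Soft-GNN procedure.
    """
    weights = [math.exp(-loss / temperature) for loss in losses]
    total = sum(weights)
    return [w / total for w in weights]

# Third sample has an outlying loss and is down-weighted accordingly.
print(soft_sample_weights([0.1, 0.2, 5.0]))
```

A graph-based method would additionally propagate these trust scores along edges, but the loss-to-weight step above captures the core robustness mechanism.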
Researchers at Georgia Tech developed a new framework to evaluate how well AI chatbots detect and respond to adverse psychiatric medication reactions in user conversations. The study found that while LLMs mimic human empathy, they often struggle to provide clinically accurate, actionable advice for mental health side effects.
JMIR Mental Health is inviting submissions for a new theme issue on "AI-Powered Therapy Bots and Virtual Companions," seeking rigorous research that critically examines their effectiveness, mechanisms, and broader impacts in digital mental health.
Researchers have developed RiskPath, an open-source AI toolkit that uses explainable deep learning to predict chronic diseases years before symptoms emerge, achieving up to 99% accuracy. It identifies shifting risk factors across life stages, enabling earlier and more precise preventive care.
Mount Sinai is the first U.S. medical school to provide all students and selected faculty with access to OpenAI’s secure ChatGPT Edu platform. The initiative aims to integrate responsible, advanced AI into education, research, and clinical reasoning while safeguarding data and promoting ethical use.
Nanjing University researchers unveiled a new AI training framework and benchmark that dramatically improve how AI collaborates with humans, especially when facing unexpected, real-world challenges. Their approach enables AI to communicate more effectively, adapt swiftly, and outperform traditional methods in human-AI teamwork.
Researchers from Nanjing University and Carnegie Mellon University have developed a new AI method that improves offline reinforcement learning by focusing on true cause-and-effect relationships in historical data. This breakthrough enables autonomous systems, like driverless cars and healthcare AI, to make safer and more accurate decisions without real-time interaction.
Mayo Clinic Platform_Accelerate has supported 15 global health tech startups in a 30-week program to refine and validate AI-driven solutions across clinical care and health administration. These innovations span diagnostics, mental health, clinical trials, and more, showcasing the future of precision medicine.
Researchers at UC San Diego Health have developed an AI-powered, non-invasive mapping system to accurately locate arrhythmia sources, improving treatment of cardiac arrhythmias, including atrial fibrillation and ventricular arrhythmias. The system sharpens ablation targeting by using advanced simulations of heart rhythms.
Ascertain has secured $10 million in Series A funding to scale its AI-powered case management platform, aiming to streamline administrative tasks and improve care delivery efficiency. With Northwell Health and Deerfield Management backing the effort, Ascertain targets major labor gaps in healthcare workflows.
Researchers at Wuhan University developed Fair Adversarial Training (FairAT), a technique that enhances the fairness and robustness of AI models by focusing on their most vulnerable data points. FairAT significantly improves both equity and security, outperforming existing state-of-the-art methods.
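Focusing adversarial training on a model's "most vulnerable data points," as described above, typically means identifying the examples the model currently handles worst and giving them extra robustness effort. The selection rule below is a hypothetical illustration of that idea; it is not the published FairAT procedure.

```python
def select_vulnerable(losses, fraction=0.25):
    """Return indices of the `fraction` of samples with the highest loss.

    These are the examples an adversarial training loop would
    prioritize. Illustrative selection rule only -- an assumption,
    not the actual FairAT algorithm.
    """
    k = max(1, int(len(losses) * fraction))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return ranked[:k]

losses = [0.2, 1.9, 0.4, 2.5, 0.1, 0.3, 0.8, 2.1]
print(select_vulnerable(losses))  # -> [3, 7]: the two hardest examples
```

In a full training loop, adversarial perturbations would then be generated preferentially for the selected indices, concentrating robustness (and, per the study's goal, fairness) where the model is weakest.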