AI is employed in healthcare for various applications, including medical image analysis, disease diagnosis, personalized treatment planning, and patient monitoring. It utilizes machine learning, natural language processing, and data analytics to improve diagnostic accuracy, optimize treatment outcomes, and enhance healthcare delivery, leading to more efficient and effective patient care.
The paper addresses concerns about the accuracy of clinical advice provided by AI-driven chatbots built on large language models (LLMs) such as ChatGPT. The researchers propose the Chatbot Assessment Reporting Tool (CHART), a collaborative effort to establish structured reporting standards that involves a diverse group of stakeholders, from statisticians to patient partners.
Researchers have explored the feasibility of using a camera-based system combined with machine learning, specifically an AdaBoost classifier, to assess the quality of functional tests. Their study of the Single Leg Squat Test and Step Down Test, supported by expert physiotherapist input, showed that the approach offers an efficient and cost-effective way to evaluate functional tests, with the potential to improve the accuracy and reliability of assessments and, in turn, the diagnosis and treatment of movement disorders.
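To illustrate the classification step in general terms, here is a minimal scikit-learn sketch of an AdaBoost classifier applied to hypothetical pose-derived features; the feature names, synthetic data, and labels are placeholders for illustration, not the study's actual pipeline.

```python
# Minimal sketch: AdaBoost classification of functional-test quality.
# Features and labels are hypothetical; the paper's camera-based pose
# extraction and expert-labelled data are not reproduced here.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical per-trial features: e.g. knee valgus, trunk lean, pelvic drop
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)  # 0 = poor execution, 1 = good execution

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```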
Researchers introduced Relay Learning, a novel deep-learning framework designed to keep clinical data physically isolated from external intruders. This secure multi-site approach significantly enhances data privacy and security while delivering superior performance across a range of multi-site clinical settings, setting a new standard for AI-aided medical solutions and cross-site data sharing in healthcare.
Researchers outlined six principles for the ethical use of AI and machine learning in Earth and environmental sciences, emphasizing transparency, intentionality, risk mitigation, inclusivity, outreach, and ongoing commitment. The study also highlights the need to address biases and data disparities, and calls for transparency initiatives such as explainable AI (XAI) to ensure responsible and equitable AI-driven research in these fields.
This article explores the challenges and approaches to imparting human values and ethical decision-making in AI systems, with a focus on large language models like ChatGPT. It discusses techniques such as supervised fine-tuning, auxiliary models, and reinforcement learning from human feedback to imbue AI systems with desired moral stances, emphasizing the need for interdisciplinary perspectives from fields like cognitive science to align AI with human ethics.
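As a rough sketch of one of those techniques, the pairwise preference loss commonly used to train RLHF reward models can be written in a few lines of PyTorch; the linear scoring head and random embeddings below are toy assumptions, not any particular system's implementation.

```python
# Sketch of the pairwise preference loss behind RLHF reward models:
# the model should score a human-preferred response above a rejected one.
# The linear head and random "embeddings" are toy stand-ins.
import torch
import torch.nn as nn

reward_model = nn.Linear(16, 1)   # stand-in for a scoring head on an LLM
chosen = torch.randn(4, 16)       # embeddings of preferred responses
rejected = torch.randn(4, 16)     # embeddings of rejected responses

# Bradley-Terry style objective: -log sigmoid(r_chosen - r_rejected)
loss = -torch.nn.functional.logsigmoid(
    reward_model(chosen) - reward_model(rejected)
).mean()
loss.backward()
print(float(loss))
```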
A recent research publication explores the profound impact of artificial intelligence (AI) on urban sustainability and mobility. The study highlights the role of AI in supporting dynamic and personalized mobility solutions, sustainable urban mobility planning, and the development of intelligent transportation systems.
Researchers introduced the Lightweight Hybrid Vision Transformer (LH-ViT) network for radar-based Human Activity Recognition (HAR). LH-ViT combines convolution operations with self-attention, utilizing a Residual Squeeze-and-Excitation (RES-SE) block to reduce computational load. Experimental results on two human activity datasets demonstrated LH-ViT's advantages in expressiveness and computing efficiency over traditional approaches.
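For readers unfamiliar with squeeze-and-excitation, below is a minimal PyTorch sketch of a residual SE block; the channel count, reduction ratio, and layer layout are assumptions for illustration and may differ from the authors' RES-SE design.

```python
# Sketch of a residual squeeze-and-excitation (SE) block; LH-ViT's actual
# RES-SE block may differ. Channel count and reduction ratio are assumed.
import torch
import torch.nn as nn

class ResidualSE(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global pooling
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # excitation: channel gates
        )

    def forward(self, x):
        out = self.conv(x)
        return x + out * self.se(out)                     # residual connection

x = torch.randn(1, 32, 64, 64)   # e.g. radar spectrogram feature maps
print(ResidualSE(32)(x).shape)   # torch.Size([1, 32, 64, 64])
```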
Researchers have introduced an innovative IoT-based system for recognizing negative emotions, such as disgust, fear, and sadness, using multimodal biosignal data from wearable devices. This system combines EEG signals and physiological data from a smart band, processed through machine learning, to achieve high accuracy in emotion recognition.
Researchers have introduced FACTCHD, a framework for detecting fact-conflicting hallucinations in large language models (LLMs). They developed a benchmark that provides interpretable data for evaluating the factual accuracy of LLM-generated responses and introduced the TRUTH-TRIANGULATOR framework to enhance hallucination detection.
Researchers explored the application of distributed learning, particularly Federated Learning (FL), for Internet of Things (IoT) services in the context of emerging 6G networks. They discussed the advantages and challenges of distributed learning in IoT domains, emphasizing its potential for enhancing IoT services while addressing privacy concerns and the need for ongoing research in areas such as security and communication efficiency.
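To make the federated learning idea concrete, the sketch below implements FedAvg-style weight averaging in plain NumPy with synthetic client data; production FL systems add client sampling, secure aggregation, and communication compression, none of which are shown here.

```python
# Minimal FedAvg sketch: clients train locally, the server averages weights.
# Client data and the linear least-squares model are hypothetical placeholders.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(4)
for _ in range(10):                          # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    # Weighted averaging: raw data stays on-device, only weights are shared.
    global_w = np.average(updates, axis=0, weights=sizes)
print(global_w)
```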
This review explores the landscape of social robotics research, addressing knowledge gaps and implications for business and management. It highlights the need for more studies on social robotic interactions in organizations, trust in human-robot relationships, and the impact of virtual social robots in the metaverse, emphasizing the importance of balancing technology integration with societal well-being.
This study investigates the role of social presence in shaping trust when collaborating with algorithms. The research reveals that the presence of others can enhance people's trust in algorithms, offering valuable insights into human-algorithm interactions and trust dynamics.
This study explores the development and usability of the AIIS (Artificial Intelligence, Innovation, and Society) collaborative learning interface, a metaverse-based educational platform designed for undergraduate students. The research demonstrates the potential of immersive technology in education and offers insights and recommendations for enhancing metaverse-based learning systems.
This research paper delves into the black box problem in clinical artificial intelligence (AI) and its implications for health professional-patient relationships. Drawing on African scholarship, the study highlights the importance of trust, transparency, and explainability in clinical AI to ensure ethical healthcare practices and genuine fiduciary relationships between healthcare professionals and patients.
This paper explores the increasing presence of autonomous artificial intelligence (AI) systems in healthcare and the associated concerns related to liability, regulatory compliance, and financial aspects. It discusses how evolving regulations, such as those from the FDA, aim to ensure transparency and accountability, and how payment models like Medicare Physician Fee Schedule (MPFS) are adapting to accommodate autonomous AI integration.
Researchers present a stochastic programming model and an Improved Tabu Search (I-TS) algorithm to optimize the scheduling of Autonomous Mobile Robots (AMRs) in hospital settings. The study addresses stochastic elements in service and travel times, providing insights into effective AMR route planning and the feasibility of the proposed model for various hospital environments.
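As a generic illustration of the underlying heuristic (not the authors' I-TS, which handles stochastic service and travel times), the sketch below runs a basic tabu search over task orderings using swap moves and a fixed-length tabu list; the cost function is a made-up placeholder.

```python
# Generic tabu search over task orderings; a simplified stand-in for the
# paper's I-TS, whose stochastic elements are not modelled here.
def route_cost(route):
    # Hypothetical cost: weighted completion position of each AMR task.
    return sum((i + 1) * t for i, t in enumerate(route))

def tabu_search(tasks, iters=100, tabu_len=8):
    current, best = list(tasks), list(tasks)
    tabu = []
    n = len(tasks)
    for _ in range(iters):
        candidates = []
        for i in range(n):
            for j in range(i + 1, n):
                if (i, j) in tabu:
                    continue                  # move is tabu: skip it
                neighbor = current[:]
                neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
                candidates.append((route_cost(neighbor), (i, j), neighbor))
        cost, move, current = min(candidates)  # best non-tabu move, even if worse
        tabu = (tabu + [move])[-tabu_len:]     # fixed tabu tenure
        if cost < route_cost(best):
            best = current[:]
    return best

print(tabu_search([5, 3, 8, 1, 9, 2]))  # larger tasks drift to cheaper positions
```

Accepting the best non-tabu neighbor even when it worsens the current cost is what lets tabu search escape local optima, with the tabu list preventing it from immediately undoing recent moves.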
This research delves into the application of machine learning (ML) algorithms in wastewater treatment, examining their impact on this essential environmental discipline. Through text mining and analysis of scientific literature, the study identifies popular ML models and their relevance, emphasizing the increasing role of ML in addressing complex challenges in wastewater treatment, while also highlighting the importance of data quality and model interpretation.
Researchers address the emotional risks associated with anthropomorphism in human-social robot interactions. They introduce the concept of a virtual interactive environment (VIE) and advocate for VIE Indication (VIEI) as a means to clarify the virtual nature of these interactions, emphasizing ethical responsibilities, managing expectations, and promoting a more nuanced understanding of social robots.
Recent research published in Scientific Reports investigates the impact of biased artificial intelligence (AI) recommendations on human decision-making in medical diagnostics. The study, conducted through three experiments, reveals that AI-generated biased recommendations significantly affect human behavior, leading to increased errors in medical decision-making tasks.
In a recent study, researchers examined the wide range of psychological reactions people have towards robots. The work introduces the Positive-Negative-Competence (PNC) model, which categorizes these diverse psychological processes into three dimensions.