Bridging the AI Translation Gap in Healthcare: A Quality Management System Approach

In an article published in a Nature Portfolio journal, researchers explored how healthcare organizations can bridge the Artificial Intelligence (AI) translation gap by adopting an enterprise Quality Management System (QMS) tailored to healthcare AI. Such a framework, the authors argued, not only ensures compliance with regulations but also streamlines processes, minimizing redundancy and supporting the ethical and effective deployment of AI in patient care.

Study: Bridging the AI Translation Gap in Healthcare: A Quality Management System Approach. Image credit: PopTika/Shutterstock

Background

Healthcare software, including AI, Software as a Medical Device (SaMD), and machine learning (ML), offers transformative potential for patient care and clinical workflows. However, integrating these technologies into healthcare is challenging within a complex regulatory environment. Ensuring the effective translation of research into practical clinical solutions is a pressing issue, requiring a collaborative and systematic approach.

Existing efforts have recognized challenges in standardization, maturity, and alignment among stakeholders in healthcare AI. While the need for common standards and organizational maturity has been acknowledged, a comprehensive framework that bridges the translation gap and provides a clear pathway from research to clinical application has been lacking.

The paper proposed leveraging the QMS framework as a systematic and adaptable solution. A QMS, built around documented processes and defined quality objectives, helps manage regulatory requirements and ensures adherence to evolving standards. Aligned with International Organization for Standardization (ISO) 13485 and risk-based approaches, a QMS offers healthcare organizations (HCOs) a streamlined path to meeting regulatory requirements while fostering enduring safety, effectiveness, ethicality, and alignment with the needs of users and the organization.

The paper focused on key QMS components, namely Process & Data, People & Culture, and Validated Technology, as drivers for HCOs to integrate clinical excellence and research rigor. By addressing these components, the authors aimed to give HCOs a foundation for establishing a cohesive system that closes the AI translation gap and facilitates safe, effective, and ethically sound AI/ML applications in routine patient care.

Risk-based design, development, and monitoring

The medical device industry's long-standing use of a risk-based approach has been pivotal in guiding the design, development, and deployment of healthcare technologies from the least burdensome perspective. This method empowers HCOs to allocate resources strategically, emphasizing equity, safety, and data privacy to prevent errors and foster continuous improvement.

Risk-based practices in healthcare AI/ML have expanded beyond the medical device domain. Initiatives like AAMI's Technical Information Report and broader frameworks such as the NIST AI Risk Management Framework, the White House Blueprint for an AI Bill of Rights, the Coalition for Health AI Blueprint, and the Health AI Partnership Key Decision Points exemplify the incorporation of risk management into AI initiatives. Grounded in intended use, risk management involves the identification, enumeration, mitigation, and monitoring of potential hazards. Features that minimize risk are integrated during design and development, and a comprehensive risk management plan is established to address safety, bias, and other anticipated issues throughout the software's life cycle.
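As a purely illustrative sketch (the paper does not prescribe any specific tooling), a hazard-log entry of the kind described above could be represented as a simple data structure. The names HazardRecord, RiskLevel, and the toy scoring rule are assumptions made here for illustration, not elements of the authors' framework:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class HazardRecord:
    """One entry in a hypothetical hazard log for an AI/ML tool."""
    hazard_id: str
    intended_use: str          # the clinical context the risk is grounded in
    description: str           # e.g., performance drop in an under-represented subgroup
    severity: RiskLevel
    likelihood: RiskLevel
    mitigations: list[str] = field(default_factory=list)  # design features that reduce the risk
    monitoring_metric: str = ""                            # what is tracked after deployment
    status: str = "open"                                   # open / mitigated / accepted


def residual_risk(record: HazardRecord) -> int:
    """Toy risk score: severity x likelihood; re-evaluated after mitigations are applied."""
    return record.severity.value * record.likelihood.value


# Example: a bias-related hazard identified during design
hazard = HazardRecord(
    hazard_id="HAZ-001",
    intended_use="Sepsis early-warning score for adult inpatients",
    description="Reduced sensitivity in patients with sparse vital-sign data",
    severity=RiskLevel.HIGH,
    likelihood=RiskLevel.MEDIUM,
    mitigations=["Minimum data-quality gate before scoring", "Clinician override workflow"],
    monitoring_metric="Sensitivity by data-completeness stratum",
)
print(hazard.hazard_id, residual_risk(hazard))
```

Structuring hazards this way simply makes the identification, enumeration, mitigation, and monitoring steps explicit and auditable; an HCO would adapt the fields to its own risk management plan.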

Continuous monitoring, reporting, and systematic feedback within a QMS ensure ongoing safety and adherence to risk controls. Formalizing the QMS systematically identifies risks, documents mitigation strategies, and provides a framework for testing and auditing. This approach, complemented by software life cycle and AI/ML best practices, facilitates performance metric capture, enabling calibration and maintenance during deployment. A multidisciplinary governance system guides proactive risk mitigation, and a robust change management plan with access controls ensures transparency and continuity. Ultimately, a risk-based approach provides the necessary rigor and control at each stage of the AI technology life cycle.
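A minimal sketch of the kind of post-deployment metric check implied here is shown below. The calibration metric, the tolerance value, and the function name are illustrative assumptions, not recommendations from the paper:

```python
from statistics import mean


def check_calibration(predicted_probs: list[float], outcomes: list[int],
                      tolerance: float = 0.05) -> dict:
    """Compare mean predicted risk with the observed event rate.

    A gap larger than `tolerance` is flagged for review by the governance
    group; the threshold here is purely illustrative.
    """
    expected = mean(predicted_probs)       # average predicted risk
    observed = mean(outcomes)              # observed event rate
    gap = abs(expected - observed)
    return {
        "expected": round(expected, 3),
        "observed": round(observed, 3),
        "calibration_gap": round(gap, 3),
        "needs_review": gap > tolerance,   # could feed the QMS reporting / corrective-action process
    }


# Example with synthetic values: the model over-predicts risk, triggering a review flag
report = check_calibration(
    predicted_probs=[0.30, 0.25, 0.40, 0.35],
    outcomes=[0, 0, 1, 0],
)
print(report)
```

In practice, such checks would run on a schedule, and the resulting flags would be documented and routed through the change management and governance processes described above.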

Compliance-facilitating infrastructure

To navigate evolving regulations in healthcare software, a robust QMS serves as compliance-facilitating infrastructure for HCOs. Whether a given piece of software is regulated depends on its intended use and on regulatory changes. A QMS helps minimize operational risk by standardizing stakeholder responsibilities, ensuring auditability and traceability, and maintaining an inventory of developed and deployed AI technologies. It establishes policies and procedures for governance, development, evaluation, maintenance, and monitoring, with an emphasis on roles and training requirements for stakeholders.
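To make the inventory idea concrete, a hedged sketch of one inventory record follows. The fields are assumptions about what an HCO might track for auditability and traceability, not requirements taken from the paper or from any regulation:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIInventoryEntry:
    """Hypothetical record for one AI/ML tool tracked under an HCO's QMS."""
    tool_name: str
    intended_use: str
    owner_role: str          # accountable stakeholder, supporting auditability
    lifecycle_stage: str     # e.g., "development", "validation", "deployed", "retired"
    regulatory_status: str   # e.g., "not a medical device", "SaMD - under review"
    last_risk_review: date   # traceability to the risk management plan
    monitoring_plan_doc: str # pointer to the controlled QMS document


inventory = [
    AIInventoryEntry(
        tool_name="Readmission risk model",
        intended_use="Flag adult discharges at elevated 30-day readmission risk",
        owner_role="Clinical informatics lead",
        lifecycle_stage="deployed",
        regulatory_status="not a medical device (internal determination, subject to re-review)",
        last_risk_review=date(2023, 10, 1),
        monitoring_plan_doc="QMS-DOC-042",
    ),
]

# A simple audit query: which deployed tools are overdue for an annual risk review?
overdue = [e.tool_name for e in inventory
           if e.lifecycle_stage == "deployed"
           and (date.today() - e.last_risk_review).days > 365]
print(overdue)
```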

Bidirectional communication within a regulated QMS gathers real-time data and integrates with patient safety operations and risk management. Creating an innovation infrastructure aligned with compliance requires governance support, a leadership mandate, and integration with ethical standards. This approach allows HCOs to build tools, testing, and monitoring into their QMS, promoting safe, effective, and ethical AI/ML at scale through formal documentation that ensures transparency and traceability to regulatory requirements.

Conclusion

In conclusion, HCOs can expedite the integration of AI into clinical practice using a QMS. This proactive framework, blending a quality culture, risk-based design, and compliance infrastructure, ensures continuous ethical oversight. Implementing a QMS demands adaptability and interdisciplinary collaboration and draws on regulatory insights. The approach prioritizes patient safety, fosters trust, and facilitates the adoption of innovative AI technologies, including those driven by large language models (LLMs).

Journal reference:

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine Learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

