In an article published in npj Digital Medicine, researchers explored how healthcare organizations can bridge the Artificial Intelligence (AI) translation gap by adopting an enterprise Quality Management System (QMS) tailored to healthcare AI. Such a framework not only ensures compliance with regulations but also streamlines processes, minimizing redundancy and supporting the ethical and effective deployment of AI in patient care.
Healthcare software, including AI, Software as a Medical Device (SaMD), and machine learning (ML), offers transformative potential for patient care and clinical workflows. However, integrating these technologies into healthcare is challenging within a complex regulatory environment. Ensuring the effective translation of research into practical clinical solutions is a pressing issue, requiring a collaborative and systematic approach.
Existing efforts have recognized challenges in standardization, maturity, and alignment among stakeholders in healthcare AI. While the need for common standards and organizational maturity has been acknowledged, a comprehensive framework that bridges the translation gap and provides a clear pathway from research to clinical application has been lacking.
This paper proposed leveraging the QMS framework as a systematic and adaptable solution. A QMS, known for documenting processes and achieving quality objectives, manages regulatory requirements effectively and ensures adherence to evolving standards. Aligned with the International Organization for Standardization (ISO) 13485 standard and risk-based approaches, a QMS offers healthcare organizations (HCOs) a streamlined path to meeting regulatory requirements, fostering enduring safety, effectiveness, ethicality, and alignment with the needs of users and the organization.
The paper focused on key QMS components—Process & Data, People & Culture, and Validated Technology—as drivers for HCOs to integrate clinical excellence and research rigor. By addressing these components, the paper aimed to give HCOs a foundation for establishing a cohesive system that closes the AI translation gap and facilitates safe, effective, and ethically sound AI/ML applications in routine patient care.
Risk-based design, development, and monitoring
The medical device industry's long-standing use of a risk-based approach has been pivotal in guiding the design, development, and deployment of healthcare technologies from a least-burdensome perspective. This method empowers HCOs to allocate resources strategically, emphasizing equity, safety, and data privacy to prevent errors and foster continuous improvement.
Risk-based practices in healthcare AI/ML have expanded beyond the medical device domain. Initiatives such as AAMI's Technical Information Report and broader frameworks such as the NIST AI Risk Management Framework, the White House Blueprint for an AI Bill of Rights, the Coalition for Health AI Blueprint, and the Health AI Partnership Key Decision Points exemplify the incorporation of risk management into AI initiatives. Grounded in intended use, risk management involves identifying, enumerating, mitigating, and monitoring potential hazards. Risk-minimizing features are integrated during design and development, and a comprehensive risk management plan is established to address safety, bias, and other anticipated issues throughout the software's life cycle.
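The identify–enumerate–mitigate–monitor cycle described above can be pictured as a simple risk register. The sketch below is purely illustrative, not from the paper; the classes, field names, and the example hazard are hypothetical placeholders for what an HCO's register might track.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Hazard:
    """One identified hazard, its mitigation, and whether it is under monitoring."""
    description: str
    severity: Severity
    mitigation: str
    monitored: bool = False  # flipped to True once monitoring is in place

@dataclass
class RiskRegister:
    """Risk register grounded in the software's intended use."""
    intended_use: str
    hazards: list = field(default_factory=list)

    def add(self, hazard: Hazard) -> None:
        self.hazards.append(hazard)

    def open_high_risks(self) -> list:
        # High-severity hazards that are not yet under monitoring
        return [h for h in self.hazards
                if h.severity is Severity.HIGH and not h.monitored]

# Hypothetical example entry
register = RiskRegister(intended_use="Inpatient deterioration risk prediction")
register.add(Hazard(
    description="Performance degradation in under-represented patient subgroups",
    severity=Severity.HIGH,
    mitigation="Stratified validation and subgroup performance monitoring",
))

assert len(register.open_high_risks()) == 1
```

The point of the structure is that every hazard carries its mitigation and monitoring status together, so the register itself becomes auditable documentation rather than a one-time checklist.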
Continuous monitoring, reporting, and systematic feedback within a QMS ensure ongoing safety and adherence to risk controls. QMS formalization systematically identifies risks, documents mitigation strategies, and provides a framework for testing and auditing. This approach, complemented by software life cycle and AI/ML best practices, facilitates the capture of performance metrics, enabling calibration and maintenance during deployment. A multidisciplinary governance system guides proactive risk mitigation, and a robust change management plan with access controls ensures transparency and continuity. Ultimately, a risk-based approach supplies the necessary rigor and control at each stage of the AI technology life cycle.
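A minimal sketch of the performance-metric check that such monitoring implies: compare live performance against the validated baseline and flag the model for review when drift exceeds a tolerance. The metric (AUROC) and threshold here are assumptions for illustration; a real QMS would define both in the risk management plan.

```python
def needs_recalibration(baseline_auroc: float,
                        current_auroc: float,
                        tolerance: float = 0.05) -> bool:
    """Flag a deployed model for review when live performance drifts
    below the validated baseline by more than `tolerance`.
    Both the metric and the 0.05 tolerance are illustrative choices."""
    return (baseline_auroc - current_auroc) > tolerance

# Drift of 0.08 exceeds the tolerance and triggers review
assert needs_recalibration(0.86, 0.78)
# Drift of 0.02 stays within tolerance
assert not needs_recalibration(0.86, 0.84)
```

In practice this check would run on a schedule, write its result to the audit trail, and route flagged models to the governance body rather than silently retraining.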
To navigate evolving regulations in healthcare software, a robust QMS serves as a compliance-facilitating infrastructure for HCOs. Whether a given piece of software is regulated depends on its intended use and on regulatory changes. A QMS helps minimize operational risk by standardizing stakeholder responsibilities, ensuring auditability and traceability, and maintaining an inventory of developed and deployed AI technologies. It establishes policies and procedures for governance, development, evaluation, maintenance, and monitoring, with an emphasis on roles and training requirements for stakeholders.
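The inventory mentioned above can be as simple as a structured record per deployed tool, capturing the fields an audit would need to trace. Every name, field, and value below is a hypothetical example, not a schema from the paper.

```python
# Hypothetical inventory of deployed AI technologies, one record per tool.
inventory = [
    {
        "name": "deterioration-risk-model",
        "version": "2.3.1",
        "intended_use": "Early warning for inpatient deterioration",
        "regulatory_status": "institutional determination pending",
        "owner": "Clinical AI Governance Committee",
        "last_audit": "2024-03-01",
        "monitoring_plan": "monthly performance and calibration report",
    },
]

def lookup(inv: list, name: str) -> list:
    """Return all records for one tool, supporting auditability and traceability."""
    return [record for record in inv if record["name"] == name]

records = lookup(inventory, "deterioration-risk-model")
assert records[0]["version"] == "2.3.1"
```

Keeping intended use and regulatory status in the same record as the version and owner is what lets the inventory answer the question L10 raises: whether, and under what determination, each deployed tool is regulated.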
Bidirectional communication within a regulated QMS gathers real-time data, integrating with patient safety operations and risk management. Creating an innovation infrastructure aligned with compliance requires governance support, a leadership mandate, and integration with ethical standards. This approach allows HCOs to build tools, testing, and monitoring into their QMS, promoting safe, effective, and ethical AI/ML at scale through formal documentation that ensures transparency and traceability to regulatory requirements.
In conclusion, HCOs can expedite the integration of AI into clinical practice using a QMS. This proactive framework, blending a quality culture, risk-based design, and compliance infrastructure, ensures continuous ethical oversight. Implementing a QMS demands adaptability and interdisciplinary collaboration and draws on regulatory insight. The approach prioritizes patient safety, fosters trust, and facilitates the adoption of innovative AI technologies, including those driven by Large Language Models.
- Overgaard, S. M., Graham, M. G., Brereton, T., Pencina, M. J., Halamka, J. D., Vidal, D. E., & Economou-Zavlanos, N. J. (2023). Implementing quality management systems to close the AI translation gap and facilitate safe, ethical, and effective health AI solutions. npj Digital Medicine, 6(1), 1–5. https://doi.org/10.1038/s41746-023-00968-8