Two-thirds of U.S. doctors see the benefits of using artificial intelligence (AI) in their work, according to a survey by the American Medical Association. However, only about one-third reported using it at the time. This gap highlights the need for improved training, as medical schools, like institutions in many other sectors, struggle to prepare future physicians to use AI responsibly and effectively.
Tufts School of Medicine assesses AI readiness and integration
Tufts University School of Medicine adopted an AI-focused learning objective in fall 2024. Building on that initiative, a team led by Maria Blanco, professor of psychiatry and associate dean for faculty development, published a study in 2025 exploring how faculty, staff, and students use and perceive AI. The needs-assessment survey helped establish a roadmap for integrating AI training into the curriculum.
“This technology has the potential to change how medicine is practiced and how students learn, so we cannot disregard it,” Blanco said.
Survey results reveal early adoption but limited proficiency
The survey was conducted among all Tufts medical students and a select group of faculty. Responses were received from 128 faculty and 138 students, evenly distributed across academic years. The findings revealed widespread experimentation with AI tools but low confidence in their use: only 12% of students and 9% of faculty reported proficiency.
AI use cases in clinical practice, research, and teaching
Faculty most commonly use AI in clinical practice, for example to assist with note-taking and patient chart analysis. Others use it in research or classroom teaching, employing it to develop learning materials such as quizzes and test questions, provided the outputs are verified for accuracy.
Students primarily use AI to enhance their understanding of coursework, leveraging it for interactive learning, explanation, and content summarization.
Ethical awareness is central to AI adoption in medical training
Both students and faculty expressed concern about the ethical implications of AI use and requested formal training on the responsible application of AI. The School of Medicine’s AI & Biotech Medicine Club hosted a faculty-student forum last spring to discuss the ethics of AI, reflecting the growing interest in the technology’s societal and clinical implications.
Blanco emphasized that ethical awareness must include recognizing the limitations of AI. “These technologies simulate knowledge generation and clinical reasoning with human-like fluency. This can be deceiving,” she said. “We have to make sure that we train everyone to know how to use it, because it has assets, but also significant pitfalls and risks.”
Maintaining human-centered care amid technological change
Blanco underscored that AI should augment, not replace, human judgment. Faculty expressed similar concerns that overreliance on AI could weaken the human connection fundamental to medicine. “At the end of the day, we have to make sure this is what helps us refocus on our human-centered care,” Blanco said. “My hope is that this will push us to strengthen critical thinking and refine our clinical reasoning skills.”
Hands-on training and faculty development initiatives
Both students and faculty noted that they had limited time to experiment with AI tools. Two-thirds of respondents recommended more training and practical workshops to build competency. In response, the School of Medicine launched a series of faculty and student seminars focused on AI applications and critical evaluation.
The school also updated its 2024 AI policy and launched several pilot projects, including:
- AI-assisted instruction in the Problem-Based Learning course led by medical librarians;
- A chatbot for practicing communication in Medical Interviewing and the Doctor-Patient Relationship;
- An interactive AI learning app for the Neuroscience course;
- AI-supported case analyses in Introduction to Clinical Reasoning.
Institutional support and cross-campus collaboration
Tufts University has expanded its institutional resources for AI integration, including guidelines on discussing AI in syllabi and training materials for faculty and staff on appropriate AI use. These initiatives aim to ensure consistency and ethical standards across disciplines.
“Medical schools and medical organizations are embracing AI to help physicians and trainees make the most of these resources,” Blanco said. “The key is that we are all learning from each other: students, faculty, and, of course, our technology teams.”
Toward a future of responsible AI in medicine
Tufts University’s initiatives reflect a growing movement in medical education to strike a balance between innovation and ethics. By equipping future clinicians with both technical and critical reasoning skills, the School of Medicine aims to prepare graduates for a healthcare landscape where AI serves as a powerful yet carefully managed partner in patient care.