A Dartmouth study of a Neuroscience course finds that medical students favor AI systems grounded in vetted textbooks and clinical guidelines, evidence that carefully constrained AI can deliver safer, more reliable precision education while reducing hallucinations and boosting student confidence.

A generative AI teaching assistant for personalized learning in medical education. Image Credit: Miha Creative / Shutterstock
A new Dartmouth investigation reveals that artificial intelligence, when combined with carefully curated source material, can deliver individualized educational support at scale. The work also offers the first evidence that students place greater trust in AI systems that limit their responses to expert-curated information rather than general internet data.
Study Context and Development of NeuroBot TA
Professor Thomas Thesen and co-author Soo Hwan Park evaluated how 190 medical students at Dartmouth’s Geisel School of Medicine engaged with NeuroBot TA, an AI teaching assistant available around the clock during a Neuroscience and Neurology course. The platform uses retrieval-augmented generation (RAG), which anchors the AI’s responses in course textbooks, lecture slides, and clinical guidelines. By restricting the system to vetted content, the designers aim to reduce the hallucinated or low-quality answers common in general-purpose chatbots.
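The retrieval-augmented approach described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not NeuroBot TA's actual implementation (which is not public): it uses simple word-overlap retrieval over a small invented corpus, where a production system would use embedding-based search over the real course materials. The key idea it demonstrates is constraining the model's prompt to curated sources only.

```python
# Toy sketch of retrieval-augmented generation (RAG) over a curated corpus.
# Hypothetical example; NeuroBot TA's real retriever and corpus differ.

CURATED_SOURCES = [
    {"title": "Course textbook, Ch. 4",
     "text": "The hippocampus supports the formation of new episodic memories."},
    {"title": "Lecture 7 slides",
     "text": "Dopaminergic neurons in the substantia nigra degenerate in Parkinson disease."},
    {"title": "Clinical guideline",
     "text": "First-line treatment for status epilepticus is a benzodiazepine."},
]

def retrieve(question: str, corpus: list[dict], k: int = 1) -> list[dict]:
    """Rank curated passages by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str, corpus: list[dict]) -> str:
    """Constrain the model to answer only from retrieved course material."""
    passages = retrieve(question, corpus)
    context = "\n".join(f"[{d['title']}] {d['text']}" for d in passages)
    return (
        "Answer using ONLY the sources below; "
        "if they are insufficient, say you don't know.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "Which neurons degenerate in Parkinson disease?", CURATED_SOURCES
)
print(prompt)
```

Because the prompt is assembled exclusively from vetted passages, an answer the sources cannot support should be declined rather than invented, which is the safety property the study attributes to curated systems.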
Trust Advantages of Curated AI Systems
The study, published in npj Digital Medicine, reports that students overwhelmingly trusted the curated AI assistant more than general chatbots. Students valued NeuroBot TA’s transparency and evidence-based responses, and they frequently used it during exam preparation for rapid fact-checking.
Thesen describes the work as a step toward precision education: instruction that adapts to individual needs, especially valuable in low-resource environments where students have limited access to instructors.
User Experience and Learning Behaviors
Researchers analyzed responses from 143 students who completed surveys across two academic years. Key perceptions included:
- High trust in answers grounded in official course materials
- Appreciation for speed and convenience
- Frequent use for fact verification rather than deeper conceptual exploration
Even frequent chatbot users noted frustration that NeuroBot TA could not access the wider internet, although this constraint was central to ensuring accuracy. The study also highlights that students often lack the domain expertise needed to detect hallucinations, reinforcing the safety advantages of curated sources.