With AI tools becoming central to digital pathology, a new checklist offers practical guidance for improving how studies are reported, supporting clarity, reproducibility, and real-world adoption.

A new editorial in the journal Veterinary Pathology introduces a nine-point checklist designed to improve the reporting quality of studies that use artificial intelligence (AI)-based automated image analysis (AIA). As AI tools become more widely used in pathology research, concerns have emerged about the reproducibility and transparency of published findings, especially when such tools are applied by researchers without formal training in computer science or machine learning.
Interdisciplinary guidance to support transparent AI methodology
Developed by an interdisciplinary team of veterinary pathologists, machine learning experts, and journal editors, the checklist outlines the key methodological details that manuscripts should include: dataset creation, model training and performance evaluation, and interaction with the AI system. It is explicitly tailored to studies involving microscopic image analysis, which remains the most common application of AI in veterinary pathology.
Among the most critical elements highlighted in the guidelines is the availability of supporting data, including training datasets, source code, and model weights, which the authors describe as essential for reproducibility and external validation. Without access to these components, the reliability of AI-based methods across laboratories and studies can be compromised, even when the same algorithms are used.
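To make that point concrete, the sketch below (a minimal illustration, not taken from the editorial, assuming a PyTorch workflow with hypothetical file names such as model_weights.pt and manifest.json) shows trained weights being packaged alongside a small provenance manifest, then reloaded as an external lab would for independent validation.

```python
# Minimal sketch (hypothetical, not from the editorial): packaging model
# weights with a metadata manifest so another lab can reproduce evaluation.
import json
import torch
import torch.nn as nn

# A stand-in classifier; in practice this would be the published architecture.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 2))

# Save the trained weights plus a manifest describing how they were produced.
torch.save(model.state_dict(), "model_weights.pt")
manifest = {
    "architecture": "linear-classifier-demo",  # hypothetical identifier
    "training_data": "DOI or accession of the shared training dataset",
    "framework": f"torch=={torch.__version__}",
    "random_seed": 42,
}
with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)

# An external lab rebuilds the architecture and loads the exact shared
# weights, so its evaluation tests the published model, not a retrained one.
reloaded = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 2))
reloaded.load_state_dict(torch.load("model_weights.pt"))
reloaded.eval()
```

Without the weights and manifest, a second lab could only retrain from scratch, and any discrepancy would be impossible to attribute to the model versus the retraining.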
Promoting reproducibility, clarity, and implementation
The aim is to support clear communication of methods so that reviewers can evaluate them thoroughly, other researchers can reproduce them, and validated AI tools can eventually be integrated into diagnostic pathology workflows. The authors explain that, because AI model training is statistical and data dependent, identical methods can yield divergent results when applied in different settings, as the toy example below illustrates.
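The following snippet is a minimal sketch of that phenomenon (illustrative only, using synthetic data and PyTorch; none of it comes from the editorial): the same training code run with different random seeds reaches different accuracies, while runs with the same seed match, which is why seeds and other sources of randomness belong in a methods report.

```python
# Minimal sketch (illustrative only): identical training code can diverge
# across runs unless sources of randomness are reported and controlled.
import torch
import torch.nn as nn

def train_once(seed: int) -> float:
    torch.manual_seed(seed)           # fixes weight init and data generation
    x = torch.randn(200, 10)          # synthetic stand-in for image features
    y = (x.sum(dim=1) > 0).long()     # synthetic binary labels
    model = nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    # Training-set accuracy as a simple performance readout.
    return (model(x).argmax(dim=1) == y).float().mean().item()

print(train_once(0), train_once(1))   # different seeds: results differ
print(train_once(0), train_once(0))   # same seed: results match
```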
"Transparent reporting is critical for reproducibility and for translating AI tools into routine pathology workflows," the authors write. They emphasize that the availability of supporting data, such as training datasets, source code, and model weights, is essential for meaningful validation and broader application.
Supporting authors and streamlining peer review
The editorial strongly encourages authors to complete the checklist when preparing AI-based studies and to submit it as supplemental material, which will be made available to editors and peer reviewers. This process is expected to reduce publication delays by limiting the need for extensive revisions and clarifying expectations from the outset.
The guidelines are intended to assist authors, reviewers, and editors, and will be particularly useful for submissions to Veterinary Pathology's upcoming special issue on AI. More broadly, the checklist reflects a growing consensus on the need for standardized reporting practices in biomedical AI research, especially in disciplines such as veterinary pathology, where such standards have only recently begun to take shape.
Journal reference:
- Bertram CA, Schutten M, Ressel L, Breininger K, Webster JD, Aubreville M. Reporting guidelines for manuscripts that use artificial intelligence–based automated image analysis in Veterinary Pathology. Veterinary Pathology. 2025;62(5):615-617. doi:10.1177/03009858251344320