ChatGPT Speeds Up Patient Interview Analysis with Human Oversight

ChatGPT 4.0 boosts analysis efficiency in qualitative research, offering significant time savings. However, it still requires human expertise to refine results and capture subtle patient insights.

Research: Unravelling ChatGPT’s potential in summarising qualitative in-depth interviews. Image Credit: TippaPatt / Shutterstock

In an article published in the Nature Portfolio journal Eye, researchers compared the efficiency and theme-identification accuracy of Chat Generative Pre-trained Transformer (ChatGPT) versions 3.5 and 4.0 with traditional human analysis in processing patient interview transcripts from a community eye clinic.

Results showed ChatGPT significantly reduced analysis time with moderate to high theme concordance, suggesting it could support rapid, preliminary qualitative analysis, though human researchers remain necessary for final theme refinement.

Background

Qualitative research is essential for gaining insights into complex, real-world issues by capturing participants' experiences and perspectives. While valuable, this approach is often resource-intensive due to time-consuming steps like data collection, transcription, and analysis.

Previous studies report substantial labor and costs, with transcription alone consuming hours per interview and incurring thousands of dollars in expenses. To address these challenges, artificial intelligence (AI) has shown potential to streamline qualitative analysis. ChatGPT, OpenAI’s language model, emerged as a promising tool for efficiently processing and analyzing large datasets.

Earlier research by De Paoli demonstrated ChatGPT 3.5’s ability to identify themes from interview transcripts but did not assess ChatGPT 4.0’s capabilities. This paper built on those findings by comparing ChatGPT versions 3.5 and 4.0 with traditional analysis and evaluating both their speed and accuracy in theme identification.

Methods and Data Analysis Approach

The authors evaluated the use of ChatGPT 3.5 and 4.0 in analyzing qualitative data from in-depth interviews on patient experiences at a community clinic. Three anonymized transcripts were selected, and themes were coded manually by researchers, who developed a working codebook through iterative analysis. ChatGPT 3.5 and 4.0 were then given the same transcripts in four-page segments, along with specific instructions to maintain thematic continuity.
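The segmentation step described above can be illustrated with a minimal sketch. The segment size, the helper names, and the prompt wording below are assumptions for illustration, not the authors' actual protocol:

```python
# Illustrative only: split a long transcript into roughly page-sized chunks
# and pair each chunk with an instruction to keep theme names consistent
# across segments. Chunk size and prompt wording are hypothetical.

def segment_transcript(text, chars_per_segment=8000):
    """Split a transcript into chunks of at most chars_per_segment,
    breaking only on line boundaries so speaker turns stay intact."""
    segments, current, length = [], [], 0
    for line in text.splitlines(keepends=True):
        if length + len(line) > chars_per_segment and current:
            segments.append("".join(current))
            current, length = [], 0
        current.append(line)
        length += len(line)
    if current:
        segments.append("".join(current))
    return segments

def build_prompts(segments):
    """Prefix each segment with a thematic-continuity instruction
    (hypothetical wording, not the study's exact prompt)."""
    instruction = ("Identify the main themes in this interview segment, "
                   "keeping theme names consistent with earlier segments.")
    return [f"{instruction}\n\nSegment {i + 1} of {len(segments)}:\n{seg}"
            for i, seg in enumerate(segments)]
```

Each prompt from `build_prompts` would then be sent to the model in order, so later segments can reuse theme labels surfaced earlier.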

To assess the accuracy of ChatGPT's thematic analysis, the researchers calculated concordance by comparing ChatGPT-generated themes with their manually established themes. The findings indicated that ChatGPT significantly reduced analysis time, averaging 11.5-11.9 minutes per transcript compared to 240 minutes for manual analysis. ChatGPT 3.5 achieved an 83.5% concordance, while ChatGPT 4.0 showed similar concordance, with fewer unrelated subthemes.
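One simple way to compute a concordance figure like those reported is the share of researcher-coded themes that the model also identified. The sketch below is illustrative, with hypothetical theme names; the study's exact matching procedure may differ:

```python
# Illustrative only: concordance as the percentage of human-coded themes
# matched (case-insensitively) by model-generated themes.
# Theme names are hypothetical, not taken from the study.

def theme_concordance(human_themes, model_themes):
    """Return the share of human-coded themes matched by the model, in %."""
    human = {t.strip().lower() for t in human_themes}
    model = {t.strip().lower() for t in model_themes}
    if not human:
        return 0.0
    return 100.0 * len(human & model) / len(human)

human = ["wait times", "staff communication", "cost of care", "follow-up access"]
model = ["Wait Times", "Staff Communication", "cost of care"]
print(round(theme_concordance(human, model), 1))  # 75.0
```

In practice, matching themes by exact string overlap is too strict; researchers typically judge semantic equivalence manually, which is why human review of the model's output remains part of the workflow.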

The researchers suggested that ChatGPT could streamline qualitative analysis, though additional refinement by researchers was necessary. Ethical approval was granted, and informed consent was obtained from all interview participants.

Results and Analysis

The researchers examined the feasibility of using ChatGPT models 3.5 and 4.0 for qualitative data analysis in healthcare, specifically analyzing patient experiences at a community eye clinic. Using transcripts from three Chinese participants with diverse eye conditions, ChatGPT processed data much faster than manual methods, taking roughly 11.5 minutes per transcript compared to 240 minutes for researchers.

While both ChatGPT versions demonstrated similar concordance (83.7%) with the researcher-generated themes, ChatGPT 4.0 generated fewer irrelevant subthemes than ChatGPT 3.5, potentially indicating improved contextual relevance. However, ChatGPT’s generated subthemes sometimes lacked alignment with the study’s aims, possibly due to gaps in its interpretative abilities regarding subtle, nuanced human factors.

Despite ChatGPT’s efficiency, limitations remained in its capacity to capture deeper emotions and implicit themes, which a human researcher would likely discern. For example, ChatGPT generated subthemes such as "contact lens prescription" and "personal history of chronic conditions" that were unrelated to the study's aims. Ethical considerations also surfaced, as AI-driven transcription services might pose confidentiality risks if sensitive data is shared with external entities. Additionally, AI biases rooted in training data, which often reflect Western perspectives, could limit the cultural sensitivity needed for populations with diverse backgrounds, such as Asian communities.

Nevertheless, ChatGPT’s role in streamlining preliminary analyses offered promise for future healthcare applications. For example, it could help clinicians summarize fundamental patient interactions, enhance patient-provider relationships, and save valuable time.

Future studies should focus on refining prompts to better guide AI models and ensure data accuracy through human researchers' cross-referencing. The combined use of ChatGPT with transcription tools like Whisper could further reduce costs, making qualitative research more accessible and relevant for healthcare improvement.

Conclusion

In conclusion, the researchers highlighted ChatGPT’s potential as a valuable tool for streamlining qualitative data analysis. It offers significant time savings and moderate to good concordance with human-generated themes. While ChatGPT could support rapid, preliminary analysis, human involvement remained essential for interpreting nuanced themes and ensuring accuracy.

Future refinements, including tailored prompts and enhanced cross-checking, may further improve ChatGPT’s applicability in qualitative healthcare research. Ultimately, ChatGPT is best positioned as a supportive, collaborative tool in qualitative analysis, providing efficiency gains while preserving the depth and rigor human researchers bring to complex data interpretation.

Journal reference:
  • Kon, M. H. A., Pereira, M. J., Molina, J. A. D. C., Yip, V. C. H., Abisheganaden, J. A., & Yip, W. (2024). Unravelling ChatGPT’s potential in summarising qualitative in-depth interviews. Eye. DOI:10.1038/s41433-024-03419-0, https://www.nature.com/articles/s41433-024-03419-0

Article Revisions

  • Nov 11 2024 - Correction to journal name, from Nature to Nature Eye.

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Nandi, Soham. (2024, November 10). ChatGPT Speeds Up Patient Interview Analysis with Human Oversight. AZoAi. Retrieved on December 12, 2024 from https://www.azoai.com/news/20241110/ChatGPT-Speeds-Up-Patient-Interview-Analysis-with-Human-Oversight.aspx.

