The Transformative Role of AI in Oncology Data Analysis

In an article recently published in the journal npj Precision Oncology, researchers reviewed the potential of multimodal models and large language models (LLMs) in precision oncology.

Overview of medical adaptations of LLMs.

In oncology, the volume of patient-specific data is expanding rapidly due to advances in medical imaging, the integration of large-scale genomic analyses into clinical routines, and the extensive use of electronic health records (EHRs). Using this vast quantity of data effectively is crucial for providing optimal treatment to cancer patients.

Artificial intelligence (AI) has seen rapid technological progress since 2022, with significant implications for cancer research and oncology. AI and machine learning can help healthcare professionals process the huge amounts of data generated in oncology, and current LLMs can perform many text-processing tasks at human-level competency.

Additionally, image and text processing networks increasingly leverage transformer neural networks. This convergence can facilitate multimodal AI model development. These models can simultaneously take different data types as input, which indicates a qualitative shift from the specialized niche models common in the 2010s.

The paper reviews recent innovations in AI, specifically in LLMs and multimodal models, that are expected to shape precision oncology.

LLMs for precision oncology

LLMs are deep learning (DL) models that process and generate text-based data. They are trained on large and diverse text corpora, which can include many medical data types. The most effective recent models rely on transformer-based architectures and their attention mechanisms.
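The attention mechanism mentioned above can be sketched in a few lines. The following is a minimal pure-Python illustration of scaled dot-product attention over toy token vectors; the function names and the use of plain lists instead of a tensor library are choices made here for illustration, not details from the reviewed paper.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query is scored against all keys,
    # and the softmaxed scores weight a mixture of the value vectors.
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

Because the same mechanism applies to any sequence of vectors, not only word embeddings, it underlies the multimodal convergence discussed later in the article.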

LLMs can also be used for new tasks without any task-specific training, which is known as zero-shot application. These models have been investigated for various applications in healthcare. Different approaches can be employed to adapt LLMs to medical problems, such as training the models exclusively on medical data. For instance, BioBERT, one of the first LLMs in the medical domain, demonstrated strong capabilities in understanding biomedical text.
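Zero-shot use amounts to describing the task in the prompt itself rather than updating any model weights. A minimal sketch of such prompt construction follows; the helper name and prompt wording are illustrative assumptions, not anything prescribed by the review.

```python
def build_zero_shot_prompt(report_text, task):
    # No task-specific examples or fine-tuning: the instruction alone
    # tells the model what to do with the clinical text.
    return (
        f"You are assisting with a medical text task.\nTask: {task}\n"
        "Answer based only on the report below.\n\n"
        f"Report:\n{report_text}\n\nAnswer:"
    )
```

The resulting string would be sent as-is to a generalist LLM; swapping the task description is all that is needed to repurpose the same model for a different problem.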

Similarly, Med-PaLM, developed by fine-tuning Google's LLM PaLM on medical training data, outperformed its base model on medical use cases. Med-PaLM 2, the subsequent iteration, scored 86.5% on the United States Medical Licensing Examination (USMLE).

Although solving USMLE questions is of limited practical use, fine-tuned LLMs have also addressed real-world problems such as clinical outcome prediction from the unstructured text in EHRs alone. Generalist LLMs can be applied to medical tasks without fine-tuning, using only a detailed input prompt. Retrieval-augmented generation (RAG) is another alternative, in which domain knowledge is provided to a trained LLM in a machine-readable format.
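Retrieval-augmented generation can be sketched as a two-step pipeline: rank domain documents against the query, then prepend the best matches to the prompt. The word-overlap scorer below is a deliberately simple stand-in for the embedding-based vector-database lookup a production system would use; all names here are illustrative.

```python
def retrieve(query, documents, k=2):
    # Rank documents by word overlap with the query; a real system would
    # use embedding similarity against a vector database instead.
    q = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def rag_prompt(query, documents):
    # Prepend the retrieved domain knowledge so the model can ground
    # its answer in machine-readable reference text.
    context = "\n".join(retrieve(query, documents))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context above.")
```

The appeal of this pattern is that the domain knowledge lives outside the model and can be updated without retraining.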

Multimodal models

Current LLMs are mostly transformer neural networks, which are suitable for all data types and thereby enable multimodality. Multimodal AI systems can interpret multiple data types together, and their development and validation require collaboration across several disciplines, including hardware and software experts and medical experts from clinical specialties such as internal medicine or surgery and diagnostic specialties such as pathology or radiology.

These multimodal systems have been investigated for various precision oncology applications, such as outcome prediction. Models pre-trained on large and diverse datasets and then adapted to specialized tasks are known as foundation models. They reduce the data requirements of specialized tasks, such as disease prediction from retinal photographs.
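The reduced data requirement can be illustrated with a linear probe: the foundation model's backbone stays frozen, and only a small head is trained on its output features. In the toy sketch below, hypothetical pre-computed feature vectors stand in for backbone outputs, and a tiny logistic-regression head is trained from scratch; this is an illustrative simplification, not the method of any paper cited in the review.

```python
import math

def train_linear_probe(features, labels, lr=0.1, epochs=200):
    # Logistic-regression head trained on frozen backbone features:
    # only this small head's weights are updated.
    dim = len(features[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

Because only a handful of head parameters are fitted, far fewer labeled examples are needed than for training a full network end to end, which is exactly the economy the article attributes to foundation models.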

Similarly, by linking chest X-ray images to the text of the corresponding reports, foundation models reduce the need for laborious and time-consuming manual annotation while maintaining human-level accuracy and surpassing the performance of supervised methods.
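At inference time, this image-report linking reduces to nearest-neighbour matching between image and text embeddings. A minimal cosine-similarity sketch is shown below; the embeddings are made-up toy vectors, whereas a real system would obtain them from trained image and text encoders.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def match_reports(image_embs, report_embs):
    # For each image embedding, return the index of the most similar
    # report embedding: the retrieval step of image-text alignment.
    return [max(range(len(report_embs)),
                key=lambda j: cosine(img, report_embs[j]))
            for img in image_embs]
```

Training pulls matching image-report pairs together in this shared embedding space, which is why the matching step then needs no manual labels.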

These models can be deployed as chatbot assistants that aid diagnosis interactively in clinical practice. In pathology, linking large image datasets with case-specific information and contextual knowledge can yield high performance in both biomarker prediction and disease detection.

Moreover, early generalist models have displayed consistently high performance across several medical tasks and domains by integrating knowledge from multiple domains. Recent advances in open-source models enable de novo model development and training at significantly lower computational and financial cost.

Existing challenges

Although the expanding capabilities of foundation models make them suitable for potential applications in cancer research and oncology, such as drug discovery and multimodal diagnostics, several existing challenges must be overcome to unlock their full potential. For instance, the data used to train a model must be assessed for diversity, quantity, and quality.

Similarly, the design of systems integrating foundation models must be guided by computer science experts, patient advocates, medical professionals, and the wider scientific community. Moreover, integrating these models into operable clinical software faces both regulatory and legal challenges, as the models must obtain approval as medical devices. The limited interpretability of AI models is another significant challenge: although model explainability is relatively mature for image-related tasks, it remains an open problem for multimodal and text-processing tasks in medicine.

To summarize, advances in multimodal models and LLMs can potentially impact precision oncology across many applications. However, more scientific evidence is required to show that they provide quantifiable benefits in oncology.


Written by Samudrapom Dam


Dam, Samudrapom. (2024, March 29). The Transformative Role of AI in Oncology Data Analysis. AZoAi.

