CPMI-ChatGLM: Unlocking the Potency of Chinese Patent Medicines

In an article published in the journal Scientific Reports, researchers from China introduced a pre-trained language model (PLM) named CPMI-ChatGLM (Chinese patent medicine instructions chat generative language model) to generate accurate and effective instructions for Chinese patent medicines (CPMs).

Study: CPMI-ChatGLM: Unlocking the Potency of Chinese Patent Medicines. Image credit: marilyn barbone/Shutterstock

They fine-tuned their technique to improve medication safety, promote traditional Chinese culture, contribute to the standardization of CPM usage, and ultimately assist doctors and patients in understanding the efficacy, dosage, and administration of Chinese medicines.

Background

Traditional Chinese medicine (TCM) has existed for thousands of years and is respected worldwide. Modern research indicates that Chinese herbs used in TCM can deliver real health benefits, and the World Health Organization (WHO) recognizes TCM as a useful complement to conventional medicine. A prominent example is Lianhua Qingwen, an herbal formula derived from classical prescriptions that contains ingredients such as Isatis root, Weeping forsythia, Lonicera japonica, Ephedra, Licorice root, Male fern rhizome, Gypsum fibrosum, Cablin patchouli herb, Herba Rhodiolae, Houttuynia cordata, Rhubarb root and rhizome, and Bitter apricot seed. It has been found to help treat coronavirus disease 2019 (COVID-19).

In the realm of TCM, CPMs play a pivotal role. These standardized preparations use Chinese herbs as raw materials and serve as essential therapeutic tools for various illnesses. A critical aspect of CPM use is the provision of accurate and effective instructions that guide patients in safe and efficient administration. Against this backdrop, Chinese patent medicine instructions (CPMIs) become a critical component: they not only improve how Chinese herbs are used in clinics but also ensure patients use them safely by setting standard guidelines.

About the Research

In the present paper, the authors highlighted the gap between PLMs and the field of TCM and proposed CPMI-ChatGLM, a parameter-efficient language model fine-tuned specifically for cost-effectively generating CPMIs. They began by creating a novel dataset by carefully gathering, processing, and releasing the first-ever CPMI dataset. Following this, they fine-tuned the base model ChatGLM with six billion parameters (ChatGLM-6B) and conducted performance evaluations.

Dataset creation involved collecting and organizing relevant data, in this case information on CPM treatments and consultation records. The researchers labeled and processed the data to ensure its quality, yielding 3,906 records that span various medical specialties and cover instructions, dosages, and usage guidelines for a wide range of CPMs. By compiling this dataset and using it to train the model, they gave CPMI-ChatGLM the knowledge needed to offer accurate, context-sensitive recommendations.
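As an illustration of what such instruction-tuning data can look like, the sketch below builds a single hypothetical record in the common instruction/input/output JSON-lines format. The field names and example text are assumptions for illustration, not the paper's exact schema.

```python
import json

# One illustrative instruction-tuning record for a CPMI-style dataset.
# Field names and content are hypothetical, not taken from the paper.
record = {
    "instruction": "What is the recommended dosage of Lianhua Qingwen capsules?",
    "input": "",  # optional extra context, e.g. patient symptoms
    "output": "Illustrative answer: take orally as directed on the label; "
              "consult a physician before use.",
}

def to_training_line(rec):
    """Serialize one record as a JSON line, a common on-disk format
    for instruction-tuning datasets."""
    return json.dumps(rec, ensure_ascii=False)

print(to_training_line(record))
```

A full dataset would simply be thousands of such lines, one per consultation or instruction record.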

In the fine-tuning step, the authors leveraged consumer-grade graphics cards to adapt the pre-trained model to CPM tasks. They experimented with parameter-efficient methods such as low-rank adaptation (LoRA) and prompt tuning version-2 (P-Tuning v2) to adjust the model's parameters and improve its performance, and also explored variations in data scale and instruction data settings to optimize results. This process aimed to enhance the model's accuracy and reliability in generating responses and recommendations.
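To make the parameter-efficiency argument concrete, the toy sketch below implements the core LoRA idea from scratch: freeze a weight matrix W and train only two small factors whose product forms a low-rank update. The shapes, values, and scaling here follow the standard LoRA formulation, not the paper's exact configuration; a real fine-tune of ChatGLM-6B would use a library such as Hugging Face PEFT.

```python
# Minimal sketch of LoRA: instead of updating a full d_out x d_in weight
# matrix W, train two small factors B (d_out x r) and A (r x d_in) and
# add their scaled product as a low-rank delta. Pure-Python matrices.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_update(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the LoRA-adapted weight."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d_out, d_in, r = 4, 4, 1                   # tiny toy shapes; real layers are far larger
W = [[0.0] * d_in for _ in range(d_out)]   # frozen pretrained weight
A = [[1.0] * d_in]                         # r x d_in, trainable
B = [[1.0] for _ in range(d_out)]          # d_out x r, trainable
W_adapted = lora_update(W, A, B, alpha=2.0, r=r)

full_params = d_out * d_in                 # parameters a full update would train
lora_params = d_out * r + r * d_in         # parameters LoRA trains instead
print(lora_params, "trainable instead of", full_params)
```

Because only A and B receive gradients, the trainable-parameter count scales with the rank r rather than with the full layer size, which is what makes fine-tuning on consumer-grade graphics cards feasible.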

During the evaluation phase, the researchers utilized three families of metrics: bilingual evaluation understudy (BLEU), recall-oriented understudy for gisting evaluation (ROUGE), and BARTScore, which is based on the BART sequence-to-sequence model. BLEU measures n-gram similarity between two texts, while ROUGE-1, ROUGE-2, ROUGE-L (longest common subsequence), and BARTScore evaluate how closely the generated text matches the reference text. Together, these metrics assessed the accuracy, fluency, and information completeness of the model's responses.
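As a rough illustration of one of these metrics, the sketch below computes a simplified ROUGE-L F1 score from the longest common subsequence (LCS) of two whitespace-tokenized strings. It is a didactic approximation, not the exact implementation the authors used.

```python
# Toy ROUGE-L: score the longest common subsequence (LCS) shared by a
# generated text and a reference, then combine LCS-based precision and
# recall into an F1 score.

def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 from LCS-based precision and recall."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# Hypothetical dosage texts, just to exercise the metric.
score = rouge_l_f1("take two capsules three times daily",
                   "take two capsules twice daily")
print(round(score, 3))
```

Identical texts score 1.0 and texts with no shared tokens score 0.0, which is why ROUGE-L rewards generated instructions that preserve the reference wording and ordering.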

Research Findings

The outcomes showed that the newly designed CPMI-ChatGLM model achieved state-of-the-art scores compared with other large language models (LLMs), indicating that it outperformed its counterparts in generating high-quality, contextually relevant text. This demonstrates its effectiveness and reliability in generating responses related to TCM and makes it a promising tool for assisting doctors and patients in the domain.

The novel technique could be utilized for various purposes. Some of the important implications are the following:

  • Clinical Guidance: Healthcare practitioners can utilize the model to offer precise and effective instructions regarding CPM treatments to their patients.
  • Data Mining: The presented CPMI dataset contains rich attributes related to CPM treatments. These attributes provide valuable insights and open avenues for insightful data analysis. Analysts can explore this dataset to deepen their understanding of CPM treatments, identify patterns, and extract meaningful information.
  • Standardization: By ensuring consistent and accurate instructions, the new model aids in standardizing CPM usage. The model generates recommendations and detailed instruction information for CPM treatments, thereby promoting a standardized approach to their usage. Consistent instructions can enhance patient safety, treatment efficacy, and overall care quality.

Conclusion

In summary, the novel model proved to be effective and robust in generating recommendations for CPM treatments based on user-provided symptoms. It achieved superior efficiency compared to the base model. Additionally, the authors explored how different fine-tuning methods and instruction data influenced the performance of the new technique.

The researchers acknowledged the limitations and challenges and suggested directions for future work. They recommended incorporating larger-scale foundation models and expanding the dataset beyond TCM to other categories of drugs. Additionally, they proposed integrating image-assisted information to broaden the model's applicability to a wider range of medical conditions.


Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.

