A Weighted Voting Approach for Traditional Chinese Medicine Formula Classification Using Large Language Models: Algorithm Development and Validation Study
Main Authors: | Zhe Wang, Keqian Li, Suyuan Peng, Lihong Liu, Xiaolin Yang, Keyu Yao, Heinrich Herre, Yan Zhu |
---|---|
Format: | Article |
Language: | English |
Published: | JMIR Publications, 2025-07-01 |
Series: | JMIR Medical Informatics |
Online Access: | https://medinform.jmir.org/2025/1/e69286 |
_version_ | 1839605709970341888 |
---|---|
author | Zhe Wang Keqian Li Suyuan Peng Lihong Liu Xiaolin Yang Keyu Yao Heinrich Herre Yan Zhu |
author_facet | Zhe Wang Keqian Li Suyuan Peng Lihong Liu Xiaolin Yang Keyu Yao Heinrich Herre Yan Zhu |
author_sort | Zhe Wang |
collection | DOAJ |
description |
Abstract
Background: Several clinical cases and experiments have demonstrated the effectiveness of traditional Chinese medicine (TCM) formulas in treating and preventing diseases. These formulas contain critical information about their ingredients, efficacy, and indications. Classifying TCM formulas based on this information can effectively standardize TCM formula management, support clinical and research applications, and promote the modernization and scientific use of TCM. To advance this task, TCM formulas can be classified using various approaches, including manual classification, machine learning, and deep learning. Additionally, large language models (LLMs) are gaining prominence in the biomedical field. Integrating LLMs into TCM research could significantly enhance and accelerate the discovery of TCM knowledge by leveraging their advanced linguistic understanding and contextual reasoning capabilities.
Objective: The objective of this study is to evaluate the performance of different LLMs on the TCM formula classification task. Additionally, by employing ensemble learning with multiple fine-tuned LLMs, this study aims to enhance classification accuracy.
Methods: The TCM formula data were manually refined and cleaned. We selected 10 LLMs that support Chinese for fine-tuning. We then employed an ensemble learning approach that combined the predictions of multiple models using both hard and weighted voting, with weights determined by the average accuracy of each model. Finally, we selected the most effective model from each series of LLMs for weighted voting (top 5) and the 3 most accurate of the 10 models for weighted voting (top 3).
Results: A total of 2441 TCM formulas were curated manually from multiple sources, including the Coding Rules for Chinese Medicinal Formulas and Their Codes, the Chinese National Medical Insurance Catalog for proprietary Chinese medicines, textbooks of TCM formulas, and the TCM literature. The dataset was divided into a training set of 1999 TCM formulas and a test set of 442 TCM formulas. The testing results showed that Qwen-14B achieved the highest accuracy of 75.32% among the single models. The accuracy rates for hard voting, weighted voting, weighted voting (top 5), and weighted voting (top 3) were 75.79%, 76.47%, 75.57%, and 77.15%, respectively.
Conclusions: This study explored the effectiveness of LLMs in the TCM formula classification task. To this end, we propose an ensemble learning method that integrates multiple fine-tuned LLMs through a voting mechanism. This method not only improves classification accuracy but also enhances the existing system for classifying the efficacy of TCM formulas. |
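The voting scheme described in the Methods can be sketched in a few lines: each fine-tuned model casts a vote for a class, either with equal weight (hard voting) or weighted by its average accuracy (weighted voting). The model names other than Qwen-14B, the accuracy values, and the class labels below are illustrative placeholders, not figures from the study.

```python
from collections import defaultdict

def weighted_vote(predictions, weights=None):
    """Combine per-model class predictions by voting.

    predictions: dict mapping model name -> predicted class label.
    weights: dict mapping model name -> vote weight (e.g., the model's
             average accuracy, as in the weighted scheme); if None,
             every model gets weight 1.0, i.e., hard voting.
    """
    scores = defaultdict(float)
    for model, label in predictions.items():
        scores[label] += 1.0 if weights is None else weights[model]
    # Return the label with the highest accumulated vote score.
    return max(scores, key=scores.get)

# Hypothetical per-model outputs and accuracies for illustration only.
preds = {"Qwen-14B": "tonifying", "model-B": "heat-clearing", "model-C": "tonifying"}
accs = {"Qwen-14B": 0.7532, "model-B": 0.71, "model-C": 0.72}

print(weighted_vote(preds))        # hard voting -> tonifying (2 votes vs 1)
print(weighted_vote(preds, accs))  # weighted voting -> tonifying (1.4732 vs 0.71)
```

A top-k variant, as in the study's weighted voting (top 3), would simply restrict `predictions` and `weights` to the k models with the highest average accuracy before calling `weighted_vote`.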
format | Article |
id | doaj-art-a0656ae493bc4d84979a8df5f274e9c8 |
institution | Matheson Library |
issn | 2291-9694 |
language | English |
publishDate | 2025-07-01 |
publisher | JMIR Publications |
record_format | Article |
series | JMIR Medical Informatics |
spelling | doaj-art-a0656ae493bc4d84979a8df5f274e9c8 2025-08-01T15:42:00Z eng JMIR Publications JMIR Medical Informatics 2291-9694 2025-07-01 13 e69286 10.2196/69286 A Weighted Voting Approach for Traditional Chinese Medicine Formula Classification Using Large Language Models: Algorithm Development and Validation Study Zhe Wang http://orcid.org/0009-0000-2387-784X Keqian Li http://orcid.org/0009-0002-5956-3038 Suyuan Peng http://orcid.org/0000-0002-8221-7574 Lihong Liu http://orcid.org/0009-0004-8250-2772 Xiaolin Yang http://orcid.org/0000-0001-9008-6650 Keyu Yao http://orcid.org/0000-0003-2655-9243 Heinrich Herre http://orcid.org/0000-0001-5343-9218 Yan Zhu http://orcid.org/0000-0002-5592-8258 https://medinform.jmir.org/2025/1/e69286 |
spellingShingle | Zhe Wang Keqian Li Suyuan Peng Lihong Liu Xiaolin Yang Keyu Yao Heinrich Herre Yan Zhu A Weighted Voting Approach for Traditional Chinese Medicine Formula Classification Using Large Language Models: Algorithm Development and Validation Study JMIR Medical Informatics |
title | A Weighted Voting Approach for Traditional Chinese Medicine Formula Classification Using Large Language Models: Algorithm Development and Validation Study |
title_full | A Weighted Voting Approach for Traditional Chinese Medicine Formula Classification Using Large Language Models: Algorithm Development and Validation Study |
title_fullStr | A Weighted Voting Approach for Traditional Chinese Medicine Formula Classification Using Large Language Models: Algorithm Development and Validation Study |
title_full_unstemmed | A Weighted Voting Approach for Traditional Chinese Medicine Formula Classification Using Large Language Models: Algorithm Development and Validation Study |
title_short | A Weighted Voting Approach for Traditional Chinese Medicine Formula Classification Using Large Language Models: Algorithm Development and Validation Study |
title_sort | weighted voting approach for traditional chinese medicine formula classification using large language models algorithm development and validation study |
url | https://medinform.jmir.org/2025/1/e69286 |
work_keys_str_mv | AT zhewang aweightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT keqianli aweightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT suyuanpeng aweightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT lihongliu aweightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT xiaolinyang aweightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT keyuyao aweightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT heinrichherre aweightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT yanzhu aweightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT zhewang weightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT keqianli weightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT suyuanpeng weightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT lihongliu weightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT xiaolinyang weightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT keyuyao weightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT heinrichherre weightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy AT yanzhu weightedvotingapproachfortraditionalchinesemedicineformulaclassificationusinglargelanguagemodelsalgorithmdevelopmentandvalidationstudy |