CoTEL-D3X: A chain-of-thought enhanced large language model for drug–drug interaction triplet extraction
Haotian Hu, Alex Jie Yang, Sanhong Deng, Dongbo Wang, Min Song
Expert Systems with Applications, Volume 273, Article 126953 (published 2025-02-17). DOI: 10.1016/j.eswa.2025.126953
Abstract
Current state-of-the-art drug–drug interaction (DDI) triplet extraction methods not only fail to exhaustively capture potentially overlapping entity relations but also struggle to extract discontinuous drug entities, leading to suboptimal performance in DDI triplet extraction. To address these challenges, we propose a Chain-of-Thought Enhanced Large Language Model for DDI Triplet Extraction (CoTEL-D3X). Building on the transformer architecture, we design joint and pipeline methods that perform end-to-end DDI triplet extraction in a generative manner. Our approach uses the LLaMA series of models as the foundation and incorporates instruction tuning and Chain-of-Thought (CoT) techniques to enhance the model's understanding of task requirements and its reasoning capabilities. We validated the effectiveness of our methods on the widely used DDI dataset, which comprises 1,025 documents containing 17,805 entity mentions and 4,999 DDIs. Our joint and pipeline methods not only outperformed mainstream generative models such as ChatGPT, GPT-3, and OPT on the DDI Extraction 2013 dataset but also improved the corresponding best F1-scores by 9.75% and 5.86%, respectively. In particular, compared to the most advanced few-shot learning methods, our approach achieved more than a two-fold improvement in F1-score. We further validated the method's transferability and generalization on the TAC 2018 DDI Extraction and ADR Extraction datasets, and assessed its applicability on real-world data from DrugBank. Performance analysis revealed that the CoT component significantly enhanced extraction performance. The introduction of generative LLMs allows us to freely define the content and format of inputs and outputs, offering greater usability and flexibility than traditional extraction methods based on sequence labeling. Because our approach does not rely on external knowledge or manually defined rules, it may lack domain-specific knowledge to some extent; however, it can easily be adapted to other domains.
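To make the generative formulation concrete, the sketch below shows one plausible instruction-plus-CoT prompt for end-to-end triplet extraction with a LLaMA-series checkpoint via Hugging Face transformers. The template wording, the `MODEL_NAME` placeholder, the decoding settings, and the `extract_ddi_triplets` helper are illustrative assumptions rather than the paper's actual setup; only the interaction label set (mechanism, effect, advise, int) comes from the DDI Extraction 2013 benchmark.

```python
# Minimal sketch of a CoT-style instruction prompt for generative DDI triplet
# extraction, in the spirit of the approach described in the abstract.
# The exact prompt wording, checkpoint, and decoding settings used by
# CoTEL-D3X are not given here; everything below is an illustrative assumption.

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # placeholder LLaMA-series checkpoint

# Hypothetical instruction + Chain-of-Thought template: the model is asked to
# reason step by step (identify drug entities, then the interaction type)
# before emitting (drug1, interaction_type, drug2) triplets.
PROMPT_TEMPLATE = """### Instruction:
Extract all drug-drug interaction triplets from the sentence.
First list the drug entities, then reason about how they interact,
and finally output triplets as (drug1, interaction_type, drug2).
Interaction types: mechanism, effect, advise, int.

### Input:
{sentence}

### Response:
Let's think step by step."""


def extract_ddi_triplets(sentence: str) -> str:
    """Generate CoT reasoning followed by DDI triplets for one sentence."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    prompt = PROMPT_TEMPLATE.format(sentence=sentence)
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding for reproducibility; the paper's settings are unknown.
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Return only the newly generated continuation (reasoning + triplets).
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)


if __name__ == "__main__":
    print(extract_ddi_triplets(
        "Concomitant use of ketoconazole may increase plasma "
        "concentrations of midazolam."
    ))
```

In this sketch the pipeline variant would run entity extraction and relation typing as two separate prompts, while the joint variant (shown) asks for the full triplet in a single generation pass.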
Journal description:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.