Harnessing Large Language Models for Rheumatic Disease Diagnosis: Advancing Hybrid Care and Task Shifting

International Journal of Rheumatic Diseases (Q2, Rheumatology; IF 2.0) · Pub Date: 2025-02-06 · DOI: 10.1111/1756-185X.70124
Fabian Lechner, Sebastian Kuhn, Johannes Knitza

Abstract

Rheumatology is facing an expanding care gap, as the number of newly referred patients continues to outpace the availability of rheumatologists [1], resulting in longer diagnostic delays—often weeks to months—that lead to irreversible damage, poorer treatment outcomes, and higher societal costs [2]. Patients and physicians alike struggle with fluctuating, often nonspecific symptoms (e.g., joint pain), and this challenge is compounded by limited awareness of rheumatic diseases among both the general population and general practitioners. The poor specificity of referrals and the inability of traditional triage approaches to improve the situation widen the care gap further. Although patient education is integral to rheumatology care, it remains underutilized due to inadequate reimbursement and workforce shortages, leaving many patients feeling poorly informed about their disease. Clinicians also face a significant time burden with clinical documentation [3], especially for newly referred patients.

In response to these multifaceted challenges, digital health technologies (DHT) have emerged as a promising cornerstone to enhance diagnosis, information provision, patient education, and documentation, and to alleviate workforce shortages. With the rapid proliferation of smartphones and advanced DHT, traditional care delivery models should be reevaluated to leverage these innovations [4]. Task-shifting is increasingly being implemented to mitigate workforce shortages, wherein tasks are delegated from physicians to nurses, medical students, or other healthcare professionals. However, task-shifting remains limited in scale and cost-efficiency, and DHT could enable its widespread implementation [5].

Currently, increasing numbers of rheumatic patients turn to online platforms for initial symptom assessment [6] and to diagnostic decision support systems (DDSS) that can provide preliminary diagnoses within minutes. Although computer-aided diagnosis for rheumatologists has existed for decades [7], adoption has been hindered by poor usability [8], including time-intensive data entry [9] and restricted querying options. These limitations also affect patient education, as static, often printed information leaves patients scrolling through lengthy materials rather than engaging in open-ended, personalized exploration. To bridge these limitations, recent advances in large language model (LLM) technology offer unprecedented scalability and multimodal data processing. DHT usability, performance, and the patient-provider relationship could therefore be significantly improved by integrating LLM-driven decision support within a collaborative digital health triad [4]. By continuously processing patient- and provider-generated data, LLMs can deliver more personalized, accessible, and dynamic support, transforming care delivery with the aim of closing the rheumatology care gap.

LLMs have demonstrated remarkable proficiency in clinical reasoning due to their ability to process large datasets across various medical fields, including rare diseases [10]. By passively and continuously evaluating the vast amount of available clinical data, LLMs could facilitate accelerated diagnosis and the identification of at-risk individuals, enabling a more proactive approach to care without imposing additional burdens on physicians or patients. LLM capabilities have been highlighted by outperforming human experts on standardized exams such as the United States Medical Licensing Examination (USMLE) and rheumatology exams [11]. Importantly, in a direct comparison study, ChatGPT's diagnostic accuracy was found to be non-inferior to that of experienced rheumatologists [12]. Both were given the same anamnestic information from real patients presenting to a rheumatology service. Notably, the model exhibited exceptional sensitivity in identifying inflammatory rheumatic diseases (IRDs), correctly listing the accurate diagnosis among the top three options in 86% of IRD cases—surpassing the 74% success rate of rheumatologists.
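The "top three options" figure above is a top-k hit rate. A minimal sketch of how such a metric is computed (the case data below are invented for illustration and are not from the study):

```python
def top_k_sensitivity(cases, k=3):
    """Fraction of cases whose true diagnosis appears among the
    model's top-k ranked differential diagnoses."""
    hits = sum(1 for true_dx, ranked in cases if true_dx in ranked[:k])
    return hits / len(cases)

# Hypothetical cases: (true diagnosis, model's ranked differential list)
cases = [
    ("rheumatoid arthritis", ["rheumatoid arthritis", "psoriatic arthritis", "gout"]),
    ("psoriatic arthritis", ["osteoarthritis", "psoriatic arthritis", "lupus"]),
    ("gout", ["septic arthritis", "osteoarthritis", "pseudogout"]),
]

print(top_k_sensitivity(cases))       # 2 of 3 true diagnoses appear in the top 3
print(top_k_sensitivity(cases, k=1))  # only 1 of 3 is ranked first
```

Reporting top-3 rather than top-1 reflects how a differential diagnosis is actually used: a short ranked list the clinician then narrows down.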

Building on this, another publication by Venerito and Iannone utilized a locally fine-tuned LLM, optimized through prompt engineering, to diagnose fibromyalgia by analyzing subtle expressions of pain and emotion in patient communications [13]. This innovative approach achieved an accuracy of 87% and an AUROC of 0.86, underscoring the potential of LLMs to tackle diagnostic challenges associated with subjective and linguistically intricate conditions by broadening the scope of considerations and highlighting less obvious diagnoses. Additionally, multiple studies have demonstrated that LLMs are capable of extracting diagnostic information from patient dialogues, even when the symptom descriptions are expressed in simple or colloquial language [14]. This linguistic adaptability allows LLMs to effectively comprehend patient narratives and identify subtle cues that might be overlooked in traditional assessments. Combined with the structured nature of multi-turn dialogues, this capability has shown significant potential for clinical applications [14].
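The two metrics cited for the fibromyalgia classifier measure different things: accuracy depends on a decision threshold, while AUROC is threshold-free. A minimal sketch of both, using the standard rank-sum (Mann-Whitney) formulation of AUROC; the labels and scores below are hypothetical and are not the study's data:

```python
def accuracy(labels, preds):
    """Fraction of predictions matching the true labels."""
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)

def auroc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case is scored higher than a randomly chosen negative case
    (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical outputs: 1 = fibromyalgia, scores = model probabilities
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
preds = [s >= 0.5 for s in scores]  # threshold at 0.5

print(accuracy(labels, preds))
print(auroc(labels, scores))
```

That accuracy (87%) and AUROC (0.86) are close in the study suggests the chosen threshold sits near the model's natural operating point.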

One of these applications gaining more traction is the introduction of LLMs for documentation tasks such as summarizing clinical conversations, generating structured clinical notes, and extracting critical keywords. Research in this area has introduced improved note formats like K-SOAP and domain-specific datasets such as CliniKnote, which combine simulated doctor-patient dialogues with meticulously curated notes. Through advanced fine-tuning, prompting strategies, and sophisticated NLP methods, LLMs can enhance the efficiency and quality of clinical documentation, ultimately reducing clinician workload and enabling more effective patient care [15].
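A documentation workflow of the kind described (dialogue in, structured note out) reduces, at its simplest, to a single prompting step. In this sketch, `call_llm`, the prompt wording, and the stubbed response are all placeholders, not the K-SOAP/CliniKnote pipeline itself:

```python
# Prompt template for turning a consultation transcript into a SOAP note.
SOAP_PROMPT = """You are a clinical scribe. Summarize the following
doctor-patient dialogue into a structured SOAP note
(Subjective, Objective, Assessment, Plan).

Dialogue:
{dialogue}"""

def draft_soap_note(dialogue: str, call_llm) -> str:
    """Produce a draft structured note from a raw transcript.
    The draft must still be reviewed and signed off by the clinician."""
    return call_llm(SOAP_PROMPT.format(dialogue=dialogue))

# Stub standing in for a real chat-completion API call.
fake_llm = lambda prompt: ("S: joint pain, morning stiffness\n"
                           "O: swollen MCP joints\nA: suspected RA\nP: refer, labs")

note = draft_soap_note("Doctor: What brings you in? Patient: My hands hurt...",
                       fake_llm)
print(note.splitlines()[0])
```

The fine-tuning and prompting strategies the cited work explores are refinements of this basic loop: better templates, domain-specific training data, and post-processing of the generated note.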

Furthermore, LLMs can be applied to educational tasks, as exemplified by their ability to address patient queries with accuracy, empathy, and comprehensiveness. For instance, when ChatGPT-4 was tested with questions commonly posed by patients with systemic lupus erythematosus, its responses were not only rated more empathic but also qualitatively better than those from expert rheumatologists [16]. These capabilities stem from the transformer-based architectures underlying LLMs [17]. By integrating large, diverse knowledge sources—from clinical guidelines to authoritative research publications [18]—these models can maintain extensive contextual understanding and dynamically incorporate new information. As a result, LLMs hold the potential to improve diagnostic accuracy, streamline documentation, enhance patient education, and broaden the range of differential diagnoses considered. In doing so, they may help alleviate clinician workload, support more proactive and patient-centered care, and ultimately elevate the overall quality of healthcare delivery.

However, the clinical deployment of AI-driven diagnostic tools faces significant regulatory hurdles. Determining the intended purpose of these technologies is central to their classification as either medical or non-medical devices, a distinction that directly influences compliance requirements. Under the EU AI Act, general-purpose AI models such as LLMs supporting clinical decisions may face stringent obligations, especially regarding transparency, risk classification, and post-market monitoring. Simultaneously, regulatory requirements necessitate robust clinical evaluation, posing challenges in validating AI's predictive capabilities. Ensuring alignment with these frameworks is critical for advancing AI adoption while safeguarding patient safety and compliance with regulations.

While these regulatory challenges must be addressed, LLMs also pose inherent risks such as generating medical hallucinations—plausible yet incorrect or unverifiable information. This has been highlighted in the Med-HALT framework, where models such as GPT-3.5 hallucinated severely on more complex tasks. In a field where precision is paramount, such inaccuracies could misguide clinical decisions, jeopardizing patient safety [19]. Ensuring LLM transparency and explainability has become increasingly challenging, making the grounding of these models a crucial area of research. A promising grounding technique gaining significant attention is Retrieval-Augmented Generation (RAG). RAG addresses the transparency issue by first querying a database of known information related to a user's question or input, retrieving only the semantically similar text blocks that are likely to answer the question or support the generated content. The model then produces an output based solely on this retrieved information, allowing it to cite the source of its input accurately. This approach enables users not only to verify the model's output against known literature but also to explore the subject further by reviewing the referenced documents, such as publications or guidelines [20]. As illustrated in Figure 1, RAG enhances both the accuracy and verifiability of LLM outputs by grounding responses in relevant, validated information from a knowledge base. While RAG systems have found widespread use in academic search engines, their effectiveness in medical contexts—particularly for patient education or diagnosis—remains largely unexplored. Collaborative efforts among AI developers, clinicians, and researchers are essential to optimize LLM utility while mitigating risks. Further exploration into grounding methods and developing specialized models tailored to rheumatology can enhance their effectiveness.
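The retrieve-then-generate loop described above can be sketched in a few lines. Production RAG systems use dense vector embeddings and a vector store; this toy version scores passages by word overlap instead, `generate` is a placeholder for the LLM call, and the knowledge-base snippets and document IDs are invented for illustration:

```python
# Toy knowledge base: source ID -> validated passage.
KNOWLEDGE_BASE = {
    "eular-ra-guideline": "Methotrexate is recommended as first-line therapy "
                          "for rheumatoid arthritis.",
    "lupus-overview": "Systemic lupus erythematosus is a chronic autoimmune "
                      "disease that can affect joints, skin, and kidneys.",
}

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity, standing in for embedding similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def retrieve(query: str, k: int = 1):
    """Return the k passages most similar to the query, with their source IDs."""
    ranked = sorted(KNOWLEDGE_BASE.items(),
                    key=lambda kv: jaccard(query, kv[1]), reverse=True)
    return ranked[:k]

def answer(query: str, generate) -> str:
    """Generate an answer grounded only in retrieved, citable passages."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return generate(f"Answer using only this context:\n{context}\n\nQ: {query}")

# Stub: a real LLM would answer from the context; here we just echo the prompt
# to show that the source ID travels with the retrieved passage.
echo = lambda prompt: prompt
out = answer("What is the first-line therapy for rheumatoid arthritis?", echo)
print("[eular-ra-guideline]" in out)
```

Because every passage carries its source ID into the prompt, the model can cite it, which is exactly the verifiability property the text attributes to RAG.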

Integrating LLMs into the diagnosis of rheumatic diseases presents a transformative opportunity to reduce diagnostic delays, alleviate clinician workload, and enhance patient education. Despite existing challenges, the synergistic advancement of AI innovation and regulatory compliance can help bridge care gaps, improve patient outcomes, and elevate the professional experience of healthcare providers, ultimately fostering more efficient and patient-centered rheumatology care.

Fabian Lechner and Johannes Knitza drafted the manuscript. Sebastian Kuhn provided suggestions, reviewed and edited the manuscript several times.

Fabian Lechner declares honoraria from Lilly, Novo Nordisk, Siemens Healthineers, Diabetes.de, and the German Diabetes Association (DDG). Sebastian Kuhn is founder and shareholder of MED.digital GmbH. Johannes Knitza declares research support from Abbvie, GSK, Vila Health, honoraria and consulting fees from Abbvie, AstraZeneca, BMS, Boehringer Ingelheim, Chugai, GAIA, Galapagos, GSK, Janssen, Lilly, Medac, Novartis, Pfizer, Sobi, Rheumaakademie, UCB, Vila Health and Werfen.
