
Latest articles from the Journal of Biomedical Informatics

Repeatable process for extracting health data from HL7 CDA documents
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-01 | DOI: 10.1016/j.jbi.2024.104765
Harry-Anton Talvik , Marek Oja , Sirli Tamm , Kerli Mooses , Dage Särg , Marcus Lõo , Õie Renata Siimon , Hendrik Šuvalov , Raivo Kolde , Jaak Vilo , Sulev Reisberg , Sven Laur

Objective

This study aims to address the gap in the literature on converting real-world Clinical Document Architecture (CDA) data into the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM), focusing on the initial steps preceding the mapping phase. We highlight the importance of a repeatable Extract-Transform-Load (ETL) pipeline for health data extraction from HL7 CDA documents in Estonia for research purposes.

Methods

We developed a repeatable ETL pipeline to facilitate the extraction, cleaning, and restructuring of health data from CDA documents to OMOP CDM, ensuring a high-quality, structured data format. The pipeline was designed to adapt to ongoing changes in the data exchange format and to handle various CDA document subsets for different scientific studies.

Results

We demonstrated via selected use cases that our pipeline successfully transformed a significant portion of diagnosis codes, body weight and eGFR measurements, and PAP test results from CDA documents into OMOP CDM, showing the ease of extracting structured data. However, challenges such as harmonising diverse coding systems and extracting lab results from free-text sections were encountered. The iterative development of the pipeline facilitated swift error detection and correction, enhancing the process’s efficiency.

Conclusion

After a decade of focused work, our research has led to the development of an ETL pipeline that effectively transforms HL7 CDA documents into OMOP CDM in Estonia, addressing key data extraction and transformation challenges. The pipeline’s repeatability and adaptability to various data subsets make it a valuable resource for researchers dealing with health data. While tested on Estonian data, the principles outlined are broadly applicable, potentially aiding in handling health data standards that vary by country. Despite newer health data standards emerging, the relevance of CDA for retrospective health studies ensures the continuing importance of this work.
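To make the extraction step concrete, the sketch below parses a single coded diagnosis from a CDA observation entry and reshapes it into an OMOP-style condition row. It is a minimal illustration only: the XML fragment, element paths, and the tiny code-to-concept map are assumptions for demonstration, not the authors' pipeline or the Estonian CDA profile; a real ETL would resolve concepts against the OHDSI vocabulary tables.

```python
# Minimal sketch: pull one coded diagnosis out of a CDA observation entry and
# reshape it into an OMOP-style condition_occurrence row. The XML fragment,
# element paths, and the code map below are illustrative placeholders.
import xml.etree.ElementTree as ET

CDA_NS = {"hl7": "urn:hl7-org:v3"}

SAMPLE_ENTRY = """
<observation xmlns="urn:hl7-org:v3">
  <code code="J45" codeSystem="2.16.840.1.113883.6.3" displayName="Asthma"/>
  <effectiveTime value="20240105"/>
</observation>
"""

# Hypothetical source-code -> OMOP concept_id lookup; a real pipeline would use
# the OHDSI vocabulary tables for this mapping step.
ICD10_TO_OMOP = {"J45": 317009}

def cda_entry_to_omop_row(entry_xml: str) -> dict:
    """Extract the code and date from one CDA observation and build an OMOP-like row."""
    root = ET.fromstring(entry_xml)
    code_el = root.find("hl7:code", CDA_NS)
    time_el = root.find("hl7:effectiveTime", CDA_NS)
    source_code = code_el.get("code")
    return {
        "condition_source_value": source_code,
        "condition_concept_id": ICD10_TO_OMOP.get(source_code, 0),  # 0 = unmapped
        "condition_start_date": time_el.get("value"),
    }

if __name__ == "__main__":
    print(cda_entry_to_omop_row(SAMPLE_ENTRY))
```

In a repeatable pipeline this kind of per-entry extractor would be re-run whenever the exchange format or document subset changes, with unmapped codes surfaced for review rather than silently dropped.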
Citations: 0
Reviewer acknowledgement 2024
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-01 | DOI: 10.1016/j.jbi.2024.104763
{"title":"Reviewer acknowledgement 2024","authors":"","doi":"10.1016/j.jbi.2024.104763","DOIUrl":"10.1016/j.jbi.2024.104763","url":null,"abstract":"","PeriodicalId":15263,"journal":{"name":"Journal of Biomedical Informatics","volume":"161 ","pages":"Article 104763"},"PeriodicalIF":4.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142882210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A multimodal approach for few-shot biomedical named entity recognition in low-resource languages
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-01 | DOI: 10.1016/j.jbi.2024.104754
Jian Chen , Leilei Su , Yihong Li , Mingquan Lin , Yifan Peng , Cong Sun
In this study, we revisit named entity recognition (NER) in the biomedical domain from a multimodal perspective, with a particular focus on applications in low-resource languages. Existing research primarily relies on unimodal methods for NER, which limits the potential for capturing diverse information. To address this limitation, we propose a novel method that integrates a cross-modal generation module to transform unimodal data into multimodal data, thereby enabling the use of enriched multimodal information for NER. Additionally, we design a cross-modal filtering module to mitigate the adverse effects of text–image mismatches in multimodal NER. We validate our proposed method on two biomedical datasets specifically curated for low-resource languages. Experimental results demonstrate that our method significantly enhances the performance of NER, highlighting its effectiveness and potential for broader applications in biomedical research and low-resource language contexts.
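The abstract describes the modules only at a high level; as a loose, hypothetical sketch of what cross-modal filtering can look like, the snippet below gates image-derived features with a learned score before a token-level tagger. The gating form, feature dimensions, and tag count are assumptions and are not taken from the paper.

```python
# Hypothetical sketch of cross-modal filtering for multimodal NER: image features
# are down-weighted by a learned gate before being fused into a per-token tagger.
# Shapes, the gate, and the tag set are illustrative assumptions only.
import torch
import torch.nn as nn

class GatedCrossModalTagger(nn.Module):
    def __init__(self, dim: int = 256, n_tags: int = 9):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)        # text-image mismatch filter
        self.tagger = nn.Linear(2 * dim, n_tags)

    def forward(self, text_tokens, image_feat):
        # text_tokens: (batch, seq_len, dim); image_feat: (batch, dim)
        img = image_feat.unsqueeze(1).expand_as(text_tokens)
        g = torch.sigmoid(self.gate(torch.cat([text_tokens, img], dim=-1)))
        fused = torch.cat([text_tokens, g * img], dim=-1)  # filtered fusion
        return self.tagger(fused)                          # per-token tag logits

if __name__ == "__main__":
    model = GatedCrossModalTagger()
    text_tokens = torch.randn(2, 16, 256)  # token embeddings from a text encoder
    image_feat = torch.randn(2, 256)       # features of a generated/paired image
    print(model(text_tokens, image_feat).shape)  # torch.Size([2, 16, 9])
```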
Citations: 0
Visual-linguistic Diagnostic Semantic Enhancement for medical report generation
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-01 | DOI: 10.1016/j.jbi.2024.104764
Jiahong Chen , Guoheng Huang , Xiaochen Yuan , Guo Zhong , Zhe Tan , Chi-Man Pun , Qi Yang
Generative methods are currently popular for medical report generation, as they automatically generate professional reports from input images, assisting physicians in making faster and more accurate decisions. However, current methods face significant challenges: 1) Lesion areas in medical images are often difficult for models to capture accurately, and 2) even when captured, these areas are frequently not described using precise clinical diagnostic terms. To address these problems, we propose a Visual-Linguistic Diagnostic Semantic Enhancement model (VLDSE) to generate high-quality reports. Our approach employs supervised contrastive learning in the Image and Report Semantic Consistency (IRSC) module to bridge the semantic gap between visual and linguistic features. Additionally, we design the Visual Semantic Qualification and Quantification (VSQQ) module and the Post-hoc Semantic Correction (PSC) module to enhance visual semantics and inter-word relationships, respectively. Experiments demonstrate that our model achieves promising performance on the publicly available IU X-RAY and MIMIC-MV datasets. Specifically, on the IU X-RAY dataset, our model achieves a BLEU-4 score of 18.6%, improving the baseline by 12.7%. On the MIMIC-MV dataset, our model improves the BLEU-1 score by 10.7% over the baseline. These results demonstrate the ability of our model to generate accurate and fluent descriptions of lesion areas.
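As a rough sketch of the kind of objective an image-report consistency module can use, the snippet below computes a symmetric contrastive loss that pulls matched image and report embeddings together and pushes mismatched pairs apart. The pairing scheme, temperature, and dimensions are assumptions; the paper's IRSC module uses supervised contrastive learning and may also exploit label information, which this simplification omits.

```python
# Hedged sketch: a symmetric contrastive loss over matched image/report embedding
# pairs within a batch. This illustrates visual-linguistic alignment in general;
# it is not the VLDSE implementation.
import torch
import torch.nn.functional as F

def image_report_contrastive_loss(img_emb, txt_emb, temperature: float = 0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature   # cosine similarity matrix
    targets = torch.arange(img.size(0))    # i-th image pairs with i-th report
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    img_emb = torch.randn(8, 512)   # pooled visual features
    txt_emb = torch.randn(8, 512)   # pooled report features
    print(image_report_contrastive_loss(img_emb, txt_emb).item())
```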
Citations: 0
Journal's cover
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-01 | DOI: 10.1016/S1532-0464(25)00004-8
{"title":"Journal's cover","authors":"","doi":"10.1016/S1532-0464(25)00004-8","DOIUrl":"10.1016/S1532-0464(25)00004-8","url":null,"abstract":"","PeriodicalId":15263,"journal":{"name":"Journal of Biomedical Informatics","volume":"161 ","pages":"Article 104775"},"PeriodicalIF":4.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143159609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unveiling pathology-related predictive uncertainty of glomerular lesion recognition using prototype learning
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-01-01 | DOI: 10.1016/j.jbi.2024.104745
Qiming He , Yingming Xu , Qiang Huang , Yanxia Wang , Jing Ye , Yonghong He , Jing Li , Lianghui Zhu , Zhe Wang , Tian Guan

Objective

Recognizing glomerular lesions is essential in diagnosing chronic kidney disease. However, deep learning faces challenges due to lesion heterogeneity, superposition, progression, and tissue incompleteness, leading to uncertainty in model predictions. It is therefore crucial to analyze pathology-related predictive uncertainty in glomerular lesion recognition and to unveil its relationship with pathological properties and its impact on model performance.

Methods

This paper presents a novel framework for pathology-related predictive uncertainty analysis in glomerular lesion recognition, comprising prototype-learning-based predictive uncertainty estimation, pathology-characterized correlation analysis, and weight-redistributed prediction rectification. The prototype-learning-based predictive uncertainty estimation includes deep prototyping, affinity embedding, and multi-dimensional uncertainty fusion. The pathology-characterized correlation analysis is the first to use expert-based and learning-based approaches to construct pathology-related characterizations of lesions and tissues. The weight-redistributed prediction rectification module performs reweighting-based lesion recognition.

Results

To validate the performance, extensive experiments were conducted. Based on Spearman and Pearson correlation analyses, the proposed framework enables more efficient correlation analysis, and strong correlation with pathology-related characterization can be achieved (c-index > 0.6 and p < 0.01). Furthermore, the prediction rectification module demonstrated improved lesion recognition performance across most metrics, with enhancements of up to 6.36%.

Conclusion

The proposed predictive uncertainty analysis in glomerular lesion recognition offers a valuable approach for assessing computational pathology’s predictive uncertainty from a pathology-related perspective.

Significance

The paper provides a solution for pathology-related predictive uncertainty estimation in algorithm development and clinical practice.
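A compact way to picture prototype-based uncertainty is sketched below: an embedding is compared with per-class prototypes, class probabilities are derived from the distances, and the entropy of that distribution acts as one uncertainty signal. The distance choice, the single-prototype-per-class setup, and the omission of the paper's affinity embedding and multi-dimensional fusion are simplifications for illustration only.

```python
# Hedged sketch of prototype-distance uncertainty: softmax over negative distances
# to class prototypes gives probabilities, and their entropy is one uncertainty
# signal. This is a generic prototypical-network idea, not the paper's full scheme.
import torch
import torch.nn.functional as F

def prototype_uncertainty(embedding: torch.Tensor, prototypes: torch.Tensor):
    """embedding: (dim,), prototypes: (n_classes, dim) -> (probs, entropy)."""
    dists = torch.cdist(embedding.unsqueeze(0), prototypes).squeeze(0)
    probs = F.softmax(-dists, dim=0)     # closer prototype -> higher probability
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    return probs, entropy

if __name__ == "__main__":
    prototypes = torch.randn(5, 128)   # one learned prototype per lesion class
    embedding = torch.randn(128)       # feature vector of one glomerulus crop
    probs, u = prototype_uncertainty(embedding, prototypes)
    print(int(probs.argmax()), float(u))
```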
Citations: 0
Early multi-cancer detection through deep learning: An anomaly detection approach using Variational Autoencoder
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-01 | DOI: 10.1016/j.jbi.2024.104751
Innocent Tatchum Sado , Louis Fippo Fitime , Geraud Fokou Pelap , Claude Tinku , Gaelle Mireille Meudje , Thomas Bouetou Bouetou
Cancer is a disease that causes many deaths worldwide. Treating cancer is first and foremost a matter of detection, and treatment is most effective when the disease is detected at an early stage. With the evolution of technology, several computer-aided diagnosis tools and image-based detection methods have been developed for cancer. However, cancer detection still faces many difficulties related to early detection, which is crucial for patient survival. To detect cancer early, scientists have been using transcriptomic data, but this presents challenges such as unlabeled data, large data volumes, and image-based techniques that focus on only one type of cancer. The purpose of this work is to develop a deep learning model that can effectively detect any type of cancer as early as possible, treating it as an anomaly in transcriptomic data. The model must be able to act independently and must not be restricted to any specific type of cancer. To achieve this goal, we modeled a deep neural network (a Variational Autoencoder) and then defined an algorithm for detecting anomalies in its output. The Variational Autoencoder consists of an encoder and a decoder with a hidden layer. With the TCGA and GTEx data, we trained the model for six types of cancer using the Adam optimizer with decay learning and a two-component loss function. As a result, we obtained a lowest accuracy of 0.950 and a lowest recall of 0.830. This research leads us to the design of a deep learning model for detecting cancer as an anomaly in transcriptomic data.
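The sketch below shows the general shape of such a model: a Variational Autoencoder with an encoder, a latent reparameterisation, a decoder, and a two-component loss (reconstruction plus KL divergence), with anomalies flagged by high reconstruction error. Layer sizes, optimiser settings, and the thresholding rule are placeholders rather than the configuration reported in the paper.

```python
# Minimal sketch of VAE-based anomaly detection on expression profiles: the model
# is trained on normal tissue, and samples with high reconstruction error are
# flagged as possible cancer. All hyperparameters below are illustrative.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_genes: int = 1000, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_genes))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Two-component loss: reconstruction term + KL divergence to the unit Gaussian prior.
    recon_term = nn.functional.mse_loss(recon, x, reduction="mean")
    kl_term = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term

if __name__ == "__main__":
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
    x = torch.randn(64, 1000)   # stand-in for normalised expression profiles
    recon, mu, logvar = model(x)
    vae_loss(recon, x, mu, logvar).backward()
    opt.step()
    # At inference time, per-sample reconstruction error above a threshold fit on
    # normal data marks the sample as anomalous.
    errors = ((recon - x) ** 2).mean(dim=1)
    print(errors.shape)
```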
Citations: 0
How to identify patient perception of AI voice robots in the follow-up scenario? A multimodal identity perception method based on deep learning
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-01 | DOI: 10.1016/j.jbi.2024.104757
Mingjie Liu , Kuiyou Chen , Qing Ye , Hong Wu

Objectives

Post-discharge follow-up is a critical component of post-diagnosis management, and constrained healthcare resources impede comprehensive manual follow-up. However, patients are less cooperative with AI follow-up calls and may even hang up once they perceive an AI voice robot. To improve the effectiveness of follow-up, alternative measures should be taken when patients perceive AI voice robots. Therefore, identifying how patients perceive AI voice robots is crucial. This study aims to construct a multimodal identity perception model based on deep learning to identify how patients perceive AI voice robots.

Methods

Our dataset includes 2030 response audio recordings and corresponding texts from patients. We conduct comparative experiments and perform an ablation study. The proposed model employs a transfer learning approach, utilizing BERT and TextCNN for text feature extraction, AST and LSTM for audio feature extraction, and self-attention for feature fusion.

Results

Our model demonstrates superior performance against existing baselines, with a precision of 86.67%, an AUC of 84%, and an accuracy of 94.38%. Additionally, a generalization experiment was conducted using 144 patients’ response audio recordings and corresponding text data from other departments in the hospital, confirming the model’s robustness and effectiveness.

Conclusion

Our multimodal identity perception model can identify how patients perceive AI voice robots effectively. Identifying how patients perceive AI not only helps to optimize the follow-up process and improve patient cooperation, but also provides support for the evaluation and optimization of AI voice robots.
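To illustrate the fusion step described in the Methods, the sketch below stacks pooled text and audio feature vectors as a short sequence and mixes them with self-attention before a binary classifier. The feature dimensions, head count, pooling, and classifier head are assumptions, not the published architecture.

```python
# Hedged sketch of self-attention fusion of text and audio features for an
# identity-perception classifier. Dimensions and heads are illustrative only.
import torch
import torch.nn as nn

class MultimodalFusionHead(nn.Module):
    def __init__(self, dim: int = 256, n_classes: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, text_feat, audio_feat):
        tokens = torch.stack([text_feat, audio_feat], dim=1)  # (batch, 2, dim)
        fused, _ = self.attn(tokens, tokens, tokens)          # self-attention fusion
        return self.classifier(fused.mean(dim=1))             # pooled prediction

if __name__ == "__main__":
    head = MultimodalFusionHead()
    text_feat = torch.randn(4, 256)    # e.g. pooled BERT/TextCNN output
    audio_feat = torch.randn(4, 256)   # e.g. pooled AST/LSTM output
    print(head(text_feat, audio_feat).shape)   # torch.Size([4, 2])
```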
Citations: 0
Biomedical document-level relation extraction with thematic capture and localized entity pooling
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-01 | DOI: 10.1016/j.jbi.2024.104756
Yuqing Li, Xinhui Shao
In contrast to sentence-level relation extraction, document-level relation extraction poses greater challenges, as a document typically contains multiple entities and one entity may be associated with multiple other entities. Existing methods often rely on graph structures to capture path representations between entity pairs. In contrast, this paper introduces a novel approach called local entity pooling that relies solely on the pre-trained model to identify the bridge entity related to the current entity pair and to generate the reasoning-path representation. This technique effectively mitigates the multi-entity problem. Additionally, the model leverages the multi-entity and multi-label characteristics of the document to acquire the document's thematic representation, thereby enhancing the document-level relation extraction task. Experimental evaluations were conducted on two biomedical datasets, CDR and GDA. Our TCLEP (Thematic Capture and Localized Entity Pooling) model achieved Macro-F1 scores of 71.7% and 85.3%, respectively. We also incorporated the local entity pooling and thematic capture modules into the state-of-the-art model, resulting in performance improvements of 1.5% and 0.2% on the respective datasets. These results highlight the strong performance of our proposed approach.
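For readers unfamiliar with entity pooling in document-level relation extraction, the sketch below shows a generic variant: all mention tokens of an entity are logsumexp-pooled from the encoder's contextual embeddings, and an entity pair is scored with a bilinear layer. This is a standard construction for illustration only, not the paper's local entity pooling or bridge-entity reasoning-path module.

```python
# Generic illustration of entity pooling for document-level relation extraction:
# mention tokens are logsumexp-pooled into an entity vector and a pair is scored
# with a bilinear layer. Shapes and the scorer are assumptions, not TCLEP itself.
import torch
import torch.nn as nn
from typing import List, Tuple

def pool_entity(token_embs: torch.Tensor, mention_spans: List[Tuple[int, int]]) -> torch.Tensor:
    """logsumexp-pool every token that belongs to one of the entity's mentions."""
    pieces = [token_embs[start:end] for start, end in mention_spans]
    return torch.logsumexp(torch.cat(pieces, dim=0), dim=0)

if __name__ == "__main__":
    hidden = torch.randn(128, 768)                   # contextual embeddings for one document
    head = pool_entity(hidden, [(5, 7), (40, 42)])   # a chemical mentioned twice
    tail = pool_entity(hidden, [(60, 63)])           # a disease mentioned once
    scorer = nn.Bilinear(768, 768, 2)                # relation vs. no-relation logits
    print(scorer(head.unsqueeze(0), tail.unsqueeze(0)).shape)  # torch.Size([1, 2])
```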
Citations: 0
Taxonomy-based prompt engineering to generate synthetic drug-related patient portal messages
IF 4 | CAS Zone 2 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-12-01 | DOI: 10.1016/j.jbi.2024.104752
Natalie Wang , Sukrit Treewaree , Ayah Zirikly , Yuzhi L. Lu , Michelle H. Nguyen , Bhavik Agarwal , Jash Shah , James Michael Stevenson , Casey Overby Taylor

Objective:

The objectives of this study were to: (1) create a corpus of synthetic drug-related patient portal messages to address the current lack of publicly available datasets for model development, (2) assess differences in language used and linguistics among the synthetic patient portal messages, and (3) assess the accuracy of patient-reported drug side effects for different racial groups.

Methods:

We leveraged a taxonomy for patient- and clinician-generated content to guide prompt engineering for synthetic drug-related patient portal messages. We generated two groups of messages: the first group (200 messages) used a subset of the taxonomy relevant to a broad range of drug-related messages and the second group (250 messages) used a subset of the taxonomy relevant to a narrow range of messages focused on side effects. Prompts also include one of five racial groups. Next, we assessed linguistic characteristics among message parts (subject, beginning, body, ending) across different prompt specifications (urgency, patient portal taxa, race). We also assessed the performance and frequency of patient-reported side effects across different racial groups and compared to data present in a real world data source (SIDER).

Results:

The study generated 450 synthetic patient portal messages, for which we assessed linguistic patterns, the accuracy of drug-side effect pairs, and the frequency of those pairs relative to real-world data. Linguistic analysis revealed variations in language usage and politeness, and analysis of positive predictive values identified differences in the symptoms reported depending on the urgency level and racial group specified in the prompt. We also found that drug-side effect pairs with low incidence in SIDER were observed less frequently in our dataset.

Conclusion:

This study demonstrates the potential of synthetic patient portal messages as a valuable resource for healthcare research. After creating a corpus of synthetic drug-related patient portal messages, we identified significant language differences and provided evidence that drug-side effect pairs observed in messages are comparable to what is expected in real world settings.
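A minimal sketch of what taxonomy-guided prompt construction can look like is shown below: each prompt is assembled from a (taxonomy category, urgency, racial group) triple before being sent to a text-generation model. The category names, template wording, and attribute values are illustrative placeholders, not the study's actual taxonomy subset or prompts.

```python
# Illustrative sketch of taxonomy-guided prompt construction. The taxonomy subset,
# urgency levels, and template wording below are placeholders, not the prompts or
# taxonomy taxa used in the study.
import itertools

TAXONOMY_SUBSET = ["medication side effect", "refill request", "dosing question"]
URGENCY_LEVELS = ["routine", "urgent"]
RACIAL_GROUPS = ["Asian", "Black", "Hispanic", "White",
                 "American Indian or Alaska Native"]

PROMPT_TEMPLATE = (
    "Write a patient portal message from a {race} patient. "
    "The message concerns a {category} and the situation is {urgency}. "
    "Include a subject line, a greeting, a message body, and a sign-off."
)

def build_prompts():
    """Yield one prompt string per (category, urgency, race) combination."""
    for category, urgency, race in itertools.product(
            TAXONOMY_SUBSET, URGENCY_LEVELS, RACIAL_GROUPS):
        yield PROMPT_TEMPLATE.format(category=category, urgency=urgency, race=race)

if __name__ == "__main__":
    prompts = list(build_prompts())
    print(len(prompts))   # 3 * 2 * 5 = 30 prompt variants in this toy subset
    print(prompts[0])
```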
Citations: 0