
Methods of Information in Medicine: Latest Publications

Deciphering Abbreviations in Malaysian Clinical Notes Using Machine Learning.
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-02-11 · DOI: 10.1055/a-2521-4372
Ismat Mohd Sulaiman, Awang Bulgiba, Sameem Abdul Kareem, Abdul Aziz Latip

Objective:  This is the first Malaysian machine learning model to detect and disambiguate abbreviations in clinical notes. The model has been designed to be incorporated into MyHarmony, a natural language processing system that extracts clinical information for health care management. The model utilizes word embeddings to ensure feasibility of use within the constraints of low-resource settings, not in real time but for secondary analysis.

Methods:  A Malaysian clinical embedding, based on the Word2Vec model, was developed using 29,895 electronic discharge summaries. The embedding was compared against a conventional rule-based approach and a FastText embedding on two tasks: abbreviation detection and abbreviation disambiguation. Machine learning classifiers were applied to assess performance.
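To make the setup concrete, the following is a minimal sketch (not the authors' code) of the general pattern described here: training a Word2Vec embedding on tokenized discharge summaries and feeding token vectors to a Decision Tree abbreviation detector. The toy corpus, candidate tokens, and labels are placeholders.

```python
# Minimal sketch (illustrative, not the published model): Word2Vec features
# for a Decision Tree abbreviation detector.
import numpy as np
from gensim.models import Word2Vec
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Toy tokenized "discharge summaries" standing in for the 29,895 real ones.
sentences = [
    ["pt", "admitted", "with", "sob", "and", "chest", "pain"],
    ["patient", "denies", "sob", "on", "exertion"],
    ["chest", "pain", "resolved", "after", "gtn"],
]
# 100 dimensions, mirroring the embedding size reported in the Results.
w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, seed=0)

def vec(token):
    # Zero vector for out-of-vocabulary tokens.
    return w2v.wv[token] if token in w2v.wv else np.zeros(w2v.vector_size)

# Hypothetical candidate tokens labeled 1 = abbreviation, 0 = full word.
tokens = ["sob", "patient", "gtn", "chest", "pt", "exertion"]
labels = [1, 0, 1, 0, 1, 0]

X = np.vstack([vec(t) for t in tokens])
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
print("training F1:", f1_score(labels, clf.predict(X)))
```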

Results:  The Malaysian clinical word embedding contained 7 million word tokens, a vocabulary of 24,352 unique words, and 100 dimensions. For abbreviation detection, the Decision Tree classifier augmented with the Malaysian clinical embedding showed the best performance (F-score of 0.9519). For abbreviation disambiguation, the classifier with the Malaysian clinical embedding had the best performance for most of the abbreviations (F-score of 0.9903).

Conclusion:  Despite its smaller vocabulary and dimensionality, our local clinical word embedding performed better than the larger nonclinical FastText embedding. Word embeddings combined with simple machine learning algorithms can decipher abbreviations well. The approach also requires fewer computational resources and is suitable for implementation in low-resource settings such as Malaysia. The integration of this model into MyHarmony will improve recognition of clinical terms, thus improving the information generated for monitoring Malaysian health care services and policymaking.

{"title":"Deciphering Abbreviations in Malaysian Clinical Notes Using Machine Learning.","authors":"Ismat Mohd Sulaiman, Awang Bulgiba, Sameem Abdul Kareem, Abdul Aziz Latip","doi":"10.1055/a-2521-4372","DOIUrl":"10.1055/a-2521-4372","url":null,"abstract":"<p><strong>Objective: </strong> This is the first Malaysian machine learning model to detect and disambiguate abbreviations in clinical notes. The model has been designed to be incorporated into MyHarmony, a natural language processing system, that extracts clinical information for health care management. The model utilizes word embedding to ensure feasibility of use, not in real-time but for secondary analysis, within the constraints of low-resource settings.</p><p><strong>Methods: </strong> A Malaysian clinical embedding, based on Word2Vec model, was developed using 29,895 electronic discharge summaries. The embedding was compared against conventional rule-based and FastText embedding on two tasks: abbreviation detection and abbreviation disambiguation. Machine learning classifiers were applied to assess performance.</p><p><strong>Results: </strong> The Malaysian clinical word embedding contained 7 million word tokens, 24,352 unique vocabularies, and 100 dimensions. For abbreviation detection, the Decision Tree classifier augmented with the Malaysian clinical embedding showed the best performance (F-score of 0.9519). For abbreviation disambiguation, the classifier with the Malaysian clinical embedding had the best performance for most of the abbreviations (F-score of 0.9903).</p><p><strong>Conclusion: </strong> Despite having a smaller vocabulary and dimension, our local clinical word embedding performed better than the larger nonclinical FastText embedding. Word embedding with simple machine learning algorithms can decipher abbreviations well. It also requires lower computational resources and is suitable for implementation in low-resource settings such as Malaysia. The integration of this model into MyHarmony will improve recognition of clinical terms, thus improving the information generated for monitoring Malaysian health care services and policymaking.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143025162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging Guideline-Based Clinical Decision Support Systems with Large Language Models: A Case Study with Breast Cancer.
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-29 · DOI: 10.1055/a-2528-4299
Solène Delourme, Akram Redjdal, Jacques Bouaud, Brigitte Seroussi

Background: Multidisciplinary tumor boards (MTBs) have been established in most countries to allow experts to collaboratively determine the best treatment decisions for cancer patients. However, MTBs often face challenges such as case overload, which can compromise MTB decision quality. Clinical decision support systems (CDSSs) have been introduced to assist clinicians in this process. Despite their potential, CDSSs are still underutilized in routine practice. The emergence of large language models (LLMs), such as ChatGPT, offers new opportunities to improve the efficiency and usability of traditional CDSSs.

Objectives: OncoDoc2 is a guideline-based CDSS developed using a documentary approach and applied to breast cancer management. This study aims to evaluate the potential of LLMs, used as question-answering (QA) systems, to improve the usability of OncoDoc2 across different prompt engineering techniques (PETs).

Methods: Data extracted from breast cancer patient summaries (BCPSs), together with questions formulated by OncoDoc2, were used to create prompts for various LLMs, and several PETs were designed and tested. Using a sample of 200 randomized BCPSs, LLMs and PETs were initially compared on their responses to OncoDoc2 questions using classic metrics (accuracy, precision, recall, and F1 score). The best-performing LLMs and PETs were further assessed by comparing the therapeutic recommendations generated by OncoDoc2 based on LLM inputs to those provided by MTB clinicians using OncoDoc2. Finally, the best-performing method was validated using a new sample of 30 randomized BCPSs.
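As a point of reference, the four metrics named above can be computed as in the brief sketch below; the gold and predicted answers are hypothetical yes/no responses, not data from the study.

```python
# Minimal sketch (illustrative): scoring LLM answers to decision-criterion
# questions against clinician answers with the metrics used in this study.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

gold = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # hypothetical clinician answers (1 = yes)
pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # hypothetical LLM answers

print("accuracy :", accuracy_score(gold, pred))
print("precision:", precision_score(gold, pred))
print("recall   :", recall_score(gold, pred))
print("F1 score :", f1_score(gold, pred))
```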

Results: The combination of the Mistral and OpenChat models under the enhanced zero-shot PET showed the best performance as a question-answering system. This approach achieved a precision of 60.16%, a recall of 54.18%, an F1 score of 56.59%, and an accuracy of 75.57% on the validation set of 30 BCPSs. However, the approach yielded poor results as a CDSS, with only 16.67% of the recommendations generated by OncoDoc2 based on LLM inputs matching the gold standard.

Conclusions: All the criteria in the OncoDoc2 decision tree are crucial for capturing the uniqueness of each patient. Any deviation from a criterion alters the recommendations generated. Although a good accuracy rate of 75.57% was achieved, LLMs still face challenges in reliably understanding complex medical contexts and in being effective as CDSSs.

{"title":"Leveraging Guideline-Based Clinical Decision Support Systems with Large Language Models: A Case Study with Breast Cancer.","authors":"Solène Delourme, Akram Redjdal, Jacques Bouaud, Brigitte Seroussi","doi":"10.1055/a-2528-4299","DOIUrl":"https://doi.org/10.1055/a-2528-4299","url":null,"abstract":"<p><strong>Background: </strong>Multidisciplinary tumor boards (MTBs) have been established in most countries to allow experts collaboratively determine the best treatment decisions for cancer patients. However, MTBs often face challenges such as case overload, which can compromise MTB decision quality. Clinical decision support systems (CDSSs) have been introduced to assist clinicians in this process. Despite their potential, CDSSs are still underutilized in routine practice. The emergence of large language models (LLMs), such as ChatGPT, offers new opportunities to improve the efficiency and usability of traditional clinical decision support systems (CDSSs).</p><p><strong>Objectives: </strong>OncoDoc2 is a guideline-based CDSS developed using a documentary approach and applied to breast cancer management. This study aims to evaluate the potential of LLMs, used as question-answering (QA) systems, to improve the usability of OncoDoc2 across different prompt engineering techniques (PETs).</p><p><strong>Methods: </strong>Data extracted from breast cancer patient summaries (BCPSs), together with questions formulated by OncoDoc2, were used to create prompts for various LLMs, and several PETs were designed and tested. Using a sample of 200 randomized BCPSs, LLMs and PETs were initially compared on their responses to OncoDoc2 questions using classic metrics (accuracy, precision, recall, and F1 score). Best performing LLMs and PETs were further assessed by comparing the therapeutic recommendations generated by OncoDoc2, based on LLM inputs, to those provided by MTB clinicians using OncoDoc2. Finally, the best performing method was validated using a new sample of 30 randomized BCPSs.</p><p><strong>Results: </strong>The combination of Mistral and OpenChat models under the enhanced zero-shot PET showed the best performance as a question-answering system. This approach gets a precision of 60.16%, a recall of 54.18%, an F1 Score of 56.59%, and an accuracy of 75.57% on the validation set of 30 BCPSs. However, this approach yielded poor results as a CDSS, with only 16.67% of the recommendations generated by OncoDoc2 based on LLM inputs matching the gold standard.</p><p><strong>Conclusions: </strong>All the criteria in the OncoDoc2 decision tree are crucial for capturing the uniqueness of each patient. Any deviation from a criterion alters the recommendations generated. Despite a good accuracy rate of 75.57% was achieved, LLMs still face challenges in reliably understanding complex medical contexts and be effective as CDSSs.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143069247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Significance of Information Quality for the Secondary Use of the Information in the National Health Care Quality Registers in Finland.
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-29 · DOI: 10.1055/a-2511-7866
Anna Frondelius, Ulla-Mari Kinnunen, Vesa Jormanainen

Background:  The aim of the national health care quality registers is to monitor, assess, and improve the quality of care. The information utilized in quality registers must be of high quality to ensure that the information produced by the registers is reliable and useful. In Finland, one of the key sources of information for the quality registers is the national Kanta services.

Objectives:  The objective of the study was to increase understanding of the significance of information quality for the secondary use of information in the national health care quality registers, to assess whether the information quality of the national Kanta services supports the information needs of the national quality registers, and to determine how information quality should be developed.

Methods:  The research data were collected by interviewing six experts responsible for national health care quality registers and were analyzed using theory-driven qualitative content analysis based on the DeLone and McLean model.

Results:  Based on the results, the relevance of the information in the Kanta services met the information needs of the national quality registers. However, due to the limited amount of structured information and deficiencies in the completeness of the information, relevant information could not be fully utilized. Deficiencies in information quality posed challenges in information retrieval and hindered drawing conclusions in reporting. Challenges in information quality did not diminish the intention to use the information when information was considered relevant. Solutions to improve information quality included structuring, development of documentation practices, patient information systems and quality assurance, as well as collaboration among stakeholders.

Conclusion:  The Kanta services' information is relevant for the national health care quality registers, but developing the quality of the information, especially in terms of structures and completeness, is the key to fully enabling the secondary use of this information.

{"title":"The Significance of Information Quality for the Secondary Use of the Information in the National Health Care Quality Registers in Finland.","authors":"Anna Frondelius, Ulla-Mari Kinnunen, Vesa Jormanainen","doi":"10.1055/a-2511-7866","DOIUrl":"10.1055/a-2511-7866","url":null,"abstract":"<p><strong>Background: </strong> The aim of the national health care quality registers is to monitor, assess, and improve the quality of care. The information utilized in quality registers must be of high quality to ensure that the information produced by the registers is reliable and useful. In Finland, one of the key sources of information for the quality registers is the national Kanta services.</p><p><strong>Objectives: </strong> The objective of the study was to increase understanding of the significance of information quality for the secondary use of the information in the national health care quality registers and to provide information on whether the information quality of the national Kanta services supports the information needs of the national quality registers, and how information quality should be developed.</p><p><strong>Methods: </strong> The research data were collected by interviewing six experts responsible for national health care quality registers, and it was analyzed using theory-driven qualitative content analysis based on the DeLone and McLean model.</p><p><strong>Results: </strong> Based on the results, the relevance of the information in the Kanta services met the information needs of the national quality registers. However, due to the limited amount of structured information and deficiencies in the completeness of the information, relevant information could not be fully utilized. Deficiencies in information quality posed challenges in information retrieval and hindered drawing conclusions in reporting. Challenges in information quality did not diminish the intention to use the information when information was considered relevant. Solutions to improve information quality included structuring, development of documentation practices, patient information systems and quality assurance, as well as collaboration among stakeholders.</p><p><strong>Conclusion: </strong> The Kanta services' information is relevant for the national health care quality registers, but developing the quality of the information, especially in terms of structures and completeness, is the key to fully enabling the secondary use of this information.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-lingual Natural Language Processing on Limited Annotated Case/Radiology Reports in English and Japanese: Insights from the Real-MedNLP Workshop.
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-29 · DOI: 10.1055/a-2405-2489
Shuntaro Yada, Yuta Nakamura, Shoko Wakamiya, Eiji Aramaki

Background:  Textual datasets (corpora) are crucial for the application of natural language processing (NLP) models. However, corpus creation in the medical field is challenging, primarily because of privacy issues with raw clinical data such as health records. Thus, the existing clinical corpora are generally small and scarce. Medical NLP (MedNLP) methodologies therefore need to perform well with limited data availability.

Objectives:  We present the outcomes of the Real-MedNLP workshop, which was conducted using limited and parallel medical corpora. Real-MedNLP exhibits three distinct characteristics: (1) limited annotated documents: the training data comprise only a small set (∼100) of case reports (CRs) and radiology reports (RRs) that have been annotated. (2) Bilingually parallel: the constructed corpora are parallel in Japanese and English. (3) Practical tasks: the workshop addresses fundamental tasks, such as named entity recognition (NER), as well as applied practical tasks.

Methods:  We propose three tasks: NER of ∼100 available documents (Task 1), NER based only on annotation guidelines for humans (Task 2), and clinical applications (Task 3) consisting of adverse drug effect (ADE) detection for CRs and identical case identification (CI) for RRs.
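For orientation, entity-level scoring of the NER tasks can be sketched as below; the exact-match function and example spans are illustrative and simplified relative to the workshop's official evaluation.

```python
# Minimal sketch (illustrative): exact-match entity-level precision/recall/F1
# for a NER task, comparing predicted (start, end, type) spans against gold spans.
def ner_scores(gold_spans, pred_spans):
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical spans from one case report: (start_char, end_char, entity_type).
gold = [(0, 7, "DISEASE"), (15, 22, "DRUG"), (30, 36, "SYMPTOM")]
pred = [(0, 7, "DISEASE"), (15, 22, "DRUG"), (40, 45, "SYMPTOM")]
print(ner_scores(gold, pred))  # partial-match variants would relax the span equality
```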

Results:  Nine teams participated in this study. The best systems achieved F1-scores of 0.65 and 0.89 for CRs and RRs in Task 1, whereas the top scores in Task 2 decreased by 50 to 70%. In Task 3, ADE reports were detected with up to a 0.64 F1-score, and CI reached up to 0.96 binary accuracy.

Conclusion:  Most systems adopt medical-domain-specific pretrained language models using data augmentation methods. Despite the challenge of limited corpus size in Tasks 1 and 2, recent approaches are promising because the partial match scores reached ∼0.8-0.9 F1-scores. Task 3 applications revealed that the different availabilities of external language resources affected the performance per language.

{"title":"Cross-lingual Natural Language Processing on Limited Annotated Case/Radiology Reports in English and Japanese: Insights from the Real-MedNLP Workshop.","authors":"Shuntaro Yada, Yuta Nakamura, Shoko Wakamiya, Eiji Aramaki","doi":"10.1055/a-2405-2489","DOIUrl":"10.1055/a-2405-2489","url":null,"abstract":"<p><strong>Background: </strong> Textual datasets (corpora) are crucial for the application of natural language processing (NLP) models. However, corpus creation in the medical field is challenging, primarily because of privacy issues with raw clinical data such as health records. Thus, the existing clinical corpora are generally small and scarce. Medical NLP (MedNLP) methodologies perform well with limited data availability.</p><p><strong>Objectives: </strong> We present the outcomes of the Real-MedNLP workshop, which was conducted using limited and parallel medical corpora. Real-MedNLP exhibits three distinct characteristics: (1) limited annotated documents: the training data comprise only a small set (∼100) of case reports (CRs) and radiology reports (RRs) that have been annotated. (2) Bilingually parallel: the constructed corpora are parallel in Japanese and English. (3) Practical tasks: the workshop addresses fundamental tasks, such as named entity recognition (NER) and applied practical tasks.</p><p><strong>Methods: </strong> We propose three tasks: NER of ∼100 available documents (Task 1), NER based only on annotation guidelines for humans (Task 2), and clinical applications (Task 3) consisting of adverse drug effect (ADE) detection for CRs and identical case identification (CI) for RRs.</p><p><strong>Results: </strong> Nine teams participated in this study. The best systems achieved 0.65 and 0.89 F1-scores for CRs and RRs in Task 1, whereas the top scores in Task 2 decreased by 50 to 70%. In Task 3, ADE reports were detected by up to 0.64 F1-score, and CI scored up to 0.96 binary accuracy.</p><p><strong>Conclusion: </strong> Most systems adopt medical-domain-specific pretrained language models using data augmentation methods. Despite the challenge of limited corpus size in Tasks 1 and 2, recent approaches are promising because the partial match scores reached ∼0.8-0.9 F1-scores. Task 3 applications revealed that the different availabilities of external language resources affected the performance per language.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning for Predicting Progression of Patellofemoral Osteoarthritis Based on Lateral Knee Radiographs, Demographic Data, and Symptomatic Assessments.
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-01 · Epub Date: 2024-04-11 · DOI: 10.1055/a-2305-2115
Neslihan Bayramoglu, Martin Englund, Ida K Haugen, Muneaki Ishijima, Simo Saarakkala

Objective: In this study, we propose a novel framework that utilizes deep learning and attention mechanisms to predict the radiographic progression of patellofemoral osteoarthritis (PFOA) over a period of 7 years.

Material and methods: This study included subjects (1,832 subjects, 3,276 knees) from the baseline of the Multicenter Osteoarthritis Study (MOST). Patellofemoral joint regions of interest were identified using an automated landmark detection tool (BoneFinder) on lateral knee X-rays. An end-to-end deep learning method was developed for predicting PFOA progression based on imaging data in a five-fold cross-validation setting. To evaluate the performance of the models, a set of baselines based on known risk factors was developed and analyzed using a gradient boosting machine (GBM). Risk factors included age, sex, body mass index, the Western Ontario and McMaster Universities Arthritis Index score, and the radiographic osteoarthritis stage of the tibiofemoral joint (Kellgren and Lawrence [KL] score). Finally, to increase predictive power, we trained an ensemble model using both imaging and clinical data.
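A minimal sketch of the kind of risk-factor baseline described here is shown below, using scikit-learn's gradient boosting classifier on synthetic stand-ins for the clinical variables; it is not the study's pipeline or data.

```python
# Minimal sketch (illustrative): a gradient-boosting baseline on the clinical
# risk factors named above (age, sex, BMI, WOMAC, KL grade), with synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(62, 8, n),       # age (years)
    rng.integers(0, 2, n),      # sex
    rng.normal(30, 5, n),       # body mass index
    rng.integers(0, 97, n),     # WOMAC score
    rng.integers(0, 5, n),      # Kellgren-Lawrence grade
])
y = rng.integers(0, 2, n)       # placeholder PFOA-progression label

gbm = GradientBoostingClassifier(random_state=0)
print("5-fold AUC:", cross_val_score(gbm, X, y, cv=5, scoring="roc_auc").mean())
```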

Results: Among the individual models, our deep convolutional neural network attention model achieved the best performance, with an area under the receiver operating characteristic curve (AUC) of 0.856 and an average precision (AP) of 0.431, slightly outperforming the deep learning approach without attention (AUC = 0.832, AP = 0.4) and the best performing reference GBM model (AUC = 0.767, AP = 0.334). The inclusion of imaging data and clinical variables in an ensemble model allowed statistically more powerful prediction of PFOA progression (AUC = 0.865, AP = 0.447), although the clinical significance of this minor performance gain remains unknown. The spatial attention module improved the predictive performance of the backbone model, and the visual interpretation of attention maps focused on the joint space and the regions where osteophytes typically occur.

Conclusion: This study demonstrated the potential of machine learning models to predict the progression of PFOA using imaging and clinical variables. These models could be used to identify patients who are at high risk of progression and prioritize them for new treatments. However, even though the accuracy of the models was excellent in this study using the MOST dataset, they should still be validated using external patient cohorts in the future.

{"title":"Deep Learning for Predicting Progression of Patellofemoral Osteoarthritis Based on Lateral Knee Radiographs, Demographic Data, and Symptomatic Assessments.","authors":"Neslihan Bayramoglu, Martin Englund, Ida K Haugen, Muneaki Ishijima, Simo Saarakkala","doi":"10.1055/a-2305-2115","DOIUrl":"10.1055/a-2305-2115","url":null,"abstract":"<p><strong>Objective: </strong>In this study, we propose a novel framework that utilizes deep learning and attention mechanisms to predict the radiographic progression of patellofemoral osteoarthritis (PFOA) over a period of 7 years.</p><p><strong>Material and methods: </strong>This study included subjects (1,832 subjects, 3,276 knees) from the baseline of the Multicenter Osteoarthritis Study (MOST). Patellofemoral joint regions of interest were identified using an automated landmark detection tool (BoneFinder) on lateral knee X-rays. An end-to-end deep learning method was developed for predicting PFOA progression based on imaging data in a five-fold cross-validation setting. To evaluate the performance of the models, a set of baselines based on known risk factors were developed and analyzed using gradient boosting machine (GBM). Risk factors included age, sex, body mass index, and Western Ontario and McMaster Universities Arthritis Index score, and the radiographic osteoarthritis stage of the tibiofemoral joint (Kellgren and Lawrence [KL] score). Finally, to increase predictive power, we trained an ensemble model using both imaging and clinical data.</p><p><strong>Results: </strong>Among the individual models, the performance of our deep convolutional neural network attention model achieved the best performance with an area under the receiver operating characteristic curve (AUC) of 0.856 and average precision (AP) of 0.431, slightly outperforming the deep learning approach without attention (AUC = 0.832, AP = 0.4) and the best performing reference GBM model (AUC = 0.767, AP = 0.334). The inclusion of imaging data and clinical variables in an ensemble model allowed statistically more powerful prediction of PFOA progression (AUC = 0.865, AP = 0.447), although the clinical significance of this minor performance gain remains unknown. The spatial attention module improved the predictive performance of the backbone model, and the visual interpretation of attention maps focused on the joint space and the regions where osteophytes typically occur.</p><p><strong>Conclusion: </strong>This study demonstrated the potential of machine learning models to predict the progression of PFOA using imaging and clinical variables. These models could be used to identify patients who are at high risk of progression and prioritize them for new treatments. However, even though the accuracy of the models were excellent in this study using the MOST dataset, they should be still validated using external patient cohorts in the future.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":"1-10"},"PeriodicalIF":1.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11495941/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140854286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development and Validation of a Natural Language Processing Algorithm to Pseudonymize Documents in the Context of a Clinical Data Warehouse.
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-01 · Epub Date: 2024-03-05 · DOI: 10.1055/s-0044-1778693
Xavier Tannier, Perceval Wajsbürt, Alice Calliger, Basile Dura, Alexandre Mouchet, Martin Hilka, Romain Bey

Objective: The objective of this study is to address the critical issue of deidentification of clinical reports to allow access to data for research purposes, while ensuring patient privacy. The study highlights the difficulties faced in sharing tools and resources in this domain and presents the experience of the Greater Paris University Hospitals (AP-HP for Assistance Publique-Hôpitaux de Paris) in implementing a systematic pseudonymization of text documents from its Clinical Data Warehouse.

Methods: We annotated a corpus of clinical documents according to 12 types of identifying entities and built a hybrid system, merging the results of a deep learning model with manual rules.
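The hybrid idea of combining rule matches with model predictions can be sketched roughly as follows; the regular expressions, labels, merging policy, and example spans are illustrative assumptions, not the system's actual rules.

```python
# Minimal sketch (illustrative): merge rule-based matches with spans predicted
# by a learned NER model into one set of identifiers to pseudonymize.
import re

RULES = {
    "PHONE": re.compile(r"\b0\d(?:[ .-]?\d{2}){4}\b"),
    "DATE": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def rule_spans(text):
    for label, pattern in RULES.items():
        for m in pattern.finditer(text):
            yield (m.start(), m.end(), label)

def merge(model_spans, text):
    # Rules win on overlap in this toy policy; the real system may differ.
    spans = list(rule_spans(text))
    occupied = [(s, e) for s, e, _ in spans]
    for s, e, label in model_spans:
        if not any(s < oe and e > os_ for os_, oe in occupied):
            spans.append((s, e, label))
    return sorted(spans)

text = "Seen on 12/03/2023, call 01 23 45 67 89, patient Jean Dupont."
model_spans = [(49, 60, "NAME"), (8, 18, "DATE")]  # hypothetical model output
print(merge(model_spans, text))
```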

Results and discussion: Our results show an overall F1-score of 0.99. We discuss implementation choices and present experiments to better understand the effort involved in such a task, including dataset size, document types, language models, and rule addition. We share guidelines and code under a 3-Clause BSD license.

{"title":"Development and Validation of a Natural Language Processing Algorithm to Pseudonymize Documents in the Context of a Clinical Data Warehouse.","authors":"Xavier Tannier, Perceval Wajsbürt, Alice Calliger, Basile Dura, Alexandre Mouchet, Martin Hilka, Romain Bey","doi":"10.1055/s-0044-1778693","DOIUrl":"10.1055/s-0044-1778693","url":null,"abstract":"<p><strong>Objective: </strong>The objective of this study is to address the critical issue of deidentification of clinical reports to allow access to data for research purposes, while ensuring patient privacy. The study highlights the difficulties faced in sharing tools and resources in this domain and presents the experience of the Greater Paris University Hospitals (AP-HP for Assistance Publique-Hôpitaux de Paris) in implementing a systematic pseudonymization of text documents from its Clinical Data Warehouse.</p><p><strong>Methods: </strong>We annotated a corpus of clinical documents according to 12 types of identifying entities and built a hybrid system, merging the results of a deep learning model as well as manual rules.</p><p><strong>Results and discussion: </strong>Our results show an overall performance of 0.99 of F1-score. We discuss implementation choices and present experiments to better understand the effort involved in such a task, including dataset size, document types, language models, or rule addition. We share guidelines and code under a 3-Clause BSD license.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":"21-34"},"PeriodicalIF":1.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11495938/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Does Differentially Private Synthetic Data Lead to Synthetic Discoveries?
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-01 · Epub Date: 2024-08-13 · DOI: 10.1055/a-2385-1355
Ileana Montoya Perez, Parisa Movahedi, Valtteri Nieminen, Antti Airola, Tapio Pahikkala

Background: Synthetic data have been proposed as a solution for sharing anonymized versions of sensitive biomedical datasets. Ideally, synthetic data should preserve the structure and statistical properties of the original data, while protecting the privacy of the individual subjects. Differential Privacy (DP) is currently considered the gold standard approach for balancing this trade-off.

Objectives: The aim of this study is to investigate how trustworthy are group differences discovered by independent sample tests from DP-synthetic data. The evaluation is carried out in terms of the tests' Type I and Type II errors. With the former, we can quantify the tests' validity, i.e., whether the probability of false discoveries is indeed below the significance level, and the latter indicates the tests' power in making real discoveries.

Methods: We evaluate the Mann-Whitney U test, Student's t-test, chi-squared test, and median test on DP-synthetic data. The private synthetic datasets are generated from real-world data, including a prostate cancer dataset (n = 500) and a cardiovascular dataset (n = 70,000), as well as from bivariate and multivariate simulated data. Five different DP-synthetic data generation methods are evaluated, including two basic DP histogram release methods and the MWEM, Private-PGM, and DP GAN algorithms.
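As background for how Type I error is estimated, the sketch below repeatedly applies the Mann-Whitney U test to two samples drawn from the same distribution and counts false rejections; in the study itself the samples would come from DP-synthetic data generators rather than from a simple simulation like this.

```python
# Minimal sketch (illustrative): estimate a test's Type I error by repeatedly
# testing two samples drawn from the SAME distribution (no real group difference).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
alpha, n_trials, rejections = 0.05, 1000, 0

for _ in range(n_trials):
    a = rng.normal(0, 1, 100)   # in the study: one group drawn from DP-synthetic data
    b = rng.normal(0, 1, 100)   # in the study: the other group, same generator
    if mannwhitneyu(a, b).pvalue < alpha:
        rejections += 1

# For a valid test this stays near alpha; DP-synthetic generation can inflate it.
print("estimated Type I error:", rejections / n_trials)
```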

Conclusion: A large portion of the evaluation results showed dramatically inflated Type I errors, especially at levels of ϵ ≤ 1. This result calls for caution when releasing and analyzing DP-synthetic data: low p-values may be obtained in statistical tests simply as a byproduct of the noise added to protect privacy. A DP Smoothed Histogram-based synthetic data generation method was shown to produce valid Type I errors for all privacy levels tested, but it required a large original dataset size and a modest privacy budget (ϵ ≥ 5) in order to have reasonable Type II error levels.

{"title":"Does Differentially Private Synthetic Data Lead to Synthetic Discoveries?","authors":"Ileana Montoya Perez, Parisa Movahedi, Valtteri Nieminen, Antti Airola, Tapio Pahikkala","doi":"10.1055/a-2385-1355","DOIUrl":"10.1055/a-2385-1355","url":null,"abstract":"<p><strong>Background: </strong>Synthetic data have been proposed as a solution for sharing anonymized versions of sensitive biomedical datasets. Ideally, synthetic data should preserve the structure and statistical properties of the original data, while protecting the privacy of the individual subjects. Differential Privacy (DP) is currently considered the gold standard approach for balancing this trade-off.</p><p><strong>Objectives: </strong>The aim of this study is to investigate how trustworthy are group differences discovered by independent sample tests from DP-synthetic data. The evaluation is carried out in terms of the tests' Type I and Type II errors. With the former, we can quantify the tests' validity, i.e., whether the probability of false discoveries is indeed below the significance level, and the latter indicates the tests' power in making real discoveries.</p><p><strong>Methods: </strong>We evaluate the Mann-Whitney U test, Student's <i>t</i>-test, chi-squared test, and median test on DP-synthetic data. The private synthetic datasets are generated from real-world data, including a prostate cancer dataset (<i>n</i> = 500) and a cardiovascular dataset (<i>n</i> = 70,000), as well as on bivariate and multivariate simulated data. Five different DP-synthetic data generation methods are evaluated, including two basic DP histogram release methods and MWEM, Private-PGM, and DP GAN algorithms.</p><p><strong>Conclusion: </strong>A large portion of the evaluation results expressed dramatically inflated Type I errors, especially at levels of <i>ϵ</i> ≤ 1. This result calls for caution when releasing and analyzing DP-synthetic data: low <i>p</i>-values may be obtained in statistical tests simply as a byproduct of the noise added to protect privacy. A DP Smoothed Histogram-based synthetic data generation method was shown to produce valid Type I error for all privacy levels tested but required a large original dataset size and a modest privacy budget (<i>ϵ</i> ≥ 5) in order to have reasonable Type II error levels.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":"35-51"},"PeriodicalIF":1.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11495942/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141977081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial Intelligence-Based Prediction of Contrast Medium Doses for Computed Tomography Angiography Using Optimized Clinical Parameter Sets.
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-01 · Epub Date: 2024-01-23 · DOI: 10.1055/s-0044-1778694
Marja Fleitmann, Hristina Uzunova, René Pallenberg, Andreas M Stroth, Jan Gerlach, Alexander Fürschke, Jörg Barkhausen, Arpad Bischof, Heinz Handels

Objectives: In this paper, an artificial intelligence-based algorithm for predicting the optimal contrast medium dose for computed tomography (CT) angiography of the aorta is presented and evaluated in a clinical study. The prediction of the contrast dose reduction is modelled as a classification problem using the image contrast as the main feature.

Methods: This classification is performed by random decision forests (RDF) and k-nearest-neighbor (KNN) methods. For the selection of optimal parameter subsets, all possible combinations of the 22 clinical parameters (age, blood pressure, etc.) are considered, using the classification accuracy and precision of the KNN classifier and RDF as quality criteria. Subsequently, the results of the evaluation were optimized by means of feature transformation using regression neural networks (RNNs). These were used for a direct classification based on regressed Hounsfield units as well as for preprocessing for a subsequent KNN classification.
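The exhaustive subset search over parameters can be pictured with the short sketch below, which scores every feature combination with a cross-validated KNN classifier; the feature names, synthetic data, and labels are placeholders for the 22 clinical parameters.

```python
# Minimal sketch (illustrative): exhaustive feature-subset search scored by the
# cross-validated accuracy of a KNN classifier, on synthetic stand-in data.
from itertools import combinations
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = ["age", "height", "hemoglobin", "systolic_bp"]  # placeholders for the 22 parameters
X = rng.normal(size=(200, len(features)))
y = rng.integers(0, 2, 200)                                 # placeholder dose-class label

best_score, best_subset = -1.0, ()
for k in range(1, len(features) + 1):
    for subset in combinations(range(len(features)), k):
        score = cross_val_score(KNeighborsClassifier(), X[:, list(subset)], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset

print("best accuracy %.3f with %s" % (best_score, [features[i] for i in best_subset]))
```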

Results: For feature selection, an RDF model achieved the highest accuracy of 84.42% and a KNN model achieved the best precision of 86.21%. The most important parameters include age, height, and hemoglobin. The feature transformation using an RNN considerably exceeded these values, with an accuracy of 90.00% and a precision of 97.62% using all 22 parameters as input. However, the feasibility of the parameter sets in routine clinical practice also has to be considered, because some of the 22 parameters are not measured routinely and would require an additional 15 to 20 minutes of measurement time per patient. Using the standard feature set available in clinical routine, the RNN achieved the best accuracy of 86.67% and precision of 93.18%.

Conclusion: We developed a reliable hybrid system that helps radiologists determine the optimal contrast dose for CT angiography based on patient-specific parameters.

{"title":"Artificial Intelligence-Based Prediction of Contrast Medium Doses for Computed Tomography Angiography Using Optimized Clinical Parameter Sets.","authors":"Marja Fleitmann, Hristina Uzunova, René Pallenberg, Andreas M Stroth, Jan Gerlach, Alexander Fürschke, Jörg Barkhausen, Arpad Bischof, Heinz Handels","doi":"10.1055/s-0044-1778694","DOIUrl":"10.1055/s-0044-1778694","url":null,"abstract":"<p><strong>Objectives: </strong>In this paper, an artificial intelligence-based algorithm for predicting the optimal contrast medium dose for computed tomography (CT) angiography of the aorta is presented and evaluated in a clinical study. The prediction of the contrast dose reduction is modelled as a classification problem using the image contrast as the main feature.</p><p><strong>Methods: </strong>This classification is performed by random decision forests (RDF) and k-nearest-neighbor methods (KNN). For the selection of optimal parameter subsets all possible combinations of the 22 clinical parameters (age, blood pressure, etc.) are considered using the classification accuracy and precision of the KNN classifier and RDF as quality criteria. Subsequently, the results of the evaluation were optimized by means of feature transformation using regression neural networks (RNN). These were used for a direct classification based on regressed Hounsfield units as well as preprocessing for a subsequent KNN classification.</p><p><strong>Results: </strong>For feature selection, an RDF model achieved the highest accuracy of 84.42% and a KNN model achieved the best precision of 86.21%. The most important parameters include age, height, and hemoglobin. The feature transformation using an RNN considerably exceeded these values with an accuracy of 90.00% and a precision of 97.62% using all 22 parameters as input. However, also the feasibility of the parameter sets in routine clinical practice has to be considered, because some of the 22 parameters are not measured in routine clinical practice and additional measurement time of 15 to 20 minutes per patient is needed. Using the standard feature set available in clinical routine the best accuracy of 86.67% and precision of 93.18% was achieved by the RNN.</p><p><strong>Conclusion: </strong>We developed a reliable hybrid system that helps radiologists determine the optimal contrast dose for CT angiography based on patient-specific parameters.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":"11-20"},"PeriodicalIF":1.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11495943/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139543328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Europe's Largest Research Infrastructure for Curated Medical Data Models with Semantic Annotations.
IF 1.3 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-01 · Epub Date: 2024-05-13 · DOI: 10.1055/s-0044-1786839
Sarah Riepenhausen, Max Blumenstock, Christian Niklas, Stefan Hegselmann, Philipp Neuhaus, Alexandra Meidt, Cornelia Püttmann, Michael Storck, Matthias Ganzinger, Julian Varghese, Martin Dugas

Background: Structural metadata from the majority of clinical studies and routine health care systems is not yet available to the scientific community.

Objective: To provide an overview of available contents in the Portal of Medical Data Models (MDM Portal).

Methods: The MDM Portal is a registered European information infrastructure for research and health care, and its contents are curated and semantically annotated by medical experts. It enables users to search, view, discuss, and download existing medical data models.

Results: The most frequent keyword is "clinical trial" (n = 18,777), and the most frequent disease-specific keyword is "breast neoplasms" (n = 1,943). Most data items are available in English (n = 545,749) and German (n = 109,267). Manually curated semantic annotations are available for 805,308 elements (554,352 items, 58,101 item groups, and 192,855 code list items), which were derived from 25,257 data models. In total, 1,609,225 Unified Medical Language System (UMLS) codes have been assigned, with 66,373 unique UMLS codes.

Conclusion: To our knowledge, the MDM Portal constitutes Europe's largest collection of medical data models with semantically annotated elements. As such, it can be used to increase compatibility of medical datasets and can be utilized as a large expert-annotated medical text corpus for natural language processing.

{"title":"Europe's Largest Research Infrastructure for Curated Medical Data Models with Semantic Annotations.","authors":"Sarah Riepenhausen, Max Blumenstock, Christian Niklas, Stefan Hegselmann, Philipp Neuhaus, Alexandra Meidt, Cornelia Püttmann, Michael Storck, Matthias Ganzinger, Julian Varghese, Martin Dugas","doi":"10.1055/s-0044-1786839","DOIUrl":"10.1055/s-0044-1786839","url":null,"abstract":"<p><strong>Background: </strong>Structural metadata from the majority of clinical studies and routine health care systems is currently not yet available to the scientific community.</p><p><strong>Objective: </strong>To provide an overview of available contents in the Portal of Medical Data Models (MDM Portal).</p><p><strong>Methods: </strong>The MDM Portal is a registered European information infrastructure for research and health care, and its contents are curated and semantically annotated by medical experts. It enables users to search, view, discuss, and download existing medical data models.</p><p><strong>Results: </strong>The most frequent keyword is \"clinical trial\" (<i>n</i> = 18,777), and the most frequent disease-specific keyword is \"breast neoplasms\" (<i>n</i> = 1,943). Most data items are available in English (<i>n</i> = 545,749) and German (<i>n</i> = 109,267). Manually curated semantic annotations are available for 805,308 elements (554,352 items, 58,101 item groups, and 192,855 code list items), which were derived from 25,257 data models. In total, 1,609,225 Unified Medical Language System (UMLS) codes have been assigned, with 66,373 unique UMLS codes.</p><p><strong>Conclusion: </strong>To our knowledge, the MDM Portal constitutes Europe's largest collection of medical data models with semantically annotated elements. As such, it can be used to increase compatibility of medical datasets and can be utilized as a large expert-annotated medical text corpus for natural language processing.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":"52-61"},"PeriodicalIF":1.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11495939/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140917387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance Characteristics of a Rule-Based Electronic Health Record Algorithm to Identify Patients with Gross and Microscopic Hematuria.
IF 1.7 · Medicine (CAS Q4) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-12-01 · Epub Date: 2023-09-04 · DOI: 10.1055/a-2165-5552
Jasmine Kashkoush, Mudit Gupta, Matthew A Meissner, Matthew E Nielsen, H Lester Kirchner, Tullika Garg

Background: Two million patients per year are referred to urologists for hematuria, or blood in the urine. The American Urological Association recently adopted a risk-stratified hematuria evaluation guideline to limit multi-phase computed tomography to individuals at highest risk of occult malignancy.

Objectives: To understand population-level hematuria evaluations, we developed an algorithm to accurately identify hematuria cases from electronic health records (EHRs).

Methods: We used International Classification of Diseases (ICD)-9/ICD-10 diagnosis codes, urine color, and urine microscopy values to identify hematuria cases and to differentiate between gross and microscopic hematuria. Using an iterative process, we refined the ICD-9 algorithm on a gold-standard, chart-reviewed cohort of 3,094 hematuria cases, and the ICD-10 algorithm on a 300-patient cohort. We applied the algorithm to Geisinger patients ≥35 years (n = 539,516) and determined performance by conducting chart review (n = 500).
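The flavor of such a rule-based pass over EHR records is sketched below; the ICD-10 codes, red-blood-cell threshold, and record fields are illustrative assumptions, not the published algorithm.

```python
# Minimal sketch (illustrative): flag and categorize hematuria from diagnosis
# codes, urine color, and urine microscopy values in a single EHR record.
GROSS_CODES = {"R31.0"}               # example ICD-10 code for gross hematuria
MICRO_CODES = {"R31.21", "R31.29"}    # example ICD-10 codes for microscopic hematuria

def classify(record):
    codes = set(record.get("icd10", []))
    rbc_hpf = record.get("urine_rbc_per_hpf")
    if codes & GROSS_CODES or record.get("urine_color") == "red":
        return "gross"
    if codes & MICRO_CODES or (rbc_hpf is not None and rbc_hpf >= 3):
        return "microscopic"
    return "none"

record = {"icd10": ["R31.29"], "urine_rbc_per_hpf": 5, "urine_color": "yellow"}
print(classify(record))  # -> "microscopic"
```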

Results: After applying the hematuria algorithm, we identified 51,500 hematuria cases and 488,016 clean controls. Of the hematuria cases, 11,435 were categorized as gross, 26,658 as microscopic, 12,562 as indeterminate, and 845 were uncategorized. The positive predictive value (PPV) of identifying hematuria cases using the algorithm was 100% and the negative predictive value (NPV) was 99%. The gross hematuria algorithm had a PPV of 100% and NPV of 99%. The microscopic hematuria algorithm had lower PPV of 78% and NPV of 100%.

Conclusion: We developed an algorithm utilizing diagnosis codes and urine laboratory values to accurately identify hematuria in EHRs and categorize it as gross or microscopic. Applying the algorithm will help researchers to understand patterns of care for this common condition.

{"title":"Performance Characteristics of a Rule-Based Electronic Health Record Algorithm to Identify Patients with Gross and Microscopic Hematuria.","authors":"Jasmine Kashkoush, Mudit Gupta, Matthew A Meissner, Matthew E Nielsen, H Lester Kirchner, Tullika Garg","doi":"10.1055/a-2165-5552","DOIUrl":"10.1055/a-2165-5552","url":null,"abstract":"<p><strong>Background: </strong>Two million patients per year are referred to urologists for hematuria, or blood in the urine. The American Urological Association recently adopted a risk-stratified hematuria evaluation guideline to limit multi-phase computed tomography to individuals at highest risk of occult malignancy.</p><p><strong>Objectives: </strong>To understand population-level hematuria evaluations, we developed an algorithm to accurately identify hematuria cases from electronic health records (EHRs).</p><p><strong>Methods: </strong>We used International Classification of Diseases (ICD)-9/ICD-10 diagnosis codes, urine color, and urine microscopy values to identify hematuria cases and to differentiate between gross and microscopic hematuria. Using an iterative process, we refined the ICD-9 algorithm on a gold standard, chart-reviewed cohort of 3,094 hematuria cases, and the ICD-10 algorithm on a 300 patient cohort. We applied the algorithm to Geisinger patients ≥35 years (<i>n</i> = 539,516) and determined performance by conducting chart review (<i>n</i> = 500).</p><p><strong>Results: </strong>After applying the hematuria algorithm, we identified 51,500 hematuria cases and 488,016 clean controls. Of the hematuria cases, 11,435 were categorized as gross, 26,658 as microscopic, 12,562 as indeterminate, and 845 were uncategorized. The positive predictive value (PPV) of identifying hematuria cases using the algorithm was 100% and the negative predictive value (NPV) was 99%. The gross hematuria algorithm had a PPV of 100% and NPV of 99%. The microscopic hematuria algorithm had lower PPV of 78% and NPV of 100%.</p><p><strong>Conclusion: </strong>We developed an algorithm utilizing diagnosis codes and urine laboratory values to accurately identify hematuria and categorize as gross or microscopic in EHRs. Applying the algorithm will help researchers to understand patterns of care for this common condition.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":" ","pages":"183-192"},"PeriodicalIF":1.7,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10153429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0