
AMIA Joint Summits on Translational Science Proceedings: Latest Publications

Leveraging SNOMED CT for patient cohort identification over heterogeneous EHR data.
Xubing Hao, Yan Huang, Licong Cui, Xiaojin Li

SNOMED CT is extensively employed to standardize data across diverse patient datasets and support cohort identification, with studies revealing its benefits and challenges. In this work, we developed a SNOMED CT-driven cohort query system over a heterogeneous Optum® de-identified COVID-19 Electronic Health Record dataset leveraging concept mappings between ICD-9-CM/ICD-10-CM and SNOMED CT. We evaluated the benefits and challenges of using SNOMED CT to perform cohort queries based on both query code sets and actual patients retrieved from the database, leveraging the original ICD-9-CM and ICD-10-CM as baselines. Manual review of 80 random cases revealed 65 cases containing 148 true positive codes and 25 cases containing 63 false positive codes. The manual evaluation also revealed issues in code naming, mappings, and hierarchical relations. Overall, our study indicates that while the SNOMED CT-driven query system holds considerable promise for comprehensive cohort queries, careful attention must be given to the challenges of falsely included codes and patients.
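To make the mapping-and-expansion step concrete, here is a minimal sketch of how ICD query codes can be translated into SNOMED CT concepts and then expanded along the is-a hierarchy before retrieving patients. The mapping table, concept IDs, and hierarchy below are illustrative placeholders, not the authors' system or the actual Optum schema.

```python
# Minimal sketch: ICD-to-SNOMED translation plus descendant expansion
# (hypothetical mapping table and toy SNOMED "is-a" edges for illustration).
from collections import defaultdict

# Hypothetical ICD-10-CM -> SNOMED CT mappings (IDs illustrative).
icd_to_snomed = {
    "U07.1": {"840539006"},            # COVID-19 (illustrative)
    "J12.82": {"1119300000000000"},    # pneumonia due to COVID-19 (illustrative)
}

# Hypothetical SNOMED "is-a" edges: parent -> children.
snomed_children = defaultdict(set, {
    "840539006": {"1119300000000000"},
})

def expand_descendants(concepts):
    """Expand a seed set of SNOMED concepts to include all descendants."""
    result, stack = set(concepts), list(concepts)
    while stack:
        current = stack.pop()
        for child in snomed_children[current]:
            if child not in result:
                result.add(child)
                stack.append(child)
    return result

def cohort_code_set(icd_codes):
    """Translate ICD query codes into an expanded SNOMED CT code set."""
    seeds = set()
    for code in icd_codes:
        seeds |= icd_to_snomed.get(code, set())
    return expand_descendants(seeds)

# Example: build the SNOMED code set for a COVID-19 cohort query.
print(cohort_code_set(["U07.1"]))
```

In a real deployment the mappings would typically come from published ICD-to-SNOMED map tables and the expansion would run against the full SNOMED transitive closure rather than a toy dictionary.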

{"title":"Leveraging SNOMED CT for patient cohort identification over heterogeneous EHR data.","authors":"Xubing Hao, Yan Huang, Licong Cui, Xiaojin Li","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>SNOMED CT is extensively employed to standardize data across diverse patient datasets and support cohort identification, with studies revealing its benefits and challenges. In this work, we developed a SNOMED CT-driven cohort query system over a heterogeneous Optum<sup>®</sup> de-identified COVID-19 Electronic Health Record dataset leveraging concept mappings between ICD-9-CM/ICD-10-CM and SNOMED CT. We evaluated the benefits and challenges of using SNOMED CT to perform cohort queries based on both query code sets and actual patients retrieved from the database, leveraging the original ICD-9-CM and ICD-10-CM as baselines. Manual review of 80 random cases revealed 65 cases containing 148 true positive codes and 25 cases containing 63 false positive codes. The manual evaluation also revealed issues in code naming, mappings, and hierarchical relations. Overall, our study indicates that while the SNOMED CT-driven query system holds considerable promise for comprehensive cohort queries, careful attention must be given to the challenges offalsely included codes and patients.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"205-214"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150708/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhancing Healthcare Data Integration: A Machine Learning Approach to Harmonizing Laboratory Labels.
Mehmet F Bagci, Samantha R Spierling, Anna L Ritko, Truong Nguyen, Brian D Modena, Yusuf Ozturk

Variations in laboratory test names across healthcare systems, stemming from inconsistent terminologies, abbreviations, misspellings, and assay vendors, pose significant challenges to the integration and analysis of clinical data. These discrepancies hinder interoperability and complicate efforts to extract meaningful insights for both clinical research and patient care. In this study, we propose a machine learning-driven solution, enhanced by natural language processing techniques, to standardize lab test names. By employing feature extraction methods that analyze both string similarity and the distributional properties of test results, we improve the harmonization of test names, resulting in a more robust dataset. Our model achieves a 99% accuracy rate in matching lab names, showcasing the potential of AI-driven approaches in resolving long-standing standardization challenges. Importantly, this method enhances the reliability and consistency of clinical data, which is crucial for ensuring accurate results in large-scale clinical studies and improving the overall efficiency of informatics-based research and diagnostics.
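As an illustration of the feature types described above (string similarity plus distributional properties of test results), the following sketch builds a small feature vector for a candidate pair of lab labels. The specific functions and features are assumptions for illustration, not the authors' pipeline.

```python
# Sketch: features for deciding whether two lab labels refer to the same test.
from difflib import SequenceMatcher
from scipy.stats import ks_2samp
import numpy as np

def match_features(name_a, name_b, results_a, results_b):
    """String similarity plus distributional comparison of numeric results."""
    string_sim = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    ks = ks_2samp(results_a, results_b)                   # distribution dissimilarity
    mean_gap = abs(np.mean(results_a) - np.mean(results_b))
    return [string_sim, ks.statistic, mean_gap]

# Example: "Hgb" vs "Hemoglobin" with overlapping value distributions.
a = np.random.normal(13.5, 1.5, 500)
b = np.random.normal(13.4, 1.6, 500)
print(match_features("Hgb", "Hemoglobin", a, b))
```

Such feature vectors could then be fed to an off-the-shelf classifier (for example, a random forest) trained on known matched and unmatched label pairs.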

{"title":"Enhancing Healthcare Data Integration: A Machine Learning Approach to Harmonizing Laboratory Labels.","authors":"Mehmet F Bagci, Samantha R Spierling, Anna L Ritko, Truong Nguyen, Brian D Modena, Yusuf Ozturk","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Variations in laboratory test names across healthcare systems-stemming from inconsistent terminologies, abbreviations, misspellings, and assay vendors-pose significant challenges to the integration and analysis of clinical data. These discrepancies hinder interoperability and complicate efforts to extract meaningful insights for both clinical research and patient care. In this study, we propose a machine learning-driven solution, enhanced by natural language processing techniques, to standardize lab test names. By employing feature extraction methods that analyze both string similarity and the distributional properties of test results, we improve the harmonization of test names, resulting in a more robust dataset. Our model achieves a 99% accuracy rate in matching lab names, showcasing the potential of AI-driven approaches in resolving long-standing standardization challenges. Importantly, this method enhances the reliability and consistency of clinical data, which is crucial for ensuring accurate results in large-scale clinical studies and improving the overall efficiency of informatics-based research and diagnostics.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"65-73"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150698/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Explainable Artificial Intelligence (XAI) in the Era of Large Language Models: Applying an XAI Framework in Pediatric Ophthalmology Diagnosis using the Gemini Model.
Dipak P Upadhyaya, Katrina Prantzalos, Pedram Golnari, Aasef G Shaikh, Subhashini Sivagnanam, Amitava Majumdar, Fatema F Ghasia, Satya S Sahoo

Amblyopia is a neurodevelopmental disorder affecting children's visual acuity, requiring early diagnosis for effective treatment. Traditional diagnostic methods rely on subjective evaluations of eye tracking recordings from high fidelity eye tracking instruments performed by specialized pediatric ophthalmologists, often unavailable in rural, low resource clinics. As such, there is an urgent need to develop a scalable, low cost, high accuracy approach to automatically analyze eye tracking recordings. Large Language Models (LLMs) show promise in accurate detection of amblyopia; our prior work has shown that the Google Gemini model, guided by expert ophthalmologists, can detect control and amblyopic subjects from eye tracking recordings. However, there is a clear need to address the issues of transparency and trust in medical applications of LLMs. To bolster the reliability and interpretability of LLM analysis of eye tracking records, we developed a Feature Guided Interpretive Prompting (FGIP) framework focused on critical clinical features. Using the Google Gemini model, we classify high-fidelity eye-tracking data to detect amblyopia in children and apply the Quantus framework to evaluate the classification results across key metrics (faithfulness, robustness, localization, and complexity). These metrics provide a quantitative basis for understanding the model's decision-making process. This work presents the first implementation of an Explainable Artificial Intelligence (XAI) framework to systematically characterize the results generated by the Gemini model using high-fidelity eye-tracking data to detect amblyopia in children. Results demonstrated that the model accurately classified control and amblyopic subjects, including those with nystagmus, while maintaining transparency and clinical alignment. The results of this study support the development of a scalable and interpretable clinical decision support (CDS) tool using LLMs that has the potential to enhance the trustworthiness of AI applications.
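The toy sketch below illustrates the idea behind a faithfulness-style check of the kind the Quantus framework quantifies: if an explanation ranks features correctly, masking the most-attributed features should reduce the model's confidence. The placeholder model, attribution method, and feature count are assumptions; this does not reproduce the paper's FGIP framework or the Quantus API.

```python
# Toy faithfulness-style check: confidence drop after removing top-attributed features.
import numpy as np

def faithfulness_drop(predict_proba, x, attributions, target, k=10):
    """Confidence drop for `target` after zeroing the k most-attributed features of x."""
    baseline = predict_proba(x)[target]
    top_k = np.argsort(np.abs(attributions))[::-1][:k]
    x_masked = x.copy()
    x_masked[top_k] = 0.0
    return baseline - predict_proba(x_masked)[target]

# Placeholder model: a fixed linear scorer over 100 gaze-derived features.
rng = np.random.default_rng(0)
weights = rng.normal(size=100)

def predict_proba(x):
    score = 1.0 / (1.0 + np.exp(-weights @ x))
    return np.array([1.0 - score, score])     # [control, amblyopia]

x = rng.normal(size=100)
attributions = weights * x                    # simple gradient-times-input style attribution
print(faithfulness_drop(predict_proba, x, attributions, target=1))
```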

{"title":"Explainable Artificial Intelligence (XAI) in the Era of Large Language Models: Applying an XAI Framework in Pediatric Ophthalmology Diagnosis using the Gemini Model.","authors":"Dipak P Upadhyaya, Katrina Prantzalos, Pedram Golnari, Aasef G Shaikh, Subhashini Sivagnanam, Amitava Majumdar, Fatema F Ghasia, Satya S Sahoo","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Amblyopia is a neurodevelopmental disorder affecting children's visual acuity, requiring early diagnosis for effective treatment. Traditional diagnostic methods rely on subjective evaluations of eye tracking recordings from high fidelity eye tracking instruments performed by specialized pediatric ophthalmologists, often unavailable in rural, low resource clinics. As such, there is an urgent need to develop a scalable, low cost, high accuracy approach to automatically analyze eye tracking recordings. Large Language Models (LLM) show promise in accurate detection of amblyopia; our prior work has shown that the Google Gemini model, guided by expert ophthalmologists, can detect control and amblyopic subjects from eye tracking recordings. However, there is a clear need to address the issues of transparency and trust in medical applications of LLMs. To bolster the reliability and interpretability of LLM analysis of eye tracking records, we developed a Feature Guided Interprative Prompting (FGIP) framework focused on critical clinical features. Using the Google Gemini model, we classify high-fidelity eye-tracking data to detect amblyopia in children and apply the Quantus framework to evaluate the classification results across key metrics (faithfulness, robustness, localization, and complexity). These metrics provide a quantitative basis for understanding the model's decision-making process. This work presents the first implementation of an Explainable Artificial Intelligence (XAI) framework to systematically characterize the results generated by the Gemini model using high-fidelity eye-tracking data to detect amblyopia in children. Results demonstrated that the model accurately classified control and amblyopic subjects, including those with nystagmus while maintaining transparency and clinical alignment. The results of this study support the development of a scalable and interpretable clinical decision support (CDS) tool using LLMs that has the potential to enhance the trustworthiness of AI applications.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"566-575"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150742/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
MRISeqClassifier: A Deep Learning Toolkit for Precise MRI Sequence Classification.
Jinqian Pan, Qi Chen, Chengkun Sun, Renjie Liang, Jiang Bian, Jie Xu

Magnetic Resonance Imaging (MRI) is a crucial diagnostic tool in medicine, widely used to detect and assess various health conditions. Different MRI sequences, such as T1-weighted, T2-weighted, and FLAIR, serve distinct roles by highlighting different tissue characteristics and contrasts. However, distinguishing them based solely on the description file is currently impossible due to confusing or incorrect annotations. Additionally, there is a notable lack of effective tools to differentiate these sequences. In response, we developed a deep learning-based toolkit tailored for small, unrefined MRI datasets. This toolkit enables precise sequence classification and delivers performance comparable to systems trained on large, meticulously curated datasets. Utilizing lightweight model architectures and incorporating a voting ensemble method, the toolkit enhances accuracy and stability. It achieves a 99% accuracy rate using only 10% of the data typically required in other research. The code is available at https://github.com/JinqianPan/MRISeqClassifier.
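A minimal sketch of the soft-voting ensemble idea over lightweight backbones is shown below; the member architectures and the number of sequence classes are assumptions, and the authors' actual implementation is in the linked repository.

```python
# Sketch: soft-voting ensemble of lightweight image classifiers for MRI sequence labels.
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 4  # e.g. T1, T2, FLAIR, other (assumed label set)

def build_members(num_classes=NUM_CLASSES):
    m1 = models.mobilenet_v3_small(weights=None)
    m1.classifier[-1] = torch.nn.Linear(m1.classifier[-1].in_features, num_classes)
    m2 = models.resnet18(weights=None)
    m2.fc = torch.nn.Linear(m2.fc.in_features, num_classes)
    return [m1, m2]

@torch.no_grad()
def ensemble_predict(members, images):
    """Average the softmax outputs of all members (soft voting), then take argmax."""
    probs = torch.stack([F.softmax(m(images), dim=1) for m in members])
    return probs.mean(dim=0).argmax(dim=1)

members = [m.eval() for m in build_members()]
dummy = torch.randn(2, 3, 224, 224)          # batch of 2 slices treated as RGB images
print(ensemble_predict(members, dummy))
```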

{"title":"MRISeqClassifier: A Deep Learning Toolkit for Precise MRI Sequence Classification.","authors":"Jinqian Pan, Qi Chen, Chengkun Sun, Renjie Liang, Jiang Bian, Jie Xu","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Magnetic Resonance Imaging (MRI) is a crucial diagnostic tool in medicine, widely used to detect and assess various health conditions. Different MRI sequences, such as T1-weighted, T2-weighted, and FLAIR, serve distinct roles by highlighting different tissue characteristics and contrasts. However, distinguishing them based solely on the description file is currently impossible due to confusing or incorrect annotations. Additionally, there is a notable lack of effective tools to differentiate these sequences. In response, we developed a deep learning-based toolkit tailored for small, unrefined MRI datasets. This toolkit enables precise sequence classification and delivers performance comparable to systems trained on large, meticulously curated datasets. Utilizing lightweight model architectures and incorporating a voting ensemble method, the toolkit enhances accuracy and stability. It achieves a 99% accuracy rate using only 10% of the data typically required in other research. The code is available at https://github.com/JinqianPan/MRISeqClassifier.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"405-413"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150705/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Standardized Guideline for Assessing Extracted Electronic Health Records Cohorts: A Scoping Review.
Nattanit Songthangtham, Ratchada Jantraporn, Elizabeth Weinfurter, Gyorgy Simon, Wei Pan, Sripriya Rajamani, Steven G Johnson

Assessing how accurately a cohort extracted from Electronic Health Records (EHR) represents the intended target population, or cohort fitness, is critical but often overlooked in secondary EHR data use. This scoping review aimed to (1) identify guidelines for assessing cohort fitness and (2) determine their thoroughness by examining whether they offer sufficient detail and computable methods for researchers. This scoping review follows the JBI guidance for scoping reviews and is refined based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Searches were performed in Medline, Embase, and Scopus. From 1,904 results, 30 articles and 2 additional references were reviewed. Nine articles (28.13%) include a framework for evaluating cohort fitness, but only five (15.63%) contain sufficient details and quantitative methodologies. Overall, a more comprehensive guideline that provides best practices for measuring cohort fitness is still needed.

{"title":"A Standardized Guideline for Assessing Extracted Electronic Health Records Cohorts: A Scoping Review.","authors":"Nattanit Songthangtham, Ratchada Jantraporn, Elizabeth Weinfurter, Gyorgy Simon, Wei Pan, Sripriya Rajamani, Steven G Johnson","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Assessing how accurately a cohort extracted from Electronic Health Records (EHR) represents the intended target population, or cohort fitness, is critical but often overlooked in secondary EHR data use. This scoping review aimed to (1) identify guidelines for assessing cohort fitness and (2) determine their thoroughness by examining whether they offer sufficient detail and computable methods for researchers. This scoping review follows the JBI guidance for scoping reviews and is refined based on the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR) checklists. Searches were performed in Medline, Embase, and Scopus. From 1,904 results, 30 articles and 2 additional references were reviewed. Nine articles (28.13%) include a framework for evaluating cohort fitness but only 5 (15.63%) contain sufficient details and quantitative methodologies. Overall, a more comprehensive guideline that provides best practices for measuring the cohort fitness is still needed.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"527-536"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150730/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Feasibility Assessment of a Wearable App to Manage Symptoms of Postural Orthostatic Tachycardia Syndrome Using Real-Time Heart Rate Monitoring.
Aileen S Gabriel, Te-Yi Tsai, C Mahony Reategui-Rivera, Patricia Rocco, Aref Smiley, Clayton Powers, Jeanette P Brown, Joseph Finkelstein

Postural Tachycardia Syndrome (POTS) is a chronic condition characterized by orthostatic intolerance and a significant rise in heart rate upon standing. Patients often experience debilitating symptoms, such as brain fog and chronic fatigue, which hinder daily functioning. Non-pharmacological management strategies, particularly pacing, are crucial for reducing symptom fluctuations and improving quality of life. Heart rate monitoring plays a key role in effective pacing, enabling patients to plan activities and prevent severe symptom onset. Recent technological advancements have increased interest in wearable devices for managing chronic conditions. This study examines the feasibility of using wearable technology to support symptom management in POTS patients. Through an exploratory-descriptive qualitative approach, five key themes emerged, including personalized management strategies and the beneficial impact of real-time feedback. The findings suggest that wearable devices can enhance self-management, improve communication with healthcare providers, and empower patients to take a more proactive approach to their care.

{"title":"Feasibility Assessment of a Wearable App to Manage Symptoms of Postural Orthostatic Tachycardia Syndrome Using Real-Time Heart Rate Monitoring.","authors":"Aileen S Gabriel, Te-Yi Tsai, C Mahony Reategui-Rivera, Patricia Rocco, Aref Smiley, Clayton Powers, Jeanette P Brown, Joseph Finkelstein","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Postural Tachycardia Syndrome (POTS) is a chronic condition characterized by orthostatic intolerance and a significant rise in heart rate upon standing. Patients often experience debilitating symptoms, such as brain fog and chronic fatigue, which hinder daily functioning. Non-pharmacological management strategies, particularly pacing, are crucial for reducing symptom fluctuations and improving quality of life. Heart rate monitoring plays a key role in effective pacing, enabling patients to plan activities and prevent severe symptom onset. Recent technological advancements have increased interest in wearable devices for managing chronic conditions. This study examines the feasibility of using wearable technology to support symptom management in POTS patients. Through an Exploratory- Descriptive Qualitative approach, five key themes emerged, including personalized management strategies and the beneficial impact of real-time feedback. The findings suggest that wearable devices can enhance self-management, improve communication with healthcare providers, and empower patients to take a more proactive approach to their care.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"159-166"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150731/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
From Scanner to Science: Reusing Clinically Acquired Medical Images for Research.
Jenna M Schabdach, Remo M S Williams, Joseph Logan, Viveknarayanan Padmanabhan, Russell D'Aiello Iii, Johnny Mclaughlin, Alexander Gonzalez, Edward M Krause, Gregory E Tasian, Susan Sotardi, Aaron F Alexander-Bloch

Growth in the field of medical imaging research has revealed a need for larger volume and variety in available data. This need could be met using curated clinically acquired data, but the process for getting this data from the scanners to the scientists is complex and lengthy. We present a manifest-driven modular Extract, Transform, and Load (ETL) process named Locutus designed to appropriately handle difficulties present in the process of reusing clinically acquired medical imaging data. The design of Locutus was based on four foundational assumptions about medical data, research data, and communication. All parts of a workflow must communicate with each other and be adaptable to unique data delivery requests. In addition, the workflow must be robust to possible errors and uncertainties in clinically-acquired data, which may require human intervention to resolve. With these assumptions in mind, Locutus presents a five-phase workflow for downloading, deidentifying, and delivering unique requests for imaging data. The phases include initialization, data preparation, extraction of data from the research server to a pre-deidentification data warehouse, transformation into deidentified space, and loading into a post-deidentification data warehouse. To date, this workflow has been used to process 32,962 imaging accessions for research use. This number is expected to grow as technical challenges are addressed and the role of humans is expected to shift from frequent intervention to regular monitoring.
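A minimal sketch of a manifest-driven, phased workflow in the spirit of Locutus is shown below; the manifest fields and phase bodies are hypothetical placeholders that only mirror the five phases named above, not the actual Locutus code.

```python
# Sketch: manifest-driven five-phase pipeline with a stop-for-review failure path.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Manifest:
    request_id: str
    accessions: List[str]
    destination: str
    log: List[str] = field(default_factory=list)

def initialize(m: Manifest) -> None:
    m.log.append(f"initialized request {m.request_id}")

def prepare_data(m: Manifest) -> None:
    m.log.append(f"validated {len(m.accessions)} accessions against the manifest")

def extract(m: Manifest) -> None:
    m.log.append("copied studies into the pre-deidentification data warehouse")

def transform(m: Manifest) -> None:
    m.log.append("deidentified headers and pixel data")

def load(m: Manifest) -> None:
    m.log.append(f"loaded deidentified studies into {m.destination}")

PHASES: List[Callable[[Manifest], None]] = [initialize, prepare_data, extract, transform, load]

def run(manifest: Manifest) -> Manifest:
    """Run the five phases in order; a failure stops the run for human review."""
    for phase in PHASES:
        try:
            phase(manifest)
        except Exception as err:  # a real system would queue the request for manual review
            manifest.log.append(f"{phase.__name__} failed: {err}")
            break
    return manifest

print(run(Manifest("REQ-001", ["ACC123", "ACC456"], "/research/project_a")).log)
```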

{"title":"From Scanner to Science: Reusing Clinically Acquired Medical Images for Research.","authors":"Jenna M Schabdach, Remo M S Williams, Joseph Logan, Viveknarayanan Padmanabhan, Russell D'Aiello Iii, Johnny Mclaughlin, Alexander Gonzalez, Edward M Krause, Gregory E Tasian, Susan Sotardi, Aaron F Alexander-Bloch","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Growth in the field of medical imaging research has revealed a need for larger volume and variety in available data. This need could be met using curated clinically acquired data, but the process for getting this data from the scanners to the scientists is complex and lengthy. We present a manifest-driven modular Extract, Transform, and Load (ETL) process named Locutus designed to appropriately handle difficulties present in the process of reusing clinically acquired medical imaging data. The design of Locutus was based on four foundational assumptions about medical data, research data, and communication. All parts of a workflow must communicate with each other and be adaptable to unique data delivery requests. In addition, the workflow must be robust to possible errors and uncertainties in clinically-acquired data, which may require human intervention to resolve. With these assumptions in mind,Locutus presents a five-phase workflow for downloading, deidentifying, and delivering unique requests for imaging data. The phases include initialization, data preparation, extraction of data from the research server to a pre-deidentification data warehouse, transformation into deidentified space, and loading into post-deidentification data warehouse. To date, this workflow has been used to process 32,962 imaging accessions for research use. This number is expected to grow as technical challenges are addressed and the role of humans is expected to shift from frequent intervention to regular monitoring.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"471-480"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150695/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Identifying Opioid Overdose and Opioid Use Disorder and Related Information from Clinical Narratives Using Large Language Models.
Daniel Paredes, Sankalp Talankar, Cheng Peng, Patrick Balian, Motomoti Lewis, Shunhun Yan, Wen-Shan Tsai PharmD, Ching-Yuan Chang, Debbie L Wilson, Wei-Hsuan Lo-Ciganic, Yonghui Wu

Opioid overdose and opioid use disorder (OUD) remain a growing public health issue in the United States, affecting 6.1 million individuals in 2022, more than doubling the 2.5 million from 2021. Accurately identifying opioid overdose and OUD related information is critical to study the outcomes and develop interventions. This study aims to identify opioid overdose and OUD mentions and their related information from clinical narratives. We compared encoder-based large language models (LLMs) and decoder-based generative LLMs in extracting nine crucial concepts related to opioid overdose and OUD, including problematic opioid use. Through a cost-effective p-tuning algorithm, our decoder-based generative LLM, GatorTronGPT, achieved the best strict/lenient F1-scores of 0.8637 and 0.9057, demonstrating the efficiency of using generative LLMs for opioid overdose/OUD related information extraction. This study provided a tool to systematically extract opioid overdose, OUD, and their related information to facilitate opioid-related studies using clinical narratives.
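For intuition, the sketch below frames clinical concept extraction as token classification with a generic encoder, which is the general shape of the encoder-based baselines; the label set and checkpoint are placeholders, and the paper's GatorTron models and p-tuning setup are not reproduced here.

```python
# Sketch: token-classification style concept extraction with a generic encoder.
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

LABELS = ["O", "B-OpioidOverdose", "I-OpioidOverdose", "B-OUD", "I-OUD"]  # assumed subset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # classification head is untrained: fine-tuning on annotated notes is required before real use

text = "Patient admitted after suspected opioid overdose; history of opioid use disorder."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, [LABELS[i] for i in pred_ids])))
```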

{"title":"Identifying Opioid Overdose and Opioid Use Disorder and Related Information from Clinical Narratives Using Large Language Models.","authors":"Daniel Paredes, Sankalp Talankar, Cheng Peng, Patrick Balian, Motomoti Lewis, Shunhun Yan, Wen-Shan Tsai PharmD, Ching-Yuan Chang, Debbie L Wilson, Wei-Hsuan Lo-Ciganic, Yonghui Wu","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Opioid overdose and opioid use disorder (OUD) remain a growing public health issue in the United States, affecting 6.1 million individuals in 2022, more than doubling the 2.5 million from 2021. Accurately identifying the opioid overdose and OUD related information is critical to study the outcomes and develop interventions. This study aims to identify opioid overdose and OUD mentions and their related information from clinical narratives. We compared encoder-based large language models (LLMs) and decoder-based generative LLMs in extracting nine crucial concepts related with opioid overdose and OUD including problematic opioid use. Through a cost-effective p-tuning algorithm, our decoder-based generative LLM, GatorTronGPT, achieved the best strict/lenient F1-score of 0.8637, and 0.9057, demonstrating the efficient of using generative LLMs for opioid overdose/OUD related information extraction. This study provided a tool to systematically extract opioid overdose, OUD, and their related information to facilitate opioid-related studies using clinical narratives.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"414-421"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150707/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Content Analysis of Over-the-Counter Hearing Aid Reviews.
Alisa Stolyar, Jamie Katz, Catherine Dymowski, Tierney Lyons, Aravind Parthasarathy, Hari Bharadwaj, Elaine Mormer, Catherine Palmer, Yanshan Wang

Hearing loss is a prevalent and impactful condition that affects millions globally. In 2022, the U.S. Food and Drug Administration (FDA) approved over-the-counter (OTC) hearing aids for individuals with mild to moderate hearing loss, establishing a distinct category separate from prescription hearing aids. This regulatory change may leave some patients, particularly those unfamiliar with hearing aids, without medical guidance in their decision-making process. To address this, our team developed the CLEARdashboard (Consumer Led Evidence - Amplification Resources dashboard) as an educational platform to assist users in comparing the technical specifications of various OTC hearing aids. In this study, we proposed a new key feature for the CLEARdashboard that utilizes Natural Language Processing (NLP) methods to analyze product reviews from two prominent hearing aid online retailers. Analyzing product reviews using NLP is particularly helpful because these reviews often contain detailed, real-world insights into the performance and usability of hearing aids that may not be captured in technical specifications alone. We used NLP techniques in the automatic summarization of large volumes of user feedback into concise "pros and cons" lists, providing patients with a clearer understanding of the strengths and limitations of each device. This approach saves patients from manually sifting through extensive reviews and helps them make informed choices based on aggregated consumer experiences. The generated summaries were validated by three human evaluators to ensure the most comprehensive and reliable method of presenting this information, enhancing the decision-making process for individuals selecting OTC hearing aids.
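As a toy illustration of aggregating review text into "pros and cons" style lists, the sketch below uses a tiny hand-made sentiment lexicon; the study's actual NLP summarization models are not reproduced here.

```python
# Toy sketch: split review sentences into pros and cons with a small keyword lexicon.
import re

POSITIVE = {"great", "clear", "comfortable", "easy", "love", "affordable"}
NEGATIVE = {"whistling", "uncomfortable", "difficult", "broke", "returned", "muffled"}

def pros_and_cons(reviews):
    pros, cons = [], []
    for review in reviews:
        for sentence in re.split(r"(?<=[.!?])\s+", review.strip()):
            words = set(re.findall(r"[a-z']+", sentence.lower()))
            if words & POSITIVE:
                pros.append(sentence)
            if words & NEGATIVE:
                cons.append(sentence)
    return pros, cons

reviews = [
    "Sound is clear and setup was easy. A bit uncomfortable after a few hours.",
    "Affordable, but I returned mine because of constant whistling.",
]
pros, cons = pros_and_cons(reviews)
print("Pros:", pros)
print("Cons:", cons)
```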

{"title":"Content Analysis of Over-the-Counter Hearing Aid Reviews.","authors":"Alisa Stolyar, Jamie Katz, Catherine Dymowski, Tierney Lyons, Aravind Parthasarathy, Hari Bharadwaj, Elaine Mormer, Catherine Palmer, Yanshan Wang","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Hearing loss is a prevalent and impactful condition that affects millions globally. In 2022, the U.S. Food and Drug Administration (FDA) approved over-the-counter (OTC) hearing aids for individuals with mild to moderate hearing loss, establishing a distinct category separate from prescription hearing aids. This regulatory change may leave some patients, particularly those unfamiliar with hearing aids, without medical guidance in their decision-making process. To address this, our team developed the CLEARdashboard (Consumer Led Evidence - Amplification Resources dashboard) as an educational platform to assist users in comparing the technical specifications of various OTC hearing aids. In this study, we proposed a new key feature on the CLEARdashboard that is to utilize Natural Language Processing (NLP) methods to analyze product reviews from two prominent hearing aid online retailers. Analyzing product reviews using NLP is particularly helpful because these reviews often contain detailed, real-world insights into the performance and usability of hearing aids that may not be captured in technical specifications alone. We used NLP techniques in the automatic summarization of large volumes of user feedback into concise \"pros and cons\" lists, providing patients with a clearer understanding of the strengths and limitations of each device. This approach saves patients from manually sifting through extensive reviews and helps them make informed choices based on aggregated consumer experiences. The generated summaries were validated by three human evaluators to ensure the most comprehensive and reliable method of presenting this information, enhancing the decision-making process for individuals selecting OTC hearing aids.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"546-555"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150702/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Empowering Precision Medicine for Rare Diseases through Cloud Infrastructure Refactoring.
Hui Li, Jinlian Wang, Hongfang Liu

Rare diseases affect approximately 1 in 11 Americans, yet their diagnosis remains challenging due to limited clinical evidence, low awareness, and lack of definitive treatments. Our project aims to accelerate rare disease diagnosis by developing a comprehensive informatics framework leveraging data mining, semantic web technologies, deep learning, and graph-based embedding techniques. However, our on-premises computational infrastructure faces significant challenges in scalability, maintenance, and collaboration. This study focuses on developing and evaluating a cloud-based computing infrastructure to address these challenges. By migrating to a scalable, secure, and collaborative cloud environment, we aim to enhance data integration, support advanced predictive modeling for differential diagnoses, and facilitate widespread dissemination of research findings to stakeholders, the research community, and the public. We also propose a reliable, standardized migration workflow designed to ensure minimal disruption and maintain data integrity for existing research projects.

{"title":"Empowering Precision Medicine for Rare Diseases through Cloud Infrastructure Refactoring.","authors":"Hui Li, Jinlian Wang, Hongfang Liu","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Rare diseases affect approximately 1 in 11 Americans, yet their diagnosis remains challenging due to limited clinical evidence, low awareness, and lack of definitive treatments. Our project aims to accelerate rare disease diagnosis by developing a comprehensive informatics framework leveraging data mining, semantic web technologies, deep learning, and graph-based embedding techniques. However, our on-premises computational infrastructure faces significant challenges in scalability, maintenance, and collaboration. This study focuses on developing and evaluating a cloud-based computing infrastructure to address these challenges. By migrating to a scalable, secure, and collaborative cloud environment, we aim to enhance data integration, support advanced predictive modeling for differential diagnoses, and facilitate widespread dissemination of research findings to stakeholders, the research community, and the public and also proposed a facilitated through a reliable, standardized workflow designed to ensure minimal disruption and maintain data integrity for existing research project.</p>","PeriodicalId":72181,"journal":{"name":"AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science","volume":"2025 ","pages":"300-311"},"PeriodicalIF":0.0,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12150693/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144276866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0