Blake Lesselroth, Helen Monkman, Ryan Palmer, Craig Kuziemsky, Andrew Liew, Kristin Foulks, Deirdra Kelly, Ainsly Wolfinbarger, Frances Wen, Liz Kollaja, Shannon Ijams, Juell Homco
In 2021, the Association of American Medical Colleges (AAMC) published Telehealth Competencies Across the Learning Continuum, a roadmap for designing telemedicine curricula and evaluating learners. While this document advances educators' shared understanding of telemedicine's core content and performance expectations, it does not include turnkey evaluation instruments. At the University of Oklahoma School of Community Medicine, we developed a year-long telemedicine curriculum for third-year medical and second-year physician assistant students. We used the AAMC framework to create program objectives and instructional simulations, and we designed and piloted an assessment rubric for eight AAMC competencies to accompany the simulations. In this monograph, we describe the rubric development, scores for students participating in the simulations, and results comparing inter-rater reliability between faculty and standardized patient evaluators. Our preliminary work suggests that the rubric gives faculty a practical method for evaluating learners during telemedicine simulations. We also identified opportunities for additional reliability and validity testing.
"Assessing Telemedicine Competencies: Developing and Validating Learner Measures for Simulation-Based Telemedicine Training." AMIA Annual Symposium Proceedings 2023:474-483. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785836/pdf/
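The faculty-versus-standardized-patient comparison above is typically quantified with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch, using invented rubric ratings (the labels and scores are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters labelled independently at their base rates.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric ratings from a faculty rater and a standardized-patient rater.
faculty = ["meets", "meets", "below", "meets", "exceeds", "below"]
sp      = ["meets", "below", "below", "meets", "exceeds", "meets"]
print(round(cohens_kappa(faculty, sp), 3))  # → 0.455
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate chance-level agreement.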
Natural Language Processing (NLP) methods have been broadly applied to clinical tasks. Machine learning and deep learning approaches have been used to improve the performance of clinical NLP. However, these approaches require sufficiently large datasets for training, and trained models have been shown to transfer poorly across sites. These issues have led to the promotion of data collection and integration across different institutions to build accurate and portable models. However, pooling data can introduce a form of bias called confounding by provenance: when source-specific data distributions differ at deployment, model performance may suffer. To address this issue, we evaluate the utility of backdoor adjustment for text classification in a multi-site dataset of clinical notes annotated for mentions of substance abuse, using an evaluation framework devised to measure robustness to distributional shifts. Our results indicate that backdoor adjustment can effectively mitigate confounding shift.
Xiruo Ding, Zhecheng Sheng, Meliha Yetişgen, Serguei Pakhomov, Trevor Cohen. "Backdoor Adjustment of Confounding by Provenance for Robust Text Classification of Multi-institutional Clinical Notes." AMIA Annual Symposium Proceedings 2023:923-932. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785933/pdf/
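Conceptually, backdoor adjustment replaces the source-conditioned prediction P(y | x, s) with the marginal Σs P(y | x, s)·P(s), so the deployment-time site mix no longer confounds the prediction. A toy sketch with invented per-site probabilities (not the paper's models):

```python
# Toy backdoor adjustment: marginalize site-specific predictions over the
# training-time site prior instead of conditioning on the observed site.
def backdoor_adjust(p_y_given_x_s, p_s):
    """p_y_given_x_s: {site: P(y=1 | x, site)}, p_s: {site: P(site)}."""
    return sum(p_y_given_x_s[s] * p_s[s] for s in p_s)

# Hypothetical: a note that looks positive under site A's conventions
# but negative under site B's.
p_y_given_x_s = {"site_A": 0.9, "site_B": 0.2}
p_s = {"site_A": 0.5, "site_B": 0.5}  # site prior from the pooled training data
print(round(backdoor_adjust(p_y_given_x_s, p_s), 2))  # → 0.55
```

The adjusted score is stable even if the proportion of notes arriving from each site changes after deployment.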
Kurt Miller, Sungrim Moon, Sunyang Fu, Hongfang Liu
The structure and semantics of clinical notes vary considerably across different Electronic Health Record (EHR) systems, sites, and institutions. Such heterogeneity hampers the portability of natural language processing (NLP) models that extract information from text for clinical research or practice. In this study, we evaluate the contextual variation of clinical notes by measuring the semantic and syntactic similarity of notes written by two sets of physicians spanning four medical specialties across EHR migrations at two Mayo Clinic sites. We find significant semantic and syntactic variation imposed by the EHR system and by medical specialty, whereas variation in spatial context across sites causes only minor differences. Our findings suggest that clinical language models need to account for process differences at the specialty-sublanguage level to be generalizable.
"Contextual Variation of Clinical Notes induced by EHR Migration." AMIA Annual Symposium Proceedings 2023:1155-1164. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785835/pdf/
Md Selim, Jie Zhang, Michael A Brooks, Ge Wang, Jin Chen
Computed tomography (CT) is a key modality for lung cancer screening, diagnosis, treatment, and prognosis. Features extracted from CT images are now used to quantify spatial and temporal variations in tumors. However, CT images obtained from different scanners with customized acquisition protocols may exhibit considerable variation in texture features, even for the same patient. This presents a fundamental challenge to downstream studies that require consistent and reliable feature analysis. Existing CT image harmonization models rely on GAN-based supervised or semi-supervised learning, with limited performance. This work addresses CT image harmonization with a new diffusion-based model, named DiffusionCT, that standardizes CT images acquired from different vendors and protocols. DiffusionCT operates in the latent space by mapping a latent non-standard distribution into a standard one. It incorporates a U-Net-based encoder-decoder, augmented by a diffusion model integrated into the bottleneck. The model is trained in two phases: the encoder-decoder is first trained, without the diffusion model, to learn the latent representation of the input data; the latent diffusion model is then trained while the encoder-decoder is kept fixed. Finally, the decoder synthesizes a standardized image from the transformed latent representation. Experimental results demonstrate a significant improvement in standardization performance with DiffusionCT.
"DiffusionCT: Latent Diffusion Model for CT Image Standardization." AMIA Annual Symposium Proceedings 2023:624-633. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785850/pdf/
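DiffusionCT learns its latent mapping with a diffusion model, but the underlying idea of mapping a non-standard distribution onto a standard target can be illustrated with simple moment matching. Everything below is a toy stand-in, not the authors' architecture:

```python
import math

def standardize_latents(latents, target_mean=0.0, target_std=1.0):
    """Map a 1-D 'latent' sample to a target mean/std via moment matching.
    A toy stand-in for the latent-space mapping a diffusion model would learn."""
    n = len(latents)
    mean = sum(latents) / n
    std = math.sqrt(sum((z - mean) ** 2 for z in latents) / n)
    return [(z - mean) / std * target_std + target_mean for z in latents]

# Hypothetical latent codes from a non-standard scanner protocol.
latents = [3.2, 4.1, 2.8, 5.0, 3.9]
mapped = standardize_latents(latents)
# The mapped sample now has (approximately) zero mean and unit variance.
print(abs(sum(mapped) / len(mapped)) < 1e-9)  # → True
```

A learned diffusion mapping can match the full distribution rather than just its first two moments, which is what makes it attractive for texture harmonization.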
We present a method to enrich a controlled medication terminology from free-text drug labels. This is important because, while controlled medication terminologies capture well-structured medication information, much of the information pertaining to medications is still found only in free text. First, we compared Named Entity Recognition (NER) models, including rule-based, feature-based, and Transformer-based deep learning models, as well as ChatGPT and few-shot and fine-tuned GPT-3, to find the model that most accurately extracts medication entities (ingredient, brand, dose, etc.) from free text. A rule-based Relation Extraction algorithm then transforms the NER results into a well-structured medication knowledge graph. Finally, a Medication Searching method matches the knowledge graph to relevant medications in the terminology server. An empirical evaluation on real-world drug labels shows that BERT-CRF was the most effective NER model, with an F-measure of 95%. After term normalization, Medication Searching achieved an accuracy of 77% when matching a label to the relevant medication in the terminology server. The NER and Medication Searching models could be deployed as a web service that accepts free-text queries and returns structured medication information, providing a useful means of better managing medication information found in different health systems.
Duy-Hoa Ngo, Bevan Koopman. "From Free-text Drug Labels to Structured Medication Terminology with BERT and GPT." AMIA Annual Symposium Proceedings 2023:540-549. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785872/pdf/
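The rule-based end of such a pipeline can be sketched with regular expressions: extract entities from a label, build a small structured record, and match it against terminology entries. The label pattern, field names, and codes below are invented for illustration:

```python
import re

# Toy rule-based NER for drug labels: ingredient, strength, and dose form.
# The pattern and vocabulary are invented, not the paper's models.
LABEL_RE = re.compile(
    r"(?P<ingredient>[a-z]+)\s+(?P<strength>\d+\s*mg)\s+(?P<form>tablet|capsule|solution)"
)

def parse_label(label):
    """Extract a flat entity record from a free-text label, or None."""
    m = LABEL_RE.search(label.lower())
    return m.groupdict() if m else None

def match_terminology(entities, terminology):
    """Return codes whose attributes all match the extracted entities."""
    return [code for code, attrs in terminology.items()
            if all(attrs.get(k) == v for k, v in entities.items())]

# Hypothetical terminology-server entries.
terminology = {
    "RX001": {"ingredient": "paracetamol", "strength": "500 mg", "form": "tablet"},
    "RX002": {"ingredient": "paracetamol", "strength": "500 mg", "form": "capsule"},
}
entities = parse_label("Paracetamol 500 mg tablet, 20 pack")
print(entities, match_terminology(entities, terminology))
```

In the paper this extraction step is done by BERT-CRF; the matching step is where term normalization against the terminology server comes in.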
Jie Xu, Rui Yin, Yu Huang, Hannah Gao, Yonghui Wu, Jingchuan Guo, Glenn E Smith, Steven T DeKosky, Fei Wang, Yi Guo, Jiang Bian
Alzheimer's disease (AD) is a complex, heterogeneous neurodegenerative disease that requires an in-depth understanding of its progression pathways and contributing factors to develop effective risk stratification and prevention strategies. In this study, we proposed an outcome-oriented model to identify progression pathways from mild cognitive impairment (MCI) to AD using electronic health records (EHRs) from the OneFlorida+ Clinical Research Consortium. We employed a long short-term memory (LSTM) network to extract relevant information from each patient's sequential records, then applied hierarchical agglomerative clustering to the learned representations to group patients by progression subtype. Our approach identified multiple progression pathways, each representing a distinct pattern of disease progression from MCI to AD. These pathways can serve as a valuable resource for researchers seeking to understand the factors influencing AD progression and to develop personalized interventions that delay or prevent disease onset.
"Identification of Outcome-Oriented Progression Subtypes from Mild Cognitive Impairment to Alzheimer's Disease Using Electronic Health Records." AMIA Annual Symposium Proceedings 2023:764-773. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785946/pdf/
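The clustering step can be illustrated in isolation: given fixed-length patient representations (here invented 2-D points standing in for LSTM embeddings), single-linkage agglomerative clustering repeatedly merges the two closest clusters until the desired number remains. A minimal sketch:

```python
import math

def agglomerative_cluster(points, n_clusters):
    """Single-linkage agglomerative clustering on a small point set.
    Returns clusters as sorted lists of point indices."""
    clusters = [[i] for i in range(len(points))]

    def linkage(a, b):
        # Single linkage: distance of the closest pair across the two clusters.
        return min(math.dist(points[i], points[j]) for i in a for j in b)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

# Invented 2-D patient representations (stand-ins for learned embeddings):
# two tight groups plus one in-between point.
reps = [(0.1, 0.2), (0.15, 0.22), (0.9, 0.8), (0.95, 0.85), (0.5, 0.1)]
print(agglomerative_cluster(reps, 2))  # → [[0, 1, 4], [2, 3]]
```

Real implementations (e.g. SciPy's hierarchy module) are far more efficient, but the merge logic is the same.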
Jennifer Withall, Mai Tran, Bobby Schroeder, Rachel Lee, Amanda Moy, Syed Mohtashim Abbas Bokhari, Kenrick Cato, Sarah Rossetti
Clinical end-users of the electronic health record experience documentation burden, to which flowsheet measure reuse and clinical concept redundancy both contribute. In this paper, we describe the nursing flowsheet documentation hierarchy and one month of documentation frequency at two hospitals in our health system, examining respiratory care management documentation in greater detail. We found 59 instances in which respiratory care flowsheet measure fields were reused across two or more templates and groups, and 5 instances of clinical concept redundancy. Flowsheet measure fields for physical assessment observations and measurements were the most frequently documented and most reused, whereas respiratory intervention documentation was reused less often. Further research should investigate the relationship between flowsheet measure reuse and redundancy and EHR information overload and documentation burden.
"Identifying Reuse and Redundancies in Respiratory Flowsheet Documentation: Implications for Clinician Documentation Burden." AMIA Annual Symposium Proceedings 2023:1297-1303. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785890/pdf/
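Reuse detection of this kind reduces to grouping flowsheet rows by field and flagging fields that appear under two or more templates. A minimal sketch with invented field and template names:

```python
from collections import defaultdict

# Toy reuse detection: a flowsheet measure field counts as "reused" when it
# appears in two or more templates/groups. All names below are invented.
rows = [
    ("resp_rate", "ICU Respiratory"), ("resp_rate", "Med-Surg Vitals"),
    ("o2_device", "ICU Respiratory"), ("o2_device", "Oxygen Therapy"),
    ("cough_quality", "ICU Respiratory"),
]

templates_by_field = defaultdict(set)
for field, template in rows:
    templates_by_field[field].add(template)

reused = {f: sorted(t) for f, t in templates_by_field.items() if len(t) >= 2}
print(sorted(reused))  # → ['o2_device', 'resp_rate']
```

Concept redundancy is the harder half of the problem, since it requires recognizing that two differently named fields capture the same clinical concept.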
Savitha Sangameswaran, Reggie Casanova-Perez, Harsh Patel, David J Cronkite, Ayah Idris, Dori E Rosenberg, Jonathan L Wright, John L Gore, Andrea L Hartzler
Physical activity is important for prostate cancer survivors. Yet survivors face significant barriers to traditional structured exercise programs, limiting engagement and impact. Digital programs that incorporate fitness trackers and peer support via social media have the potential to improve the reach and impact of traditional support. Using a digital walking program with prostate cancer survivors, we employed mixed methods to assess program outcomes, engagement, perceived utility, and social influence. After 6 weeks of program use, survivors and loved ones (n=18) significantly increased their average daily step count. Although engagement with and perceived utility of the fitness tracker and walking buddies were high, social media engagement and utility were limited. Group strategies associated with social influence were driven more by group attraction to the collective task of walking than by interpersonal bonds. Findings demonstrate the feasibility of a digital walking program to improve physical activity and extend the reach of traditional support.
"Improving physical activity among prostate cancer survivors through a peer-based digital walking program." AMIA Annual Symposium Proceedings 2023:608-617. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785891/pdf/
Xu Zuo, Yujia Zhou, Jon Duke, George Hripcsak, Nigam Shah, Juan M Banda, Ruth Reeves, Timothy Miller, Lemuel R Waitman, Karthik Natarajan, Hua Xu
Clinical notes in electronic health records (EHRs) come in diverse types, and standardizing them is needed to ensure unified data retrieval, exchange, and integration. The LOINC Document Ontology (DO) is a subset of LOINC created specifically for naming and describing clinical documents. Despite efforts to promote and improve this ontology, how to deploy it efficiently in real-world clinical settings has yet to be explored. In this study, we evaluated the utility of the LOINC DO by mapping clinical note titles collected from five institutions to LOINC DO codes and classifying each mapping into one of three classes based on the semantic similarity between note title and code. Additionally, we developed a standardization pipeline that automatically maps clinical note titles from multiple sites to suitable LOINC DO codes without accessing the content of the notes. The pipeline can be initialized with different large language models, and we compared their performance. Our automated pipeline achieved an accuracy of 0.90. By comparing the manual and automated mapping results, we analyzed the coverage of the LOINC DO in describing multi-site clinical note titles and summarized the potential scope for extension.
"Standardizing Multi-site Clinical Note Titles to LOINC Document Ontology: A Transformer-based Approach." AMIA Annual Symposium Proceedings 2023:834-843. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785935/pdf/
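A lexical baseline for such title-to-code mapping can be sketched with token-set overlap: score each candidate code's display name against the title and bucket the best match into exact / partial / unmapped classes. The thresholds, titles, and codes below are invented (the paper uses semantic similarity with language models, not this heuristic):

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def map_title(title, do_codes, exact=0.99, partial=0.4):
    """Map a note title to the best-scoring code and a match class."""
    code, name = max(do_codes.items(), key=lambda kv: jaccard(title, kv[1]))
    score = jaccard(title, name)
    if score >= exact:
        label = "exact"
    elif score >= partial:
        label = "partial"
    else:
        label = "unmapped"
    return code, label

# Invented stand-ins for document-ontology display names (not real LOINC codes).
do_codes = {"X1": "discharge summary", "X2": "cardiology consult note"}
print(map_title("Discharge Summary", do_codes))        # → ('X1', 'exact')
print(map_title("Cardiology Clinic Note", do_codes))   # → ('X2', 'partial')
```

A baseline like this is useful mainly for triage: titles landing in the "unmapped" bucket are the candidates for extending the ontology.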
Yumeng Yang, Soumya Jayaraj, Ethan Ludmir, Kirk Roberts
Automatic identification of clinical trials for which a patient is eligible is complicated by the fact that eligibility criteria are stated in natural language. A potential solution is to employ text classification methods for common types of eligibility criteria. In this study, we focus on seven common exclusion criteria in cancer trials: prior malignancy, human immunodeficiency virus, hepatitis B, hepatitis C, psychiatric illness, drug/substance abuse, and autoimmune illness. Our dataset consists of 764 phase III cancer trials with these exclusions annotated at the trial level. We experiment with common transformer models as well as a new pre-trained clinical-trial BERT model. Our results demonstrate the feasibility of automatically classifying common exclusion criteria. Additionally, we demonstrate the value of a language model pre-trained specifically on clinical trials, which yields the highest average performance across all criteria.
"Text Classification of Cancer Clinical Trial Eligibility Criteria." AMIA Annual Symposium Proceedings 2023:1304-1313. Published 2024-01-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785908/pdf/
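A keyword baseline for the seven exclusion criteria can be sketched with regular expressions; the patterns below are illustrative stand-ins, not the paper's transformer models:

```python
import re

# Toy keyword baseline for trial-level exclusion-criteria classification.
PATTERNS = {
    "prior_malignancy": r"prior (malignancy|cancer)",
    "hiv": r"\bHIV\b|human immunodeficiency",
    "hepatitis_b": r"hepatitis B",
    "hepatitis_c": r"hepatitis C",
    "psychiatric": r"psychiatric",
    "substance_abuse": r"(drug|substance) abuse",
    "autoimmune": r"autoimmune",
}

def classify_exclusions(criteria_text):
    """Return the sorted list of exclusion labels whose pattern matches."""
    return sorted(label for label, pat in PATTERNS.items()
                  if re.search(pat, criteria_text, flags=re.IGNORECASE))

text = "Exclusions: known HIV infection, active hepatitis B, history of substance abuse."
print(classify_exclusions(text))  # → ['hepatitis_b', 'hiv', 'substance_abuse']
```

Baselines like this typically miss paraphrases and negations ("no history of prior malignancy"), which is precisely where trained language models earn their keep.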