Identifying stigmatizing and positive/preferred language in obstetric clinical notes using natural language processing.

IF 4.7 · Medicine, JCR Zone 2 · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Journal of the American Medical Informatics Association · Pub Date: 2024-11-21 · DOI: 10.1093/jamia/ocae290
Jihye Kim Scroggins, Ismael I Hulchafo, Sarah Harkins, Danielle Scharp, Hans Moen, Anahita Davoudi, Kenrick Cato, Michele Tadiello, Maxim Topaz, Veronica Barcelona
Citations: 0

Abstract

Identifying stigmatizing and positive/preferred language in obstetric clinical notes using natural language processing.

Objective: To identify stigmatizing language in obstetric clinical notes using natural language processing (NLP).

Materials and methods: We analyzed electronic health records from birth admissions in the Northeast United States in 2017. We annotated 1771 clinical notes to generate the initial gold standard dataset. Annotators labeled exemplars across 5 stigmatizing and 1 positive/preferred language categories. We used a semantic similarity-based search approach to expand the initial dataset with additional exemplars, composing an enhanced dataset. We employed traditional classifiers (Support Vector Machine, Decision Trees, and Random Forest) and transformer-based models, ClinicalBERT and BERT base (Bidirectional Encoder Representations from Transformers). Models were trained and validated on the initial and enhanced datasets and were tested on the enhanced testing dataset.
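The semantic similarity-based expansion described above can be pictured as follows. This is an illustrative sketch, not the authors' implementation: the exemplar sentences, the threshold, and the bag-of-words cosine similarity are all assumptions for the sake of a runnable example; a real pipeline of this kind would typically use sentence embeddings from a clinical language model rather than word counts.

```python
# Illustrative sketch (not the study's code): expanding a labeled dataset by
# searching unlabeled text for sentences semantically close to annotated
# exemplars, then proposing those matches as new annotation candidates.
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two sentences' word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical annotated exemplars and unlabeled note sentences
exemplars = ["patient refused medication", "patient is noncompliant"]
unlabeled = [
    "patient refused all medication today",
    "patient tolerated the procedure well",
    "patient noncompliant with medication",
]

# Propose unlabeled sentences whose best exemplar match clears a threshold
threshold = 0.5
candidates = [
    u for u in unlabeled
    if max(cosine(u, e) for e in exemplars) >= threshold
]
print(candidates)
```

The proposed candidates would then go back to human annotators for confirmation, which is how this kind of search reduces, rather than replaces, manual annotation effort.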

Results: In the initial dataset, we annotated 963 exemplars as stigmatizing or positive/preferred. The most frequently identified category was marginalized language/identities (n = 397, 41%), and the least frequent was questioning patient credibility (n = 51, 5%). After employing a semantic similarity-based search approach, 502 additional exemplars were added, increasing the number of exemplars in low-frequency categories. All NLP models also showed improved performance, with Decision Trees demonstrating the greatest improvement (21%). ClinicalBERT outperformed other models, with the highest average F1-score of 0.78.
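The F1-scores reported above are, per category, the harmonic mean of precision and recall, averaged across categories. A minimal sketch of that computation, using made-up labels (not the study's data or category set):

```python
# Illustrative only: per-class F1 and its macro average, the kind of metric
# summarized in the Results. Labels here are invented for demonstration.
def f1_per_class(y_true, y_pred):
    scores = {}
    for c in set(y_true) | set(y_pred):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # F1 is the harmonic mean of precision and recall
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

y_true = ["stigmatizing", "stigmatizing", "preferred", "preferred"]
y_pred = ["stigmatizing", "preferred", "preferred", "preferred"]

scores = f1_per_class(y_true, y_pred)
macro_f1 = sum(scores.values()) / len(scores)
```

Averaging per-category F1 (macro averaging) weights rare categories such as "questioning patient credibility" equally with frequent ones, which matters when, as here, class frequencies are highly imbalanced.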

Discussion: ClinicalBERT seems to most effectively capture the nuanced, context-dependent stigmatizing language found in obstetric clinical notes, demonstrating its potential clinical application in real-time monitoring and alerts to prevent the use of stigmatizing language and reduce healthcare bias. Future research should explore stigmatizing language in diverse geographic locations and clinical settings to further contribute to high-quality and equitable perinatal care.

Conclusion: ClinicalBERT effectively captures the nuanced stigmatizing language in obstetric clinical notes. Our semantic similarity-based search approach to rapidly extract additional exemplars enhanced model performance while reducing the need for labor-intensive annotation.

Source journal
Journal of the American Medical Informatics Association (Medicine - Computer Science: Interdisciplinary Applications)
CiteScore: 14.50
Self-citation rate: 7.80%
Articles per year: 230
Review time: 3-8 weeks
About the journal: JAMIA is AMIA's premier peer-reviewed journal for biomedical and health informatics. Covering the full spectrum of activities in the field, JAMIA includes informatics articles in the areas of clinical care, clinical research, translational science, implementation science, imaging, education, consumer health, public health, and policy. JAMIA's articles describe innovative informatics research and systems that help to advance biomedical science and to promote health. Case reports, perspectives, and reviews also help readers stay connected with the most important informatics developments in implementation, policy, and education.