Pub Date: 2026-01-01 | Epub Date: 2026-01-30 | DOI: 10.1055/a-2786-0291
Fabiana Cristina Dos Santos, Sophia McInerney, Miya C Tate, Aadia Rana, D Scott Batey, Rebecca Schnall
Drive to Zero is a mobile health application (app) designed to identify and retain people with HIV (PWH) who have experienced challenges with achieving or maintaining viral suppression. The app targets PWH who have lacked documented HIV care in recent months and are experiencing medication adherence barriers. Features include an interactive chat for communicating with the study team and access to educational resources to support care engagement and health management. This usability study aimed to assess the Drive to Zero app's ease of use and interface design through expert heuristic evaluation and end-user testing. Usability was evaluated through two approaches: heuristic evaluations conducted by five informatics experts following Nielsen's usability principles, and end-user testing with 20 PWH using the validated Post-Study System Usability Questionnaire and qualitative interviews to collect feedback on app functionality and user experience. Heuristic experts and end-users expressed satisfaction with the app's appearance, reporting that it has a simple and intuitive interface for identifying and retaining PWH, which will support study engagement and, ultimately, reengagement with HIV care. However, participants highlighted areas needing improvement, suggesting more accessible "home" and "help" buttons to improve user control and a more detailed explanation of the incentive program to enhance user engagement and retention. Usability evaluations provided valuable insights into the Drive to Zero app's design. Areas for improvement were enhancing user controls and improving the readability of the incentive program. These findings will guide iterative refinements, ensuring that future versions of the app improve usability and acceptability for its target audience.
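The PSSUQ scoring behind the end-user testing can be sketched as follows. This is an illustrative computation, not the study's analysis code; the responses are hypothetical. PSSUQ items are rated 1 to 7 (lower means greater satisfaction), and unanswered items are excluded from the mean:

```python
# Illustrative PSSUQ scoring sketch; responses are hypothetical.
def pssuq_score(items):
    """Mean of answered items (1-7 scale; lower = more satisfied)."""
    answered = [v for v in items if v is not None]  # None marks a skipped item
    return sum(answered) / len(answered)

response = [2, 1, 3, 2, None, 2, 1, 2]  # hypothetical single-participant response
print(round(pssuq_score(response), 2))  # prints 1.86
```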
"Optimizing HIV Care Engagement: Usability of a mHealth App for Identifying and Retaining Individuals with Nonviral Suppression in Digital Cohort." Applied Clinical Informatics 17(1):39-45. DOI: 10.1055/a-2786-0291. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858313/pdf/
Getting patients out of intensive care units (ICUs) is a major goal for acute care clinicians, as prolonged stays increase the risk of complications and strain critical resources such as staff, equipment, and beds. The ICU Liberation Bundle, or the ABCDEF (A-F) care bundle, is an evidence-based framework for improving outcomes in critically ill patients by addressing pain, sedation, delirium, mobility, and family engagement. However, variability in documentation and a lack of standardized data elements hinder effective implementation and evaluation of adherence to bundle components. This study aims to characterize data elements of the A-F liberation bundle using a large, single-center critical care database and to develop standardized bundle cards that map bundle components to controlled vocabularies. We conducted a retrospective analysis of data elements related to the A-F bundle using the MIMIC-IV database. Clinical concepts were mapped to standardized vocabularies and aligned with the Observational Medical Outcomes Partnership (OMOP) common data model (CDM). Bundle cards were developed for each component to provide structured, accessible documentation of assessment tools, adherence criteria, and terminology mappings. Pain assessments were documented in over 11,000 patients, with a median of 23 assessments per day. Sedation levels were evaluated for nearly 59,000 patients, with 37.7% meeting the Society of Critical Care Medicine (SCCM) adherence criteria. Delirium assessments followed standardized protocols incorporating Richmond Agitation-Sedation Scale (RASS) and CAM-ICU scores. Components E and F lacked formal compliance specifications; bundle cards for these components identified key activities and highlighted gaps in standardized vocabularies. Adherence analyses revealed variability likely due to non-standardized documentation practices. We developed and validated six ICU Liberation Bundle cards that map bundle components to standardized vocabularies and CDMs, enabling retrospective adherence evaluation in real-world data. These information resources promote consistent documentation, support interoperability, and provide a foundation for prospective monitoring to enhance bundle implementation in critical care.
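A bundle card of the kind described can be sketched as a small structured record. This is a hypothetical shape, not the authors' artifact; the assessment tools for component D come from the abstract itself, but the vocabulary mapping uses a placeholder rather than a real OMOP concept ID:

```python
from dataclasses import dataclass, field

# Hypothetical "bundle card" record for component D; field values are
# illustrative and the concept ID is a placeholder, not a real OMOP ID.
@dataclass
class BundleCard:
    component: str
    assessment_tools: list
    adherence_criterion: str
    vocabulary_mappings: dict = field(default_factory=dict)

card_d = BundleCard(
    component="D: Delirium: assess, prevent, and manage",
    assessment_tools=["RASS", "CAM-ICU"],
    adherence_criterion="CAM-ICU documented per protocol for eligible patients",
    vocabulary_mappings={"CAM-ICU": "OMOP_CONCEPT_ID_PLACEHOLDER"},
)
```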
"Standardizing Data Elements for Implementation of ICU Liberation Bundle." Md Fantacher Islam, Molly Douglas, Jarrod Mosier, Vignesh Subbian. Applied Clinical Informatics, pages 52-59. DOI: 10.1055/a-2802-7458. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12900566/pdf/
Pub Date: 2026-01-01 | Epub Date: 2026-02-09 | DOI: 10.1055/a-2807-4256
Robert E Hoyt, Maria Bajwa
The integration of large language models (LLMs) into clinical diagnostics presents significant challenges regarding their accuracy and reliability. This study aimed to evaluate the performance of DeepSeek R1, an open-source reasoning model, alongside two other LLMs, GPT-4.1 and Claude 3.5 Sonnet, on multiple-choice clinical cases. A dataset of complex medical cases representative of real-world clinical practice was selected. For efficiency, models were accessed via application programming interfaces (APIs) and assessed using standardized prompts and a predefined evaluation protocol. The models demonstrated an overall accuracy of 77.1%, with GPT-4.1 producing the fewest errors and Claude 3.5 Sonnet the most. The reproducibility analysis indicated that the tests were highly repeatable: DeepSeek (100%), GPT-4.1 (97.5%), and Claude 3.5 Sonnet (92%). While LLMs show promise for enhancing diagnostics, ongoing scrutiny is required to address error rates and validate standard medical answers. Given the limited dataset and prompting protocol, findings should not be interpreted as broader equivalence in real-world clinical reasoning. This study demonstrates the need for robust evaluation standards, attention to error rates, and further research.
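Accuracy and reproducibility metrics of this kind can be computed as sketched below. The answer data are hypothetical, and the scoring rules (accuracy judged against the first run's answer, reproducibility as identical choices across repeated runs) are an assumption about the protocol, not the authors' code:

```python
# Hedged sketch of per-model multiple-choice evaluation; data are hypothetical.
def accuracy(answers, key):
    """Fraction of cases whose first-run answer matches the answer key."""
    return sum(runs[0] == key[case] for case, runs in answers.items()) / len(answers)

def reproducibility(answers):
    """Fraction of cases where every repeated run gave the same choice."""
    return sum(len(set(runs)) == 1 for runs in answers.values()) / len(answers)

key = {"case1": "B", "case2": "D", "case3": "A"}
runs = {  # three repeated API calls per case (hypothetical model outputs)
    "case1": ["B", "B", "B"],
    "case2": ["D", "D", "C"],
    "case3": ["A", "A", "A"],
}
print(accuracy(runs, key))       # prints 1.0
print(reproducibility(runs))     # 2 of 3 cases fully consistent
```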
"Measuring the Accuracy and Reproducibility of DeepSeek R1, Claude 3.5 Sonnet, and GPT-4.1 on Complex Clinical Scenarios." Applied Clinical Informatics, pages 64-72. DOI: 10.1055/a-2807-4256. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12923312/pdf/
Pub Date: 2026-01-01 | Epub Date: 2026-02-24 | DOI: 10.1055/a-2818-1706
Janet Webb, Ling Chu, Robert W Turer, Catherine Chen, Justin F Rousseau, Amber Salter, D Mark Courtney, Robin T Higashi, Nardabel Guzman, Wendy W Chapman, DuWayne Willett, Samuel A McDonald
This study aimed to compare two ambient AI documentation tools, Dragon Ambient eXperience (DAX) and Abridge, in the emergency department (ED), assessing perceived effects on work burden, usability, documentation quality, satisfaction, and overall preference. We conducted a single-site, prospective crossover study in an ED over 6 weeks, from April to June 2025. Of 20 faculty enrolled, 18 completed both phases. Participants used both ambient AI scribe tools in alternating 3-week phases. Pre-tool, tool-specific, and post-tool surveys captured four domains: burden, usability, quality, and satisfaction. Adoption was defined as the proportion of notes containing any ambient output. Paired Wilcoxon tests and linear mixed-effects models were used to compare tools, adjusting for order and adoption. DAX was associated with a greater reduction in overall perceived work burden compared with Abridge (median: 1.5 vs. 2; p = 0.025). Usability was high and comparable between the tools (SUS medians: 73.5 vs. 73.5, p = 0.94; UMUX-Lite medians: 86 vs. 82.5, p = 0.079). Scores from a modified version of the Physician Documentation Quality Instrument (PDQI-9) favored DAX (median: 39 vs. 36.5; p = 0.011). DAX received higher satisfaction ratings (median likelihood-to-recommend: 9 vs. 7.5; p = 0.015), but adjusted models suggested these differences reflected order effects more than inherent tool differences. Post-pilot preferences showed no overall preference after accounting for order, with first-tool exposure significantly shaping ratings. In this 6-week crossover study in the ED, both ambient AI scribes were highly usable and perceived to reduce documentation burden while preserving note quality. Findings support the feasibility and perceived value of ambient AI scribes in the ED and motivate larger, longer-duration, multi-site evaluations with objective outcomes.
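The paired Wilcoxon signed-rank statistic used to compare tool ratings from the same clinicians can be sketched in a few lines. This is an illustration of the test statistic only (not the study's analysis, which also used mixed-effects models), and the paired scores are hypothetical:

```python
# Hedged sketch of the paired Wilcoxon signed-rank statistic W = min(W+, W-).
def wilcoxon_signed_rank(x, y):
    diffs = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):                             # average ranks over ties
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1                    # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired usability scores (same 8 clinicians rating each tool)
dax     = [75, 80, 70, 72.5, 77.5, 68, 74, 79]
abridge = [73, 78, 71, 70.0, 76.0, 69, 72, 75]
print(wilcoxon_signed_rank(dax, abridge))  # prints 3.0
```

The statistic would then be referred to the signed-rank null distribution (or a normal approximation) to obtain the p-value.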
"Crossover Evaluation of Two Ambient AI Scribe Tools in the Emergency Department." Applied Clinical Informatics, pages 118-126. DOI: 10.1055/a-2818-1706. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12981961/pdf/
Pub Date: 2026-01-01 | Epub Date: 2026-01-30 | DOI: 10.1055/a-2786-0551
Anne Grauer, Yuyang Yang, Jo Applebaum, Yelstin Fernandes, David Liebovitz, Jason Adelman, Bruce Lambert, William Galanter
Abandoned medication orders, those initiated but not signed, represent a potential safety risk and an indicator of electronic health record (EHR) inefficiency. This study explores inpatient medication order abandonment across two large tertiary healthcare systems using different EHRs. Silent alerts were deployed to identify abandoned orders at Site 1 (June 2018-May 2019) and Site 2 (July 2020-May 2023). At Site 1, alerts triggered on all inpatient medication orders. At Site 2, alerts were part of a broader study implementing indication alerts; only orders for study medications triggered alerts. An abandoned order was defined as an order initiated but not signed within 24 hours of initiation. We calculated abandonment and reorder rates, and performed regression to examine the association between abandonment and clinician, patient, and order characteristics. Exponential models were fit to characterize the chronology of reordering. Among 6.8 million medication orders, abandonment rates were 11.2% at Site 1 and 25.0% at Site 2. Because of fundamental differences in alert configuration and order capture, no direct statistical comparison of abandonment rates between the two sites was conducted. Over half of abandoned orders were reordered within 24 hours (65.3% at Site 1; 54.2% at Site 2). The chronology of reordering was similar at both institutions. Attendings, the most senior clinicians, had the lowest abandonment rates. Abandonment rates decreased as clinicians placed more orders but rose as clinicians ordered on more unique patients. Abandonment was more frequent when ordering for children than for adults. Order abandonment is common and varies by patient age, clinician type, and workload. Abandonment rates declined as house staff advanced in training, signifying that clinical experience plays a role. Frequent reordering suggests that workflow interruptions or modifications, rather than intentional medication cancellation, may account for a significant proportion of abandonments. Similarity in the timing of reordering between healthcare systems suggests common reordering processes across sites. Our findings demonstrate significant order abandonment rates, with the potential to use abandonment as a metric to improve computerized provider order entry (CPOE) functionality, clinicians' workflows, and patient safety.
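An exponential model of reorder timing of the kind described can be fit with a simple log-linear least-squares sketch. The hourly reorder counts below are hypothetical, not the study's data, and this is one plausible fitting approach rather than the authors' method:

```python
import math

# Hedged sketch: fit N(t) = A * exp(-k t) to hypothetical hourly reorder
# counts by ordinary least squares on log-transformed counts.
hours  = [1, 2, 3, 4, 5, 6]
counts = [400, 240, 150, 90, 55, 33]   # hypothetical reorders per hour after abandonment

logs = [math.log(c) for c in counts]
n = len(hours)
mx = sum(hours) / n
my = sum(logs) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(hours, logs))
         / sum((x - mx) ** 2 for x in hours))
k = -slope                      # estimated decay rate per hour
A = math.exp(my - slope * mx)   # fitted count at t = 0
```

For these made-up counts the fit recovers a decay rate of roughly 0.5 per hour, i.e., reorders roughly halve every 1.4 hours.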
"Abandoned Inpatient Orders: An Opportunity for Improving CPOE Safety and Efficiency." Applied Clinical Informatics 17(1):28-38. DOI: 10.1055/a-2786-0551. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858319/pdf/
Pub Date: 2026-01-01 | Epub Date: 2026-01-22 | DOI: 10.1055/a-2777-1358
Nymisha Chilukuri, Erin Ballard, Xuan Xu, Tom McPherson, Victor Ritter, Hannah K Bassett, Jennifer Carlson, Natalie M Pageler
Identifying patient portal (PP) activation disparities, especially in electronic health record (EHR) activation workflows, can help facilitate equitable health care access. Our study aimed to assess whether the parent/guardian's preferred language was associated with being offered, activating, and using the PP, and with the methods used to offer activation codes. This retrospective cohort study examined PP offer, activation, and usage rates at a large freestanding children's hospital. Patients <12 years old with ambulatory visits from July 1, 2022, to June 30, 2023, without prior active proxy PP accounts were included. The primary independent variable was the self-reported parent/guardian preferred language (English/Spanish). Outcomes included the probability of being offered (overall and by specific offer method), activating, and using the portal. Zou's modified multivariate Poisson regression models examined the association between preferred language and offer/activation/usage status. Among 39,578 patients, 85.1% were patients with English as preferred language (PEPL) and 14.9% had Spanish as preferred language (PSPL). PSPL had a lower probability of being offered (adjusted relative risk ratio [aRR]: 0.65, 95% confidence interval [CI]: 0.63-0.67), activating (aRR: 0.72, 95% CI: 0.70-0.75), and using (aRR: 0.68, 95% CI: 0.65-0.72) a PP compared with PEPL. Specifically, compared with PEPL, PSPL had a lower probability of activating if offered via instant activation (aRR: 0.72, 95% CI: 0.69-0.75) or via a parent/guardian with an existing account (aRR: 0.73, 95% CI: 0.69-0.76), and had equal probability of activating if offered via letter (aRR: 0.42, 95% CI: 0.19-0.94) or a clinician-assisted method (aRR: 0.99, 95% CI: 0.86-1.16). PSPL at a large, freestanding pediatric health system had a lower probability of PP offer, activation, and usage than PEPL. Activation methods were not universally effective across language groups, emphasizing the need for equitable workflow optimization. This study highlights an approach to analyzing health disparities in activation workflows to inform targeted interventions to improve equitable PP access.
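The relative-risk comparisons reported here can be illustrated with an unadjusted computation. The counts below are hypothetical, and the study itself used Zou's modified Poisson regression to obtain covariate-adjusted estimates; this sketch only shows the crude risk ratio with a Wald 95% CI on the log scale:

```python
import math

# Hedged sketch: unadjusted relative risk of an outcome (e.g., portal
# activation) in group 1 vs. group 2, with a Wald 95% CI. Counts are
# hypothetical, not the study's data.
def relative_risk(a, n1, b, n2):
    """RR for a/n1 events vs. b/n2 events, with 95% CI."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(300, 1000, 450, 1000)  # hypothetical activation counts
```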
"EHR Workflows Contribute to Disparities by Language Preference in Parent Patient Portal Access." Applied Clinical Informatics 17(1):19-27. DOI: 10.1055/a-2777-1358. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826850/pdf/
The digital transformation of healthcare is reshaping care delivery among healthcare professionals, requiring nurses to develop digital competencies. These competencies are essential but often underdeveloped due to limited training and resources. Global initiatives emphasize integrating these competencies into nursing education, necessitating valid instruments to assess them. This systematic review aims to identify instruments measuring digital competence in nursing and to assess their measurement properties. This review was registered in PROSPERO (identifier: CRD42024522349) and conducted according to PRISMA guidelines. A systematic search was performed in CINAHL, PubMed/MEDLINE, and Scopus for instruments assessing digital competencies in nursing and reporting measurement properties. Measurement properties and their methodological quality were assessed using the COSMIN criteria, and the overall quality of the evidence was graded using a modified GRADE approach. A total of 27 instruments were identified, relating to three interconnected constructs: nursing informatics, digital health, and information and communication technology. Based on their measurement properties, the instruments were categorized into three groups (A, B, C) following the COSMIN methodology to support recommendations for use. Six instruments were classified under category A (recommended for use): the DigiHealthCom and DigiComInf instruments, the Turkish version of TANIC, the short version of ITASH, the Digital Competence Questionnaire, and the 30-item Arabic version of SANICS. Twenty instruments were categorized under category B (potentially recommendable, but further validation is needed). One instrument was placed in category C (not recommended for use). As digital competence becomes an increasing priority in education and public health, valid and reliable instruments are essential for assessing and monitoring these competencies. Such instruments support the identification of training needs, the evaluation of educational outcomes, and the integration of digital skills into nursing curricula and clinical practice, ultimately strengthening the digital readiness of the nursing workforce.
"Measurement Properties of Instruments Assessing Digital Competence in Nursing: A Systematic Review." Fabio D'Agostino, Ilaria Erba, Elske Ammenwerth, Vered Robinzon, Gad Segal, Nissim Harel, Elisabetta Corvo, Refael Barkan, Hadas Lewy, Noemi Giannetta. DOI: 10.1055/a-2780-7093. Applied Clinical Informatics, vol. 17, no. 1, pp. 1-18 (2026-01-01). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826851/pdf/
Pub Date : 2026-01-01Epub Date: 2026-02-18DOI: 10.1055/a-2815-2064
Kyle Bernard, Arwen B L Declan, Kevin Buell, Eric Moyer, Michael Gottlieb
Real-world data from vendor-aggregated health information exchanges represent a powerful resource for retrospective clinical research. Epic's Cosmos platform, a large-scale centralized data warehouse, enables querying of deidentified electronic health record data.

We describe the iterative development of a generalizable clinical informatics workflow for retrospective studies using Cosmos.

We applied the Plan-Do-Study-Act cycle to iteratively refine a collaborative research process using Cosmos' SlicerDicer interface, focusing on setting, team, technical preparation, and workflow.

We identified multiple areas for improvement to facilitate streamlined collaborative studies. Our institutional review board confirmed that Cosmos studies are non-human subjects research. We expanded collaborations across institutions. We identified key insights for team definition across clinical, research, and skill domains. We expanded our team based on technical skills, domain expertise, and educational aims. We optimized query building and cohort construction, data analysis and validation, and communication processes. We clarified and optimized a collaborative workflow spanning clinical and informatics expertise.

Our collaborative approach to secondary data analysis in Cosmos supports the development of meaningful clinical evidence for high-quality, well-evidenced patient care. This study demonstrates the application of the Plan-Do-Study-Act cycle to a collaborative workflow for clinically focused secondary data analysis. Our approach allows rapid, reproducible cohort construction and analysis, is adaptable across clinical domains, and scales to multiorganizational collaboration. This approach offers a model for others seeking to develop key clinical insights via retrospective studies within the Cosmos data aggregation tool.
"Unlocking Practice Patterns at Scale: A Framework for Developing Clinical Insights Using Epic's Cosmos." Kyle Bernard, Arwen B L Declan, Kevin Buell, Eric Moyer, Michael Gottlieb. Applied Clinical Informatics, pp. 99-106. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12962793/pdf/
Pub Date : 2026-01-01Epub Date: 2026-02-27DOI: 10.1055/a-2807-4098
Kajal N Patel, Mary M Stout, Mary A Solis, Nasrin Riyahi, Christopher Holland, Pamela D Vohra-Khullar, Miranda A Moore, Reema H Dbouk
Ambient listening tools use generative artificial intelligence (AI) to create clinical notes from real-time conversations between clinicians and patients during an encounter. One potential benefit of ambient listening tools is an improvement in reported patient experience.

This study aimed to compare the patient experience of an outpatient visit during which an ambient listening tool was used with that of a standard visit, and to quantify any perceived improvements in care.

Patients completed a targeted survey following outpatient clinic visits across all departments at a large academic institution. We conducted ordered logistic regression analyses to examine the association between ambient scribe use and patient satisfaction across six survey domains: provider communication, provider attention, perceived time spent with the provider, overall interaction with the provider, understanding of health information, and quality of the after-visit summary.

Our analysis included 8,120 patients who submitted a survey following an outpatient visit between February and April 2025. Patients whose provider used an ambient scribe had higher odds of reporting satisfaction with the perceived duration of time spent with the provider (OR = 1.13, 95% CI: 1.01-1.26, p = 0.033).

In this observational study, use of an ambient AI scribe was associated with a small improvement in one patient-reported satisfaction domain, perceived time spent with the provider, and no detectable differences across the other domains assessing patient experience. These findings suggest that, in early real-world implementation, ambient AI documentation tools may be acceptable to patients and do not appear to adversely affect perceived visit quality.
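The study modeled ordinal satisfaction responses with ordered logistic regression; as a simplified illustration of how an odds ratio like OR = 1.13 arises, the sketch below computes an unadjusted odds ratio with a Wald 95% CI from a dichotomized (satisfied vs. not satisfied) 2x2 table. All counts are hypothetical and not taken from the study:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
        a = scribe visit, satisfied      b = scribe visit, not satisfied
        c = standard visit, satisfied    d = standard visit, not satisfied
    Wald 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - 1.96 * se)
    upper = math.exp(math.log(or_) + 1.96 * se)
    return or_, lower, upper

# Hypothetical counts for illustration only.
or_, lower, upper = odds_ratio(450, 50, 400, 50)
print(f"OR = {or_:.2f} (95% CI: {lower:.2f}-{upper:.2f})")
```

An OR slightly above 1 with a CI whose lower bound just clears 1 (as in the study's 1.13, 95% CI: 1.01-1.26) corresponds to a small but statistically detectable increase in the odds of a satisfied response.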
"The Effect of Ambient Listening Technology on the Patient Experience." Kajal N Patel, Mary M Stout, Mary A Solis, Nasrin Riyahi, Christopher Holland, Pamela D Vohra-Khullar, Miranda A Moore, Reema H Dbouk. Applied Clinical Informatics, vol. 17, no. 1, pp. 82-88. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12948635/pdf/
Pub Date : 2026-01-01Epub Date: 2026-02-24DOI: 10.1055/a-2804-5877
Christina L Cifra, Priynt Patel, Cody R Tigges, Sarah L Miller, Olivia Lin, Irene Pantekidis, Alison Bronson, Raquel E Gomez, Priyadarshini R Pennathur, Erik Westlund, Dean F Sittig, Hardeep Singh
Learning about critically ill children's outcomes after transfer to the pediatric intensive care unit (PICU) can help emergency department (ED) physicians improve future performance. However, there are no standard processes in place to systematically provide this information; thus, most ED physicians obtain inconsistent feedback.

We aimed to determine the effect of delivering patient outcome feedback through the electronic health record (EHR) on the frequency of ED physicians' re-access of patients' EHRs after PICU transfer.

We performed a retrospective cohort study at an academic tertiary referral hospital before and after implementing an EHR-based system delivering individual patient outcome feedback to ED physicians who admitted children from the ED to the same institution's PICU (2019-2021).

A total of 180 patients transferred to the PICU by 30 unique ED physicians were included (100 pre- and 80 postintervention). After implementing the feedback system, the proportion of patients for whom ED physicians re-accessed the EHR increased from 26% preintervention to 80% postintervention (p < 0.001). Propensity score-adjusted multivariable modeling accounting for patient, clinician, encounter, and diagnostic covariates showed a significant association between receipt of patient outcome feedback reports and ED physicians' EHR re-access, with the rate of EHR re-access 2.58 times higher in the postintervention cohort (p < 0.001). The estimated marginal means, which provide an adjusted average outcome for each cohort, showed a significantly higher number of EHR re-access episodes per patient postintervention (0.44 [95% CI: 0.3, 0.66] pre- vs. 1.14 [95% CI: 0.86, 1.51] postintervention, p < 0.001).

Receipt of consistent patient outcome feedback increased ED physicians' re-access of patients' EHRs after PICU transfer, potentially allowing them to obtain information that can be used to improve future clinical performance. Further study is needed to determine the effectiveness of feedback systems in improving clinician practice and outcomes of critically ill children.
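The headline figures can be sanity-checked with simple arithmetic using only the numbers reported in the abstract. Note that the crude ratio of the estimated marginal means is an unadjusted back-of-envelope figure and will differ slightly from the 2.58 produced by the propensity-adjusted model:

```python
# Proportion of patients whose EHR was re-accessed, as reported.
pre_prop = 26 / 100    # 26% of 100 preintervention patients
post_prop = 64 / 80    # 80% of 80 postintervention patients

# Crude ratio of the reported estimated marginal means
# (EHR re-access episodes per patient, post vs. pre).
crude_ratio = 1.14 / 0.44

print(f"re-access: {pre_prop:.0%} pre vs. {post_prop:.0%} post; "
      f"crude rate ratio ~= {crude_ratio:.2f}")
```

The crude ratio lands close to the model's 2.58, which is what one would expect when covariate adjustment shifts the estimate only modestly.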
"Effect of an Outcome Feedback Reporting System on Emergency Department Physicians' Chart Reaccess." Christina L Cifra, Priynt Patel, Cody R Tigges, Sarah L Miller, Olivia Lin, Irene Pantekidis, Alison Bronson, Raquel E Gomez, Priyadarshini R Pennathur, Erik Westlund, Dean F Sittig, Hardeep Singh. Applied Clinical Informatics, vol. 17, no. 1, pp. 73-81. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12932033/pdf/