Pub Date : 2026-02-05 | DOI: 10.1038/s41746-026-02353-7
Hanyang Li, Xiao He, Adarsh Subbaswamy, Patrick Vossler, Alexej Gossmann, Karandeep Singh, Jean Feng
Advances in artificial intelligence (AI) and machine learning (ML) have led to a surge in AI/ML-enabled medical devices, posing new challenges for regulators because best practices for developing, testing, and monitoring these devices are still emerging. Consequently, there is a critical need for up-to-date data analyses of the regulatory landscape to inform policy-making. However, such analyses have historically relied upon manual annotation efforts because regulatory documents are unstructured, complex, multi-modal, and filled with jargon. Efforts to automate annotation using simple natural language processing methods have achieved limited success, as they lack the reasoning needed to interpret regulatory materials. Recent progress in large language models (LLMs) presents an unprecedented opportunity to unlock information embedded in regulatory documents. This work conducts the first wide-ranging validation study of LLMs for scaling data analyses in the field of medical device regulatory science. Evaluating LLM outputs using expert manual annotations and "LLM-as-a-judge," we find that LLMs can accurately extract attributes spanning pre- and post-market settings, with accuracy rates often reaching 80% or higher. We then demonstrate how LLMs can scale up analyses in three applications: (1) monitoring device validation practices, (2) coding medical device reports, and (3) identifying potential risk factors for post-market adverse events.
Title: Scaling medical device regulatory science using large language models
Journal: NPJ Digital Medicine
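A minimal illustrative sketch of the kind of evaluation the validation study describes: scoring LLM-extracted document attributes against expert manual annotations by per-attribute exact-match accuracy. This is not the authors' code; the attribute names and records are hypothetical.

```python
# Hypothetical sketch: per-attribute accuracy of LLM extractions vs. expert labels.
from collections import Counter

def attribute_accuracy(llm_records, expert_records, attributes):
    """Exact-match accuracy per attribute over paired record dicts."""
    hits = Counter()
    for llm, expert in zip(llm_records, expert_records):
        for attr in attributes:
            if llm.get(attr) == expert.get(attr):
                hits[attr] += 1
    n = len(expert_records)
    return {attr: hits[attr] / n for attr in attributes}

# Toy paired annotations (hypothetical attribute names).
llm = [{"modality": "CT", "locked": "yes"}, {"modality": "MRI", "locked": "no"}]
gold = [{"modality": "CT", "locked": "yes"}, {"modality": "MRI", "locked": "yes"}]
acc = attribute_accuracy(llm, gold, ["modality", "locked"])
# acc == {"modality": 1.0, "locked": 0.5}
```

Reported accuracy rates of "80% or higher" correspond to per-attribute values like these, aggregated over a labeled evaluation set.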
Pub Date : 2026-02-04 | DOI: 10.1038/s41746-026-02403-0
Shu Yang, Fengtao Zhou, Leon Mayer, Fuxiang Huang, Yiliang Chen, Yihui Wang, Sunan He, Yuxiang Nie, Xi Wang, Yueming Jin, Huihui Sun, Shuchang Xu, Alex Qinyang Liu, Zheng Li, Jing Qin, Jeremy YuenChun Teoh, Lena Maier-Hein, Hao Chen
Title: Large-scale self-supervised video foundation model for intelligent surgery
Pub Date : 2026-02-04 | DOI: 10.1038/s41746-026-02387-x
Sara Raza, Sara Gerke, Christina Silcox, Rachele Hendricks-Sturrup, Carmel Shachar
Title: Medicare advantage becoming a disadvantage with use of artificial intelligence in prior authorization review
Accurate delineation of lung lesions in computed tomography (CT) scans is critical for diagnosis, staging, and treatment planning, yet remains a challenging task. While foundation models like the Segment Anything Model (SAM) excel in natural images, they often falter in medical imaging due to low contrast, ambiguous boundaries, and a lack of 3D context. To address these limitations, we propose StructSAM, a structure-aware prompt adaptation framework designed for robust volumetric segmentation, with a primary focus on lung cancer. StructSAM injects anatomical priors into the prompt pathway, employs a 3D inter-slice aggregator for volumetric consistency, and leverages parameter-efficient fine-tuning (PEFT) for scalability. Experiments on the LIDC-IDRI dataset demonstrate that StructSAM achieves state-of-the-art accuracy on lung nodule segmentation, outperforming both classical architectures and SAM-based adaptations. Crucially, extended cross-organ evaluations on KiTS19 and MSD Pancreas datasets reveal that StructSAM effectively generalizes to other anatomical structures, highlighting its robustness to domain shifts. These findings suggest that embedding structural priors into foundation models is a promising strategy toward generic, clinically reliable, and efficient medical image segmentation.
Title: StructSAM: structure-aware prompt adaptation for robust lung cancer lesion segmentation in CT
Authors: Mengjie Liu, Yuxin Yao, Jinyong Jia, Jiali Yao, Zhengze Huang, Ziyang Zeng, Guangjin Pu, Yan Wu, Yuqi Bai, Bin Wang, Lili Jiang
Pub Date : 2026-02-03 | DOI: 10.1038/s41746-025-02306-6
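For readers unfamiliar with how segmentation accuracy on datasets such as LIDC-IDRI is scored, the standard metric is the Dice similarity coefficient between predicted and reference masks. The sketch below is illustrative only (it is not the StructSAM code) and represents binary masks as sets of voxel coordinates for clarity.

```python
# Illustrative Dice score between two binary volumetric masks,
# represented here as sets of (z, y, x) voxel indices.
def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    if not pred and not truth:
        return 1.0  # both empty masks agree perfectly
    return 2 * len(pred & truth) / (len(pred) + len(truth))

pred = {(0, 0, 0), (0, 0, 1), (0, 1, 1)}
truth = {(0, 0, 1), (0, 1, 1), (1, 1, 1)}
score = dice(pred, truth)  # 2 overlapping voxels: 2*2 / (3+3) ≈ 0.667
```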
Pub Date : 2026-02-03 | DOI: 10.1038/s41746-026-02398-8
Ziyue Luo, Ruihao Zhou, Jingwen Wei, Kailei Nong, Xiran Peng, Lu Chen, Peiyi Li, Sisi Deng, Mengchan Ou, Ling Ye, Yaqiang Wang, Guo Chen, Xuechao Hao, Sheyu Li, Tao Zhu
Digital health interventions (DHIs), delivered via digital platforms such as internet-based programs, mobile applications or short messages, may improve patient-reported outcomes (PROs), but comparative effectiveness is unclear. We conducted a network meta-analysis of randomized controlled trials in adults undergoing elective surgery under general anesthesia, identified in PubMed, Embase, CENTRAL, and Web of Science to March 1, 2025. Standardized mean differences (SMDs), mean differences (MDs) with minimal important differences (MIDs), and 95% CIs were estimated. Risk of bias was assessed with RoB 2 and certainty of evidence with GRADE. Fifty-six trials (6,154 patients) were included. Extended reality (XR) most effectively reduced perioperative anxiety (SMD 0.60; 95% CI 0.37–0.84; MD 8.05; MID 6.71; moderate-certainty). For postoperative pain, mobile applications (SMD 0.64; 95% CI 0.32–0.95; MD 1.36; MID 1.0; moderate-certainty) and XR (SMD 0.51; 95% CI 0.26–0.76; MD 1.09; MID 1.0; moderate-certainty) were probably effective. For quality of life, 2D video yielded the greatest gain (SMD 0.99; 95% CI 0.11–1.88; MD 0.11; MID 0.05; high-certainty). XR also improved satisfaction (SMD 1.27; 95% CI 0.63–1.91; MD 1.91; MID 0.75; moderate-certainty). These findings suggest that DHIs may improve perioperative PROs.
Title: Digital health interventions for perioperative patient-reported outcomes: a network meta-analysis
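The effect measure pooled throughout this meta-analysis is the standardized mean difference. A minimal sketch of how an SMD (Cohen's d) and an approximate 95% CI are computed from two arms' summary statistics, using the common large-sample variance approximation; the numbers below are invented, not taken from any included trial.

```python
import math

# Hedged sketch: Cohen's d with an approximate 95% CI from arm-level summaries.
def smd_ci(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) and approximate 95% CI."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    # Large-sample standard error of d (Hedges & Olkin approximation).
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Invented example: control anxiety score 30 (SD 8) vs intervention 25 (SD 8).
d, (lo, hi) = smd_ci(m1=30.0, sd1=8.0, n1=40, m2=25.0, sd2=8.0, n2=40)
# d == 0.625, with the CI straddling it
```

Judging clinical relevance, as the abstract does, additionally requires back-transforming the SMD to the outcome scale (the reported MDs) and comparing against a minimal important difference (MID).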
Pub Date : 2026-02-03 | DOI: 10.1038/s41746-026-02394-y
F. Schwarz, L. Levien, M. Maulhardt, G. Wulf, N. Brökers, E. Aydilek
Autologous stem-cell transplantation is a fundamental therapy for multiple myeloma. Although inpatient chemo-based stem-cell mobilization (SCM) is standard care in Germany, outpatient approaches could ease healthcare constraints. We analyzed 109 myeloma patients undergoing SCM and collection at the University Medical Center Göttingen for safety. We then trained machine learning models to predict adverse events (AEs) requiring hospitalization and to forecast AE onset timing for optimized ward management. In our cohort, 97% achieved successful collection, but 69% experienced severe AEs necessitating hospitalization. Simulations suggest a risk-stratified outpatient protocol could cut bed usage by at least one third without compromising safety. Classification models accurately predicted some AE types (e.g., elevated creatinine, ROC-AUC 1.0), though neutropenic fever remained challenging (ROC-AUC 0.67). Regression models forecast AE onset with a mean error of just over one day. These results outline a data-driven roadmap for safely adopting outpatient SCM and optimizing resource allocation in clinical practice.
Title: Predicting adverse events for risk stratification of chemotherapy based stem cell mobilization in multiple myeloma
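The classifier results above are reported as ROC-AUC. As a reference point (not the authors' pipeline), ROC-AUC equals the probability that a randomly chosen positive case is ranked above a randomly chosen negative one, which the Mann-Whitney pairwise formulation computes directly:

```python
# Illustrative ROC-AUC via the Mann-Whitney pairwise formulation:
# fraction of (positive, negative) pairs ranked correctly (ties count 0.5).
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores: every AE case outranks every non-case -> perfect AUC.
auc = roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])  # -> 1.0
```

On this scale, the reported 1.0 for elevated creatinine means perfect ranking of cases over non-cases, while 0.67 for neutropenic fever is only modestly better than the chance value of 0.5.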
Artificial Intelligence (AI) is reshaping medical education, particularly in the domain of competency-based assessment, where current methods remain subjective and resource-intensive. We introduce a multimodal AI framework that integrates video, audio, and patient monitor data to provide objective and interpretable competency assessments. Using 90 anesthesia residents, we established “ideal” performance benchmarks and trained an anomaly detection model (MEMTO) to quantify deviations from these benchmarks. Competency scores derived from these deviations showed strong alignment with expert ratings (Spearman’s ρ = 0.78; ICC = 0.75) and demonstrated high ranking precision (Relative L2-distance = 0.12). SHAP analysis revealed that communication and eye contact with the patient monitor are key drivers of variability. By linking AI-assisted anomaly detection with interpretable feedback, our framework addresses critical challenges of fairness, reliability, and transparency in simulation-based education. This work provides actionable evidence for integrating AI into medical training and advancing scalable, equitable evaluation of competence.
Title: Towards accurate and interpretable competency-based assessment: enhancing clinical competency assessment through multimodal AI and anomaly detection
Authors: Sapir Gershov, Fadi Mahameed, Aeyal Raz, Shlomi Laufer
Pub Date : 2026-02-03 | DOI: 10.1038/s41746-025-02299-2
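Alignment between AI-derived competency scores and expert ratings is summarized above by Spearman's ρ. A self-contained sketch of that statistic (assuming no tied ranks, per the classic formula); the score vectors are invented, not study data.

```python
# Illustrative Spearman rank correlation between model-derived scores
# and expert ratings (classic no-ties formula: rho = 1 - 6*sum(d^2)/(n(n^2-1))).
def spearman_rho(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

# Invented example: model scores vs expert ratings for four residents.
rho = spearman_rho([1.0, 2.0, 3.0, 4.0], [10, 20, 40, 30])  # -> 0.8
```

A production analysis would use a tie-aware implementation (e.g. `scipy.stats.spearmanr`); this sketch only shows what the reported ρ = 0.78 measures: agreement in how the two methods rank trainees.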
Pub Date : 2026-02-03 | DOI: 10.1038/s41746-026-02397-9
Joshua C. Pritchett, Pravesh Sharma, Ming Huang, Ruchita Dholakia, Tabetha A. Brockman, James P. Moriarty, Celia C. Kamath, Hannah Ahn, Paul A. Decker, Ruoxiang Jiang, Jonathan Ticku, Nandita Khera, LaPrincess C. Brewer, Jon C. Tilburt, Bijan J. Borah, Christi A. Patten, Tufia C. Haddad
Video telehealth visits (VTV) have emerged as a critical tool for oncology care delivery, with potential to address longstanding access disparities. We examined the association between broadband internet availability, individual digital literacy factors, and VTV utilization among patients with cancer. In a retrospective cohort of 13,897 patients across a multi-site practice, VTV utilization was significantly lower in areas with ≤1 internet service provider (ISP) offering download speeds ≥25 Mbps (p = 0.0009). Validation in a regional cohort (n = 6665) confirmed lower VTV utilization in low-broadband areas. Among 1134 surveyed patients, higher digital literacy was the strongest predictor of VTV use (OR 2.5; p < 0.001), even where broadband was limited. This study demonstrates that while both broadband availability and digital literacy independently influence VTV utilization, individual digital skills can partially offset structural limitations, underscoring the need for concurrent investment in broadband infrastructure and targeted digital literacy initiatives to advance access to care.
Title: Impact of broadband availability and digital literacy on video telehealth use among cancer patients
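The headline association above is an odds ratio (OR 2.5 for digital literacy and VTV use). As a reference for interpreting it, here is a minimal sketch of an odds ratio with a 95% CI from a 2x2 table using the standard log-OR standard error; the cell counts are invented and chosen only so the toy OR happens to equal 2.5.

```python
import math

# Hypothetical sketch: odds ratio and 95% CI from a 2x2 table.
def odds_ratio_ci(a, b, c, d):
    """a, b = exposed with/without outcome; c, d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Invented counts: high-literacy patients using/not using VTV vs low-literacy.
or_, (lo, hi) = odds_ratio_ci(a=120, b=80, c=60, d=100)  # or_ == 2.5
```

The study's OR 2.5 comes from a regression adjusting for covariates rather than a raw 2x2 table, so this is only the unadjusted analogue of that estimate.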