Background: Diabetic foot (DF) is a severe complication of type 2 diabetes mellitus (T2DM), contributing to significant morbidity and healthcare costs globally. Early prediction and intervention are critical for preventing amputations and improving patient outcomes. However, traditional statistical methods lack the capacity to handle high-dimensional clinical data and identify optimal predictive features. This study aimed to develop and validate machine learning models for DF risk prediction using feature selection strategies based on binary logistic regression and information theory.
Methods: A retrospective cohort of 1,179 patients (95 DF cases, 1,084 T2DM controls) was analyzed using clinical and biochemical data from 2019 to 2025. Three data sets were constructed: (1) original features; (2) features selected via binary logistic regression (F1); and (3) features selected via information-theoretic global learning (F2). Six models, comprising extreme learning machine (ELM), kernel extreme learning machine (KELM), and their variants trained on the three data sets, were evaluated using fivefold cross-validation. Performance metrics included area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and computational efficiency.
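The fivefold cross-validated evaluation described above can be sketched as follows. KELM is not available in scikit-learn, so an RBF-kernel SVC stands in for it here, and the data are synthetic rather than the study's cohort; only the evaluation loop (stratified fivefold AUC, sensitivity, specificity) mirrors the abstract.

```python
# Sketch of fivefold cross-validated AUC/sensitivity/specificity on an
# imbalanced binary problem (analogous to 95 DF cases vs 1,084 controls).
# An RBF-kernel SVC is a stand-in for KELM; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, confusion_matrix

X, y = make_classification(n_samples=600, n_features=10,
                           weights=[0.92], random_state=0)  # ~8% positives
aucs, sens, spec = [], [], []
for train, test in StratifiedKFold(n_splits=5, shuffle=True,
                                   random_state=0).split(X, y):
    clf = SVC(kernel="rbf", probability=True).fit(X[train], y[train])
    p = clf.predict_proba(X[test])[:, 1]
    aucs.append(roc_auc_score(y[test], p))
    tn, fp, fn, tp = confusion_matrix(y[test], (p > 0.5).astype(int)).ravel()
    sens.append(tp / (tp + fn))   # true positive rate
    spec.append(tn / (tn + fp))   # true negative rate
print(f"AUC {np.mean(aucs):.3f}, sensitivity {np.mean(sens):.3f}, "
      f"specificity {np.mean(spec):.3f}")
```

Each fold keeps the case/control ratio roughly constant, which matters with this degree of class imbalance.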
Results: Age, blood urea nitrogen (BUN), homocysteine (Hcy), albumin (ALB), and fasting blood glucose (FBG) were identified as independent DF risk factors. The information theory-based KELM (IT-KELM) model achieved the highest AUC of 0.799 (sensitivity: 0.792; specificity: 0.710) on F2, outperforming the other models. Feature selection improved predictive accuracy while reducing computational time, with IT-KELM requiring 0.138 s for training and 0.0023 s for testing. The SHAP summary dot plot and bar chart revealed that the top five features contributing to the model were total protein (TP), red blood cell count (RBC), ALB, body mass index (BMI), and hemoglobin (HB).
Conclusions: Integrating information theory with KELM enhances DF risk prediction by optimizing feature subsets and leveraging nonlinear kernel mapping. The IT-KELM model demonstrates robust diagnostic performance and clinical feasibility for early DF screening. Future multi-center studies are needed to validate generalizability and refine model interpretability in real-world settings. This approach provides a cost-effective tool for precision medicine in diabetes care.
Title: Predicting the diabetic foot in patients with type 2 diabetes mellitus based on machine learning. Authors: Haixiang Zhang, Weijian Fan, Peipei Li, Xiangzi Chen, Shiwu Yin. DOI: 10.1186/s12938-025-01494-2 (BioMedical Engineering OnLine)
Pub Date: 2026-01-16  DOI: 10.1186/s12938-026-01515-8
Arthur Ellis, Vishal Pendse, Calvin C Ngan, Jan Andrysek
Background: Conventional shape capture in prosthetics and orthotics (P&O) relies on plaster casting of a positive mold, which is then hand-rectified to the desired shape. While effective under expert practice, this workflow is labor-intensive, equipment-dependent, and difficult to archive or share. Digital approaches using structured-light scanners address some of these limitations but remain costly and require dedicated training. This study evaluated whether smartphone photogrammetry can accurately and reliably capture prosthetic and orthotic cast geometries, and assessed its usability in comparison to a clinical structured-light scanner for integration into clinical workflows.
Results: A clinical-grade structured-light scanner (EinScan H2) served as the reference and demonstrated small volumetric and dimensional errors, at 0.21 ± 0.15% and 0.35 ± 0.18 mm, with intraclass correlation coefficients (ICCs) greater than 0.9999. Relative to this reference, across 12 cast models (upper limb, lower limb, and ankle-foot orthosis), smartphone photogrammetry achieved a volumetric error of 0.89 ± 0.68% and a dimensional error of 0.89 ± 0.51 mm; the mean surface point-to-point distance was 0.24 ± 0.19 mm. Reliability across operators was near-perfect (ICCs ≥ 0.9997). Usability data showed approximately 62 photographs and 88 s per capture for photogrammetry (about 12 min cloud processing) versus 34 s capture (about 85 s desktop processing) for the reference scanner. Photogrammetry scored higher on the System Usability Scale (79 versus 58).
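The mean surface point-to-point distance reported above reduces to a nearest-neighbour query between two aligned point clouds. The clouds below are synthetic stand-ins for the photogrammetry and reference scans; real meshes would first need registration (e.g., ICP) before this comparison is meaningful.

```python
# Illustrative mean point-to-point distance between two point clouds,
# computed via a KD-tree nearest-neighbour query. Synthetic data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
reference = rng.uniform(0, 100, size=(5000, 3))              # mm coordinates
test_scan = reference + rng.normal(0, 0.2, size=(5000, 3))   # ~0.2 mm noise

# For each test point, distance to the closest reference point.
d, _ = cKDTree(reference).query(test_scan)
print(f"mean point-to-point distance: {d.mean():.2f} mm")
```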
Conclusions: On casts, smartphone photogrammetry produced accurate and reliable meshes with favorable perceived usability and minimal hardware demands. These findings support its integration into digital workflows in P&O, particularly for scanning rectified positive casts and other stable geometries. Further multi-site evaluations on live limbs should determine acceptable capture-time thresholds and effective stabilization strategies to ensure clinical feasibility in routine practice.
Title: Smartphone photogrammetry for prosthetics and orthotics: accuracy and reliability across upper-limb, lower-limb, and AFO casts.
Pub Date: 2026-01-15  DOI: 10.1186/s12938-025-01507-0
Mohammad Shushtari, William Pei, Derrick Lim, Kei Masani
Individuals with incomplete spinal cord injury (iSCI) often fall due to decreased sensorimotor integration. Functional electrical stimulation (FES) therapy combined with visual feedback balance training (VFBT), termed FES+VFBT, can effectively improve standing balance in iSCI populations. Although promising, the need for force plates (FP), which are expensive and bulky, limits the translation of these methods to clinical and home settings. In this work, we propose a solution that replaces the FP with a Wii Balance Board (WBB), allowing for more accessible FES+VFBT at a lower cost in both clinical and community settings. Our investigations on ten non-injured participants reveal that the WBB-based estimate of the center of mass (COM) has low prediction error and high correlation with the ground FP-estimated COM in both the anteroposterior (RMSE: 4.13 ± 0.69 mm, r: 0.94 ± 0.02) and mediolateral (RMSE: 6.25 ± 1.80 mm, r: 0.92 ± 0.04) directions. The resulting stimulation patterns were similar to those obtained with the FP-based approach, indicating that the WBB-based FES+VFBT system could offer a more accessible therapeutic strategy for balance rehabilitation in iSCI.
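A minimal sketch of the agreement metrics used above (RMSE and Pearson r between FP- and WBB-estimated COM traces), with synthetic sway signals standing in for the recorded data:

```python
# RMSE and Pearson correlation between two centre-of-mass traces.
# Both signals are synthetic stand-ins, not study recordings.
import numpy as np

t = np.linspace(0, 30, 3000)                           # 30 s of postural sway
com_fp = 10 * np.sin(0.5 * t) + 3 * np.sin(2.1 * t)    # "force plate" COM, mm
com_wbb = com_fp + np.random.default_rng(0).normal(0, 4, t.size)  # noisy estimate

rmse = np.sqrt(np.mean((com_wbb - com_fp) ** 2))       # agreement in mm
r = np.corrcoef(com_fp, com_wbb)[0, 1]                 # linear correlation
print(f"RMSE {rmse:.2f} mm, r {r:.2f}")
```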
Title: Standing balance therapy through portable and low-cost visual feedback training.
Pub Date: 2026-01-15  DOI: 10.1186/s12938-025-01485-3
K Umapathi, L Priya, Hady Habib Fayek
Background: Smart patch healthcare devices are emerging as a distinct user interface for decoding the bidirectional interactions of the five sense organs. Powered by recent advances in nanomaterials and artificial intelligence, smart patches can probe the body's immune response by analysing the biofluids, microenvironment, and analytes of the five sense organs. These capabilities motivated this review, which aims to highlight current progress in smart patch technologies and their functions, opportunities, and challenges in healthcare applications.
Methods: A comprehensive literature review was conducted, focusing on smart patches designed for skin, ocular, cochlear, oral, and nasal applications. The review is structured around the materials used, the fabrication methods adopted, the sensing mechanisms employed, and enabling technologies such as artificial intelligence and the Internet of Things.
Results: The review analysis revealed that smart patches play a multifaceted role in healthcare applications, providing (i) continuous health monitoring, (ii) controlled drug delivery, (iii) support for tissue regeneration, and (iv) modulation of nerve responses. Integration with Internet of Things (IoT) capabilities further enables remote healthcare solutions that benefit physicians and patients alike. Despite this progress, challenges remain in the biocompatibility of the chosen materials, the long-term use and stability of the patch, data security, and large-scale manufacturing.
Conclusion: Smart patches hold transformative potential in biomedical engineering by bridging the biosensing, therapeutic, and digital healthcare domains. This article reviews current advancements and identifies existing challenges and emerging opportunities in smart patch research, and can thus guide future research and development. With its broad scope, this review should serve as a valuable resource for researchers and healthcare innovators working toward next-generation biomedical devices.
Title: Smart patches for healthcare industry: a review of emerging technologies, challenges, and developmental opportunities.
Pub Date: 2026-01-13  DOI: 10.1186/s12938-026-01510-z
Jian Guo, Songbing Qin, Chenlei Guo, Meng Zhu, Yin Zhou, He Wang, Xiaoting Xu, Wei Zhan, Long Chen, Jie Ni, Yu Tang, Jun Chen, Yi Shen, Haibo Chen, Kuo Men, Hui Liu, Yuning Pan, Jin Ye, Jian Huan, Juying Zhou
Purpose: To develop a cloud-based automated treatment planning system for intensity-modulated radiation therapy and evaluate its efficacy and safety for tumors in various anatomical sites under general clinical scenarios.
Results: All the plans from both the plan A and plan B groups satisfy the PTV prescription dose coverage requirement of at least 95% of the PTV volume. The mean HI of the plan A and plan B groups is 0.084 and 0.081, respectively, with no statistically significant difference from that of the plan C group. The mean CI, PQM, OOT, and POT are 0.806, 77.55, 410 s, and 185 s for the plan A group, and 0.841, 76.87, 515.1 s, and 271.1 s for the plan B group; these were significantly superior to those of the plan C group, except for the CI of the plan A group. There is no statistically significant difference between the dose accuracies of the plan B and plan C groups.
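For illustration, two of the plan-quality indicators reported above can be computed from a voxel dose distribution. The definitions used here (an ICRU-style homogeneity index from D2%, D50%, D98%, and a simplified coverage fraction in place of the conformity index) and the synthetic dose grid are assumptions, not the study's exact formulas.

```python
# Hedged sketch of plan-quality indicators from PTV voxel doses.
# HI here follows the common ICRU-83 form; "coverage" is a simplified
# stand-in for conformity. Dose values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
dose = rng.normal(60.0, 1.0, size=20000)   # Gy, voxel doses inside the PTV
prescription = 57.0                         # Gy

# D2% (near-max) is the 98th percentile of voxel doses, D98% the 2nd.
d2, d50, d98 = np.percentile(dose, [98, 50, 2])
hi = (d2 - d98) / d50                       # smaller = more homogeneous
coverage = np.mean(dose >= prescription)    # fraction of PTV at prescription
print(f"HI {hi:.3f}, PTV coverage {coverage:.3f}")
```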
Conclusions: The overall efficacy and safety of the Desargues Cloud TPS are not significantly different from those of Varian Eclipse, while some efficacy indicators of plans generated by automatic planning, with or without manual adjustments, are even significantly superior to those of fully manual plans from Eclipse. Cloud-based automatic treatment planning additionally increases the efficiency of the treatment planning process and facilitates the sharing of planning knowledge.
Materials and methods: The cloud-based automatic radiation treatment planning system, Desargues Cloud TPS, was designed and developed in a browser/server mode, where all computing-intensive functions are deployed on the server and the user interfaces are implemented on the web. Communication between the browser and the server runs over the local area network (LAN) of a radiotherapy institution. The automatic treatment planning module adopts a hybrid of knowledge-based planning (KBP) and protocol-based automatic iterative optimization (PB-AIO), consisting of three steps: beam angle optimization (BAO), beam fluence optimization (BFO), and machine parameter optimization (MPO). Fifty-three patients from two institutions were enrolled in a multi-center self-controlled clinical validation. For each patient, three IMRT plans were designed. Plans A and B were designed on the Desargues Cloud TPS using automatic planning without and with manual adjustments, respectively. Plan C was designed on the Varian Eclipse TPS using fully manual planning. The efficacy indicators were the heterogeneity index (HI), conformity index (CI), plan quality metric (PQM), overall operation time (OOT), and plan optimization time (POT).
The safety indicators were the gamma indices of dose verification.
Title: Desargues cloud TPS: a cloud-based automatic radiation treatment planning system for IMRT.
Objective: This study aimed to develop and validate a high-precision brain age prediction model by integrating multimodal MRI radiomics features from T1- and T2-weighted images with deep learning. The model was trained on healthy individuals for chronological age estimation and applied to patients with insomnia to calculate the Brain Age Gap (BAG), evaluating whether chronic insomnia is associated with accelerated brain aging.
Methods: A total of 1,200 participants were retrospectively included, comprising 942 healthy controls and 258 patients with insomnia. Healthy data were obtained from the IXI public dataset and Shenzhen Hospital (Futian), Guangzhou University of Chinese Medicine. All insomnia patients were recruited from the same hospital. T1- and T2-weighted MRI underwent standardized preprocessing, including resampling, gray-level discretization, and automated segmentation for radiomics feature extraction. After variance-based feature selection, multimodal features were combined to construct a deep learning regression model trained on healthy subjects and evaluated using mean absolute error (MAE), root mean square error (RMSE), and R2. The model was then applied to the insomnia cohort to estimate BAG, followed by age-bias correction and group comparisons.
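The age-bias correction step mentioned above is commonly implemented by regressing predicted age on chronological age in healthy controls and subtracting the fitted bias before computing the brain age gap (BAG). A sketch on synthetic data, not the study's cohort:

```python
# Linear age-bias correction of brain-age predictions (synthetic data).
# A biased model systematically over-predicts young ages and under-predicts
# old ones; regressing out that trend yields a corrected BAG.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 500)                  # chronological age, years
pred = 0.7 * age + 18 + rng.normal(0, 6, 500)   # biased model predictions

a, b = np.polyfit(age, pred, 1)                 # fit pred ≈ a*age + b on controls
bag_raw = pred - age                            # uncorrected brain age gap
bag_corrected = pred - (a * age + b)            # residual after bias removal
print(f"raw BAG mean {bag_raw.mean():+.2f} y, "
      f"corrected {bag_corrected.mean():+.2f} y")
```

After correction, the control-group BAG is centred on zero by construction, so a nonzero mean in a patient group (as in the insomnia cohort) is interpretable.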
Results: Three models were constructed: T1-based, T2-based, and multimodal fusion. In validation, the T1 model achieved MAE of 7.58 years (R2 = 0.57), the T2 model 7.90 years (R2 = 0.51), and the fusion model 6.42 years (R2 = 0.68; all p < 0.001). The insomnia group showed significantly higher BAG than controls both before (8.10 ± 8.57 vs. 1.26 ± 8.30 years, p < 0.001) and after age correction (1.60 ± 6.49 vs. - 2.18 ± 7.75 years, p < 0.001).
Conclusion: The multimodal MRI radiomics-deep learning fusion model enables accurate brain age prediction and reveals evidence of accelerated brain aging in patients with insomnia.
Title: Multimodal MRI radiomics and deep learning for brain age prediction: age-corrected brain age gap analysis in patients with insomnia. Authors: Shasha Zeng, Jiandong Guo, Junxiong Zhao, Yue Zhou, Yongyi Li, Jingshan Gong. Pub Date: 2026-01-09  DOI: 10.1186/s12938-026-01512-x
Pub Date: 2026-01-09  DOI: 10.1186/s12938-026-01511-y
Qing Liu, Jia-Wang Cao, Zhao-Jin Wang, Yan-Mei Gu
Background: Electrical impedance tomography (EIT) is a non-invasive bedside tool for real-time regional ventilation data, but its correlation with outcomes in different critical illnesses remains unclear.
Methods: This retrospective study included 108 ICU patients (liver disease, n = 48; respiratory failure, n = 36; gastrointestinal bleeding, n = 24) who underwent EIT monitoring between December 2023 and March 2025. EIT measured tidal volume percentage in four regions of interest (ROIs), with 28-day mortality as the primary outcome. Multivariate logistic regression adjusted for key confounders was used for analysis.
Results: Ventilation distribution patterns differed significantly among the three disease groups. The liver disease group showed predominant ventilation in ROI 1-2 (anterior regions), with a mean dorsal-to-ventral ratio (DVR) of 0.41 ± 0.18. The respiratory failure group exhibited more homogeneous distribution with a DVR of 0.76 ± 0.34. Patients with higher ventilation heterogeneity (coefficient of variation > 40%) had significantly higher 28-day mortality (29.5% vs 16.7%, p = 0.022), longer duration of mechanical ventilation (8.7 vs 5.4 days, p = 0.012), and fewer ventilator-free days (14.3 vs 19.6 days, p = 0.009). Multivariate analysis identified DVR < 0.4 (OR 2.84, 95% CI 1.46-5.53, p = 0.002) and ventilation heterogeneity (OR 2.31, 95% CI 1.18-4.52, p = 0.014) as independent predictors of 28-day mortality after adjusting for disease severity, age, and mechanical ventilation parameters.
Conclusions: Regional ventilation patterns vary by underlying disease. Greater heterogeneity and lower DVR independently correlate with worse outcomes. EIT-derived parameters may be prognostic indicators and therapeutic targets for optimizing ventilation in critical illness.
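The two EIT-derived predictors above are simple ratios over the four ROI tidal-volume shares. As the paper does not publish its exact formulas, the sketch below is only an illustration: the ROI ordering (ROIs 1-2 ventral, ROIs 3-4 dorsal), the function names, and the example values are all assumptions.

```python
# Hedged sketch: dorsal-to-ventral ratio (DVR) and ventilation heterogeneity
# (coefficient of variation, CV) from four ROI tidal-volume percentages.
# ROI layout and example numbers are invented for demonstration.
import statistics

def dorsal_to_ventral_ratio(roi_percent):
    """roi_percent: tidal-volume share of ROIs 1-4, listed ventral to dorsal."""
    ventral = roi_percent[0] + roi_percent[1]
    dorsal = roi_percent[2] + roi_percent[3]
    return dorsal / ventral

def coefficient_of_variation(roi_percent):
    """CV of the four ROI shares, expressed as a percentage."""
    mean = statistics.fmean(roi_percent)
    sd = statistics.stdev(roi_percent)  # sample standard deviation
    return 100.0 * sd / mean

# Hypothetical anterior-predominant pattern, similar to the liver-disease group
rois = [38.0, 33.0, 18.0, 11.0]
print(f"DVR = {dorsal_to_ventral_ratio(rois):.2f}, "
      f"CV = {coefficient_of_variation(rois):.1f}%")
```

Under this reading, the example pattern gives a DVR near the 0.41 reported for the liver-disease group and a CV above the 40% heterogeneity threshold used in the study.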
{"title":"Impact of EIT-based regional ventilation distribution on outcomes in different types of critical illness: a retrospective cohort study.","authors":"Qing Liu, Jia-Wang Cao, Zhao-Jin Wang, Yan-Mei Gu","doi":"10.1186/s12938-026-01511-y","DOIUrl":"10.1186/s12938-026-01511-y","url":null,"abstract":"<p><strong>Background: </strong>Electrical impedance tomography (EIT) is a non-invasive bedside tool for real-time regional ventilation data, but its correlation with outcomes in different critical illnesses remains unclear.</p><p><strong>Methods: </strong>This retrospective study included 108 ICU patients (liver disease, n = 48; respiratory failure, n = 36; gastrointestinal bleeding, n = 24) who underwent EIT monitoring between December 2023 and March 2025. EIT measured tidal volume percentage in four regions of interest (ROIs), with 28-day mortality as the primary outcome. Multivariate logistic regression adjusted for key confounders was used for analysis.</p><p><strong>Results: </strong>Ventilation distribution patterns differed significantly among the three disease groups. The liver disease group showed predominant ventilation in ROI 1-2 (anterior regions), with a mean dorsal-to-ventral ratio (DVR) of 0.41 ± 0.18. The respiratory failure group exhibited more homogeneous distribution with a DVR of 0.76 ± 0.34. Patients with higher ventilation heterogeneity (coefficient of variation > 40%) had significantly higher 28-day mortality (29.5% vs 16.7%, p = 0.022), longer duration of mechanical ventilation (8.7 vs 5.4 days, p = 0.012), and fewer ventilator-free days (14.3 vs 19.6 days, p = 0.009). Multivariate analysis identified DVR < 0.4 (OR 2.84, 95% CI 1.46-5.53, p = 0.002) and ventilation heterogeneity (OR 2.31, 95% CI 1.18-4.52, p = 0.014) as independent predictors of 28-day mortality after adjusting for disease severity, age, and mechanical ventilation parameters.</p><p><strong>Conclusions: </strong>Regional ventilation patterns vary by underlying disease. 
Greater heterogeneity and lower DVR independently correlate with worse outcomes. EIT-derived parameters may be prognostic indicators and therapeutic targets for optimizing ventilation in critical illness.</p>","PeriodicalId":8927,"journal":{"name":"BioMedical Engineering OnLine","volume":" ","pages":"20"},"PeriodicalIF":2.9,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12882238/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145942458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-09, DOI: 10.1186/s12938-025-01488-0
Tingrui Zhang, Honglin Wu, Zekun Jiang, Yingying Wang, Rui Ye, Huiming Ni, Chang Liu, Jin Cao, Xuan Sun, Rong Shao, Xiaorong Wei, Yingchun Sun
{"title":"Correction: CT radiomics‑based explainable machine learning model for accurate differentiation of malignant and benign endometrial tumors: a two‑center study.","authors":"Tingrui Zhang, Honglin Wu, Zekun Jiang, Yingying Wang, Rui Ye, Huiming Ni, Chang Liu, Jin Cao, Xuan Sun, Rong Shao, Xiaorong Wei, Yingchun Sun","doi":"10.1186/s12938-025-01488-0","DOIUrl":"10.1186/s12938-025-01488-0","url":null,"abstract":"","PeriodicalId":8927,"journal":{"name":"BioMedical Engineering OnLine","volume":"25 1","pages":"2"},"PeriodicalIF":2.9,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12784577/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145942537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-04, DOI: 10.1186/s12938-025-01502-5
Shuo Gao, Jia Liu, Linqian Li, Di Yang, Yafei Miao, Xu Zhang, Qianqian Han, Yasong Shi, Jianguo Wu, Ke Zhang
Objective: To provide a critical and clinically oriented synthesis of recent deep learning developments for breast cancer imaging across major modalities, with emphasis on model architectures, dataset characteristics, methodological quality, and implications for clinical translation.
Methods: Following PRISMA guidelines, we systematically searched PubMed, Scopus, Web of Science, ScienceDirect, and Google Scholar for studies published from 2020 to 2024 on deep learning applied to breast imaging. Sixty-five studies using convolutional neural networks (CNNs), Transformers, or hybrid architectures were included. Datasets were comparatively profiled, and study quality and risk of bias were appraised using QUADAS-2.
Results: CNN-based classifiers, particularly on mammography and pathology, commonly achieved median accuracies above 90% and AUCs around or above 0.95, while CNN detectors reported high sensitivities and mid-90% accuracies, supporting their potential role as second readers. CNN-derived U-Net variants dominated segmentation tasks, yielding high Dice and IoU values for tumour and fibroglandular-tissue delineation. Transformer and hybrid models showed advantages when global context, multi-view inputs or volumetric data were critical (e.g. dense breasts, DBT, DCE-MRI), where they improved lesion localisation and patient-level risk stratification. However, QUADAS-2 and dataset profiling revealed substantial limitations: most studies were retrospective, single-centre and class-imbalanced, with narrow demographic representation, heterogeneous reference standards and scarce external or prospective validation. These factors raise concerns about bias, overfitting, fairness and robustness in real-world deployment. Only a minority of studies systematically addressed interpretability, workflow integration or regulatory requirements.
Conclusions: Deep learning offers considerable promise to support early detection, risk stratification and workflow efficiency across breast imaging modalities, with CNNs and Transformers providing complementary strengths for local fine-detail versus global contextual modelling. Nevertheless, the current evidence base is constrained by heterogeneous designs, limited reporting of study quality and biased datasets, so reported performance should not be interpreted as definitive proof of clinical readiness. Future research should prioritise multi-centre, demographically diverse cohorts, transparent quality assessment, external and prospective validation, and evaluation of reader and workflow impact. Developing explainable, fairness-aware and privacy-preserving systems-such as those enabled by interpretable architectures and federated learning-will be essential for safe and equitable translation of deep learning tools into routine breast cancer care.
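The Dice and IoU values the review reports for segmentation studies are standard overlap metrics between a predicted and a reference binary mask. A minimal sketch, with invented masks purely for demonstration:

```python
# Illustrative only: Dice coefficient and intersection-over-union (IoU)
# on small binary masks. Masks are fabricated for the example.
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(a, b):
    """Jaccard index |A∩B| / |A∪B| for boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

pred = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
ref  = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
print(dice(pred, ref), iou(pred, ref))
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why segmentation papers often report either one interchangeably.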
{"title":"Application of deep learning technology in breast cancer: a systematic review of segmentation, detection, and classification approaches.","authors":"Shuo Gao, Jia Liu, Linqian Li, Di Yang, Yafei Miao, Xu Zhang, Qianqian Han, Yasong Shi, Jianguo Wu, Ke Zhang","doi":"10.1186/s12938-025-01502-5","DOIUrl":"10.1186/s12938-025-01502-5","url":null,"abstract":"<p><strong>Objective: </strong>To provide a critical and clinically oriented synthesis of recent deep learning developments for breast cancer imaging across major modalities, with emphasis on model architectures, dataset characteristics, methodological quality, and implications for clinical translation.</p><p><strong>Methods: </strong>Following PRISMA guidelines, we systematically searched PubMed, Scopus, Web of Science, ScienceDirect, and Google Scholar for studies published from 2020 to 2024 on deep learning applied to breast imaging. Sixty-five studies using convolutional neural networks (CNNs), Transformers, or hybrid architectures were included. Datasets were comparatively profiled, and study quality and risk of bias were appraised using QUADAS-2.</p><p><strong>Results: </strong>CNN-based classifiers, particularly on mammography and pathology, commonly achieved median accuracies above 90% and AUCs around or above 0.95, while CNN detectors reported high sensitivities and mid-90% accuracies, supporting their potential role as second readers. CNN-derived U-Net variants dominated segmentation tasks, yielding high Dice and IoU values for tumour and fibroglandular-tissue delineation. Transformer and hybrid models showed advantages when global context, multi-view inputs or volumetric data were critical (e.g. dense breasts, DBT, DCE-MRI), where they improved lesion localisation and patient-level risk stratification. 
However, QUADAS-2 and dataset profiling revealed substantial limitations: most studies were retrospective, single-centre and class-imbalanced, with narrow demographic representation, heterogeneous reference standards and scarce external or prospective validation. These factors raise concerns about bias, overfitting, fairness and robustness in real-world deployment. Only a minority of studies systematically addressed interpretability, workflow integration or regulatory requirements.</p><p><strong>Conclusions: </strong>Deep learning offers considerable promise to support early detection, risk stratification and workflow efficiency across breast imaging modalities, with CNNs and Transformers providing complementary strengths for local fine-detail versus global contextual modelling. Nevertheless, the current evidence base is constrained by heterogeneous designs, limited reporting of study quality and biased datasets, so reported performance should not be interpreted as definitive proof of clinical readiness. Future research should prioritise multi-centre, demographically diverse cohorts, transparent quality assessment, external and prospective validation, and evaluation of reader and workflow impact. 
Developing explainable, fairness-aware and privacy-preserving systems-such as those enabled by interpretable architectures and federated learning-will be essential for safe and equitable translation of deep learning tools into routine breast cancer care.</p>","PeriodicalId":8927,"journal":{"name":"BioMedical Engineering OnLine","volume":" ","pages":"19"},"PeriodicalIF":2.9,"publicationDate":"2026-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12866484/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145899210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-02, DOI: 10.1186/s12938-025-01506-1
Melina Giagiozis, Armin Curt, Catherine R Jutzeler, José Zariffa
Background: Wearable cameras provide a means to assess hand function in individuals with spinal cord injury (SCI) beyond clinical settings. Previous studies have found that clinicians acknowledge the potential of egocentric video to monitor and inform rehabilitation. Nonetheless, the need for time-intensive manual review of the footage remains a challenge to its integration into clinical practice. To address this barrier, we investigated the utility of video summarization for egocentric videos of hand use after SCI.
Methods: A dataset comprising 316 egocentric videos from 20 individuals with cervical SCI was used. Individuals wore head-mounted cameras to record daily activities in their home. Three unsupervised video summarization algorithms were applied: DR-DSN (reinforcement learning), CTVSUM (contrastive learning), and CA-SUM (attention-based learning). The resulting summaries were manually evaluated on a subset of five videos (each summarized by all three algorithms) by 15 participants using five criteria rated on a 5-point Likert scale: (C1) inclusion of hand movements, (C2) visibility of difficulties and compensation, (C3) contextual clarity, (C4) depiction of hand function, and (C5) preservation of key information. Additionally, summaries were assessed using computational metrics: coverage, temporal distribution, diversity, and representativeness.
Results: An average manual rating of 3.7 ± 1.2 was observed. Ratings differed significantly across both evaluation criteria (F = 13.69, p < 0.001, η² = 0.167) and algorithms (F = 24.00, p < 0.001, η² = 0.103). In particular, summaries were rated higher for C3 and lower for C2, while CA-SUM consistently received the highest scores. Among the computational metrics, diversity showed a strong negative association with manual ratings (b = -4.8, p = 0.032, R² = 0.827), while representativeness was positively associated (b = 17.8, p = 0.047, R² = 0.779).
Conclusion: All three algorithms produced adequate video summaries that captured essential content. However, enhancing the depiction of aspects such as functional difficulties and compensatory strategies could further improve the clinical value of the summaries. Moreover, discrepancies between computational and manual evaluations highlight the need to train algorithms on more human-centered criteria. Overall, this work demonstrates the potential of automatic video summarization to support the integration of wearable cameras into outpatient SCI rehabilitation.
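The metric-versus-rating associations above (a slope b and an R² per computational metric) are the output of a simple linear regression. The sketch below shows one way to obtain such numbers; the five data points are fabricated for illustration, and only the recipe, not the values, reflects the study.

```python
# Hedged sketch: slope b and R^2 of a linear fit between a per-video
# computational metric and mean manual ratings. All data are invented.
import numpy as np

diversity = np.array([0.55, 0.60, 0.65, 0.70, 0.75])  # hypothetical metric
ratings   = np.array([4.3, 4.1, 3.8, 3.4, 3.3])       # hypothetical mean Likert scores

b, a = np.polyfit(diversity, ratings, 1)              # slope, intercept
pred = a + b * diversity
ss_res = np.sum((ratings - pred) ** 2)
ss_tot = np.sum((ratings - ratings.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"b = {b:.2f}, R^2 = {r2:.3f}")
```

A negative slope here mirrors the study's finding that more "diverse" summaries tended to receive lower manual ratings; computing the reported p-values would additionally require a significance test (e.g. via scipy.stats.linregress).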
{"title":"Video summarization for home-based egocentric footage in spinal cord injury rehabilitation.","authors":"Melina Giagiozis, Armin Curt, Catherine R Jutzeler, José Zariffa","doi":"10.1186/s12938-025-01506-1","DOIUrl":"10.1186/s12938-025-01506-1","url":null,"abstract":"<p><strong>Background: </strong>Wearable cameras provide a means to assess hand function in individuals with spinal cord injury (SCI) beyond clinical settings. Previous studies have found that clinicians acknowledge the potential of egocentric video to monitor and inform rehabilitation. Nonetheless, the need for time-intensive manual review of the footage remains a challenge to its integration into clinical practice. To address this barrier, we investigated the utility of video summarization for egocentric videos of hand use after SCI.</p><p><strong>Methods: </strong>A dataset comprising 316 egocentric videos from 20 individuals with cervical SCI was used. Individuals wore head-mounted cameras to record daily activities in their home. Three unsupervised video summarization algorithms were applied: DR-DSN (reinforcement learning), CTVSUM (contrastive learning), and CA-SUM (attention-based learning). The resulting summaries were manually evaluated on a subset of five videos (each summarized by all three algorithms) by 15 participants using five criteria rated on a 5-point Likert scale: (C1) inclusion of hand movements, (C2) visibility of difficulties and compensation, (C3) contextual clarity, (C4) depiction of hand function, and (C5) preservation of key information. Additionally, summaries were assessed using computational metrics: coverage, temporal distribution, diversity, and representativeness.</p><p><strong>Results: </strong>An average manual rating of 3.7 ± 1.2 was observed. 
Ratings differed significantly across both evaluation criteria (F = 13.69, p < 0.001, <math> <msup><mrow><mi>η</mi></mrow> <mn>2</mn></msup> </math> = 0.167) and algorithms (F = 24.00, p < 0.001, <math> <msup><mrow><mi>η</mi></mrow> <mn>2</mn></msup> </math> = 0.103). In particular, summaries were rated higher for C3 and lower for C2, while CA-SUM consistently received the highest scores. Among the computational metrics, diversity showed a strong negative association with manual ratings (b = -4.8, p = 0.032, R<sup>2</sup> = 0.827), while representativeness was positively associated (b = 17.8, p = 0.047, R<sup>2</sup> = 0.779).</p><p><strong>Conclusion: </strong>All three algorithms produced adequate video summaries that captured essential content. However, enhancing the depiction of aspects such as functional difficulties and compensatory strategies could further improve the clinical value of the summaries. Moreover, discrepancies between computational and manual evaluations highlight the need to train algorithms on more human-centered criteria. Overall, this work demonstrates the potential of automatic video summarization to support the integration of wearable cameras into outpatient SCI rehabilitation.</p>","PeriodicalId":8927,"journal":{"name":"BioMedical Engineering OnLine","volume":" ","pages":"17"},"PeriodicalIF":2.9,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12866043/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145896074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}