EVAD-YOLO: An endoscopic video anomaly detection based on improved YOLOV11.
Pub Date : 2026-02-20 DOI: 10.1007/s11517-026-03532-0
Minghan Dong, Xia Zhang, Xiangwei Zheng, Mingzhe Zhang
In recent years, automated analysis of gastrointestinal endoscopy videos has become increasingly important for early clinical screening as the incidence of gastrointestinal diseases continues to rise. However, the complex characteristics of gastrointestinal lesions pose significant challenges for accurate identification and diagnosis. This paper proposes an Endoscopic Video Anomaly Detection method based on an improved YOLOv11 (EVAD-YOLO) for detecting typical lesions such as gastric ulcers and gastric cancer. Specifically, we construct a Residual Global Expansion Attention (RGEA) module to enhance global contextual perception and improve sensitivity to lesions with complex shapes and color variations. In addition, we design an Enhanced Multi-Scale Fusion (EMSF) module to effectively integrate lesion features across different spatial scales, thereby improving detection robustness for lesions of varying sizes. Furthermore, we build a mixed endoscopic dataset containing polyps, gastric ulcers, and early gastric cancers to comprehensively evaluate the proposed method. Experimental results demonstrate that EVAD-YOLO achieves superior performance, with 90.4% precision, 84.3% recall, and 90.4% mAP50, indicating strong robustness and potential for reliable clinically assisted endoscopic diagnosis.
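The precision and recall figures quoted above follow the standard detection-metric definitions. As a minimal illustration (not the authors' evaluation code, and the counts below are hypothetical, chosen only to reproduce percentages of the same order as those reported):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard detection metrics.

    precision = TP / (TP + FP): fraction of predicted lesions that are real.
    recall    = TP / (TP + FN): fraction of real lesions that were found.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts for illustration only (not taken from the paper):
p, r = precision_recall(tp=904, fp=96, fn=168)
# p == 0.904, r ≈ 0.843 — the same order as the reported 90.4% / 84.3%
```

mAP50 additionally averages precision over recall thresholds at an IoU cutoff of 0.5; that computation is more involved and is omitted here.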
Pub Date : 2026-02-19 DOI: 10.1007/s11517-025-03500-0
Jason Abounader, Bryan Caldwell, Mark Hardy, Jill Kawalec, Kwangtaek Kim
Background/purpose: Researchers and medical experts devised a virtual reality (VR) force feedback system to simulate ingrown toenail removal as a stepping-stone towards a new, immersive form of learning material. The fusion of VR and haptic technologies is an innovative approach to stimulate visual and kinesthetic human senses for learning engagement.
Method: Our bimanual haptic feedback system, tuned with the advice of experts, allows users to physically interact with a 3D deformable virtual foot and perform surgery with various tools, addressing the shortcomings of existing surgical simulation tools in a portable system. The graphic and haptic rendering techniques used to simulate each step of the surgical procedure are described.
Results: Usability and effectiveness were tested with 37 participants, including both podiatric medical students and non-medical students. Medical students improved completion time in all surgical tasks by over 160%. Statistical analysis indicates a significant difference in skill between medical and non-medical students, establishing a baseline correlation between performance and experience and suggesting preliminary system usability.
Conclusion: Post-simulation assessment techniques provide insight into areas needing improvement before a comparative learning-impact study is launched in the future. Nonetheless, the results show a promising direction for using our developed system to improve ingrown toenail removal skills.
Title: Innovative podiatry practice: an immersive VR surgery simulation with bimanual haptic interaction.
Streak artifacts in non-contrast computed tomography (NCCT) can obscure anatomical details and even mimic radiologic signs. Existing methods for artifact reduction have limitations: specialized training data and high annotation costs hinder performance scalability, inadequate anatomical constraints struggle to preserve fine details, and limited generative stability along with suboptimal artifact reduction compromises diagnostic applicability. Leveraging 96,641 CT slices (763 series) from four different CT scanners (100-140 kilovolt peak (kVp), 55-167 tube current-time product (mAs), 0.5-10 mm thickness), we propose a novel guided diffusion method that uses multi-level anatomical segmentations to optimize streak artifact reduction in chest NCCT scans. During training, the model integrates artifact-free CT slices with segmentation maps and anatomical regions of interest (ROIs) via channel-wise concatenation at each diffusion step. During inference, artifact-affected samples are fed into the trained model to generate artifact-free outputs with structural integrity. Statistical analysis revealed a significant (p < 0.05) difference in Signal-to-Noise Ratio (SNR) and Contrast-to-Noise Ratio (CNR) when comparing 47,032 artifact-affected samples to 49,609 artifact-free counterparts. Quantitative assessments demonstrated high consistency between generated outputs and reference-standard artifact-free samples, with lung field SNR values of (26.67, standard deviation (SD) 2.01) vs. (26.11, SD 1.89) and lung-trachea CNR of (3.76, SD 0.77) vs. (3.78, SD 0.56) (both p > 0.05). Compared with four recent methods, our method achieved superior overall Peak Signal-to-Noise Ratio (PSNR) (36.952, SD 0.671), Structural Similarity Index (SSIM) (0.863, SD 0.013), and Dice Similarity Coefficient (DSC) (0.959, SD 0.031), all with p < 0.05.
Moreover, ablation studies indicated that an appropriate segmentation-guidance granularity (Level-2) optimally balances anatomical structure constraints against artifact reduction efficiency, outperforming both coarser- and finer-grained guidance strategies in distinct organ and tissue regions. The proposed method has the potential to improve clinical analysis of chest NCCT by optimizing streak artifact reduction while enhancing medical image quality.
Title: A multi-level segmentation-guided diffusion model for streak artifact reduction in routine non-contrast chest CT.
Authors: Jingxin Liu, Xinran Zhu, Zhangzhen Shi, Donghong An, Lihui Zu, Kailiang Cheng, Zhong Zhang
Pub Date : 2026-02-14 DOI: 10.1007/s11517-026-03515-1
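The SNR and CNR statistics above follow their usual ROI-based definitions. A minimal, framework-free sketch (illustrative only; the intensity samples and exact noise-estimation choices here are assumptions, and the paper's precise formulas may differ):

```python
from statistics import mean, stdev


def snr(roi: list[float]) -> float:
    """Signal-to-noise ratio of a region of interest: mean / sample std."""
    return mean(roi) / stdev(roi)


def cnr(roi_a: list[float], roi_b: list[float], noise_roi: list[float]) -> float:
    """Contrast-to-noise ratio between two ROIs: |mean difference| divided by
    the standard deviation of a reference (noise) region."""
    return abs(mean(roi_a) - mean(roi_b)) / stdev(noise_roi)


# Hypothetical ROI intensity samples, for illustration only:
lung_field = [100.0, 110.0, 105.0, 95.0]
print(snr(lung_field))
```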
Pub Date : 2026-02-13 DOI: 10.1007/s11517-026-03529-9
Lizhen Zhang, Mengxiang Zhu, Sai Jiang, Bo Jiang
Title: Design and evaluation of a passive knee-ankle exoskeleton for walking and squatting: a musculoskeletal simulation study.
Cardiac segmentation and quantification of cardiac function indicators play a crucial role in the clinical diagnosis and treatment of cardiovascular diseases. To address blurred cardiac chamber boundaries and interference from adjacent tissue of similar intensity in computed tomography (CT) images, this paper proposes a 3D cardiac multi-structure segmentation network utilizing Multi-scale Channel Enhancement Attention (MCEA) and Spatial Decomposition with Channel Fusion Attention (SD-CA). The MCEA module integrates channel information from feature maps of various scales within the coding layer, thereby enhancing contextual linkage, strengthening the network's multi-scale feature representation capability, and improving decoding and segmentation performance. The SD-CA module generates spatial and channel attention weights in parallel and combines the three directional features of height, width, and depth, enabling the network to concentrate effectively on the region of interest and mitigate interference from irrelevant structures. Experimental evaluations were conducted on a dataset of 192 cases provided by the People's Hospital of Liaoning Province and on the MM-WHS dataset. Segmentation was achieved for the left ventricle, myocardium, left atrium, right ventricle, and right atrium, with average Dice coefficients of 94.21% and 93.9%, and average 95% Hausdorff distances of 6.5483 and 4.36, respectively. Furthermore, quantitative predictions of the left ventricular ejection fraction (LVEF) and substructure volumes were derived from the segmentation results. The correlation coefficients between predicted and true values exceeded 0.9587, and over 94.8% of the data fell within the Bland-Altman limits of agreement, indicating strong correlation and agreement between predicted and true values.
Title: Cardiac multi-structure segmentation network based on the fused dual attention mechanism.
Authors: Guodong Zhang, Luchang Yang, Yanlin Li, Wenwen Gu, Ronghui Ju, Zhaoxuan Gong, Wei Guo
Pub Date : 2026-02-10 DOI: 10.1007/s11517-025-03512-w
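The Dice coefficients reported above measure voxel-wise overlap between predicted and ground-truth masks. A minimal, framework-free sketch over voxel index sets (the authors' pipeline presumably operates on dense 3D label volumes; the toy masks below are illustrative only):

```python
Voxel = tuple[int, int, int]


def dice(pred: set[Voxel], truth: set[Voxel]) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))


# Toy masks with a 2-voxel overlap (illustrative only):
a = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
b = {(0, 0, 0), (0, 0, 1), (1, 1, 1)}
# dice(a, b) == 2 * 2 / (3 + 3) == 2/3
```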
Pub Date : 2026-02-09 DOI: 10.1007/s11517-026-03528-w
Yunye Cai, Enxiang Shen, Weijing Zhang, Zhibin Jin, Jie Yuan
Accurate muscle volume measurement is crucial for evaluating muscle impairment in healthcare and sports medicine. Compared to traditional methods, 3D ultrasound imaging is noninvasive, flexible, and cost-effective. This study aims to develop a precise volume assessment method for skeletal muscle, specifically the gastrocnemius muscle, based on 3D ultrasound imaging. A practical workflow integrating 3D freehand ultrasound imaging with optical tracking, slice extraction, and alpha-shape-based surface reconstruction was proposed for precise volume assessment. 2D ultrasound images with spatial positions were acquired; target slices were extracted for segmentation, and the alpha-shape algorithm reconstructed the 3D muscle mesh for volume calculation. A phantom experiment using a pork tenderloin validated our method, with a relative deviation of 0.47% compared with the water displacement method. Clinical validation against MRI yielded relative deviations of 0.66% to 5.06% for manual segmentation and 0.28% to 2.58% for automated segmentation (using TransUNet). The method achieved smooth, detailed surfaces and outperformed Marching Cubes and Poisson reconstruction in accuracy and morphological fidelity. The proposed 3D freehand ultrasound workflow enables precise, detailed muscle volume assessment, showing strong agreement with MRI. Its accessibility and accuracy suggest significant potential for clinical and sports medicine applications in monitoring muscle health.
Title: Precise volume assessment for gastrocnemius muscles based on 3D ultrasound imaging.
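Once a closed surface mesh has been reconstructed (the paper uses an alpha-shape), its enclosed volume can be computed with the divergence theorem by summing signed tetrahedron volumes over outward-oriented triangles. A minimal sketch of that final volume-calculation step (not the authors' implementation), verified here on a unit cube:

```python
def mesh_volume(vertices: list[tuple[float, float, float]],
                triangles: list[tuple[int, int, int]]) -> float:
    """Volume enclosed by a closed, consistently outward-oriented triangle
    mesh: |sum over triangles of dot(v0, v1 x v2)| / 6."""
    total = 0.0
    for i, j, k in triangles:
        (x0, y0, z0) = vertices[i]
        (x1, y1, z1) = vertices[j]
        (x2, y2, z2) = vertices[k]
        # cross product v1 x v2, then dot with v0
        cx = y1 * z2 - z1 * y2
        cy = z1 * x2 - x1 * z2
        cz = x1 * y2 - y1 * x2
        total += x0 * cx + y0 * cy + z0 * cz
    return abs(total) / 6.0


# Unit cube: 8 vertices and 12 outward-oriented triangles; expected volume 1.0.
CUBE_V = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0),
          (0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (1.0, 1.0, 1.0), (0.0, 1.0, 1.0)]
CUBE_T = [(0, 3, 2), (0, 2, 1), (4, 5, 6), (4, 6, 7), (0, 1, 5), (0, 5, 4),
          (2, 3, 7), (2, 7, 6), (0, 4, 7), (0, 7, 3), (1, 2, 6), (1, 6, 5)]
```

The same routine applies unchanged to an alpha-shape mesh of a muscle, provided the reconstruction yields a watertight surface with consistent triangle orientation.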