Pub Date: 2025-01-02 | DOI: 10.1007/s11517-024-03248-z
Qingrun Zeng, Lin Yang, Yongqiang Li, Lei Xie, Yuanjing Feng
The segmentation of the retinogeniculate visual pathway (RGVP) enables quantitative analysis of its anatomical structure. Multimodal learning has shown considerable potential for segmenting the RGVP from structural MRI (sMRI) and diffusion MRI (dMRI). However, the intricate skull-base environment and the slender morphology of the RGVP make it difficult for existing methods to fully leverage the complementary information in each modality. In this study, we propose RGVPSeg, a multimodal information fusion network designed to select and optimize complementary information across three modalities: T1-weighted (T1w) images, fractional anisotropy (FA) images, and fiber orientation distribution function (fODF) peaks, with the modalities supervising one another during training. Specifically, we add a supervised master-assistant cross-modal learning framework between the encoder layers of the different modalities so that the characteristics of each modality are more fully exploited, yielding more accurate segmentation. We evaluate RGVPSeg on an MRI dataset of 102 subjects from the Human Connectome Project (HCP) and 10 subjects from a multi-shell diffusion MRI (MDM) dataset; the results are promising, demonstrating that the proposed framework is feasible and outperforms the compared methods. Our code is freely available at https://github.com/yanglin9911/RGVPSeg.
Title: RGVPSeg: multimodal information fusion network for retinogeniculate visual pathway segmentation. Medical & Biological Engineering & Computing.
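The master-assistant mutual-supervision idea can be illustrated with a minimal sketch: an L2 consistency penalty that pulls the assistant branches' encoder features toward a master branch. The choice of master, the penalty form, and the weight below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def master_assistant_loss(feats, master=0, weight=0.1):
    """L2 consistency penalty: the master branch supervises the assistants.

    feats: list of (N, D) feature arrays, e.g. from the T1w, FA, and
    fODF-peak encoder branches. 'master' index and 'weight' are illustrative.
    """
    m = feats[master]
    penalty = sum(((f - m) ** 2).mean() for i, f in enumerate(feats) if i != master)
    return weight * penalty
```

In a full network this term would be added to the segmentation loss at each encoder level, encouraging the modality branches to agree on shared structure.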
Pub Date: 2025-01-01 | Epub Date: 2024-07-31 | DOI: 10.1007/s11517-024-03177-x
Mark Karlov, Ali Abedi, Shehroz S Khan
Exercise-based rehabilitation programs have proven to be effective in enhancing the quality of life and reducing mortality and rehospitalization rates. AI-driven virtual rehabilitation, which allows patients to independently complete exercises at home, utilizes AI algorithms to analyze exercise data, providing feedback to patients and updating clinicians on their progress. These programs commonly prescribe a variety of exercise types, leading to a distinct challenge in rehabilitation exercise assessment datasets: while abundant in overall training samples, these datasets often have a limited number of samples for each individual exercise type. This disparity hampers the ability of existing approaches to train generalizable models with such a small sample size per exercise type. Addressing this issue, this paper introduces a novel supervised contrastive learning framework with hard and soft negative samples that effectively utilizes the entire dataset to train a single model applicable to all exercise types. This model, with a Spatial-Temporal Graph Convolutional Network (ST-GCN) architecture, demonstrated enhanced generalizability across exercises and a decrease in overall complexity. Through extensive experiments on three publicly available rehabilitation exercise assessment datasets, UI-PRMD, IRDS, and KIMORE, our method has proven to surpass existing methods, setting a new benchmark in rehabilitation exercise quality assessment.
Title: Rehabilitation exercise quality assessment through supervised contrastive learning with hard and soft negatives. Medical & Biological Engineering & Computing, pp. 15-28.
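The hard/soft-negative idea can be sketched as a weighted supervised contrastive loss in which negatives flagged as "soft" contribute to the partition term with a reduced weight. The masking scheme and `soft_weight` below are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def supcon_hard_soft(z, labels, soft_mask, temperature=0.1, soft_weight=0.5):
    """Supervised contrastive loss where 'soft' negatives are down-weighted.

    z: (N, D) L2-normalized embeddings; labels: (N,) class ids;
    soft_mask: (N, N) bool, True where a negative pair counts as 'soft'.
    """
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    # positives keep weight 1; soft negatives are down-weighted
    w = np.where(pos, 1.0, np.where(soft_mask, soft_weight, 1.0))
    denom = (np.exp(sim) * w).sum(axis=1, keepdims=True)
    log_prob = sim - np.log(denom)
    # average -log p over each anchor's positive pairs
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Down-weighting soft negatives shrinks the denominator, so anchors are penalized less for similarity to samples that are only weakly negative.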
Pub Date: 2025-01-01 | Epub Date: 2024-08-05 | DOI: 10.1007/s11517-024-03172-2
Sanli Yi, Lingxiang Zhou
Glaucoma is one of the most common causes of blindness in the world. Screening for glaucoma from retinal fundus images with deep learning is now a common approach. In deep learning-based diagnosis, the blood vessels within the optic disc interfere with the prediction, while some pathological information also lies outside the optic disc. Therefore, integrating the original fundus image with a vessel-removed optic disc image can improve diagnostic efficiency. In this paper, we propose a novel multi-step framework named MSGC-CNN for glaucoma diagnosis. In the framework, (1) we combine glaucoma pathological knowledge with a deep learning model, fusing features of the original fundus image and of the optic disc region, from which blood-vessel interference is removed by a U-Net, and make the diagnosis from the fused features; (2) to address the characteristics of glaucoma fundus images, such as the small amount of data, high resolution, and rich feature information, we design a new feature extraction network, RA-ResNet, and combine it with transfer learning. To verify our method, we conduct binary classification experiments on three public datasets, Drishti-GS, RIM-ONE-R3, and ACRIMA, achieving accuracies of 92.01%, 93.75%, and 97.87%, respectively. These results demonstrate a significant improvement over earlier work.
Title: Multi-step framework for glaucoma diagnosis in retinal fundus images using deep learning. Medical & Biological Engineering & Computing, pp. 1-13.
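The feature-level fusion step can be sketched simply: features from the whole fundus image and from the vessel-removed optic-disc region are concatenated and passed to a classifier head. The linear head `w`, `b` below is a hypothetical stand-in for the trained layers.

```python
import numpy as np

def fused_prediction(f_fundus, f_disc, w, b):
    """Concatenate whole-image and vessel-removed disc features, then apply
    a (hypothetical) linear head with sigmoid to get a glaucoma probability."""
    fused = np.concatenate([f_fundus, f_disc], axis=-1)  # feature-level fusion
    logits = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logits))
```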
Pub Date: 2025-01-01 | Epub Date: 2024-08-26 | DOI: 10.1007/s11517-024-03184-y
Shambo Bhattacharya, Devendra K Dubey
The ability of the annulus fibrosus (AF) to transmit multi-directional spinal motion arises from a combination of chemical interactions among its biomolecular constituents - collagen type I (COL-I), collagen type II (COL-II), and the proteoglycans aggrecan and hyaluronan - and mechanical interactions at multiple length scales. However, the mechanistic role of such interactions in spinal motion is unclear. The present work employs a molecular mechanics-finite element (FE) multiscale approach to investigate how molecular-scale collagen and hyaluronan nanostructures in the AF influence spinal motion. For this, an FE model of the lumbar segment is developed that incorporates a multiscale model of the AF collagen fiber, built from COL-I, COL-II, and hyaluronan using a molecular dynamics-cohesive finite element multiscale method. Analyses show that AF collagen fibers contribute primarily to axial rotation (AR) motion, owing to their angle-ply orientation. Maximum fiber strains of 2.45% in AR, observed at the outer annulus, are 25% lower than reported values, indicating that native collagen fibers are softer, which is attributed to the softer non-fibrillar matrix and greater interfibrillar sliding. Additionally, the elastic-zone stiffness of 8.61 Nm/° is 20% higher than the reported range, suggesting that native AF lamellae exhibit lower stiffness resulting from inter-collagen-fiber-bundle sliding. The study has further implications for the hierarchy-driven design of AF-substitute materials.
Title: Role of intra-lamellar collagen and hyaluronan nanostructures in annulus fibrosus on lumbar spine biomechanics: insights from molecular mechanics-finite element-based multiscale analyses. Medical & Biological Engineering & Computing, pp. 139-157.
Pub Date: 2025-01-01 | Epub Date: 2024-08-12 | DOI: 10.1007/s11517-024-03182-0
Joon Yul Choi, Tae Keun Yoo
We developed a scoring system for assessing glaucoma risk from demographic and laboratory factors by employing a no-code approach (automated coding) using ChatGPT-4. Comprehensive health checkup data were collected from the Korea National Health and Nutrition Examination Survey. Using ChatGPT-4, logistic regression was conducted to predict glaucoma without coding or manual numerical processes, and the scoring system was developed from the odds ratios (ORs). ChatGPT-4 also facilitated the no-code creation of an easy-to-use glaucoma risk calculator. The ORs of the high-risk groups were calculated to measure performance. ChatGPT-4 automatically developed a scoring system based on the demographic and laboratory factors and successfully implemented a risk calculator tool. The predictive ability of the scoring system was comparable to that of traditional machine learning approaches. In the validation set, relative to the group with 0 or fewer points, the calculated ORs for glaucoma in the high-risk groups with 1-2, 3-4, and 5+ points were 1.87, 2.72, and 15.36, respectively. This study presents a novel no-code approach for developing a glaucoma risk assessment tool using ChatGPT-4, highlighting its potential to democratize advanced predictive analytics and make them readily available for clinical use in glaucoma detection.
Title: Development of a novel scoring system for glaucoma risk based on demographic and laboratory factors using ChatGPT-4. Medical & Biological Engineering & Computing, pp. 75-87.
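A common recipe for turning logistic-regression odds ratios into an integer point score assigns points proportional to log(OR); the sketch below follows that assumption. The factor names and OR values are hypothetical, not those from the paper.

```python
import math

def or_to_points(odds_ratios):
    """Assign integer points proportional to log(OR); the smallest
    risk-increasing factor anchors the 1-point unit."""
    logs = {k: math.log(v) for k, v in odds_ratios.items() if v > 1.0}
    unit = min(logs.values())
    return {k: round(v / unit) for k, v in logs.items()}

# hypothetical factors and odds ratios (NOT the paper's values)
points = or_to_points({"age>=60": 2.0, "high_IOP": 4.0, "diabetes": 1.5})

def risk_score(present_factors):
    """Total points for the risk factors present in a given subject."""
    return sum(points[f] for f in present_factors)
```

A subject's total points would then be binned (e.g., 1-2, 3-4, 5+) to form the risk groups whose ORs are reported in the abstract.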
Pub Date: 2025-01-01 | Epub Date: 2024-08-17 | DOI: 10.1007/s11517-024-03176-y
Shamanth Shanmuga Prasad, Ulfah Khairiyah Luthfiyani, Youngwoo Kim
Robot-assisted rehabilitation and training systems are used to improve the functional recovery of individuals with mobility limitations. These systems offer structured rehabilitation through precise human-robot interaction, outperforming traditional physical therapy by delivering targeted muscle recovery, optimized walking patterns, and automated training routines tailored to the user's objectives and musculoskeletal attributes. In our research, we propose a walking simulator that uses user-specific musculoskeletal information to replicate natural walking dynamics, accounting for factors such as joint angles, muscular forces, internal user-specific constraints, and external environmental factors. Integrating these factors into robot-assisted training can provide a more realistic rehabilitation environment and serve as a foundation for achieving natural bipedal locomotion. Our research team has developed a robot-assisted training platform (RATP) that generates gait training sets based on user-specific internal and external constraints by incorporating a genetic algorithm (GA). We use Lagrange multipliers to meet a practical requirement of the rehabilitation field: instantly reshaping gait patterns while maintaining their overall characteristics, without an additional gait-pattern search process. Depending on the patient's rehabilitation progress, it is sometimes necessary to reorganize a training session by changing conditions such as the terrain, walking speed, and joint range of motion.
The proposed method allows gait rehabilitation to be performed while stably satisfying ground contact constraints, even after modifying the training parameters.
Title: Gait pattern modification based on ground contact adaptation using the robot-assisted training platform (RATP). Medical & Biological Engineering & Computing, pp. 111-125.
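The Lagrange-multiplier reshaping step admits a compact closed form: minimally adjust a gait pattern x0 so that equality constraints A x = b (e.g., ground-contact conditions at chosen samples) hold exactly. This is a generic sketch of that mathematics, not the RATP implementation; the toy pattern and pinned samples are invented.

```python
import numpy as np

def reshape_gait(x0, A, b):
    """Minimally adjust pattern x0 so that A x = b holds exactly:
    closed-form Lagrange-multiplier solution of
        min_x ||x - x0||^2  s.t.  A x = b,
    i.e. x = x0 - A^T (A A^T)^(-1) (A x0 - b)."""
    lam = np.linalg.solve(A @ A.T, A @ x0 - b)
    return x0 - A.T @ lam

# toy example: pin two samples of a 5-sample pattern to contact heights
x0 = np.array([0.30, 0.10, -0.05, 0.20, 0.40])
A = np.zeros((2, 5))
A[0, 2] = 1.0          # constrain sample 2 ...
A[1, 4] = 1.0          # ... and sample 4
b = np.array([0.0, 0.35])
x = reshape_gait(x0, A, b)
```

Because the correction lies in the row space of A, the unconstrained samples are disturbed as little as possible, preserving the pattern's overall shape.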
Image segmentation is a key step in 3D reconstruction of the hepatobiliary duct tree, which is significant for preoperative planning. In this paper, a novel 3D U-Net variant is designed for segmenting the hepatobiliary ducts from abdominal CT scans; it is composed of a 3D encoder-decoder and a 3D multi-feedforward self-attention module (MFSAM). To extract sufficient semantic and spatial features at high inference speed, a 3D ConvNeXt block is designed as the 3D extension of the 2D ConvNeXt. To improve semantic feature extraction, the MFSAM transfers semantic and spatial features at different scales from the encoder to the decoder. Also, to balance the losses over the voxels and the edges of the hepatobiliary ducts, a boundary-aware overlap cross-entropy loss is proposed that combines the cross-entropy loss, the Dice loss, and the boundary loss. Experimental results indicate that the proposed method outperforms several existing deep networks, as well as a radiologist without extensive experience, for CT segmentation of hepatobiliary ducts, achieving a Dice of 76.54% and an HD of 6.56.
Title: mm3DSNet: multi-scale and multi-feedforward self-attention 3D segmentation network for CT scans of hepatobiliary ducts. Authors: Yinghong Zhou, Yiying Xie, Nian Cai, Yuchen Liang, Ruifeng Gong, Ping Wang. DOI: 10.1007/s11517-024-03183-z. Medical & Biological Engineering & Computing, pp. 127-138.
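A hedged sketch of combining cross-entropy, Dice, and a boundary term (shown in 2D for brevity): the boundary term below is a Dice loss restricted to a one-voxel edge band extracted by morphological erosion, with wrap-around `np.roll` that is acceptable only for illustration. The weights and the edge extraction are assumptions, not the paper's exact loss.

```python
import numpy as np

def soft_dice_loss(p, g, eps=1e-6):
    inter = (p * g).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + g.sum() + eps)

def bce_loss(p, g, eps=1e-7):
    p = np.clip(p, eps, 1.0 - eps)
    return -(g * np.log(p) + (1.0 - g) * np.log(1.0 - p)).mean()

def edge_band(g):
    # one-voxel edge band: mask minus its 4-neighbour erosion
    # (np.roll wraps around the border; fine for an illustration)
    er = g.copy()
    for ax in (0, 1):
        for sh in (1, -1):
            er = np.minimum(er, np.roll(g, sh, axis=ax))
    return g - er

def boundary_aware_loss(p, g, w=(1.0, 1.0, 1.0)):
    edges = edge_band(g)
    b = soft_dice_loss(p * edges, g * edges) if edges.any() else 0.0
    return w[0] * bce_loss(p, g) + w[1] * soft_dice_loss(p, g) + w[2] * b
```

The extra boundary term concentrates gradient signal on thin duct walls, which plain voxel-wise losses tend to under-weight.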
The use of breast density as a biomarker for breast cancer treatment has not been well established, owing to the difficulty of measuring time-series changes in breast density. In this study, we developed a surmising model for breast density from prior mammograms through multiple regression analysis, enabling time-series analysis of breast density. We acquired 1320 mediolateral oblique view mammograms to construct the model. The dependent variable was the breast density of the mammary gland region segmented by certified radiological technologists, and the independent variables were the compressed breast thickness (CBT), the tube current-exposure time product (mAs), the tube voltage (kV), and patient age. The coefficient of determination of the model was 0.868. After applying the model, the correlation coefficients of the three CBT-based groups (thin, 18-36 mm; standard, 38-46 mm; thick, 48-78 mm) were 0.913, 0.945, and 0.867, respectively, with the thick-breast group showing a significantly lower correlation coefficient (p = 0.00231). In conclusion, breast density can be accurately surmised from the CBT, mAs, tube voltage, and patient age, even in the absence of a mammogram image.
Title: Development and validation of the surmising model for volumetric breast density using X-ray exposure conditions in digital mammography. Authors: Mika Yamamuro, Yoshiyuki Asai, Takahiro Yamada, Yuichi Kimura, Kazunari Ishii, Yohan Kondo. DOI: 10.1007/s11517-024-03186-w. Medical & Biological Engineering & Computing, pp. 169-179.
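The modelling step is ordinary multiple linear regression of breast density on CBT, mAs, kV, and age. The sketch below fits such a model on synthetic data (the predictor ranges, coefficients, and noise level are invented, not the study's) and reports the coefficient of determination.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# synthetic predictors (invented ranges, NOT the study's data)
X = np.column_stack([
    rng.uniform(18, 78, n),    # compressed breast thickness, mm
    rng.uniform(40, 120, n),   # tube current-exposure time product, mAs
    rng.uniform(26, 32, n),    # tube voltage, kV
    rng.uniform(35, 75, n),    # age, years
])
beta_true = np.array([-0.4, 0.1, 0.8, -0.2])        # invented effect sizes
y = 30.0 + X @ beta_true + rng.normal(0.0, 2.0, n)  # breast density, %

A = np.column_stack([np.ones(n), X])                # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # least-squares fit
pred = A @ coef
r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The study's reported R² of 0.868 corresponds to `r2` here; the per-group correlation analysis would then be run on subsets split by CBT.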
Gait abnormalities are common in patients with chronic vestibular syndrome (CVS), and stability analysis and gait-feature recognition in CVS patients have clinical significance for diagnosing CVS. This study explored two-dimensional dynamic stability indicators for evaluating gait instability in patients with CVS. The Center of Mass acceleration (COMa) peak of CVS patients was significantly greater than that of the control group (p < 0.05), located closer to the back of the body, and occurred more slowly at the toe-off (TO) moment, which enlarged the proportion of Center of Mass position-velocity combinations within the Region of Velocity Stability (ROSv). The sensitivity, specificity, and accuracy of the Center of Mass velocity (COMv) or COMa peaks in distinguishing CVS patients from controls were 75.0%, 93.7%, and 90.2%, respectively. The two-dimensional ROSv parameters improved sensitivity, specificity, and accuracy in judging patients' gait instability over traditional dynamic stability parameters. The dynamic stability parameters quantitatively described the differences in walking stability between patients with different degrees of CVS and the control group; as CVS impairment increases, the patient's dynamic stability decreases. This study provides a reference for the quantitative evaluation of gait stability in patients with CVS.
{"title":"Evaluation of instability in patients with chronic vestibular syndrome using dynamic stability indicators.","authors":"Yingnan Ma, Xing Gao, Li Wang, Ziyang Lyu, Fei Shen, Haijun Niu","doi":"10.1007/s11517-024-03185-x","DOIUrl":"10.1007/s11517-024-03185-x","url":null,"abstract":"<p><p>Gait abnormalities are common in patients with chronic vestibular syndrome (CVS), and stability analysis and gait feature recognition in CVS patients have clinical significance for diagnosing CVS. This study explored two-dimensional dynamic stability indicators for evaluating gait instability in patients with CVS. The Center of Mass acceleration (COMa) peak of CVS patients was significantly greater than that of the control group (p < 0.05), located closer to the back of the body, and lower at the Toe-off (TO) moment, which enlarged the proportion of Center of Mass position-velocity combinations within the Region of Velocity Stability (ROSv). Using the Center of Mass velocity (COMv) or COMa peaks to distinguish CVS patients from controls yielded a sensitivity of 75.0%, a specificity of 93.7%, and an accuracy of 90.2%. The two-dimensional ROSv parameters improved sensitivity, specificity, and accuracy in judging gait instability in patients compared with traditional dynamic stability parameters. Dynamic stability parameters quantitatively described the differences in dynamic stability during walking between patients with different degrees of CVS and those in the control group. As CVS impairment increases, the patient's dynamic stability decreases.
This study provides a reference for the quantitative evaluation of gait stability in patients with CVS.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"159-168"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
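The sensitivity, specificity, and accuracy figures reported in the abstract above follow the standard confusion-matrix definitions for a binary CVS-vs-control decision. A minimal sketch (not the authors' code; labels below are synthetic, with 1 = CVS and 0 = control):

```python
# Standard binary classification metrics from paired true/predicted labels.

def binary_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # CVS found
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # control found
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarm
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # missed CVS
    sensitivity = tp / (tp + fn)        # true-positive rate among patients
    specificity = tn / (tn + fp)        # true-negative rate among controls
    accuracy = (tp + tn) / len(y_true)  # overall fraction correct
    return sensitivity, specificity, accuracy

# Synthetic example: 4 CVS patients, 6 controls.
sens, spec, acc = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                                 [1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
```

The reported 75.0% / 93.7% / 90.2% are exactly these three quantities computed from the COMv/COMa-peak decision rule on the study cohort.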
Previous 3D encoder-decoder segmentation architectures struggled with fine-grained feature decomposition, resulting in unclear feature hierarchies when fused across layers. Furthermore, the blurred nature of contour boundaries in medical imaging limits the focus on high-frequency contour features. To address these challenges, we propose a Multi-oriented Hierarchical Extraction and Dual-frequency Decoupling Network (HEDN), which consists of three modules: Encoder-Decoder Module (E-DM), Multi-oriented Hierarchical Extraction Module (Multi-HEM), and Dual-frequency Decoupling Module (Dual-DM). The E-DM performs the basic encoding and decoding tasks, while Multi-HEM decomposes and fuses spatial and slice-level features in 3D, enriching the feature hierarchy by weighting them through 3D fusion. Dual-DM separates high-frequency features from the reconstructed network using self-supervision. Finally, the self-supervised high-frequency features separated by Dual-DM are inserted into the process following Multi-HEM, enhancing interactions and complementarities between contour features and hierarchical features, thereby mutually reinforcing both aspects. On the Synapse dataset, HEDN outperforms existing methods, boosting Dice Similarity Score (DSC) by 1.38% and decreasing 95% Hausdorff Distance (HD95) by 1.03 mm. Likewise, on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset, HEDN achieves 0.5% performance gains across all categories.
{"title":"HEDN: multi-oriented hierarchical extraction and dual-frequency decoupling network for 3D medical image segmentation.","authors":"Yu Wang, Guoheng Huang, Zeng Lu, Ying Wang, Xuhang Chen, Xiaochen Yuan, Yan Li, Jieni Liu, Yingping Huang","doi":"10.1007/s11517-024-03192-y","DOIUrl":"10.1007/s11517-024-03192-y","url":null,"abstract":"<p><p>Previous 3D encoder-decoder segmentation architectures struggled with fine-grained feature decomposition, resulting in unclear feature hierarchies when fused across layers. Furthermore, the blurred nature of contour boundaries in medical imaging limits the focus on high-frequency contour features. To address these challenges, we propose a Multi-oriented Hierarchical Extraction and Dual-frequency Decoupling Network (HEDN), which consists of three modules: Encoder-Decoder Module (E-DM), Multi-oriented Hierarchical Extraction Module (Multi-HEM), and Dual-frequency Decoupling Module (Dual-DM). The E-DM performs the basic encoding and decoding tasks, while Multi-HEM decomposes and fuses spatial and slice-level features in 3D, enriching the feature hierarchy by weighting them through 3D fusion. Dual-DM separates high-frequency features from the reconstructed network using self-supervision. Finally, the self-supervised high-frequency features separated by Dual-DM are inserted into the process following Multi-HEM, enhancing interactions and complementarities between contour features and hierarchical features, thereby mutually reinforcing both aspects. On the Synapse dataset, HEDN outperforms existing methods, boosting Dice Similarity Score (DSC) by 1.38% and decreasing 95% Hausdorff Distance (HD95) by 1.03 mm. 
Likewise, on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset, HEDN achieves 0.5% performance gains across all categories.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"267-291"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142308932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
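The Dice Similarity Score (DSC) used to report HEDN's segmentation gains above is the standard overlap measure between a predicted and a ground-truth mask. A minimal sketch (assumed, not HEDN's implementation) for binary voxel masks given as flat 0/1 sequences:

```python
# Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.

def dice(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))  # overlap count
    total = sum(mask_a) + sum(mask_b)                     # combined foreground
    # Convention: two empty masks are treated as a perfect match.
    return 1.0 if total == 0 else 2.0 * inter / total

score = dice([1, 1, 0, 0], [1, 0, 1, 0])  # half of each mask overlaps -> 0.5
```

A reported DSC gain of 1.38% thus means the mean of this quantity over the Synapse test masks rose by 1.38 percentage points; HD95, the complementary boundary metric, is the 95th percentile of surface-to-surface distances and is not sketched here.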