Source localization in EEG requires co-registering the EEG sensor locations with the subject's MRI. Sensor locations are typically captured either by electromagnetic tracking or by 3D scanning of the subject's head with the EEG cap using commercially available 3D scanners. Both methods have drawbacks: electromagnetic tracking is slow and immobile, while 3D scanners are expensive. Photogrammetry offers a cost-effective alternative but requires multiple photos with good spatial sampling of the head to adequately reconstruct the head surface. Post-reconstruction, existing tools for electrode position labelling on the 3D head surface offer limited visual feedback and do not easily accommodate customized montages, which are typical in multi-modal measurements. We introduce Mark3D, an open-source, integrated tool for 3D head-surface reconstruction from phone camera video. It eliminates the need to keep track of spatial sampling during image capture for video-based photogrammetry reconstruction. It also includes blur detection algorithms and a user-friendly interface for electrode labelling and tracking, and integrates with popular toolboxes such as FieldTrip and MNE-Python. The accuracy of the proposed method was benchmarked against the head surface derived from a commercially available handheld 3D scanner, the Einscan-Pro+ (Shining 3D Inc.), which we treat as the "ground truth". We used the reconstructed head surfaces of the ground truth (G1) and the phone camera video (M1080) to mark the EEG electrode locations in 3D space using a dedicated UI provided in the tool. The electrode locations were then used to form pseudo-specific MRI templates for individual subjects to reconstruct source information. Somatosensory source activations in response to vibrotactile stimuli were estimated and compared between G1 and M1080.
The mean positional error of the EEG electrodes between G1 and M1080 in 3D space was 0.09 ± 0.01 mm across different cortical areas, with temporal and occipital areas registering a relatively higher error than regions such as frontal, central, or parietal areas. The error in source reconstruction was 0.033 ± 0.016 mm and 0.037 ± 0.017 mm in the left and right cortical hemispheres, respectively.
Mark3D - A semi-automated open-source toolbox for 3D head-surface reconstruction and electrode position registration using a smartphone camera video.
Pub Date : 2024-11-07 DOI: 10.1007/s11517-024-03228-3
Suranjita Ganguly, Malaaika Mihir Chhaya, Ankita Jain, Aditya Koppula, Mohan Raghavan, Kousik Sarathy Sridharan
Medical & Biological Engineering & Computing
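The headline comparison between the two reconstructions reduces to a mean Euclidean distance over paired electrode coordinates. A minimal sketch of that computation, with small hypothetical arrays standing in for the G1 and M1080 electrode sets (the coordinates below are illustrative, not the study's data):

```python
import numpy as np

def mean_positional_error(coords_a, coords_b):
    """Mean and SD of Euclidean distances between corresponding 3D points (mm)."""
    coords_a = np.asarray(coords_a, dtype=float)
    coords_b = np.asarray(coords_b, dtype=float)
    d = np.linalg.norm(coords_a - coords_b, axis=1)  # per-electrode distance
    return d.mean(), d.std()

# Hypothetical electrode positions (mm) from two reconstructions
g1 = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
m1080 = g1 + np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
mean_err, std_err = mean_positional_error(g1, m1080)
```

Grouping the per-electrode distances by cap region before averaging would yield the per-area breakdown (frontal, temporal, occipital, etc.) the abstract reports.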
This study utilizes radiomics to explore imaging biomarkers for predicting the recurrence of chronic subdural hematoma (CSDH), aiming to improve the prediction of CSDH recurrence risk. Analyzing CT scans from 64 patients with CSDH, we extracted 107 radiomic features and employed recursive feature elimination (RFE) and the XGBoost algorithm for feature selection and model construction. The feature selection process identified six key imaging biomarkers closely associated with CSDH recurrence: flatness, surface area to volume ratio, energy, run entropy, small area emphasis, and maximum axial diameter. These biomarkers were selected for their significance in predicting CSDH recurrence, revealing connections between postoperative variables and recurrence. After feature selection, model performance improved significantly: the XGBoost model demonstrated the best classification performance, with average accuracy improving from 46.82% (before feature selection) to 80.74% and the AUC value increasing from 0.5864 to 0.7998. These results indicate that precise feature selection substantially enhances the model's predictive capability. This study not only reveals imaging biomarkers for CSDH recurrence but also provides valuable insights for future personalized treatment strategies.
Research on imaging biomarkers for chronic subdural hematoma recurrence.
Pub Date : 2024-11-06 DOI: 10.1007/s11517-024-03232-7
Liyang Wu, Yvmei Zhu, Qiuyong Huang, Shuchao Chen, Haoyang Zhou, Zihao Xu, Bo Li, Hongbo Chen, Junhui Lv
Medical & Biological Engineering & Computing
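The RFE step in such a pipeline repeatedly drops the least informative feature until a target count remains. A toy sketch of that elimination loop, using absolute label correlation as a simple stand-in for the XGBoost-based importances the study uses (the data, scoring rule, and feature count are all illustrative assumptions):

```python
import numpy as np

def rfe(X, y, n_keep):
    """Toy recursive feature elimination: repeatedly drop the feature whose
    absolute correlation with the label is weakest. The study ranks features
    with model-based importances; correlation is a simple stand-in here."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in keep]
        keep.pop(int(np.argmin(scores)))  # drop the weakest remaining feature
    return keep

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)                    # binary outcome
noise = rng.normal(size=(200, 4))                            # 4 uninformative features
informative = y[:, None] + 0.1 * rng.normal(size=(200, 2))   # 2 label-driven features
X = np.hstack([noise, informative])                          # columns 4 and 5 carry signal
selected = rfe(X, y, n_keep=2)
```

The loop correctly recovers the two informative columns; in the study, the surviving features are the six radiomic biomarkers listed above.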
Pub Date : 2024-11-06 DOI: 10.1007/s11517-024-03234-5
Zhengxin Tu, Jinghua Xu, Zhenyu Dong, Shuyou Zhang, Jianrong Tan
This paper presents a load-bearing optimization method for customized exoskeleton design based on kinematic gait reconstruction (KGR). For people with acute joint injury, it is often impractical to capture movement gait via computer vision. Instead, 3D reconstruction can be performed from CT (computed tomography) or MRI (magnetic resonance imaging) of the injured area to generate the micro-morphology of the joint occlusion. Innovatively, the disconnected entities can be registered into a whole by surface topography matching with semi-definite computing, further implementing KGR by rebuilding continuous kinematic skeletal flexion postures. To verify the effectiveness of the reconstructed kinematic gait, finite element analysis (FEA) is conducted via Hertz contact theory. The lower limb exoskeleton is taken as a verification instance, where the rod length ratio and angular rotation range are set as design considerations so as to optimize the load-bearing parameters to suit individual kinematic gaits. The instance demonstrates that the proposed KGR provides a design paradigm for optimizing load-bearing capacity, on the basis of which an ergonomic customized exoskeleton can be designed from medical images alone, making it more suitable for the large rehabilitation population.
Load-bearing optimization for customized exoskeleton design based on kinematic gait reconstruction.
Medical & Biological Engineering & Computing
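The FEA validation rests on Hertz contact theory, whose sphere-on-flat case has a standard closed-form solution for contact radius and peak pressure. A sketch of that closed form (the load and material constants below are hypothetical, not values from the paper):

```python
import math

def hertz_sphere_on_flat(F, R, E1, nu1, E2, nu2):
    """Hertzian contact of a sphere (radius R, metres) pressed on a flat with
    force F (newtons): returns contact radius a and peak pressure p_max.
    Standard closed-form solution with effective modulus E*."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    a = (3 * F * R / (4 * E_star)) ** (1 / 3)      # contact radius
    p_max = 3 * F / (2 * math.pi * a**2)           # peak pressure = 1.5 x mean
    return a, p_max

# Hypothetical numbers: 100 N on a 10 mm steel sphere against an aluminium flat
a, p_max = hertz_sphere_on_flat(F=100.0, R=0.01,
                                E1=210e9, nu1=0.3, E2=70e9, nu2=0.33)
```

In the paper's workflow, such analytical contact pressures serve as a cross-check on the FEA of the reconstructed joint surfaces.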
Radiofrequency ablation is a widely accepted, minimally invasive, and effective local treatment for tumors. However, its current application in esophageal cancer treatment is limited to thin and superficial lesions, such as Barrett's esophagus. This study proposes an optimization method using multiple electrodes simultaneously to regulate the temperature field and achieve conformal ablation of tumors. A particle swarm optimization algorithm, coupled with a three-dimensional thermal ablation model, was developed to optimize the status of the functioning electrodes, the optimal voltage (Vopt), and the treatment duration (ttre) for targeted esophageal tumors. This approach takes into account both the electrical and thermal interactions of the electrodes. The results indicate that for esophageal cancers at various stages, with thickness (c) ranging from 4.5 mm to 10.0 mm, major axis (a) ranging from 7.3 mm to 27.3 mm, and minor axis (b) equal to 7.3 mm or 27.3 mm, as well as for non-symmetrical geometries, near-conformal complete tumor coverage (over 99.5%) can be achieved.
This method illustrates the feasibility of precise conformal ablation of esophageal cancers and may also be used for conformal treatment of other intraluminal lesions.
Optimization of three-dimensional esophageal tumor ablation by simultaneous functioning of multiple electrodes.
Pub Date : 2024-11-04 DOI: 10.1007/s11517-024-03230-9
Hongying Wang, Jincheng Zou, Shiqing Zhao, Aili Zhang
Medical & Biological Engineering & Computing
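Particle swarm optimization itself is a short population-based loop. A minimal global-best PSO sketch, with a toy quadratic standing in for the study's voltage/duration objective (swarm size, coefficients, and the objective are all illustrative assumptions):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()        # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy stand-in for the voltage/duration objective: distance to (3.0, 1.5)
best, best_f = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] - 1.5) ** 2, dim=2)
```

In the study, the objective would instead score tumor coverage and healthy-tissue sparing predicted by the coupled thermal ablation model.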
Pub Date : 2024-11-04 DOI: 10.1007/s11517-024-03222-9
Yining Xie, Jingling Gao, Xueyan Bi, Jing Zhao
Cell instance segmentation is a key technology for cervical cancer computer-aided diagnosis systems. However, pixel-level annotation is time-consuming and labor-intensive, making it difficult to obtain large amounts of annotated data, so models are often not fully trained. To address these problems, this paper proposes an unsupervised cervical cell instance segmentation method that integrates cell characteristics. Cervical cells have a clear corresponding structure between the nucleus and cytoplasm, and the method exploits this by building a dual-flow framework to locate the nucleus and cytoplasm and generate high-quality pseudo-labels. In the nucleus segmentation stage, the position and extent of the nucleus are determined using a standard cell-restricted nucleus segmentation method. In the cytoplasm segmentation stage, a multi-angle collaborative segmentation method is used to localize the cytoplasm. First, exploiting the self-similarity of pixel blocks within cells, a cytoplasm segmentation method based on self-similarity map iteration is proposed: pixel blocks are mapped from the perspective of local detail, and the segmentation is iterated. Second, using low-level features such as cell color and shape, a self-supervised heatmap-aware cytoplasm segmentation method obtains an activation map of the cytoplasm from the perspective of global attention. The two methods are fused to determine cytoplasmic regions and, combined with nuclear locations, generate high-quality pseudo-labels. These pseudo-labels are used to train the model cyclically, with a loss strategy that encourages the model to discover new object masks, yielding a segmentation model with better performance. Experimental results show that the method performs well in cytoplasm segmentation: on the ISBI, MS_CellSeg, and Cx22 datasets, it achieved AJI scores of 54.32%, 44.64%, and 66.52%, respectively, outperforming the other typical unsupervised methods evaluated in this article.
Unsupervised cervical cell instance segmentation method integrating cellular characteristics.
Medical & Biological Engineering & Computing
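The AJI (Aggregated Jaccard Index) metric used to evaluate the method matches each ground-truth instance to its best-overlapping prediction, then pools intersections and unions, penalizing unmatched predictions. A small sketch of the computation on toy label maps (the arrays are illustrative):

```python
import numpy as np

def aggregated_jaccard_index(gt, pred):
    """Aggregated Jaccard Index for instance label maps (0 = background).
    Each ground-truth instance is matched to the predicted instance with the
    highest IoU; intersections and unions are pooled, and any unmatched
    predicted instances are added to the union as a penalty."""
    inter_sum, union_sum, used = 0, 0, set()
    for g in np.unique(gt[gt > 0]):
        g_mask = gt == g
        best_iou, best_p, best_i, best_u = 0.0, None, 0, g_mask.sum()
        for p in np.unique(pred[pred > 0]):
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            if union and inter / union > best_iou:
                best_iou, best_p = inter / union, p
                best_i, best_u = inter, union
        inter_sum += best_i
        union_sum += best_u
        if best_p is not None:
            used.add(best_p)
    for p in np.unique(pred[pred > 0]):   # unmatched predictions penalize AJI
        if p not in used:
            union_sum += (pred == p).sum()
    return inter_sum / union_sum

gt = np.array([[1, 1, 0, 2, 2],
               [1, 1, 0, 2, 2]])
pred = np.array([[1, 1, 0, 0, 3],
                 [1, 1, 0, 0, 3]])
aji = aggregated_jaccard_index(gt, pred)
```

Here instance 1 is matched perfectly while instance 2 is only half covered, pulling the pooled score below 1.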
Pub Date : 2024-11-01 Epub Date: 2024-06-18 DOI: 10.1007/s11517-024-03151-7
Muskan Chawla, Surya Narayan Panda, Vikas Khullar
According to the available studies, mobile applications have provided significant support in improving the diverse skills of individuals with social pragmatic communication disorder (SPCD). Over the last decade, SPCD has affected 8 to 11% of individuals, and therapy sessions cost between $50 and $150 per hour. This preliminary study aims to develop an interactive, user-friendly intervention to enhance social and emotional interaction skills in individuals with SPCD. The proposed intervention is an Android application. This pilot study involved 29 human subjects aged 7-13 years with pragmatic communication deficits. In a randomized controlled trial, the intervention was developed and implemented with consideration of caregiver and professional requirements. Improvement was assessed using standard scales, the Social Communication Questionnaire (SCQ) and the Social Communication Disorder Scale (SCDS), and the outcomes were examined through statistical parameters (mean, standard deviation) and tests (t-test). The intervention significantly improved the social and emotional skills of individuals with deficits: SCQ scores rose from mean = 6.48 (SD = 3.37) before the intervention to mean = 8.24 (SD = 3.95) after, and SCDS scores rose from mean = 8.17 (SD = 4.79) to mean = 9.48 (SD = 4.72). The t-scores and p-values indicate significant improvement in participants' performance after completion of the intervention.
The study concluded that the application allows individuals to practice social and emotional interaction skills in a structured, controlled, and interactive environment. The intervention was found acceptable in reviews by caregivers and professionals, based on criteria including user experience, usability, interactivity, reliability, and credibility.
SMILEY-assistive application to support social and emotional skills in SPCD individuals.
Medical & Biological Engineering & Computing, pages 3507-3529
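The before/after comparison here is a paired-samples t-test, whose statistic is simply the mean difference divided by its standard error. A stdlib sketch with hypothetical scores (not the study's data):

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic: mean of the per-subject differences
    divided by the standard error of that mean. Returns (t, dof)."""
    n = len(before)
    diffs = [a - b for a, b in zip(after, before)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical SCQ-style scores for five participants before/after
before = [5, 6, 7, 6, 8]
after = [7, 8, 9, 7, 10]
t, dof = paired_t(before, after)
```

The resulting t is then compared against the t distribution with `dof` degrees of freedom to obtain the p-value the abstract refers to.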
Pub Date : 2024-11-01 Epub Date: 2024-06-07 DOI: 10.1007/s11517-024-03123-x
Chuanba Liu, Wenshuo Wang, Tao Sun, Yimin Song
Ultrasound (US) has been introduced to computer-assisted orthopedic surgery for bone registration owing to its advantages of nonionizing radiation, low cost, and noninvasiveness. However, registration accuracy is limited by US image distortion caused by variations in the acoustic properties of soft tissues. This paper proposes a soft-tissue sound-speed-aware registration method to overcome this challenge. First, a multi-channel overlay feature enhancement strategy is proposed for U²-net to improve bone segmentation performance. Second, the sound speed of soft tissue is estimated by simulating the bone surface distance map to update the US-derived points. Finally, an iterative registration strategy is adopted to optimize the registration result. A phantom experiment was conducted using different registration methods for the femur and tibia/fibula. The fiducial registration error (femur, 0.98 ± 0.08 mm (mean ± SD); tibia/fibula, 1.29 ± 0.19 mm) and the target registration error (less than 2.11 mm) demonstrate the high accuracy of the proposed method.
The experimental results suggest that the proposed method can be integrated into navigation systems that provide surgeons with accurate 3D navigation information.
Soft-tissue sound-speed-aware ultrasound-CT registration method for computer-assisted orthopedic surgery.
Medical & Biological Engineering & Computing, pages 3385-3396
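Fiducial registration error of the kind reported above comes from a least-squares rigid alignment of paired points, classically solved with the Kabsch/SVD method. A numpy sketch on synthetic fiducials (the point sets and transform are illustrative):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst,
    plus the resulting fiducial registration error (RMS distance)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)   # center both clouds
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)             # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # guard against reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    fre = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
    return R, t, fre

# Hypothetical bone-surface fiducials: dst is src rotated 30 degrees about z and shifted
theta = np.radians(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
dst = src @ Rz.T + np.array([5.0, -2.0, 1.0])
R, t, fre = rigid_register(src, dst)
```

With noise-free correspondences the FRE is numerically zero; in the phantom experiment, sound-speed distortion of the US-derived points is what inflates it.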
Pub Date : 2024-11-01 Epub Date: 2024-06-07 DOI: 10.1007/s11517-024-03131-x
Jaime Collado-Montañez, Pilar López-Úbeda, Mariia Chizhikova, M Carlos Díaz-Galiano, L Alfonso Ureña-López, Teodoro Martín-Noguerol, Antonio Luna, M Teresa Martín-Valdivia
This paper presents the implementation of two automated text classification systems for prostate cancer findings based on the PI-RADS criteria: a traditional machine learning model using XGBoost and a language-model-based approach using RoBERTa. The study focused on Spanish-language radiology reports of prostate MRI, a setting that had not been explored before. The results show that the RoBERTa model outperforms the XGBoost model, although both achieve promising results. Furthermore, the best-performing system was integrated into the radiology company's information systems as an API, operating in a real-world environment.
{"title":"Automatic text classification of prostate cancer malignancy scores in radiology reports using NLP models.","authors":"Jaime Collado-Montañez, Pilar López-Úbeda, Mariia Chizhikova, M Carlos Díaz-Galiano, L Alfonso Ureña-López, Teodoro Martín-Noguerol, Antonio Luna, M Teresa Martín-Valdivia","doi":"10.1007/s11517-024-03131-x","DOIUrl":"10.1007/s11517-024-03131-x","url":null,"abstract":"<p><p>This paper presents the implementation of two automated text classification systems for prostate cancer findings based on the PI-RADS criteria. Specifically, a traditional machine learning model using XGBoost and a language model-based approach using RoBERTa were employed. The study focused on Spanish-language radiological MRI prostate reports, which has not been explored before. The results demonstrate that the RoBERTa model outperforms the XGBoost model, although both achieve promising results. Furthermore, the best-performing system was integrated into the radiological company's information systems as an API, operating in a real-world environment.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"3373-3383"},"PeriodicalIF":2.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11485118/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141285168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-06-14 | DOI: 10.1007/s11517-024-03144-6
Jiaqing Zhao, Jianfeng Zhu, Jiangnan He, Guogang Cao, Cuixia Dai
Retinal disorders are a major cause of irreversible vision loss, which can be mitigated through accurate and early diagnosis. Conventionally, fundus images serve as the gold standard for detecting retinal diseases. In recent years, more and more researchers have employed deep learning methods for diagnosing ophthalmic diseases using fundus photography datasets. Most of these studies focus on diagnosing a single disease in fundus images, so the diagnosis of multiple diseases remains challenging. In this paper, we propose a framework that combines ResNet and Transformer for multi-label classification of retinal disease. This model employs ResNet to extract image features, utilizes Transformer to capture global information, and enhances the relationships between categories through learnable label embedding. On the publicly available Ocular Disease Intelligent Recognition (ODIR-5K) dataset, the proposed method achieves a mean average precision of 92.86%, an area under the curve (AUC) of 97.27%, and a recall of 90.62%, outperforming other state-of-the-art approaches for multi-label classification. The proposed method represents a significant advancement in the field of retinal disease diagnosis, offering a more accurate, efficient, and comprehensive model for the detection of multiple retinal conditions.
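The reported figures are standard multi-label metrics. A small numpy sketch of macro-averaged average precision, the quantity behind the 92.86% figure, in its usual ranking form (not necessarily the authors' exact variant):

```python
import numpy as np

def average_precision(y_true, scores):
    """AP for one label: precision averaged over the ranks of true positives."""
    order = np.argsort(-scores)                    # rank samples by score, descending
    hits = y_true[order]
    precision_at_k = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precision_at_k * hits).sum() / max(hits.sum(), 1))

def mean_average_precision(Y, S):
    """Macro-average of per-label AP. Y: binary labels (n, L); S: scores (n, L)."""
    return float(np.mean([average_precision(Y[:, k], S[:, k])
                          for k in range(Y.shape[1])]))
```

A model that ranks every positive sample above every negative one for each disease label attains a mAP of 1.0; any ranking error for any label lowers the macro average.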
{"title":"Multi-label classification of retinal diseases based on fundus images using Resnet and Transformer.","authors":"Jiaqing Zhao, Jianfeng Zhu, Jiangnan He, Guogang Cao, Cuixia Dai","doi":"10.1007/s11517-024-03144-6","DOIUrl":"10.1007/s11517-024-03144-6","url":null,"abstract":"<p><p>Retinal disorders are a major cause of irreversible vision loss, which can be mitigated through accurate and early diagnosis. Conventionally, fundus images are used as the gold diagnosis standard in detecting retinal diseases. In recent years, more and more researchers have employed deep learning methods for diagnosing ophthalmic diseases using fundus photography datasets. Among the studies, most of them focus on diagnosing a single disease in fundus images, making it still challenging for the diagnosis of multiple diseases. In this paper, we propose a framework that combines ResNet and Transformer for multi-label classification of retinal disease. This model employs ResNet to extract image features, utilizes Transformer to capture global information, and enhances the relationships between categories through learnable label embedding. On the publicly available Ocular Disease Intelligent Recognition (ODIR-5 k) dataset, the proposed method achieves a mean average precision of 92.86%, an area under the curve (AUC) of 97.27%, and a recall of 90.62%, which outperforms other state-of-the-art approaches for the multi-label classification. 
The proposed method represents a significant advancement in the field of retinal disease diagnosis, offering a more accurate, efficient, and comprehensive model for the detection of multiple retinal conditions.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"3459-3469"},"PeriodicalIF":2.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141318749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical image segmentation commonly involves diverse tissue types and structures, including tasks such as blood vessel segmentation and nerve fiber bundle segmentation. Enhancing the continuity of segmentation outcomes is a pivotal challenge in medical image segmentation, driven by clinical applications such as disease localization and quantification. In this study, a novel segmentation model is specifically designed for retinal vessel segmentation, leveraging vessel orientation information, boundary constraints, and continuity constraints to improve segmentation accuracy. To achieve this, we cascade U-Net with a long short-term memory (LSTM) network. U-Net is characterized by a small number of parameters and high segmentation efficiency, while LSTM offers a parameter-sharing capability. Additionally, we introduce an orientation information enhancement module inserted into the model's bottom layer to obtain feature maps containing orientation information through an orientation convolution operator. Furthermore, we design a new hybrid loss function that consists of connectivity loss, boundary loss, and cross-entropy loss. Experimental results demonstrate that the model achieves excellent segmentation outcomes across three widely recognized retinal vessel segmentation datasets, CHASE_DB1, DRIVE, and ARIA.
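The exact forms of the connectivity and boundary terms are the paper's contribution and are not given in the abstract. As a rough sketch under our own assumptions (not the authors' loss), a hybrid loss can combine pixel-wise binary cross-entropy with a finite-difference edge-agreement penalty standing in for the boundary term:

```python
import numpy as np

def hybrid_loss(pred, target, w_bce=1.0, w_boundary=0.5):
    """Illustrative hybrid loss: BCE plus a crude boundary-agreement penalty.

    pred, target: 2-D arrays in [0, 1] of per-pixel probabilities and labels.
    """
    eps = 1e-7
    # Pixel-wise binary cross-entropy.
    bce = -np.mean(target * np.log(pred + eps)
                   + (1 - target) * np.log(1 - pred + eps))
    # Boundary proxy: mismatch of horizontal/vertical gradient magnitudes.
    gx = np.abs(np.diff(pred, axis=1)) - np.abs(np.diff(target, axis=1))
    gy = np.abs(np.diff(pred, axis=0)) - np.abs(np.diff(target, axis=0))
    boundary = np.mean(gx ** 2) + np.mean(gy ** 2)
    return w_bce * bce + w_boundary * boundary
```

A true connectivity loss would additionally penalize breaks along a vessel's centerline, which this pixel-local sketch does not capture.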
{"title":"VascuConNet: an enhanced connectivity network for vascular segmentation.","authors":"Muwei Jian, Ronghua Wu, Wenjin Xu, Huixiang Zhi, Chen Tao, Hongyu Chen, Xiaoguang Li","doi":"10.1007/s11517-024-03150-8","DOIUrl":"10.1007/s11517-024-03150-8","url":null,"abstract":"<p><p>Medical image segmentation commonly involves diverse tissue types and structures, including tasks such as blood vessel segmentation and nerve fiber bundle segmentation. Enhancing the continuity of segmentation outcomes represents a pivotal challenge in medical image segmentation, driven by the demands of clinical applications, focusing on disease localization and quantification. In this study, a novel segmentation model is specifically designed for retinal vessel segmentation, leveraging vessel orientation information, boundary constraints, and continuity constraints to improve segmentation accuracy. To achieve this, we cascade U-Net with a long-short-term memory network (LSTM). U-Net is characterized by a small number of parameters and high segmentation efficiency, while LSTM offers a parameter-sharing capability. Additionally, we introduce an orientation information enhancement module inserted into the model's bottom layer to obtain feature maps containing orientation information through an orientation convolution operator. Furthermore, we design a new hybrid loss function that consists of connectivity loss, boundary loss, and cross-entropy loss. 
Experimental results demonstrate that the model achieves excellent segmentation outcomes across three widely recognized retinal vessel segmentation datasets, CHASE_DB1, DRIVE, and ARIA.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"3543-3554"},"PeriodicalIF":2.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141428092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}