Enhanced Brain Tumor Segmentation Using CBAM-Integrated Deep Learning and Area Quantification
Pub Date: 2025-08-01 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/2149042
Rafiqul Islam, Sazzad Hossain
Brain tumors are complex clinical lesions with diverse morphological characteristics, making accurate segmentation from MRI scans a challenging task. Manual segmentation by radiologists is time-consuming and susceptible to human error, so automated approaches are needed to delineate tumor boundaries and quantify tumor burden accurately and efficiently. The presented work integrates a convolutional block attention module (CBAM) into a deep learning architecture to enhance the accuracy of MRI-based brain tumor segmentation. The network is built on a VGG19-based U-Net, augmented with depthwise and pointwise convolutions to improve feature extraction and processing efficiency. The proposed framework also incorporates tumor area measurement alongside segmentation, making it a comprehensive tool for early-stage tumor analysis. Several evaluation metrics are used to assess segmentation performance; these metrics quantify the overlap between predicted tumor masks and ground-truth annotations, indicating the accuracy and reliability of the segmentation algorithm. Following segmentation, the extent of each segmented tumor is computed by counting the pixels within the predicted mask and multiplying by the area of each pixel (or the volume of each voxel for volumetric data). The computed tumor areas offer quantifiable data for further investigation and clinical interpretation. Overall, the proposed methodology is expected to improve segmentation accuracy, efficiency, and clinical relevance compared with existing methods, supporting better diagnosis, treatment planning, and monitoring of patients with brain tumors.
{"title":"Enhanced Brain Tumor Segmentation Using CBAM-Integrated Deep Learning and Area Quantification.","authors":"Rafiqul Islam, Sazzad Hossain","doi":"10.1155/ijbi/2149042","DOIUrl":"10.1155/ijbi/2149042","url":null,"abstract":"<p><p>Brain tumors are complex clinical lesions with diverse morphological characteristics, making accurate segmentation from MRI scans a challenging task. Manual segmentation by radiologists is time-consuming and susceptible to human error. Consequently, automated approaches are anticipated to accurately delineate tumor boundaries and quantify tumor burden, addressing these challenges efficiently. The presented work integrates a convolutional block attention module (CBAM) into a deep learning architecture to enhance the accuracy of MRI-based brain tumor segmentation. The deep learning network is built upon a VGG19-based U-Net model, augmented with depthwise and pointwise convolutions to improve feature extraction and processing efficiency during brain tumor segmentation. Furthermore, the proposed framework enhances segmentation precision while simultaneously incorporating tumor area measurement, making it a comprehensive tool for early-stage tumor analysis. Several qualitative assessments are used to assess the performance of the model in terms of tumor segmentation analysis. The qualitative metrics typically analyze the overlap between predicted tumor masks and ground truth annotations, providing information on the segmentation algorithms' accuracy and dependability. Following segmentation, a new approach is used to compute the extent of segmented tumor areas in MRI scans. This involves counting the number of pixels within the segmented tumor masks and multiplying by their area or volume. The computed tumor areas offer quantifiable data for future investigation and clinical interpretation. In general, the proposed methodology is projected to improve segmentation accuracy, efficiency, and clinical relevance compared to existing methods, resulting in better diagnosis, treatment planning, and monitoring of patients with brain tumors.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"2149042"},"PeriodicalIF":1.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12334286/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144817900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Deep Learning-Based Subabdominal MR Image Segmentation During Rectal Cancer Treatment: Exploiting Multiscale Feature Pyramid Network and Bidirectional Cross-Attention Mechanism
Pub Date: 2025-07-23 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/7560099
Yu Xiao, Xin Yang, Sijuan Huang, Lihua Guo
Background: This study is aimed at solving the misalignment and semantic gap caused by multiple convolutional and pooling operations in U-Net while segmenting subabdominal MR images during rectal cancer treatment. Methods: We propose a new approach for MR image segmentation based on a multiscale feature pyramid network and a bidirectional cross-attention mechanism. Our approach comprises two innovative modules: (1) we use dilated convolution and a multiscale feature pyramid network in the encoding phase to mitigate the semantic gap, and (2) we implement a bidirectional cross-attention mechanism to preserve spatial information in U-Net and reduce misalignment. Results: Experimental results on a subabdominal MR image dataset demonstrate that our proposed method outperforms existing methods. Conclusion: A multiscale feature pyramid network effectively reduces the semantic gap, and the bidirectional cross-attention mechanism facilitates feature alignment between the encoding and decoding stages.
{"title":"Enhancing Deep Learning-Based Subabdominal MR Image Segmentation During Rectal Cancer Treatment: Exploiting Multiscale Feature Pyramid Network and Bidirectional Cross-Attention Mechanism.","authors":"Yu Xiao, Xin Yang, Sijuan Huang, Lihua Guo","doi":"10.1155/ijbi/7560099","DOIUrl":"10.1155/ijbi/7560099","url":null,"abstract":"<p><p><b>Background:</b> This study is aimed at solving the misalignment and semantic gap caused by multiple convolutional and pooling operations in U-Net while segmenting subabdominal MR images during rectal cancer treatment. <b>Methods:</b> We propose a new approach for MR Image Segmentation based on a multiscale feature pyramid network and a bidirectional cross-attention mechanism. Our approach comprises two innovative modules: (1) We use dilated convolution and a multiscale feature pyramid network in the encoding phase to mitigate the semantic gap, and (2) we implement a bidirectional cross-attention mechanism to preserve spatial information in U-Net and reduce misalignment. <b>Results:</b> Experimental results on a subabdominal MR image dataset demonstrate that our proposed method outperforms existing methods. <b>Conclusion:</b> A multiscale feature pyramid network effectively reduces the semantic gap, and the bidirectional cross-attention mechanism facilitates feature alignment between the encoding and decoding stages.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"7560099"},"PeriodicalIF":1.3,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12313379/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144761710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Tumour Detection Using VGG-Based Feature Extraction With Modified DarkNet-53 Model
Pub Date: 2025-05-30 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/5535505
S Trisheela, Roshan Fernandes, Anisha P Rodrigues, S Supreeth, B J Ambika, Piyush Kumar Pareek, Rakesh Kumar Godi, G Shruthi
The objective of AI research and development is to create intelligent systems capable of performing tasks and reasoning like humans. Artificial intelligence extends beyond pattern recognition, planning, and problem-solving, particularly in the realm of machine learning, where deep learning frameworks play a pivotal role. This study focuses on enhancing brain tumour detection in MRI scans using deep learning techniques. Malignant brain tumours result from abnormal cell growth, leading to severe neurological complications and high mortality rates. Early diagnosis is essential for effective treatment, and our research aims to improve detection accuracy through advanced AI methodologies. We propose a modified DarkNet-53 architecture, optimized with invasive weed optimization (IWO), to extract critical features from preprocessed MRI images. The model's performance is assessed using accuracy, recall, loss, and AUC, achieving a 95% success rate on a dataset of 3264 MRI scans. The results demonstrate that our approach surpasses existing methods in accurately identifying a wide range of brain tumours at an early stage, contributing to improved diagnostic precision and patient outcomes.
{"title":"Brain Tumour Detection Using VGG-Based Feature Extraction With Modified DarkNet-53 Model.","authors":"S Trisheela, Roshan Fernandes, Anisha P Rodrigues, S Supreeth, B J Ambika, Piyush Kumar Pareek, Rakesh Kumar Godi, G Shruthi","doi":"10.1155/ijbi/5535505","DOIUrl":"10.1155/ijbi/5535505","url":null,"abstract":"<p><p>The objective of AI research and development is to create intelligent systems capable of performing tasks and reasoning like humans. Artificial intelligence extends beyond pattern recognition, planning, and problem-solving, particularly in the realm of machine learning, where deep learning frameworks play a pivotal role. This study focuses on enhancing brain tumour detection in MRI scans using deep learning techniques. Malignant brain tumours result from abnormal cell growth, leading to severe neurological complications and high mortality rates. Early diagnosis is essential for effective treatment, and our research aims to improve detection accuracy through advanced AI methodologies. We propose a modified DarkNet-53 architecture, optimized with invasive weed optimization (IWO), to extract critical features from preprocessed MRI images. The model's presentation is assessed using accuracy, recall, loss, and AUC, achieving a 95% success rate on a dataset of 3264 MRI scans. The results demonstrate that our approach surpasses existing methods in accurately identifying a wide range of brain tumours at an early stage, contributing to improved diagnostic precision and patient outcomes.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"5535505"},"PeriodicalIF":3.3,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12143945/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of the Effect of Antenna-to-Head Distance for Microwave Brain Imaging Applications
Pub Date: 2025-05-04 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/8872566
Farhana Parveen, Parveen Wahid
Wideband antennas are extensively used in many medical applications that require the placement of the antenna on or near a human body. The performance of the antenna should remain compliant with the requirements of the target application when placed in front of the subject under investigation. Since the performance of an antenna varies when its distance from the subject changes, the effect of varying the distance of a miniaturized wideband antipodal Vivaldi antenna from a numerical head model is analyzed in this work. The analyses demonstrate whether the antenna performance and its effect on the head comply with the requirements for the intended application of microwave brain imaging. It is observed that increasing the antenna-head distance raises the background noise in the received signal, whereas reducing the distance raises radiation-safety concerns for the head. Hence, the optimum distance should provide a good compromise between signal reception at the antenna and radiation safety for the head. As the optimum antenna-to-head distance may vary with the antenna, the measurement system, and the surrounding medium, this work presents a basic analysis procedure for finding the appropriate antenna distance for the intended application.
{"title":"Analysis of the Effect of Antenna-to-Head Distance for Microwave Brain Imaging Applications.","authors":"Farhana Parveen, Parveen Wahid","doi":"10.1155/ijbi/8872566","DOIUrl":"https://doi.org/10.1155/ijbi/8872566","url":null,"abstract":"<p><p>Wideband antennas are extensively used in many medical applications, which require the placement of the antenna on or near a human body. The performance of the antenna should remain compliant with the requirements of the target application when placed in front of the subject under investigation. Since the performance of an antenna varies when the distance from the subject is changed, the effect of varying the distance of a miniaturized wideband antipodal Vivaldi antenna from a numerical head model is analyzed in this work. The analyses can demonstrate whether the antenna performance and its effect on the head aptly comply with the requirements for the intended application of microwave brain imaging. It is observed that, when the antenna-head distance is increased, the background noise in the received signal is enhanced, whereas when the distance is reduced, the radiation-safety consideration on the head is affected. Hence, the optimum distance should provide a good compromise in terms of both signal receptibility by the antenna and radiation safety on the head. As the optimum antenna-to-head distance may vary with the change in antenna, measurement system, and the surrounding medium, this work presents a basic analysis procedure to find the appropriate antenna distance for the intended application.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"8872566"},"PeriodicalIF":3.3,"publicationDate":"2025-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12066184/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144006452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessments of Medical Student's Knowledge About Radiation Protection and Different Imaging Modalities in Jeddah, Saudi Arabia
Pub Date: 2025-04-24 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/1528291
Raghad Aljondi, Rahaf Alem, Rowa Aljondi, Abdulrahman Tajaldeen, Salem Saeed Alghamdi, Mohammed Majdi Toras
Introduction: Doctors can play a significant role in contributing to patient safety with respect to exposure to ionizing radiation. Therefore, healthcare professionals should have adequate knowledge about radiation risk and protection across different medical imaging examinations. This study aims to evaluate knowledge about radiation protection (RP) and the applications of different imaging modalities (IMs) among medical students in their clinical years and interns in Jeddah, Saudi Arabia. Materials and Methods: A cross-sectional study based on an online questionnaire was performed in Jeddah, Saudi Arabia, on 170 medical students during January 2024. The participants included clinical-years medical students (Years 4 to 6) and interns of both genders; basic-year medical students, specialists, and consultants were excluded. For each participant, the percentage of correct answers was calculated separately for RP knowledge and IM knowledge, giving each participant two scores: an RP knowledge score (RPKS) and an IM knowledge score (IMKS). Results: A total of 170 medical students responded and completed the questionnaire. The overall levels of awareness and knowledge were determined from the questionnaire scores: students in this group had a low average RP knowledge score of 43 but a moderate-to-high average IM knowledge score of 68. For the RPKS, the best participant scored 82 and the worst scored 0, whereas for the IMKS, the best participant scored 100 and the worst scored 0. Scores varied among participants with a standard deviation of 19 for the RPKS and 31 for the IMKS. Conclusions: The assessment of medical students' knowledge regarding radiation exposure in diagnostic modalities reveals a low level of confidence in their knowledge of ionizing radiation dose parameters. Furthermore, the mean overall knowledge scores indicate a need to improve RP knowledge among medical students. To address this gap, a comprehensive revision of the radiology component of the undergraduate medical curriculum is required, enhancing active-learning approaches and integrating radiation safety courses early in the curriculum. Medical education institutions could implement ongoing workshops, online modules, and certification programs to reinforce radiation safety principles.
{"title":"Assessments of Medical Student's Knowledge About Radiation Protection and Different Imaging Modalities in Jeddah, Saudi Arabia.","authors":"Raghad Aljondi, Rahaf Alem, Rowa Aljondi, Abdulrahman Tajaldeen, Salem Saeed Alghamdi, Mohammed Majdi Toras","doi":"10.1155/ijbi/1528291","DOIUrl":"https://doi.org/10.1155/ijbi/1528291","url":null,"abstract":"<p><p><b>Introduction:</b> Doctors can play a significant role in attributing to patient safety concerning exposure to ionizing radiation. Therefore, healthcare professionals should have adequate knowledge about radiation risk and protection of different medical imaging examinations. This study aims to evaluate the knowledge about radiation protection (RP) and applications of different imaging modalities (IMs) among medical students in their clinical years and intern, in Jeddah, Saudi Arabia. <b>Materials and Methods:</b> A cross-sectional study based on an online questionnaire was performed in Jeddah, Saudi Arabia, on 170 medical students during January 2024; the study participants included clinical years medical students (from Years 4 to 6) and interns of both gender and basic year medical students, and specialists and consultants were excluded. For each participant, the percentage of correct answers was calculated for the knowledge RP and knowledge in IMs separately, and each participant will have two scores, RP knowledge score (RPKS) and IM knowledge score (IMKS). <b>Results:</b> A total of 170 medical students responded and completed the questionnaire. The overall levels of awareness and knowledge of the students was determined through calculations of their scores in answering the questionnaire; students in this study group have low average knowledge score in RP, which is 43, while they have moderate-high knowledge score in IMs, which is 68. Regarding the knowledge score, for the RPKS, the best participant scored 82, while the worst scored 0, whereas for IMKS, the best participant score 100, while the worst scored 0. However, according to the SD, participants generally differ between each other by 19 in RPKS and 31 in IMKS. <b>Conclusions:</b> The assessments of medical students' knowledge regarding radiation exposure in diagnostic modalities reveal a low level of confidence in their knowledge of ionizing radiation dose parameters. Furthermore, the mean scores on overall knowledge assessments indicate a need for improvement in RP knowledge for medical students. To address this gap, a comprehensive modification of the undergraduate medical curriculum's radiology component is required by enhancing active learning approaches and integrating radiation safety courses early in the medical curriculum. Medical education institutions could implement ongoing workshops, online modules, and certification programs to reinforce radiation safety principles.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"1528291"},"PeriodicalIF":3.3,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12045688/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143989135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of Anatomical and Indication-Based Diagnostic Reference Levels (DRLs) in Head CT Imaging: Implications for Radiation Dose Management
Pub Date: 2025-03-19 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/6464273
Benard Ohene-Botwe, Samuel Anim-Sampong, Robert Saizi
Introduction: Many diagnostic reference levels (DRLs) in computed tomography (CT) imaging are based mainly on anatomical locations and often overlook variations in radiation exposure due to different clinical indications. While indication-based DRLs, derived from dose descriptors such as the volume-weighted CT dose index (CTDIvol) and dose-length product (DLP), are recommended for optimising patient radiation exposure, many studies still use anatomical-based DRL values. This study is aimed at quantifying the differences between anatomical and indication-based DRL values in head CT imaging and assessing their implications for radiation dose management, supporting a clearer distinction between indication-based and anatomical DRLs in patient dose management. Methods: Employing a retrospective quantitative study design, we developed and compared anatomical and common indication-based DRL values using a dataset of head CT scans with similar characteristics. The indications included were brain tumor/intracranial space-occupying lesion (ISOL), head injury/trauma, stroke, and anatomical examinations. Data analysis was conducted using SPSS Version 29. Results: Using anatomical-based DLP DRL values for head CT examinations underestimates the median, 25th percentile, and 75th percentile values for head injury/trauma by 20.2%, 30.0%, and 14.5%, respectively, in single-phase head CT procedures. Conversely, for the entire examination, using the anatomical-based DLP DRL as a benchmark for CT stroke overestimates the median, 25th percentile, and 75th percentile values by 18.3%, 23.9%, and 13.5%, while brain tumor/ISOL DLP values are underestimated by 62.6%, 60.4%, and 71.8%, respectively. Conclusion: The study highlights that using anatomical DLP DRL values for specific indications in head CT scans can lead to underestimated or overestimated DLP values, making them less reliable for radiation dose management than indication-based DRLs. Therefore, it is imperative to promote the establishment and use of indication-based DRLs for more accurate dose management in CT imaging.
{"title":"Comparison of Anatomical and Indication-Based Diagnostic Reference Levels (DRLs) in Head CT Imaging: Implications for Radiation Dose Management.","authors":"Benard Ohene-Botwe, Samuel Anim-Sampong, Robert Saizi","doi":"10.1155/ijbi/6464273","DOIUrl":"10.1155/ijbi/6464273","url":null,"abstract":"<p><p><b>Introduction:</b> Many diagnostic reference levels (DRLs) in computed tomography (CT) imaging are based mainly on anatomical locations and often overlook variations in radiation exposure due to different clinical indications. While indication-based DRLs, derived from dose descriptors like volume-weighted CT dose index (CTDI<sub>vol</sub>) and dose length product (DLP), are recommended for optimising patient radiation exposure, many studies still use anatomical-based DRL values. This study is aimed at quantifying the differences between anatomical and indication-based DRL values in head CT imaging and assessing its implications for radiation dose management. This will support the narrative when explaining the distinction between indication-based DRLs and anatomical DRLs for patients' dose management. <b>Methods:</b> Employing a retrospective quantitative study design, we developed and compared anatomical and common indication-based DRL values using a dataset of head CT scans with similar characteristics. The indications included in the study were brain tumor/intracranial space-occupying lesion (ISOL), head injury/trauma, stroke, and anatomical examinations. Data analysis was conducted using SPSS Version 29. <b>Results:</b> The findings suggest that using anatomical-based DLP DRL values for CT head examinations leads to underestimations in the median, 25th percentile, and 75th percentile values of head injury/trauma by 20.2%, 30.0%, and 14.5% in single-phase CT head procedures. Conversely, for the entire examination, using anatomical-based DLP DRL as a benchmark for CT stroke DRL overestimates median, 25th percentile, and 75th percentile values by 18.3%, 23.9%, and 13.5%. Brain tumor/ISOL DL<i>P</i> values are underestimated by 62.6%, 60.4%, and 71.8%, respectively. <b>Conclusion:</b>The study highlights that using anatomical DLP DRL values for specific indications in head CT scans can lead to underestimated or overestimated DL<i>P</i> values, making them less reliable for radiation management compared to indication-based DRLs. Therefore, it is imperative to promote the establishment and use of indication-based DRLs for more accurate dose management in CT imaging.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"6464273"},"PeriodicalIF":3.3,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11944678/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143721830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CDSE-UNet: Enhancing COVID-19 CT Image Segmentation With Canny Edge Detection and Dual-Path SENet Feature Fusion
Pub Date: 2025-03-16 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/9175473
Jiao Ding, Jie Chang, Renrui Han, Li Yang
Accurate segmentation of COVID-19 CT images is crucial for reducing the severity and mortality rates associated with COVID-19 infections. In response to blurred boundaries and high variability characteristic of lesion areas in COVID-19 CT images, we introduce CDSE-UNet: a novel UNet-based segmentation model that integrates Canny operator edge detection and a Dual-Path SENet Feature Fusion Block (DSBlock). This model enhances the standard UNet architecture by employing the Canny operator for edge detection in sample images, paralleling this with a similar network structure for semantic feature extraction. A key innovation is the DSBlock, applied across corresponding network layers to effectively combine features from both image paths. Moreover, we have developed a Multiscale Convolution Block (MSCovBlock), replacing the standard convolution in UNet, to adapt to the varied lesion sizes and shapes. This addition not only aids in accurately classifying lesion edge pixels but also significantly improves channel differentiation and expands the capacity of the model. Our evaluations on public datasets demonstrate CDSE-UNet's superior performance over other leading models. Specifically, CDSE-UNet achieved an accuracy of 0.9929, a recall of 0.9604, a DSC of 0.9063, and an IoU of 0.8286, outperforming UNet, Attention-UNet, Trans-Unet, Swin-Unet, and Dense-UNet in these metrics.
{"title":"CDSE-UNet: Enhancing COVID-19 CT Image Segmentation With Canny Edge Detection and Dual-Path SENet Feature Fusion.","authors":"Jiao Ding, Jie Chang, Renrui Han, Li Yang","doi":"10.1155/ijbi/9175473","DOIUrl":"10.1155/ijbi/9175473","url":null,"abstract":"<p><p>Accurate segmentation of COVID-19 CT images is crucial for reducing the severity and mortality rates associated with COVID-19 infections. In response to blurred boundaries and high variability characteristic of lesion areas in COVID-19 CT images, we introduce CDSE-UNet: a novel UNet-based segmentation model that integrates Canny operator edge detection and a Dual-Path SENet Feature Fusion Block (DSBlock). This model enhances the standard UNet architecture by employing the Canny operator for edge detection in sample images, paralleling this with a similar network structure for semantic feature extraction. A key innovation is the DSBlock, applied across corresponding network layers to effectively combine features from both image paths. Moreover, we have developed a Multiscale Convolution Block (MSCovBlock), replacing the standard convolution in UNet, to adapt to the varied lesion sizes and shapes. This addition not only aids in accurately classifying lesion edge pixels but also significantly improves channel differentiation and expands the capacity of the model. Our evaluations on public datasets demonstrate CDSE-UNet's superior performance over other leading models. Specifically, CDSE-UNet achieved an accuracy of 0.9929, a recall of 0.9604, a DSC of 0.9063, and an IoU of 0.8286, outperforming UNet, Attention-UNet, Trans-Unet, Swin-Unet, and Dense-UNet in these metrics.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"9175473"},"PeriodicalIF":3.3,"publicationDate":"2025-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11930385/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143693961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cortical Morphology Alterations Mediate the Relationship Between Glymphatic System Function and the Severity of Asthenopia
Pub Date: 2025-02-25 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/4464776
Yilei Chen, Jun Xu, Yingnan Kong, Yingjie Kang, Zhigang Gong, Hui Wang, Yanwen Huang, Songhua Zhan, Ying Yu, Xiaoli Lv, Wenli Tan
Objectives: This study is aimed at assessing glymphatic function by diffusion tensor image analysis along the perivascular space (DTI-ALPS) and its associations with cortical morphological changes and severity of accommodative asthenopia (AA). Methods: We prospectively enrolled 50 patients with AA and 47 healthy controls (HCs). All participants underwent diffusion tensor imaging (DTI) and T1-weighted imaging and completed the asthenopia survey scale (ASS). Differences in brain morphometry and the analysis along the perivascular space (ALPS) index between the two groups were compared. The correlation and mediation analyses were conducted to explore the relationships between them. Results: Compared to HCs, patients with AA exhibited significantly increased sulcal depth in the left superior occipital gyrus (SOG.L) and increased cortical thickness in the left superior temporal gyrus (STG.L), left middle occipital gyrus (MOG.L), left postcentral gyrus (PoCG.L), and left precuneus (PCUN.L). Additionally, patients with AA had a significantly lower ALPS index than HCs. The sulcal depth of the SOG.L was significantly positively correlated with the ASS score in patients with AA, and a positive correlation was found between the cortical thickness of the MOG.L and ASS score. The ALPS index was negatively associated with the sulcal depth of the SOG.L and cortical thickness of the MOG.L. Mediation analysis revealed that the sulcal depth of SOG.L and cortical thickness of MOG.L partially mediated the impact of the DTI-ALPS index on the ASS score. Conclusion: Our findings suggested that patients with AA exhibit impaired glymphatic function, which may contribute to the severity of asthenopia through its influence on cortical morphological changes. The ALPS index is anticipated to become a potential imaging biomarker for patients with AA. Trial Registration: Chinese Registry of Clinical Trials: ChiCTR1900028306.
{"title":"Cortical Morphology Alterations Mediate the Relationship Between Glymphatic System Function and the Severity of Asthenopia.","authors":"Yilei Chen, Jun Xu, Yingnan Kong, Yingjie Kang, Zhigang Gong, Hui Wang, Yanwen Huang, Songhua Zhan, Ying Yu, Xiaoli Lv, Wenli Tan","doi":"10.1155/ijbi/4464776","DOIUrl":"10.1155/ijbi/4464776","url":null,"abstract":"<p><p><b>Objectives</b>: This study is aimed at assessing glymphatic function by diffusion tensor image analysis along the perivascular space (DTI-ALPS) and its associations with cortical morphological changes and severity of accommodative asthenopia (AA). <b>Methods</b>: We prospectively enrolled 50 patients with AA and 47 healthy controls (HCs). All participants underwent diffusion tensor imaging (DTI) and T1-weighted imaging and completed the asthenopia survey scale (ASS). Differences in brain morphometry and the analysis along the perivascular space (ALPS) index between the two groups were compared. The correlation and mediation analyses were conducted to explore the relationships between them. <b>Results</b>: Compared to HCs, patients with AA exhibited significantly increased sulcal depth in the left superior occipital gyrus (SOG.L) and increased cortical thickness in the left superior temporal gyrus (STG.L), left middle occipital gyrus (MOG.L), left postcentral gyrus (PoCG.L), and left precuneus (PCUN.L). Additionally, patients with AA had a significantly lower ALPS index than HCs. The sulcal depth of the SOG.L was significantly positively correlated with the ASS score in patients with AA, and a positive correlation was found between the cortical thickness of the MOG.L and ASS score. The ALPS index was negatively associated with the sulcal depth of the SOG.L and cortical thickness of the MOG.L. Mediation analysis revealed that the sulcal depth of SOG.L and cortical thickness of MOG.L partially mediated the impact of the DTI-ALPS index on the ASS score. <b>Conclusion</b>: Our findings suggested that patients with AA exhibit impaired glymphatic function, which may contribute to the severity of asthenopia through its influence on cortical morphological changes. The ALPS index is anticipated to become a potential imaging biomarker for patients with AA. <b>Trial Registration:</b> Chinese Registry of Clinical Trials: ChiCTR1900028306.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"4464776"},"PeriodicalIF":3.3,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11879604/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143558348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validity and Responsiveness of Measuring Facial Swelling With 3D Stereophotogrammetry in Patients After Bilateral Sagittal Split Osteotomy-A Prospective Clinimetric Study
Pub Date: 2025-02-24 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/9957797
Margje B Buitenhuis, Reinoud J Klijn, Antoine J W P Rosenberg, Caroline M Speksnijder
Introduction: This study is aimed at determining the validity and responsiveness of three-dimensional (3D) stereophotogrammetry as a measurement instrument for evaluating soft tissue changes in the head and neck area. Method: Twelve patients received a bilateral sagittal split osteotomy (BSSO). 3D stereophotogrammetry, tape measurements, and a global perceived effect scale were performed within the first, second, and third postoperative weeks and at 3 months postoperatively. Distance measurements, mean and root mean square of the distance map, and volume differences were obtained from 3D stereophotogrammetry. Validity and responsiveness were assessed by correlation coefficients. Results: Significant correlations between distances from 3D stereophotogrammetry and tape measurements varied from 0.583 to 0.988, meaning moderate to very high validity. The highest correlations were found for the total sum of distances (r ≥ 0.922). 3D stereophotogrammetry parameters presented weak to high responsiveness, depending on the evaluated head and neck region. None of the parameters for 3D stereophotogrammetry significantly correlated with the global perceived effect scale outcomes for all measurement moments. Conclusion: 3D stereophotogrammetry has high to very high construct validity for the total sum of distances and weak to high responsiveness. 3D stereophotogrammetry seems promising for measuring soft tissue changes after surgery but is not interchangeable with subjective measurements.
{"title":"Validity and Responsiveness of Measuring Facial Swelling With 3D Stereophotogrammetry in Patients After Bilateral Sagittal Split Osteotomy-A Prospective Clinimetric Study.","authors":"Margje B Buitenhuis, Reinoud J Klijn, Antoine J W P Rosenberg, Caroline M Speksnijder","doi":"10.1155/ijbi/9957797","DOIUrl":"10.1155/ijbi/9957797","url":null,"abstract":"<p><p><b>Introduction:</b> This study is aimed at determining the validity and responsiveness of three-dimensional (3D) stereophotogrammetry as a measurement instrument for evaluating soft tissue changes in the head and neck area. <b>Method:</b> Twelve patients received a bilateral sagittal split osteotomy (BSSO). 3D stereophotogrammetry, tape measurements, and a global perceived effect scale were performed within the first, second, and third postoperative weeks and at 3 months postoperatively. Distance measurements, mean and root mean square of the distance map, and volume differences were obtained from 3D stereophotogrammetry. Validity and responsiveness were assessed by correlation coefficients. <b>Results:</b> Significant correlations between distances from 3D stereophotogrammetry and tape measurements varied from 0.583 to 0.988, meaning moderate to very high validity. The highest correlations were found for the total sum of distances (<i>r</i> ≥ 0.922). 3D stereophotogrammetry parameters presented weak to high responsiveness, depending on the evaluated head and neck region. None of the parameters for 3D stereophotogrammetry significantly correlated with the global perceived effect scale outcomes for all measurement moments. <b>Conclusion:</b> 3D stereophotogrammetry has high to very high construct validity for the total sum of distances and weak to high responsiveness. 3D stereophotogrammetry seems promising for measuring soft tissue changes after surgery but is not interchangeable with subjective measurements.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"9957797"},"PeriodicalIF":3.3,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11876518/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143558350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Segmentation of the Cisternal Segment of Trigeminal Nerve on MRI Using Deep Learning
Pub Date: 2025-02-16 | eCollection Date: 2025-01-01 | DOI: 10.1155/ijbi/6694599
Li-Ming Hsu, Shuai Wang, Sheng-Wei Chang, Yu-Li Lee, Jen-Tsung Yang, Ching-Po Lin, Yuan-Hsiung Tsai
Purpose: Accurate segmentation of the cisternal segment of the trigeminal nerve plays a critical role in identifying and treating different trigeminal nerve-related disorders, including trigeminal neuralgia (TN). However, the current manual segmentation process is prone to interobserver variability and consumes a significant amount of time. To overcome this challenge, we propose a deep learning-based approach, U-Net, that automatically segments the cisternal segment of the trigeminal nerve. Methods: To evaluate the efficacy of our proposed approach, the U-Net model was trained and validated on healthy control images and tested on a separate dataset of TN patients. Metrics such as Dice, Jaccard, positive predictive value (PPV), sensitivity (SEN), center-of-mass distance (CMD), and Hausdorff distance were used to assess segmentation performance. Results: Our approach achieved high accuracy in segmenting the cisternal segment of the trigeminal nerve, demonstrating robust performance and comparable results to those obtained by participating radiologists. Conclusion: The proposed deep learning-based approach, U-Net, shows promise in improving the accuracy and efficiency of segmenting the cisternal segment of the trigeminal nerve. To the best of our knowledge, this is the first fully automated segmentation method for the trigeminal nerve in anatomic MRI, and it has the potential to aid in the diagnosis and treatment of various trigeminal nerve-related disorders, such as TN.
{"title":"Automatic Segmentation of the Cisternal Segment of Trigeminal Nerve on MRI Using Deep Learning.","authors":"Li-Ming Hsu, Shuai Wang, Sheng-Wei Chang, Yu-Li Lee, Jen-Tsung Yang, Ching-Po Lin, Yuan-Hsiung Tsai","doi":"10.1155/ijbi/6694599","DOIUrl":"10.1155/ijbi/6694599","url":null,"abstract":"<p><p><b>Purpose:</b> Accurate segmentation of the cisternal segment of the trigeminal nerve plays a critical role in identifying and treating different trigeminal nerve-related disorders, including trigeminal neuralgia (TN). However, the current manual segmentation process is prone to interobserver variability and consumes a significant amount of time. To overcome this challenge, we propose a deep learning-based approach, U-Net, that automatically segments the cisternal segment of the trigeminal nerve. <b>Methods:</b> To evaluate the efficacy of our proposed approach, the U-Net model was trained and validated on healthy control images and tested in on a separate dataset of TN patients. The methods such as Dice, Jaccard, positive predictive value (PPV), sensitivity (SEN), center-of-mass distance (CMD), and Hausdorff distance were used to assess segmentation performance. <b>Results:</b> Our approach achieved high accuracy in segmenting the cisternal segment of the trigeminal nerve, demonstrating robust performance and comparable results to those obtained by participating radiologists. <b>Conclusion:</b> The proposed deep learning-based approach, U-Net, shows promise in improving the accuracy and efficiency of segmenting the cisternal segment of the trigeminal nerve. To the best of our knowledge, this is the first fully automated segmentation method for the trigeminal nerve in anatomic MRI, and it has the potential to aid in the diagnosis and treatment of various trigeminal nerve-related disorders, such as TN.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2025 ","pages":"6694599"},"PeriodicalIF":3.3,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11847612/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143484410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}