
Latest Publications: International Journal of Imaging Systems and Technology

MMCAF: A Survival Status Prediction Method Based on Cross-Attention Fusion of Multimodal Colorectal Cancer Data
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-14 | DOI: 10.1002/ima.70051
Xueping Tan, Dinghui Wu, Hao Wang, Zihao Zhao, Yuxi Ge, Shudong Hu

The employment of artificial intelligence methods in computer-assisted diagnosis systems is critical for colorectal cancer survival analysis and prognosis. However, owing to the low prediction accuracy of single-modality studies and the complexity of multimodal data fusion methods, existing work on colorectal cancer has had limited impact. To address this issue, the authors propose a multimodal cross-attention fusion (MMCAF) technique for predicting colorectal cancer survival status. First, feature engineering is used to build a feature set for each modality and to address the heterogeneity of multimodal data. Second, a three-modality fusion technique allocates weights to single-modality and multimodal features via channel and cross-attention mechanisms. Lastly, the cross-entropy loss function is minimized to predict the survival class. The experimental results show that MMCAF predicts survival status with 97.73% accuracy and an area under the receiver operating characteristic curve (AUC) of 0.99. Compared with the best result among the other fusion algorithms (feature concatenation), prediction accuracy increases by about 6 percentage points and AUC by about 7 percentage points. These findings demonstrate MMCAF's efficacy in predicting colorectal cancer survival.
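As an illustration of the kind of cross-attention fusion the abstract describes, the sketch below lets tokens from one modality attend to those of another. The two-modality setup, dimensions, and layer choices are illustrative assumptions, not the authors' exact MMCAF design.

```python
# A minimal sketch of cross-attention fusion between two modality feature
# sets, in the spirit of MMCAF; sizes and modalities are illustrative.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Queries come from one modality, keys/values from the other.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_img, x_clin):
        # x_img: (B, N_img, dim) imaging tokens; x_clin: (B, N_clin, dim).
        fused, _ = self.attn(query=x_img, key=x_clin, value=x_clin)
        return self.norm(x_img + fused)   # residual keeps the query modality

fusion = CrossAttentionFusion()
img = torch.randn(2, 16, 128)    # e.g., imaging/radiomics feature tokens
clin = torch.randn(2, 8, 128)    # e.g., clinical feature tokens
print(fusion(img, clin).shape)   # torch.Size([2, 16, 128])
```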

{"title":"MMCAF: A Survival Status Prediction Method Based on Cross-Attention Fusion of Multimodal Colorectal Cancer Data","authors":"Xueping Tan,&nbsp;Dinghui Wu,&nbsp;Hao Wang,&nbsp;Zihao Zhao,&nbsp;Yuxi Ge,&nbsp;Shudong Hu","doi":"10.1002/ima.70051","DOIUrl":"https://doi.org/10.1002/ima.70051","url":null,"abstract":"<div>\u0000 \u0000 <p>The employment of artificial intelligence methods in computer-assisted diagnosis systems is critical for colorectal cancer survival analysis and prognosis. However, due to the low prediction accuracy of single-modal data research and the complexity of multimodal data fusion methods, the current study's effect on colorectal cancer is minimal. To address this issue, the authors offer a multimodal cross attention fusion (MMCAF) technique for predicting colorectal cancer survival status. First, feature engineering is used to create feature sets for every mode and to address the heterogeneity of multimodal data. Second, a three-mode fusion technique is used to allocate weight to single-mode and multimodal features via channels and cross-attention processes. Lastly, the cross-entropy loss function is minimized in order to estimate the classification survival. The experimental results reveal that the MMCAF approach predicts survival states with 97.73% accuracy and an area under the receiver operating characteristic curve (AUC) of 0.99. When compared to the best outcome of other fusion algorithms (feature concatenation), the prediction accuracy increases by about 6 percentage points, while the AUC increases by 7 percentage points. This finding thoroughly demonstrates MMCAF's efficacy in predicting colorectal cancer survival.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143404494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dermatology 2.0: Deploying YOLOv11 for Accurate and Accessible Skin Disease Detection: A Web-Based Approach
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-14 | DOI: 10.1002/ima.70050
Adnan Hameed, Said Khalid Shah, Sajid Ullah Khan, Sultan Alanazi, Shabbab Ali Algamdi

Skin disorders are common and require timely diagnosis and treatment. Traditional diagnostics place heavy demands on clinicians' time and on the interpretation of results. To address this, we introduce YOLOv11, an enhanced deep learning model designed for skin disease detection and classification. The model integrates EfficientNetB0 as the backbone for feature extraction and ResNet50 in the head for robust classification and localization. The model was trained on a dataset of 10 common skin diseases to ensure robustness and accuracy, classifying the diseases with a mean Average Precision (mAP) of 89.8%, a precision of 90%, and a recall of 88% on the test dataset. The model was deployed as a Streamlit-based web application that allows both clinicians and patients to easily upload images for preliminary diagnosis. This advance enables assessment without an in-person visit, making skin disease diagnosis more accessible.
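For readers curious how such a Streamlit front end is wired up, here is a minimal sketch of the upload-and-detect loop. It uses the Ultralytics YOLO API as a stand-in for the authors' model; the weights file "skin_yolo.pt" is hypothetical.

```python
# A minimal Streamlit sketch of the upload-and-classify workflow described
# above; the weights file is a placeholder, not the authors' released model.
import streamlit as st
from PIL import Image
from ultralytics import YOLO

st.title("Skin Disease Detection (demo)")

@st.cache_resource
def load_model():
    return YOLO("skin_yolo.pt")  # hypothetical trained weights

uploaded = st.file_uploader("Upload a skin image", type=["jpg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    st.image(image, caption="Input image")
    results = load_model()(image)                 # run detection
    # Results.plot() returns an annotated BGR array for display.
    st.image(results[0].plot(), caption="Detections", channels="BGR")
```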

{"title":"Dermatology 2.0: Deploying YOLOv11 for Accurate and Accessible Skin Disease Detection: A Web-Based Approach","authors":"Adnan Hameed,&nbsp;Said Khalid Shah,&nbsp;Sajid Ullah Khan,&nbsp;Sultan Alanazi,&nbsp;Shabbab Ali Algamdi","doi":"10.1002/ima.70050","DOIUrl":"https://doi.org/10.1002/ima.70050","url":null,"abstract":"<div>\u0000 \u0000 <p>Skin disorders are common and require diagnosis and treatment in a timely manner. In traditional diagnostics, great demands are made on the time and interpretation of the results. To cope with this, we introduce YOLOv11, an enhanced deep learning model designed for skin disease detection and classification. The model integrates EfficientNetB0 as the backbone for feature extraction and ResNet50 in the head for robust classification and localization. Our model was trained on a dataset of 10 common skin diseases to ensure robustness and accuracy; we were able to classify the diseases with a mean Average Precision (mAP) of 89.8%, a precision of 90%, and a recall of 88% on the test dataset. This model was developed in the form of a web application based on Streamlit, which was used for easy uploading of pictures by both clinicians and patients for threshold diagnostics. This upsurge in technology allows for treatment without visitation, making skin disease diagnosis more dynamic.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LFBTS: Enhanced Multimodality MRI Fusion for Brain Tumor Segmentation With Limited Computational Resources
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-13 | DOI: 10.1002/ima.70044
Yuanjing Hu, Aibin Huang

Efficient and accurate segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) is crucial for clinical diagnosis and treatment planning. Traditional methods tend to concentrate solely on feature extraction from individual modalities, overlooking the substantial potential of multimodal feature fusion for enhancing segmentation performance. In this paper, we present a novel method that not only integrates salient features from different modalities strategically but also respects the constraints imposed by limited computational resources, ensuring both accuracy and efficiency. Two key modules, the attention-guided cross-modality fusion module (ACFM) and the hierarchical asymmetric convolution module (HACM), were designed to exploit the distinct modalities and the varying information focus found within different dimensions. The ACFM is based on a transformer framework that uses self-attention and cross-attention mechanisms to capture local and global dependencies within and between MRI modalities, allowing the effective fusion of complementary features and thereby improving segmentation by leveraging the valuable information contained in each modality. Meanwhile, the HACM reduces computational complexity with a pseudo-3D convolution approach that decomposes 3D convolutions into components along the transverse and sagittal axes. Unlike plain 2D convolutions, this decomposition preserves essential spatial information across dimensions and capitalizes on the varying information density in different spatial planes, achieving a balance between accuracy and efficiency. Through extensive experiments on the BraTS2021 dataset, our proposed modality fusion-based network under limited resources (LFBTS) achieves Dice scores of 0.925, 0.911, and 0.886 for whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively. These results outperform state-of-the-art (SOTA) models and consistently surpass models published in the preceding 2 years. This highlights the potential of our approach for advancing brain tumor segmentation and improving clinical decision-making, particularly in resource-limited settings.
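The pseudo-3D decomposition the HACM relies on can be sketched in a few lines: a full 3×3×3 convolution is replaced by an in-plane 1×3×3 convolution followed by a 3×1×1 convolution along the remaining axis, cutting per-channel kernel weights from 27 to 12. Channel sizes below are illustrative, not LFBTS's actual configuration.

```python
# A minimal sketch of the pseudo-3D idea: factor a 3x3x3 convolution into
# an in-plane 1x3x3 convolution plus a 3x1x1 convolution along depth.
import torch
import torch.nn as nn

class Pseudo3DConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))   # transverse plane
        self.axial = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                               padding=(1, 0, 0))     # through-plane axis
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.axial(self.act(self.spatial(x))))

x = torch.randn(1, 4, 32, 64, 64)     # (B, C, D, H, W) multimodal MRI patch
print(Pseudo3DConv(4, 16)(x).shape)   # torch.Size([1, 16, 32, 64, 64])
```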

{"title":"LFBTS: Enhanced Multimodality MRI Fusion for Brain Tumor Segmentation With Limited Computational Resources","authors":"Yuanjing Hu,&nbsp;Aibin Huang","doi":"10.1002/ima.70044","DOIUrl":"https://doi.org/10.1002/ima.70044","url":null,"abstract":"<div>\u0000 \u0000 <p>Efficient and accurate segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) is crucial for clinical diagnosis and treatment planning. Traditional methods tend to concentrate solely on feature extraction from individual modalities, overlooking the substantial potential of multimodal feature fusion in enhancing segmentation performance. In this paper, we present a novel method that not only integrates salient features from different modalities strategically but also takes into account the constraints imposed by limited computational resources, ensuring both accuracy and efficiency. Two key modules, the attention-guided cross-modality fusion module (ACFM) and the hierarchical asymmetric convolution module (HACM), were designed to leverage the distinct modalities and the varying information focuses found within different dimensions. The ACFM is based on a transformer framework, utilizing self-attention and cross-attention mechanisms. These mechanisms enable the capture of both local and global dependencies within and between different MRI modalities. This design allows for the effective fusion of complementary features from multiple modalities, thereby enhancing segmentation performance by leveraging the valuable information contained in each modality. Meanwhile, the HACM reduces computational complexity using a pseudo-3D convolution approach. This approach breaks down 3D convolutions into components along the transverse and sagittal axes. Unlike traditional 2D convolutions, this method preserves essential spatial information across dimensions. It ensures accurate segmentation while maximizing efficiency by capitalizing on the varying focus of information in different spatial planes. This approach takes advantage of the varying information density in these dimensions, achieving a balance between accuracy and efficiency. Through extensive experiments on the BraTS2021 dataset, our proposed modality fusion-based network under limited resources (LFBTS) achieves dice scores of 0.925, 0.911, and 0.886 for whole tumor (WT), tumor core (TC), and enhanced tumor (ET), respectively. These results outperform state-of-the-art (SOTA) models and consistently demonstrate superiority over models developed in the preceding 2 years. This highlights the potential of our approach in advancing brain tumor segmentation and improving clinical decision-making, particularly in settings with limited resources.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143396902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DML-GNN: ASD Diagnosis Based on Dual-Atlas Multi-Feature Learning Graph Neural Network
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-12 | DOI: 10.1002/ima.70038
Shuaiqi Liu, Chaolei Sun, Jinkai Li, Shuihua Wang, Ling Zhao

To better automate the diagnosis of autism spectrum disorder (ASD) and improve diagnostic accuracy, a graph neural network based on dual-atlas multi-feature learning (DML-GNN) is constructed for ASD diagnosis, using the local feature information of brain atlases and the global feature information of multi-modal data. First, DML-GNN constructs a dual-atlas feature extraction module to capture the initial features of each subject. Second, it combines K-nearest-neighbor graphs, graph pooling, graph convolution (GCN), and graph channel attention (GCA) to construct a local feature learning module; this module extracts deep features for each subject, eliminates redundant features, and efficiently fuses multi-atlas features. Third, DML-GNN constructs a global feature learning module that combines the non-imaging information accompanying the fMRI data with a graph isomorphism network (GINConv), building comprehensive multi-graph features from the multi-modal data and learning node embeddings with GINConv. Finally, a multi-layer perceptron (MLP) produces the final ASD diagnosis. Compared with recent algorithms for ASD diagnosis on the public dataset Autism Brain Imaging Data Exchange I (ABIDE I), our method demonstrated superior performance, underscoring its potential as an effective tool.
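A minimal sketch of the population-graph step follows, assuming a plain kNN graph over subject features and a single GINConv layer (the paper's module is richer). It requires torch_geometric and torch-cluster, and all sizes are illustrative.

```python
# A minimal sketch: connect subjects by k-nearest neighbors in feature
# space and learn node embeddings with GINConv for per-subject diagnosis.
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv, knn_graph

x = torch.randn(100, 64)            # 100 subjects, 64-d features each
edge_index = knn_graph(x, k=5)      # kNN population graph over subjects

# GINConv wraps a small MLP that transforms aggregated neighbor features.
gin = GINConv(nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 32)))
emb = gin(x, edge_index)            # node embeddings, shape (100, 32)
logits = nn.Linear(32, 2)(emb)      # per-subject ASD/control logits
print(logits.shape)                 # torch.Size([100, 2])
```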

{"title":"DML-GNN: ASD Diagnosis Based on Dual-Atlas Multi-Feature Learning Graph Neural Network","authors":"Shuaiqi Liu,&nbsp;Chaolei Sun,&nbsp;Jinkai Li,&nbsp;Shuihua Wang,&nbsp;Ling Zhao","doi":"10.1002/ima.70038","DOIUrl":"https://doi.org/10.1002/ima.70038","url":null,"abstract":"<div>\u0000 \u0000 <p>To better automate the diagnosis of autism spectrum disorder (ASD) and improve diagnostic accuracy, a graph neural network via dual-atlas multi-feature learning (DML-GNN) model for ASD diagnosis is constructed based on the local feature information of brain atlas and the global feature information from the multi-modal data. First, DML-GNN constructs a dual-atlas feature extraction module to capture the initial features of each subject. Second, it combines K-nearest-neighbor graphs, graph pooling, graph convolution (GCN) and graph channel attention (GCA) to construct a local feature learning module. This module extracts deep features for each subject and eliminate redundant features, and further fuses multi-atlases features efficiently. Third, DML-GNN constructs a global feature learning module by combining the non-imaging information of fMRI data and graph isomorphism network (GINConv), which combines the information of multi-modal data to construct comprehensive multi-graph features and learns node embeddings using GINConv. Finally, multi-layer perceptron (MLP) is used to obtain the final ASD diagnosis results. Compared with recent algorithms for ASD diagnosis on the public data set-Autism Brain Imaging Data Exchange I (ABIDE I), our method demonstrated superior performance, underscoring its potential as an effective tool.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143396801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Harnessing AI for Comprehensive Reporting of Medical AI Research
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-11 | DOI: 10.1002/ima.70047
Mohamed L. Seghier
In this editorial, I would like to succinctly discuss the potential of using AI to improve the reporting of medical AI research. Several guidelines and checklists have already been published, but how they are interpreted and implemented varies across publishers, editors, reviewers, and authors. Here, I discuss the possibility of harnessing generative AI tools to assist authors in reporting their AI work comprehensively and meeting current guidelines, with the ultimate aim of improving transparency and replicability in medical AI research. The discussion below rests on two key issues: (1) AI has a seductive allure that might affect how AI-generated evidence is scrutinized and disseminated, hence the need for comprehensive and transparent reporting; and (2) authors sometimes feel uncertain about what to report, given the many existing guidelines on reporting AI research and the lack of consensus in the field.

It has been argued that extraneous or irrelevant information with a seductive allure can improve the ratings of scientific explanations [1]. AI, with its overhyped knowledgeability, can convey biases and false information that readers might judge believable [2]. AI can write highly convincing text that can impress or deceive readers, even in the presence of errors and false information [3, 4]. Likewise, merely mentioning "AI" in the title of a research paper seems to increase its citation potential [5]. The latter might incentivise scientists to use AI purely to boost the citability of their work, regardless of whether AI improved its quality. In this context, one might speculate that some publications that used AI with flawed methodologies or wrong conclusions have slipped through the cracks of peer review, with many already indexed and citable [6]. Overall, emerging evidence suggests that AI has an intrinsic seductive allure that is shaping the medical research landscape and influencing how readers appraise research articles that employ AI. This is why improving the reporting and evaluation of AI work is of paramount importance, and in this editorial I underscore the potential role of generative AI for that purpose.

Consider this: readers might find a paper entitled "Association between condition X and biomarker Y demonstrated with deep learning" novel and worth reading. Now imagine the same finding evidenced with a traditional analysis method and entitled "Association between condition X and biomarker Y demonstrated with a correlation analysis", though it is unlikely that the authors of the latter would consider correlation analysis worth mentioning in the article title. Although both pieces of work report the same finding, they may not enjoy the same buzz and citability in the field. This is because AI-based methods and traditional analysis methods operate at different maturity levels.
{"title":"Harnessing AI for Comprehensive Reporting of Medical AI Research","authors":"Mohamed L. Seghier","doi":"10.1002/ima.70047","DOIUrl":"https://doi.org/10.1002/ima.70047","url":null,"abstract":"&lt;p&gt;In this editorial, I would like to succinctly discuss the potential of using AI to improve reporting medical AI research. There are already several published guidelines and checklists in the current literature but how they are interpreted and implemented varies with publishers, editors, reviewers and authors. Here, I discuss the possibility of harnessing generative AI tools in order to assist authors to comprehensively report their AI work and meet current guidelines, with the ultimate aim to improve transparency and replicability in medical AI research. The succinct discussion below reckons two key issues: (1) AI has a seductive allure that might affect how AI-generated evidence is scrutinized and disseminated, hence the need for comprehensive and transparent reporting, and (2) authors sometimes feel uncertain about what to report in the light of so many existing guidelines about reporting AI research and the lack of consensus in the field.&lt;/p&gt;&lt;p&gt;It has been argued that extraneous or irrelevant information with a seductive allure can improve the ratings of scientific explanations [&lt;span&gt;1&lt;/span&gt;]. AI, with its overhyped knowledgeability, can convey biases and false information that readers might judge believable [&lt;span&gt;2&lt;/span&gt;]. AI can write highly convincing text that can impress or deceive readers, even in the presence of errors and false information [&lt;span&gt;3, 4&lt;/span&gt;]. Likewise, merely mentioning “AI” in the title of a research paper seems to increase its citation potential [&lt;span&gt;5&lt;/span&gt;]. The latter might incentivise scientists to use AI purely to boost their work citability, regardless of whether AI improved their work quality. In this context, one might speculate that some publications that used AI but with flawed methodologies or wrong conclusions might have slipped through the cracks of peer review, with many already being indexed and citable [&lt;span&gt;6&lt;/span&gt;]. Overall, emerging evidence suggests that AI has an intrinsic seductive allure that is shaping the medical research landscape and impacting how readers appraise research articles that employ AI. This is why improving the reporting and evaluation of AI work is of paramount importance, and in this editorial, I underscore the potential role of generative AI for that purpose.&lt;/p&gt;&lt;p&gt;Consider this: readers might find a paper entitled “&lt;i&gt;Association between condition X and biomarker Y demonstrated with deep learning&lt;/i&gt;” novel and worth reading. Now, imagine if the same finding was evidenced with a traditional analysis method and entitled “&lt;i&gt;Association between condition X and biomarker Y demonstrated with a correlation analysis&lt;/i&gt;”, though it is unlikely that the authors of the latter will consider correlation analysis worth mentioning in the article title. Although both pieces of work report the same finding, they may not enjoy the same buzz and high citability in the field. This is because AI-based methods and traditional analysis methods operate at different maturity levels. 
","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70047","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143389020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhanced BoxInst for Weakly Supervised Liver Tumor Instance Segmentation in CT Images
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-08 | DOI: 10.1002/ima.70043
Shanshan Li, Yuhan Zhang, Lingyan Zhang, Wei Chen

Accurate liver tumor detection and segmentation are essential for disease diagnosis and treatment planning. While traditional methods rely on pixel-level mask annotations for fully supervised training, weakly supervised techniques are gaining attention due to their reduced annotation requirements. In this study, we propose an enhanced version of BoxInst, called Enhanced BoxInst, which incorporates two key innovations: the position activation (PA) module and the progressive mask generation (PMG) module. The PA module uses a spatial awareness (SA) block to accurately locate tumor regions and encodes the location information into the segmentation branch through a spatial interaction encoding (SIE) mechanism, achieving cross-spatial feature interaction and ultimately improving the segmentation accuracy of liver tumors. Additionally, the PMG module employs a feature decomposition scheme to refine tumor masks progressively from coarse to fine, accurately restoring the overall layout and boundary details of the tumor mask. Extensive experiments on the LiTS, AMU-Liver, and 3DIRCADb datasets demonstrate that Enhanced BoxInst outperforms existing methods in liver tumor instance segmentation. These results highlight the potential of our approach for practical use in medical image analysis, especially when only box annotations are available. The code is available at https://github.com/ssli23/Enhanced_BoxInst.
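Box-only supervision in the BoxInst family is commonly driven by a projection loss: the predicted mask's row and column max-projections must match those of the ground-truth box. The sketch below shows that idea in isolation; it is not the Enhanced BoxInst training objective, and the shapes are illustrative.

```python
# A minimal sketch of a box-supervised projection loss: the mask's
# max-projections onto each axis are matched to the box's projections.
import torch

def dice_loss(p, t, eps=1e-6):
    # Soft Dice loss over the last dimension; p, t: (B, L) in [0, 1].
    inter = (p * t).sum(-1)
    return 1 - (2 * inter + eps) / (p.sum(-1) + t.sum(-1) + eps)

def box_projection_loss(mask_logits, box_mask):
    # mask_logits: (B, H, W) predicted mask scores;
    # box_mask: (B, H, W) binary mask filled from the ground-truth box.
    prob = mask_logits.sigmoid()
    loss_x = dice_loss(prob.max(dim=1).values, box_mask.max(dim=1).values)
    loss_y = dice_loss(prob.max(dim=2).values, box_mask.max(dim=2).values)
    return (loss_x + loss_y).mean()

pred = torch.randn(2, 64, 64)
box = torch.zeros(2, 64, 64)
box[:, 16:48, 20:40] = 1.0              # toy ground-truth boxes
print(box_projection_loss(pred, box))   # scalar loss tensor
```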

{"title":"Enhanced BoxInst for Weakly Supervised Liver Tumor Instance Segmentation in CT Images","authors":"Shanshan Li,&nbsp;Yuhan Zhang,&nbsp;Lingyan Zhang,&nbsp;Wei Chen","doi":"10.1002/ima.70043","DOIUrl":"https://doi.org/10.1002/ima.70043","url":null,"abstract":"<div>\u0000 \u0000 <p>Accurate liver tumor detection and segmentation are essential for disease diagnosis and treatment planning. While traditional methods rely on pixel-level mask annotations in fully supervised training, weakly supervised techniques are gaining attention due to their reduced annotation requirements. In this study, we propose an enhanced version of BoxInst, called Enhanced BoxInst, which incorporates two key innovations: the position activation (PA) Module and the progressive mask generation (PMG) Module. The PA Module utilizes a Spatial Awareness (SA) Block to accurately locate tumor regions and encodes the location information to the segmentation branch through the Spatial Interaction Encoding (SIE) mechanism, thereby achieving cross-spatial feature interaction and ultimately improving the segmentation accuracy of liver tumors. Additionally, the PMG Module employs a feature decomposition scheme to refine tumor masks progressively from coarse to fine, accurately restoring the overall layout and boundary details of the tumor mask. Extensive experiments on the LiTS, AMU-Liver, and 3DIRCADb datasets demonstrate that Enhanced BoxInst outperforms existing methods in liver tumor instance segmentation. These results highlight the potential of our approach for practical use in medical image analysis, especially when only box annotations are available. The code is available at https://github.com/ssli23/Enhanced_BoxInst.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143362860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Pancreatic Tumor Detection From CT Images Converted to Graphs Using Whale Optimization and Classification Algorithms With Transfer Learning
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-08 | DOI: 10.1002/ima.70040
Yusuf Alaca, Ömer Faruk Akmeşe

Pancreatic cancer is one of the most aggressive types of cancer, known for its high mortality rate, as it is often diagnosed at an advanced stage. Early diagnosis holds the potential to prolong patients' lifespans and improve treatment success rates. In this study, an innovative method is proposed to enhance the diagnosis of pancreatic cancer. Computed tomography (CT) images were converted into graphs using the Harris Corner Detection Algorithm and analyzed using deep learning models via transfer learning. DenseNet121 and InceptionV3 transfer learning models were trained on graph-based data, and model parameters were optimized using the Whale Optimization Algorithm (WOA). Additionally, classification algorithms such as k-Nearest Neighbors (k-NN), Support Vector Machines (SVM), and Random Forests (RF) were integrated into the analysis of the extracted features. The best results were achieved using the k-NN classification algorithm on features optimized by WOA, yielding an accuracy of 92.10% and an F1 score of 92.74%. The study demonstrated that graph-based transformation enabled more effective modeling of spatial relationships, thereby enhancing the performance of deep learning models. WOA offered significant superiority compared to other methods in parameter optimization. This study aims to contribute to the development of a reliable diagnostic system that can be integrated into clinical applications. In the future, the use of larger and more diverse datasets, along with different graph-based methods, could enhance the generalizability and performance of the proposed approach. The proposed model has the potential to serve as a decision support tool for physicians, particularly in early diagnosis, offering an opportunity to improve patients' quality of life.
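The image-to-graph step can be sketched as follows: Harris corners become graph nodes and each node links to its nearest neighbors. The synthetic square below stands in for a CT slice, and the threshold and k are illustrative choices rather than the paper's settings.

```python
# A minimal sketch of converting an image to a graph: Harris corner
# pixels become nodes, connected to their nearest spatial neighbors.
import cv2
import numpy as np
from scipy.spatial import cKDTree

# Synthetic bright square stands in for a CT slice (hypothetical input).
img = np.zeros((128, 128), dtype=np.float32)
img[32:96, 32:96] = 1.0

corners = cv2.cornerHarris(img, blockSize=2, ksize=3, k=0.04)
ys, xs = np.where(corners > 0.01 * corners.max())   # strong responses only
nodes = np.stack([xs, ys], axis=1).astype(float)

tree = cKDTree(nodes)                # link each node to 3 nearest others
_, nbrs = tree.query(nodes, k=4)     # first neighbor is the node itself
edges = {(i, int(j)) for i, row in enumerate(nbrs) for j in row[1:]}
print(f"graph with {len(nodes)} nodes and {len(edges)} edges")
```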

{"title":"Pancreatic Tumor Detection From CT Images Converted to Graphs Using Whale Optimization and Classification Algorithms With Transfer Learning","authors":"Yusuf Alaca,&nbsp;Ömer Faruk Akmeşe","doi":"10.1002/ima.70040","DOIUrl":"https://doi.org/10.1002/ima.70040","url":null,"abstract":"<p>Pancreatic cancer is one of the most aggressive types of cancer, known for its high mortality rate, as it is often diagnosed at an advanced stage. Early diagnosis holds the potential to prolong patients' lifespans and improve treatment success rates. In this study, an innovative method is proposed to enhance the diagnosis of pancreatic cancer. Computed tomography (CT) images were converted into graphs using the Harris Corner Detection Algorithm and analyzed using deep learning models via transfer learning. DenseNet121 and InceptionV3 transfer learning models were trained on graph-based data, and model parameters were optimized using the Whale Optimization Algorithm (WOA). Additionally, classification algorithms such as k-Nearest Neighbors (k-NN), Support Vector Machines (SVM), and Random Forests (RF) were integrated into the analysis of the extracted features. The best results were achieved using the k-NN classification algorithm on features optimized by WOA, yielding an accuracy of 92.10% and an F1 score of 92.74%. The study demonstrated that graph-based transformation enabled more effective modeling of spatial relationships, thereby enhancing the performance of deep learning models. WOA offered significant superiority compared to other methods in parameter optimization. This study aims to contribute to the development of a reliable diagnostic system that can be integrated into clinical applications. In the future, the use of larger and more diverse datasets, along with different graph-based methods, could enhance the generalizability and performance of the proposed approach. The proposed model has the potential to serve as a decision support tool for physicians, particularly in early diagnosis, offering an opportunity to improve patients' quality of life.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70040","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143362798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Classification of Hepatic Nodules Using an Improved WOA-SVM Radiomics Model
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-08 | DOI: 10.1002/ima.70036
Haoyun Sun, Lijia Wang

The incidence and mortality of liver cancer in China remain high, making early diagnosis and treatment urgent. This study develops an improved radiomics model for the classification of hepatic nodules based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). DCE-MRI images from 30 hepatitis, 30 cirrhotic nodule (CN), 30 dysplastic nodule (DN), and 30 hepatocellular carcinoma (HCC) patients were retrospectively collected and randomly divided into training and testing datasets in a 7:3 ratio. First, radiomics features of the lesions were extracted with a PyRadiomics-based feature extractor, and the optimal features were selected by the least absolute shrinkage and selection operator (LASSO). Then, an improved whale optimization algorithm (WOA) incorporating Tent mapping, Adaptive weights, and Levy flight (TALWOA) was used to optimize the parameters of a support vector machine (SVM). Finally, TALWOA-SVM was employed for the four-class classification of hepatic nodules. Receiver operating characteristic (ROC) curves, the area under the curve (AUC), and the F1-score were used to evaluate the performance of the TALWOA-SVM model. The 44 most informative features were selected from 851 features to train the SVM classifier. Compared with the standard whale optimization algorithm and other optimization algorithms, the proposed optimized model attains the highest classification accuracy (81.315%), with the ROC curve of each category closer to the top-left corner; the per-class AUCs were 0.9378 (95% CI: 0.893–0.981), 0.9223 (95% CI: 0.873–0.971), 0.9794 (95% CI: 0.958–1.000), and 0.9872 (95% CI: 0.971–1.000). The proposed model can better classify hepatic nodules at different stages and is expected to aid the early diagnosis of liver cancer.
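To make the optimization loop concrete, here is a compact sketch of a standard WOA tuning an RBF SVM's C and gamma by cross-validation. It omits the paper's Tent mapping, adaptive weight, and Levy flight improvements, and the dataset, bounds, population size, and iteration count are stand-ins.

```python
# A compact sketch of the whale optimization algorithm (WOA) searching
# SVM hyperparameters (C, gamma) by mean cross-validation accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset
lo = np.array([0.01, 1e-4])                  # lower bounds for (C, gamma)
hi = np.array([100.0, 1.0])                  # upper bounds for (C, gamma)

def fitness(pos):
    # Mean 3-fold CV accuracy of an RBF SVM at the candidate (C, gamma).
    return cross_val_score(SVC(C=pos[0], gamma=pos[1]), X, y, cv=3).mean()

rng = np.random.default_rng(0)
whales = rng.uniform(lo, hi, size=(10, 2))   # population of candidates
fits = np.array([fitness(w) for w in whales])
best_pos, best_fit = whales[fits.argmax()].copy(), fits.max()

T = 15
for t in range(T):
    a = 2 - 2 * t / T                        # control parameter: 2 -> 0
    for i in range(len(whales)):
        r = rng.random(2)
        A, C = 2 * a * r - a, 2 * rng.random(2)
        if rng.random() < 0.5:               # encircling / exploration
            ref = best_pos if np.all(np.abs(A) < 1) \
                else whales[rng.integers(len(whales))]
            new = ref - A * np.abs(C * ref - whales[i])
        else:                                # logarithmic spiral update
            l = rng.uniform(-1, 1)
            new = (np.abs(best_pos - whales[i]) * np.exp(l)
                   * np.cos(2 * np.pi * l) + best_pos)
        whales[i] = np.clip(new, lo, hi)
        f = fitness(whales[i])
        if f > best_fit:
            best_fit, best_pos = f, whales[i].copy()

print(f"best (C, gamma) = {best_pos}, CV accuracy = {best_fit:.3f}")
```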

{"title":"Classification of Hepatic Nodules Using an Improved WOA-SVM Radiomics Model","authors":"Haoyun Sun,&nbsp;Lijia Wang","doi":"10.1002/ima.70036","DOIUrl":"https://doi.org/10.1002/ima.70036","url":null,"abstract":"<div>\u0000 \u0000 <p>The incidence and mortality of liver cancer in China are not optimistic. Early diagnosis and treatment have become the urgent means to solve this situation. To develop an improved radiomics model for the classification of hepatic nodules based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The DCE-MRI images of 30 hepatitis, 30 cirrhotic nodules (CN), 30 dysplastic nodules (DN), and 30 hepatocellular carcinoma (HCC) patients were retrospectively and randomly divided into training and testing datasets in a 7:3 ratio. Firstly, the radiomics features of lesions were extracted by using feature extractor module based on Pyradiomics, from which optimal features were selected by least absolute shrinkage and selection operator (LASSO). Then, the improved whale optimization algorithm (WOA) with Tent mapping, Adaptive weight, and Levy flight (TALWOA) was used for parameter optimization of support vector machines (SVM). Finally, TALWOA-SVM was employed for the four-class classification of hepatic nodules. Receiver operating characteristic (ROC) curves, area under curve (AUC), and F1-score were used to evaluate the performance of the TALWOA-SVM model. Forty-four most informative features were selected from 851 features to train the SVM classifier. Compared with the standard whale algorithm and other optimization algorithms, the optimized model proposed in this paper has highest classification accuracy (81.315%), the ROC of each category being closer to the top left corner with AUC were 0.9378 (95% CI: 0.893–0.981), 0.9223 (95% CI: 0.873–0.971), 0.9794 (0.958–1.000), 0.9872 (0.971–1.000). The model proposed in this study can better classify hepatic nodules in different periods, and is expected to provide help for the early diagnosis of liver cancer.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143362799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SCABNet: A Novel Polyp Segmentation Network With Spatial-Gradient Attention and Channel Prioritization
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-02-06 | DOI: 10.1002/ima.70039
Khaled ELKarazle, Valliappan Raman, Caslon Chua, Patrick Then

Current colorectal polyp detection methods often struggle with efficiency and boundary precision, especially when dealing with polyps of complex shapes and sizes. Traditional techniques may fail to precisely delineate the boundaries of these polyps, leading to suboptimal detection rates. Furthermore, flat and small polyps often blend into the background due to their low contrast against the mucosal wall, making them even more challenging to detect. To address these challenges, we introduce SCABNet, a novel deep learning architecture for the efficient detection of colorectal polyps. SCABNet employs an encoder-decoder structure with three novel blocks: the Feature Enhancement Block (FEB), the Channel Prioritization Block (CPB), and the Spatial-Gradient Boundary Attention Block (SGBAB). The FEB applies dilation and spatial attention to high-level features, enhancing their discriminative power and improving the model's ability to capture complex patterns. The CPB, an efficient alternative to traditional channel attention blocks, assigns prioritization weights to diverse feature channels. The SGBAB replaces conventional boundary attention mechanisms with a more efficient solution that focuses on the spatial attention of the feature map. It employs a Jacobian-based approach to construct learned convolutions on both the vertical and horizontal components of the feature map, allowing the SGBAB to capture how the feature map changes across spatial locations, which is crucial for detecting the boundaries of complex-shaped polyps. These blocks are strategically embedded within the network's skip connections, enhancing the model's boundary detection capabilities without imposing excessive computational demands. They exploit and enhance features at three levels: high, mid, and low, thereby ensuring the detection of a wide range of polyps. SCABNet has been trained on the Kvasir-SEG and CVC-ClinicDB datasets and evaluated on multiple datasets, demonstrating superior results. The code is available at https://github.com/KhaledELKarazle97/SCABNet.
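The spatial-gradient idea behind boundary attention can be sketched with fixed Sobel kernels for the horizontal and vertical components plus a small learned mixing layer. This illustrates the general mechanism only, not SCABNet's Jacobian-based SGBAB.

```python
# A minimal sketch of gradient-style boundary attention: Sobel kernels
# extract horizontal/vertical components; a 1x1 conv learns to mix them
# into an attention map that reweights the input features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryAttention(nn.Module):
    def __init__(self, ch):
        super().__init__()
        sx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", sx.view(1, 1, 3, 3).repeat(ch, 1, 1, 1))
        self.register_buffer(
            "ky", sx.t().contiguous().view(1, 1, 3, 3).repeat(ch, 1, 1, 1))
        self.mix = nn.Conv2d(2 * ch, ch, kernel_size=1)  # learned mixing
        self.ch = ch

    def forward(self, x):
        gx = F.conv2d(x, self.kx, padding=1, groups=self.ch)  # horizontal
        gy = F.conv2d(x, self.ky, padding=1, groups=self.ch)  # vertical
        attn = torch.sigmoid(self.mix(torch.cat([gx, gy], dim=1)))
        return x * attn     # emphasize boundary-like responses

x = torch.randn(1, 32, 64, 64)
print(BoundaryAttention(32)(x).shape)   # torch.Size([1, 32, 64, 64])
```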

{"title":"SCABNet: A Novel Polyp Segmentation Network With Spatial-Gradient Attention and Channel Prioritization","authors":"Khaled ELKarazle,&nbsp;Valliappan Raman,&nbsp;Caslon Chua,&nbsp;Patrick Then","doi":"10.1002/ima.70039","DOIUrl":"https://doi.org/10.1002/ima.70039","url":null,"abstract":"<p>Current colorectal polyps detection methods often struggle with efficiency and boundary precision, especially when dealing with polyps of complex shapes and sizes. Traditional techniques may fail to precisely define the boundaries of these polyps, leading to suboptimal detection rates. Furthermore, flat and small polyps often blend into the background due to their low contrast against the mucosal wall, making them even more challenging to detect. To address these challenges, we introduce SCABNet, a novel deep learning architecture for the efficient detection of colorectal polyps. SCABNet employs an encoder-decoder structure with three novel blocks: the Feature Enhancement Block (FEB), the Channel Prioritization Block (CPB), and the Spatial-Gradient Boundary Attention Block (SGBAB). The FEB applies dilation and spatial attention to high-level features, enhancing their discriminative power and improving the model's ability to capture complex patterns. The CPB, an efficient alternative to traditional channel attention blocks, assigns prioritization weights to diverse feature channels. The SGBAB replaces conventional boundary attention mechanisms with a more efficient solution that focuses on the spatial attention of the feature map. It employs a Jacobian-based approach to construct learned convolutions on both vertical and horizontal components of the feature map. This allows the SGBAB to effectively understand the changes in the feature map across different spatial locations, which is crucial for detecting the boundaries of complex-shaped polyps. These blocks are strategically embedded within the network's skip connections, enhancing the model's boundary detection capabilities without imposing excessive computational demands. They exploit and enhance features at three levels: high, mid, and low, thereby ensuring the detection of a wide range of polyps. SCABNet has been trained on the Kvasir-SEG and CVC-ClinicDB datasets and evaluated on multiple datasets, demonstrating superior results. The code is available on: https://github.com/KhaledELKarazle97/SCABNet.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70039","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143362284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MLFE-UNet: Multi-Level Feature Extraction Transformer-Based UNet for Gastrointestinal Disease Segmentation
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-01-27 | DOI: 10.1002/ima.70030
Anass Garbaz, Yassine Oukdach, Said Charfi, Mohamed El Ansari, Lahcen Koutti, Mouna Salihoun, Samira Lafraxo

Accurately segmenting gastrointestinal (GI) disease regions from Wireless Capsule Endoscopy images is essential for clinical diagnosis and survival prediction. However, challenges arise due to similar intensity distributions, variable lesion shapes, and fuzzy boundaries. In this paper, we propose MLFE-UNet, an advanced fusion of CNN-based transformers with UNet. Both the encoder and decoder utilize a multi-level feature extraction (MLFE) CNN-Transformer-based module. This module extracts features from the input data, considering both global dependencies and local information. Furthermore, we introduce a multi-level spatial attention (MLSA) block that functions as the bottleneck. It enhances the network's ability to handle complex structures and overlapping regions in feature maps. The MLSA block captures multiscale dependencies of tokens from the channel perspective and transmits them to the decoding path. A contextual feature stabilization block follows each transition to emulate lesion zones and facilitate segmentation guidelines at each phase. To address high-level semantic information, we incorporate a computationally efficient spatial channel attention block, followed by a stabilization block in the skip connections, ensuring global interaction and highlighting important semantic features passed from the encoder to the decoder. To evaluate the performance of our proposed MLFE-UNet, we selected common GI diseases, specifically bleeding and polyps. The Dice coefficient scores obtained by MLFE-UNet on the MICCAI 2017 (Red Lesion) and CVC-ClinicDB datasets are 92.34% and 88.37%, respectively.
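As a reference point for the channel attention mentioned above, here is a minimal squeeze-and-excitation-style sketch of the kind often placed in skip connections to highlight semantic features; the reduction ratio and sizes are illustrative, not MLFE-UNet's actual block.

```python
# A minimal squeeze-and-excitation-style channel attention sketch:
# global-average-pool each channel, learn per-channel weights, reweight.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):                  # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze: global average pool
        return x * w[:, :, None, None]     # excite: reweight channels

x = torch.randn(1, 64, 32, 32)
print(ChannelAttention(64)(x).shape)       # torch.Size([1, 64, 32, 32])
```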

{"title":"MLFE-UNet: Multi-Level Feature Extraction Transformer-Based UNet for Gastrointestinal Disease Segmentation","authors":"Anass Garbaz,&nbsp;Yassine Oukdach,&nbsp;Said Charfi,&nbsp;Mohamed El Ansari,&nbsp;Lahcen Koutti,&nbsp;Mouna Salihoun,&nbsp;Samira Lafraxo","doi":"10.1002/ima.70030","DOIUrl":"https://doi.org/10.1002/ima.70030","url":null,"abstract":"<div>\u0000 \u0000 <p>Accurately segmenting gastrointestinal (GI) disease regions from Wireless Capsule Endoscopy images is essential for clinical diagnosis and survival prediction. However, challenges arise due to similar intensity distributions, variable lesion shapes, and fuzzy boundaries. In this paper, we propose MLFE-UNet, an advanced fusion of CNN-based transformers with UNet. Both the encoder and decoder utilize a multi-level feature extraction (MLFA) CNN-Transformer-based module. This module extracts features from the input data, considering both global dependencies and local information. Furthermore, we introduce a multi-level spatial attention (MLSA) block that functions as the bottleneck. It enhances the network's ability to handle complex structures and overlapping regions in feature maps. The MLSA block captures multiscale dependencies of tokens from the channel perspective and transmits them to the decoding path. A contextual feature stabilization block follows each transition to emulate lesion zones and facilitate segmentation guidelines at each phase. To address high-level semantic information, we incorporate a computationally efficient spatial channel attention block. This is followed by a stabilization block in the skip connections, ensuring global interaction and highlighting important semantic features from the encoder to the decoder. To evaluate the performance of our proposed MLFE-UNet, we selected common GI diseases, specifically bleeding and polyps. The dice coefficient scores obtained by MLFE-UNet on the MICCAI 2017 (Red lesion) and CVC-ClinicalDB data sets are 92.34% and 88.37%, respectively.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143120026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0