Generation of Random Fields for Image Segmentation Techniques: A Review
Pub Date: 2022-04-13 | DOI: 10.1142/s0219467823500225
Rambabu Pemula, Sagenela Vijaya Kumar, C. Nagaraju
Generation of random fields (GRF) for image segmentation refers to partitioning an image into regions that are homogeneous or share similar visual characteristics. It is one of the most challenging tasks in image processing and an important pre-processing step in computer vision, image analysis, medical image processing, pattern recognition, remote sensing, and geographic information systems. Researchers have presented numerous image segmentation approaches, but challenges remain, such as segmenting low-contrast images, removing shadows, reducing high-dimensional images, and managing the computational complexity of segmentation techniques. This review paper addresses these issues. Experiments are conducted on the Berkeley dataset (BSD500), the Semantic dataset, and the authors' own dataset, and the results are presented as tables and graphs.
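To make the random-field idea concrete, here is a minimal sketch (not any particular method from this review) of Markov random field segmentation by iterated conditional modes, assuming a grayscale image and known class means; the smoothness weight beta and iteration count are illustrative choices.

    import numpy as np

    def icm_segment(img, means, beta=1.0, n_iter=5):
        # MRF segmentation via iterated conditional modes: each pixel label
        # trades a Gaussian data term against a Potts smoothness prior on its
        # 4-neighbours (illustrative sketch with assumed parameters).
        labels = np.argmin((img[..., None] - means) ** 2, axis=-1)
        H, W = img.shape
        for _ in range(n_iter):
            for y in range(H):
                for x in range(W):
                    best, best_cost = labels[y, x], np.inf
                    for k in range(len(means)):
                        data = (img[y, x] - means[k]) ** 2
                        smooth = sum(labels[ny, nx] != k
                                     for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                                     if 0 <= ny < H and 0 <= nx < W)
                        cost = data + beta * smooth
                        if cost < best_cost:
                            best, best_cost = k, cost
                    labels[y, x] = best
        return labels

    # Usage: a noisy two-region image segmented into two classes
    img = np.zeros((32, 32)); img[:, 16:] = 1.0
    img += 0.3 * np.random.randn(32, 32)
    seg = icm_segment(img, means=np.array([0.0, 1.0]))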
{"title":"Generation of Random Fields for Image Segmentation Techniques: A Review","authors":"Rambabu Pemula, Sagenela Vijaya Kumar, C. Nagaraju","doi":"10.1142/s0219467823500225","DOIUrl":"https://doi.org/10.1142/s0219467823500225","url":null,"abstract":"Generation of random fields (GRF) for image segmentation represents partitioning an image into different regions that are homogeneous or have similar facets of the image. It is one of the most challenging tasks in image processing and a very important pre-processing step in the fields of computer vision, image analysis, medical image processing, pattern recognition, remote sensing, and geographical information system. Many researchers have presented numerous image segmentation approaches, but still, there are challenges like segmentation of low contrast images, removal of shadow in the images, reduction of high dimensional images, and computational complexity of segmentation techniques. In this review paper, the authors address these issues. The experiments are conducted and tested on the Berkely dataset (BSD500), Semantic dataset, and our own dataset, and the results are shown in the form of tables and graphs.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133710110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Critical Survey on Developed Reconstruction Algorithms for Computed Tomography Imaging from a Limited Number of Projections
Pub Date: 2022-04-13 | DOI: 10.1142/s0219467823500262
Md. Shafiqul Islam, Rafiqul Islam
Rapid system and hardware development of X-ray computed tomography (CT) technologies has been accompanied by equally exciting advances in image reconstruction algorithms. Of the two families of reconstruction algorithms, analytical and iterative, iterative reconstruction (IR) algorithms have become a clinically viable option in CT imaging. The first CT scanners in the early 1970s used IR algorithms, but a lack of computational power prevented their clinical use. In 2009, the first IR algorithms became commercially available and began to replace conventionally established analytical algorithms such as filtered back projection. Since then, IR has played a vital role in the field of radiology. Although all available IR algorithms share the common mechanism of artifact reduction and/or potential for radiation dose reduction, the magnitude of these effects depends on the specific IR algorithm. IR reconstructs images by iteratively optimizing an objective function, which typically consists of a data integrity term and a regularization term; different regularization priors are therefore used across IR algorithms. This paper briefly reviews the overall evolution of CT image reconstruction and the regularization priors used in IR algorithms. Finally, the various reconstruction methodologies are compared at a glance to identify the preferred one, and anticipated future advancements in this domain are presented.
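The objective function described above can be made concrete with a small sketch; the toy system matrix, step size, and Tikhonov regularizer below are generic stand-ins under stated assumptions, not any vendor's IR algorithm.

    import numpy as np

    def iterative_reconstruct(A, b, lam=0.1, step=1e-3, n_iter=200):
        # Minimise the generic IR objective
        #     f(x) = ||A x - b||^2 + lam * ||x||^2
        # (data integrity term + regularization term) by gradient descent;
        # A maps the image x to its projections b.
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = 2 * A.T @ (A @ x - b) + 2 * lam * x
            x -= step * grad
        return x

    # Toy usage with a random matrix standing in for a CT projector
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 40))
    x_true = rng.standard_normal(40)
    x_rec = iterative_reconstruct(A, A @ x_true, lam=0.01)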
{"title":"A Critical Survey on Developed Reconstruction Algorithms for Computed Tomography Imaging from a Limited Number of Projections","authors":"Md. Shafiqul Islam, Rafiqul Islam","doi":"10.1142/s0219467823500262","DOIUrl":"https://doi.org/10.1142/s0219467823500262","url":null,"abstract":"Rapid system and hardware development of X-ray computed tomography (CT) technologies has been accompanied by equally exciting advances in image reconstruction algorithms. Of the two reconstruction algorithms, analytical and iterative, iterative reconstruction (IR) algorithms have become a clinically viable option in CT imaging. The first CT scanners in the early 1970s used IR algorithms, but lack of computation power prevented their clinical use. In 2009, the first IR algorithms became commercially available and replaced conventionally established analytical algorithms as filtered back projection. Since then, IR has played a vital role in the field of radiology. Although all available IR algorithms share the common mechanism of artifact reduction and/or potential for radiation dose reduction, the magnitude of these effects depends upon specific IR algorithms. IR reconstructs images by iteratively optimizing an objective function. The objective function typically consists of a data integrity term and a regularization term. Therefore, different regularization priors are used in IR algorithms. This paper will briefly look at the overall evolution of CT image reconstruction and the regularization priors used in IR algorithms. Finally, a discussion is presented based on the reality of various reconstruction methodologies at a glance to find the preferred one. Consequently, we will present anticipation towards future advancements in this domain.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132726618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locust Mayfly Optimization-Tuned Neural Network for AI-Based Pruning in Chess Game
Pub Date: 2022-04-08 | DOI: 10.1142/s0219467823500286
Vikrant Chole, V. Gadicha
The art of mimicking human responses and behavior in a programmable machine is called artificial intelligence (AI). AI has been incorporated into games, especially chess, to make them more engaging. This paper proposes a hybrid optimization-tuned neural network (NN) to establish a winning strategy in chess by generating the possible next moves in the game. Initially, images from a Portable Game Notation (PGN) file are used to train the NN classifier. The proposed Locust Mayfly algorithm, which inherits the characteristics of hybrid survival and socially interacting search agents, is used to optimally tune the weights of the NN classifier. The NN classifier finds all possible moves on the board, among which the best move is selected using the minimax algorithm. Finally, the performance of the proposed Locust Mayfly-based NN method is evaluated using performance metrics such as specificity, accuracy, and sensitivity. The method attained a specificity of 98%, an accuracy of 98%, and a sensitivity of 98%, demonstrating its effectiveness in pruning.
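The minimax selection step admits a compact generic sketch; the game-tree interface (moves, apply_move, evaluate) below is hypothetical glue, and the paper's NN move generator is not reproduced here.

    import math

    def minimax(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
        # Generic minimax with alpha-beta pruning; moves(state) yields the
        # candidate moves (in the paper these come from the NN classifier),
        # apply_move(state, m) returns the child state, evaluate scores leaves.
        candidates = list(moves(state))
        if depth == 0 or not candidates:
            return evaluate(state), None
        best_move = None
        if maximizing:
            value = -math.inf
            for m in candidates:
                score, _ = minimax(apply_move(state, m), depth - 1, alpha, beta,
                                   False, moves, apply_move, evaluate)
                if score > value:
                    value, best_move = score, m
                alpha = max(alpha, value)
                if alpha >= beta:  # prune: the opponent will avoid this branch
                    break
        else:
            value = math.inf
            for m in candidates:
                score, _ = minimax(apply_move(state, m), depth - 1, alpha, beta,
                                   True, moves, apply_move, evaluate)
                if score < value:
                    value, best_move = score, m
                beta = min(beta, value)
                if alpha >= beta:
                    break
        return value, best_move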
{"title":"Locust Mayfly Optimization-Tuned Neural Network for AI-Based Pruning in Chess Game","authors":"Vikrant Chole, V. Gadicha","doi":"10.1142/s0219467823500286","DOIUrl":"https://doi.org/10.1142/s0219467823500286","url":null,"abstract":"The art of mimicking a human’s responses and behavior in a programming machine is called Artificial intelligence (AI). AI has been incorporated in games in such a way to make them interesting, especially in chess games. This paper proposes a hybrid optimization tuned neural network (NN) to establish a winning strategy in the chess game by generating the possible next moves in the game. Initially, the images from Portable Game Notation (PGN) file are used to train the NN classifier. The proposed Locust Mayfly algorithm is utilized to optimally tune the weights of the NN classifier. The proposed Locust Mayfly algorithm inherits the characteristic features of hybrid survival and social interacting search agents. The NN classifier involves in finding all the possible moves in the board, among which the best move is obtained using the mini-max algorithm. At last, the performance of the proposed Locust mayfly-based NN method is evaluated with help of the performance metrics, such as specificity, accuracy, and sensitivity. The proposed Locust mayfly-based NN method attained a specificity of 98%, accuracy of 98%, and a sensitivity of 98%, which demonstrates the productiveness of the proposed mayfly-based NN method in pruning.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127979971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Analysis and Critical Review on Segmentation Techniques for Brain Tumor Classification
Pub Date: 2022-04-08 | DOI: 10.1142/s0219467823500237
Ayalapogu Ratna Raju, S. Pabboju, Rajeswara Rao Ramisetty
Irregular growth in brain cells causes brain tumors. In recent years, a considerable increase in medical cases of brain tumors has been observed, affecting both adults and children. However, the disease is highly treatable if detected early in tumor growth. Researchers have devised many sophisticated approaches for predicting tumor regions and their stages, and Magnetic Resonance Imaging (MRI) is commonly used by radiologists to evaluate tumors. In this paper, input images are taken from a database, and brain tumor segmentation is performed using various segmentation techniques. A comparative analysis is carried out across the following segmentation approaches: the Hybrid Active Contour (HAC) model, Bayesian Fuzzy Clustering (BFC), Active Contour (AC), Fuzzy C-Means (FCM) clustering, Sparse FCM, and the Black Hole Entropy Fuzzy Clustering (BHEFC) model. Segmentation performance is evaluated with the Dice coefficient, the Jaccard coefficient, and segmentation accuracy. The proposed method achieves high Dice and Jaccard coefficients of 0.7809 and 0.6456 when varying the iteration count on the REMBRANDT dataset, and a better segmentation accuracy of 0.9789 when varying the image size on the BraTS-2015 database.
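The Dice and Jaccard coefficients used in this comparison have standard definitions; a minimal sketch for binary masks (assuming at least one foreground pixel in each):

    import numpy as np

    def dice_jaccard(pred, truth):
        # Dice = 2|A ∩ B| / (|A| + |B|); Jaccard = |A ∩ B| / |A ∪ B|
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return 2 * inter / (pred.sum() + truth.sum()), inter / union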
{"title":"Performance Analysis and Critical Review on Segmentation Techniques for Brain Tumor Classification","authors":"Ayalapogu Ratna Raju, S. Pabboju, Rajeswara Rao Ramisetty","doi":"10.1142/s0219467823500237","DOIUrl":"https://doi.org/10.1142/s0219467823500237","url":null,"abstract":"An irregular growth in brain cells causes brain tumors. In recent years, a considerable rate of increment in medical cases regarding brain tumors has been observed, affecting adults and children. However, it is highly curable in recent times only if detected in the early time of tumor growth. Moreover, there are many sophisticated approaches devised by researchers for predicting the tumor regions and their stages. In addition, Magnetic Resonance Imaging (MRI) is utilized commonly by radiologists to evaluate tumors. In this paper, the input image is from a database, and brain tumor segmentation is performed using various segmentation techniques. Here, the comparative analysis is performed by comparing the performance of segmentation approaches, like Hybrid Active Contour (HAC) model, Bayesian Fuzzy Clustering (BFC), Active Contour (AC), Fuzzy C-Means (FCM) clustering technique, Sparse (Sparse FCM), and Black Hole Entropy Fuzzy Clustering (BHEFC) model. Moreover, segmentation technique performance is evaluated with the Dice coefficient, Jaccard coefficient, and segmentation accuracy. The proposed method shows high Dice and Jaccard coefficients of 0.7809 and 0.6456 by varying iteration with the REMBRANDT dataset and a better segmentation accuracy of 0.9789 by changing image size in the Brats-2015 database.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130654285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning-Based Medical Image Fusion Using Integrated Joint Slope Analysis with Probabilistic Parametric Steered Image Filter
Pub Date: 2022-04-08 | DOI: 10.1142/s0219467822400137
E. S. Rao, C. Prasad
Medical image fusion plays a significant role in medical diagnosis applications. Although conventional approaches produce moderate visual quality, there is still scope to improve performance and reduce computational complexity. This article therefore implements a hybrid fusion method combining joint slope analysis (JSA), probabilistic parametric steered image filtration (PPSIF), and a deep learning convolutional neural network (DLCNN)-based SR Fusion Net. JSA decomposes the images to estimate edge-based slopes and develops edge-preserving approximate layers from the multi-modal medical images. PPSIF then generates the feature fusion with base layer-based weight maps, and the SR Fusion Net generates spatial and texture feature-based weight maps. Finally, an optimal fusion rule is applied to the detail layers generated from the base and approximate layers, producing the fused result. The proposed method can fuse various modality combinations, such as MRI-CT, MRI-PET, and MRI-SPECT, using two different architectures. Simulation results show that the proposed method achieves better subjective and objective performance than state-of-the-art approaches.
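As a rough illustration of weight-map-driven fusion, here is a generic base/detail sketch; it is not JSA, PPSIF, or the SR Fusion Net, and the Gaussian split and detail-energy weights are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuse(img_a, img_b, sigma=2.0):
        # Split each image into a smooth base layer and a detail layer,
        # build a per-pixel weight map from local detail strength, and
        # recombine: a stand-in for the learned weight maps in the paper.
        base_a, base_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
        det_a, det_b = img_a - base_a, img_b - base_b
        w = (np.abs(det_a) + 1e-8) / (np.abs(det_a) + np.abs(det_b) + 2e-8)
        return 0.5 * (base_a + base_b) + w * det_a + (1 - w) * det_b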
{"title":"Deep Learning-Based Medical Image Fusion Using Integrated Joint Slope Analysis with Probabilistic Parametric Steered Image Filter","authors":"E. S. Rao, C. Prasad","doi":"10.1142/s0219467822400137","DOIUrl":"https://doi.org/10.1142/s0219467822400137","url":null,"abstract":"Medical image fusion plays a significant role in medical diagnosis applications. Although the conventional approaches have produced moderate visual analysis, still there is a scope to improve the performance parameters and reduce the computational complexity. Thus, this article implemented the hybrid fusion method by using the novel implementation of joint slope analysis (JSA), probabilistic parametric steered image filtration (PPSIF), and deep learning convolutional neural networks (DLCNNs)-based SR Fusion Net. Here, JSA decomposes the images to estimate edge-based slopes and develops the edge-preserving approximate layers from the multi-modal medical images. Further, PPSIF is used to generate the feature fusion with base layer-based weight maps. Then, the SR Fusion Net is used to generate the spatial and texture feature-based weight maps. Finally, optimal fusion rule is applied on the detail layers generated from the base layer and approximate layer, which resulted in the fused outcome. The proposed method is capable of performing the fusion operation between various modalities of images, such as MRI-CT, MRI-PET, and MRI-SPECT combinations by using two different architectures. The simulation results show that the proposed method resulted in better subjective and objective performance as compared to state of art approaches.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133286124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FO-DPSO Algorithm for Segmentation and Detection of Diabetic Mellitus for Ulcers
Pub Date: 2022-04-07 | DOI: 10.1142/s0219467822400113
J. Naveen, S. Sheba, B. Selvam
A major concern for diabetic patients in recent times is foot ulcers. According to surveys, about 15 in 100 diabetic patients suffer from foot ulcers. Wounds or ulcers in diabetic patients take longer to heal and require more careful treatment; foot ulcers may lead to dangerous conditions and can even cause the loss of a limb. Recognizing this grim condition, this paper proposes a Fractional-Order Darwinian Particle Swarm Optimization (FO-DPSO) technique for analyzing 2D color images of foot ulcers. The paper follows a standard image processing pipeline: efficient segmentation using the FO-DPSO algorithm and textural feature extraction using the Gray Level Co-occurrence Matrix (GLCM) technique. The overall effort yielded an accuracy of 91.2%, a sensitivity of 100%, and a specificity of 96.7% for the Naïve Bayes classifier, and an accuracy of 91.2%, a sensitivity of 100%, and a specificity of 79.6% for the Hoeffding tree classifier.
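GLCM texture extraction of the kind described is available in scikit-image; the distances, angles, and property set below are illustrative choices, not the paper's settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_img):
        # gray_img is assumed to be an 8-bit (uint8) grayscale wound image.
        glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}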
{"title":"FO-DPSO Algorithm for Segmentation and Detection of Diabetic Mellitus for Ulcers","authors":"J. Naveen, S. Sheba, B. Selvam","doi":"10.1142/s0219467822400113","DOIUrl":"https://doi.org/10.1142/s0219467822400113","url":null,"abstract":"In recent days, the major concern for diabetic patients is foot ulcers. According to the survey, among 15 people among 100 are suffering from this foot ulcer. The wound or ulcer found which is found in diabetic patients consumes more time to heal, also required more conscious treatment. Foot ulcers may lead to deleterious danger condition and also may be the cause for loss of limb. By understanding this grim condition, this paper proposes Fractional-Order Darwinian Particle Swarm Optimization (FO-DPSO) technique for analyzing foot ulcer 2D color images. This paper deals with standard image processing, i.e. efficient segmentation using FO-DPSO algorithm and extracting textural features using Gray Level Co-occurrence Matrix (GLCM) technique. The whole effort projected results as accuracy of 91.2%, sensitivity of 100% and specificity as 96.7% for Naïve Bayes classifier and accuracy of 91.2%, sensitivity of 100% and sensitivity of 79.6% for Hoeffding tree classifier.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123114031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Early Success Prediction of Indian Movies Using Subtitles: A Document Vector Approach
Pub Date: 2022-04-06 | DOI: 10.1142/s0219467823500304
Vaddadi Sai Rahul, M. Tejas, N. Prasanth, S. Raja
Scientific studies of the elements that influence the box office performance of Indian films have generally concentrated on post-production elements, i.e. those known only after a film has been completed or released, and notably on Bollywood films. Fewer studies have looked at regional film industries and pre-production factors, which are known before the decision to greenlight a film is made. This study applied natural language processing and machine learning to Indian films to predict, at the pre-production stage, whether they would be profitable. We extracted movie data and English subtitles (as an approximation of the screenplay) for the top five Indian regional film industries: Bollywood, Kollywood, Tollywood, Mollywood, and Sandalwood, as they account for a major portion of the Indian film industry's revenue. Subtitle Vector (Sub2Vec), a Paragraph Vector model trained on English subtitles, was used to embed subtitle text into 50 and 100 dimensions. The proposed approach follows a two-stage pipeline. In the first stage, Return on Investment (ROI) is computed from aggregated subtitle embeddings and associated movie data. In the second stage, classification models use the ROI from the first stage to predict a film's verdict. The optimal regressor-classifier pair was determined by evaluating the classification models with the F1-score and Cohen's Kappa across various hyperparameters. Compared to benchmark methods, the proposed methodology forecasts box office success more accurately.
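The two-stage pipeline can be sketched with generic scikit-learn models; the regressor and classifier below are illustrative placeholders, not the paper's tuned regressor-classifier pair, and X is assumed to hold the aggregated subtitle embeddings plus movie data.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor, RandomForestClassifier

    def two_stage_fit(X, budget, gross, verdict):
        # Stage 1: regress ROI from embeddings and metadata.
        roi = (gross - budget) / budget
        reg = GradientBoostingRegressor().fit(X, roi)
        # Stage 2: classify the verdict (e.g. hit/flop) from predicted ROI.
        clf = RandomForestClassifier().fit(reg.predict(X).reshape(-1, 1), verdict)
        return reg, clf

    def two_stage_predict(reg, clf, X_new):
        return clf.predict(reg.predict(X_new).reshape(-1, 1))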
{"title":"Early Success Prediction of Indian Movies Using Subtitles: A Document Vector Approach","authors":"Vaddadi Sai Rahul, M. Tejas, N. Prasanth, S. Raja","doi":"10.1142/s0219467823500304","DOIUrl":"https://doi.org/10.1142/s0219467823500304","url":null,"abstract":"Scientific studies of the elements that influence the box office performance of Indian films have generally concentrated on post-production elements, such as those discovered after a film has been completed or released, and notably for Bollywood films. Only fewer studies have looked at regional film industries and pre-production factors, which are elements that are known before a decision to greenlight a film is made. This study looked at Indian films using natural language processing and machine learning approaches to see if they would be profitable in the pre-production stage. We extract movie data and English subtitles (as an approximation to the screenplay) for the top five Indian regional film industries: Bollywood, Kollywood, Tollywood, Mollywood, and Sandalwood, as they make up a major portion of the Indian film industry’s revenue. Subtitle Vector (Sub2Vec), a Paragraph Vector model trained on English subtitles, was used to embed subtitle text into 50 and 100 dimensions. The proposed approach followed a two-stage pipeline. In the first stage, Return on Investment (ROI) was calculated using aggregated subtitle embeddings and associated movie data. Classification models used the ROI calculated in the first step to predicting a film’s verdict in the second step. The optimal regressor–classifier pair was determined by evaluating classification models using [Formula: see text]-score and Cohen’s Kappa scores on various hyperparameters. When compared to benchmark methods, our proposed methodology forecasts box office success more accurately.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131329009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Segmentation Error Minimization-Based Method for Multilevel Optimal Threshold Selection Using Opposition Equilibrium Optimizer
Pub Date: 2022-04-04 | DOI: 10.1142/s0219467823500213
Gyanesh Das, Rutuparna Panda, Leena Samantaray, S. Agrawal
Image segmentation is imperative for image processing applications, and thresholding is the simplest way of partitioning an image into different regions. Entropy-based threshold selection methods are mostly used for multilevel thresholding, but they suffer from a dependence on the spatial distribution of gray values. To solve this issue, a novel segmentation error minimization (SEM)-based method for multilevel optimal threshold selection using the opposition equilibrium optimizer (OEO) is suggested. In this contribution, a new segmentation score (SS) is derived as the objective function while minimizing the segmentation error function. The proposal is explicitly free from the gray-level spatial distribution of an image. Optimal threshold values are obtained by maximizing the SS (fitness value) using OEO. The key to success is the maximization of the score among classes, which sharpens the shared boundary between classes and leads to an improved threshold selection method. It is demonstrated empirically how the optimal threshold selection is made. Experimental results are presented on standard test images, with standard measures such as PSNR, SSIM, and FSIM used for validation. The results are compared with a state-of-the-art entropy-based technique; the method performs well both qualitatively and quantitatively. The suggested technique would be useful for biomedical image segmentation.
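The overall shape of optimizer-driven multilevel thresholding can be sketched as follows; since the abstract does not define the SS, the classical between-class variance criterion below is only a stand-in fitness, and OEO itself is not reproduced.

    import numpy as np

    def apply_thresholds(img, thresholds):
        # Partition gray levels into len(thresholds) + 1 classes.
        return np.digitize(img, sorted(thresholds))

    def fitness(img, thresholds):
        # Stand-in objective (multilevel Otsu between-class variance);
        # the paper instead maximises its segmentation score (SS).
        labels = apply_thresholds(img, thresholds)
        mu = img.mean()
        return sum((labels == k).mean() * (img[labels == k].mean() - mu) ** 2
                   for k in np.unique(labels))

    # An optimizer such as OEO would search for the threshold vector
    # maximising this fitness over the image's gray-level range.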
{"title":"A Novel Segmentation Error Minimization-Based Method for Multilevel Optimal Threshold Selection Using Opposition Equilibrium Optimizer","authors":"Gyanesh Das, Rutuparna Panda, Leena Samantaray, S. Agrawal","doi":"10.1142/s0219467823500213","DOIUrl":"https://doi.org/10.1142/s0219467823500213","url":null,"abstract":"Image segmentation is imperative for image processing applications. Thresholding technique is the easiest way of partitioning an image into different regions. Mostly, entropy-based threshold selection methods are used for multilevel thresholding. However, these methods suffer from their dependencies on spatial distribution of gray values. To solve this issue, a novel segmentation error minimization (SEM)-based method for multilevel optimal threshold selection using opposition equilibrium optimizer (OEO) is suggested. In this contribution, a new segmentation score (SS) (objective function) is derived while minimizing the segmentation error function. Our proposal is explicitly free from gray level spatial distribution of an image. Optimal threshold values are achieved by maximizing the SS (fitness value) using OEO. The key to success is the maximization of score among classes, ensuring the sharpening of the shred boundary between classes, leading to an improved threshold selection method. It is empirically demonstrated how the optimal threshold selection is made. Experimental results are presented using standard test images. Standard measures like PSNR, SSIM and FSIM are used for validation The results are compared with state-of-the-art entropy-based technique. Our method performs well both qualitatively and quantitatively. The suggested technique would be useful for biomedical image segmentation.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129612367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Firefly Competitive Swarm Optimization Based Hierarchical Attention Network for Lung Cancer Detection
Pub Date: 2022-02-11 | DOI: 10.1142/s0219467823500171
B. Spoorthi, S. Mahesh
Lung cancer is a severe disease that causes a high number of deaths worldwide. Early detection of lung cancer helps improve patient survival rates. Computed Tomography (CT) is used to locate the tumor and identify the cancer level in the body; however, CT images can suffer from poorly visible tumor areas and low contrast in tumor regions. This paper devises an optimization-driven technique for classifying lung cancer. The CT image is used to determine the position of the tumor and is first segmented using the DeepJoint model. Feature extraction is then carried out, covering local ternary pattern-based features, Histogram of Gradients (HoG) features, and statistical features such as variance, mean, kurtosis, energy, entropy, and skewness. The categorization of lung cancer is performed using a Hierarchical Attention Network (HAN), trained with the proposed Firefly Competitive Swarm Optimization (FCSO), which combines the firefly algorithm (FA) and Competitive Swarm Optimization (CSO). The proposed FCSO-based HAN achieved effective performance, with an accuracy of 91.3%, a sensitivity of 88%, and a specificity of 89.1%.
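The statistical features listed above have standard definitions; a minimal extraction sketch, where computing entropy and energy from the normalized intensity histogram is our assumption:

    import numpy as np
    from scipy.stats import kurtosis, skew

    def statistical_features(region):
        # region: pixel intensities of a segmented lung area.
        vals = region.ravel().astype(float)
        hist, _ = np.histogram(vals, bins=256)
        p = hist / hist.sum()
        p = p[p > 0]
        return {
            "mean": vals.mean(),
            "variance": vals.var(),
            "skewness": skew(vals),
            "kurtosis": kurtosis(vals),
            "entropy": -(p * np.log2(p)).sum(),  # Shannon entropy of histogram
            "energy": (p ** 2).sum(),
        }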
{"title":"Firefly Competitive Swarm Optimization Based Hierarchical Attention Network for Lung Cancer Detection","authors":"B. Spoorthi, S. Mahesh","doi":"10.1142/s0219467823500171","DOIUrl":"https://doi.org/10.1142/s0219467823500171","url":null,"abstract":"Lung cancer is a severe disease, which causes high deaths in the world. Earlier discovery of lung cancer is useful to enhance the rate of survival in patients. Computed Tomography (CT) is utilized for determining the tumor and identifying the cancer level in the body. However, the issues of CT images cause less tumor visibility areas and unconstructive rates in tumor regions. This paper devises an optimization-driven technique for classifying lung cancer. The CT image is utilized for determining the position of the tumor. Here, the CT image undergoes segmentation, which is performed using the DeepJoint model. Furthermore, the feature extraction is carried out, wherein features such as local ternary pattern-based features, Histogram of Gradients (HoG) features, and statistical features, like variance, mean, kurtosis, energy, entropy, and skewness. The categorization of lung cancer is performed using Hierarchical Attention Network (HAN). The training of HAN is carried out using proposed Firefly Competitive Swarm Optimization (FCSO), which is devised by combining firefly algorithm (FA), and Competitive Swarm Optimization (CSO). The proposed FCSO-based HAN provided effective performance with high accuracy of 91.3%, sensitivity of 88%, and specificity of 89.1%.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122884787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Some Studies on Measurement of Worn Surface by Digital Image Processing
Pub Date: 2022-02-04 | DOI: 10.1142/s021946782350016x
T. Shashikala, B. L. Sunitha, S. Basavarajappa, J. Davim
Digital image processing (DIP) has become a common tool for analyzing engineering problems, offering a fast, frequent, and non-contact method of identification and measurement. The present investigation uses this method to automatically detect and measure worn regions on a material surface. Brass was used for experimentation, as it is a common bearing material. A pin-on-disc dry sliding wear testing machine was used to conduct the experiments, applying loads from 10 N to 50 N while keeping the sliding distance and sliding speed constant. After testing, images were acquired using a 1/2-inch interline-transfer CCD image sensor with a 795 (H) × 896 (V) spatial resolution and an 8.6 µm (H) × 8.3 µm (V) unit cell. Denoising was performed to remove any possible noise, followed by contrast stretching to enhance the image for wear-region extraction. A segmentation tool divided the worn and unworn regions by identifying white regions above a threshold value, with the objective of quantifying the worn surface of the tested specimen. Canny edge detection and granulometry techniques were used to quantify the wear region. The results reveal that the specific wear rate increases with increasing applied load at constant sliding speed and sliding distance. Similarly, the worn-region area identified by DIP increased from 42.7% to 69.97%, owing to the formation of deeper grooves in the worn material.
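The threshold-and-edge pipeline can be sketched as follows; Otsu's threshold, the Gaussian sigma, and scikit-image stand in for the paper's unspecified tools and settings.

    import numpy as np
    from skimage.filters import gaussian, threshold_otsu
    from skimage.feature import canny

    def worn_area_percent(gray_img):
        # Denoise, threshold the bright (worn) regions, trace their
        # boundaries, and report the worn fraction of the surface.
        smooth = gaussian(gray_img, sigma=1.0)
        mask = smooth > threshold_otsu(smooth)   # white regions above threshold
        edges = canny(smooth)                    # worn-region boundaries
        return 100.0 * mask.mean(), mask, edges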
{"title":"Some Studies on Measurement of Worn Surface by Digital Image Processing","authors":"T. Shashikala, B. L. Sunitha, S. Basavarajappa, J. Davim","doi":"10.1142/s021946782350016x","DOIUrl":"https://doi.org/10.1142/s021946782350016x","url":null,"abstract":"Digital image processing (DIP) becomes a common tool for analyzing engineering problems by fast, frequent and noncontact method of identification and measurement. An attempt has been made in the present investigation to use this method for automatically detecting the worn regions on the material surface and also its measurement. Brass material has been used for experimentation as it is used generally as a bearing material. A pin on disc dry sliding wear testing machine has been used for conducting the experiments by applying loads from 10 N to 50 N and by keeping sliding distance and sliding speed constant. After testing, images are acquired by using 1/2 inch interline transfer CCD image sensor with 795(H)[Formula: see text]896(V) spatial resolution of 8.6[Formula: see text][Formula: see text]m (H)[Formula: see text]8.3[Formula: see text][Formula: see text]m (V) unit cell. Denoising has been done to remove any possible noise followed by contrast stretching to enhance image for wear region extraction. Segmentation tool was used to divide the worn and unworn regions by identifying white regions greater than a threshold value with an objective of quantifying the worn surface for tested specimen. Canny edge detection and granulometry techniques have been used to quantify the wear region. The results revel that the specific wear rate increases with increase in applied load, at constant sliding speed and sliding distance. Similarly, the area of worn region as identified by DIP also increased from 42.7% to 69.97%. This is because of formation of deeper groves in the worn material.","PeriodicalId":177479,"journal":{"name":"Int. J. Image Graph.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131063464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}