Novel Enrichment of Brightness-Distorted Chest X-Ray Images Using Fusion-Based Contrast-Limited Adaptive Fuzzy Gamma Algorithm
Pub Date: 2023-07-21 | DOI: 10.1142/s021946782450058x
K. Kiruthika, Rashmita Khilar
As image-handling technologies, image enrichment (IE) can provide more effective information while image compression can reduce memory space. IE plays a vital role in the medical field, where noiseless images are required, and it applies to all areas of image understanding and analysis. This paper presents an innovative algorithm, contrast-limited adaptive fuzzy gamma (CLAFG), for IE of chest X-ray (CXR) images. Image contrast is enriched by computing several histograms and membership planes. The proposed algorithm comprises several steps. First, the CXR is divided into contextual regions (CRs). Second, the clip limit, a threshold value that alters the contrast of the CXR, is applied to the histogram generated from each CR, and a fuzzification technique is applied to the CXR via the membership plane. Third, the clipped histograms are processed in two ways: they are merged using bi-cubic interpolation and modified with a membership function. Finally, the outputs of the bi-cubic interpolation and the membership function are fused using standard enhancement methods to produce a richer CXR image.
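To make the staged pipeline concrete, here is a loose Python sketch of a CLAFG-style enhancement. It is an assumption-laden illustration, not the paper's implementation: skimage's CLAHE stands in for the contextual-region histogram clipping (skimage blends per-tile mappings bilinearly rather than bicubically), and a simple weighted average stands in for the fusion stage.

```python
# Hypothetical CLAFG-style sketch: contrast-limited adaptive equalization
# fused with a fuzzy-gamma membership transform. Names and weights are
# illustrative, not taken from the paper.
import numpy as np
from skimage import exposure

def clafg_sketch(cxr, clip_limit=0.01, gamma=0.7, alpha=0.5):
    """cxr: 2D float array in [0, 1] (grayscale chest X-ray)."""
    # Stage 1: contrast-limited adaptive equalization over contextual regions;
    # skimage tiles the image and blends the per-tile mappings.
    clahe_out = exposure.equalize_adapthist(cxr, clip_limit=clip_limit)

    # Stage 2: fuzzification -- map intensities to a membership plane in [0, 1]
    # and apply a gamma transform to the memberships.
    mu = (cxr - cxr.min()) / (cxr.max() - cxr.min() + 1e-8)
    fuzzy_gamma = mu ** gamma

    # Stage 3: fuse the two enhanced planes (a simple weighted fusion here).
    return alpha * clahe_out + (1.0 - alpha) * fuzzy_gamma
```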
{"title":"Novel Enrichment of Brightness-Distorted Chest X-Ray Images Using Fusion-Based Contrast-Limited Adaptive Fuzzy Gamma Algorithm","authors":"K. Kiruthika, Rashmita Khilar","doi":"10.1142/s021946782450058x","DOIUrl":"https://doi.org/10.1142/s021946782450058x","url":null,"abstract":"As innovations for image handling, image enrichment (IE) can give more effective information and image compression can decrease memory space. IE plays a vital role in the medical field for which we have to use a noiseless image. IE applies to all areas of understanding and analysis of images. This paper provides an innovative algorithm called contrast-limited adaptive fuzzy gamma (CLAFG) for IE using chest X-ray (CXR) images. The image dissimilarity is enriched by computing several histograms and membership planes. The proposed algorithm comprises various steps. Firstly, CXR is separated into contextual region (CR). Secondly, the cliplimit, a threshold value which alters the dissimilarity of the CXR and applies it to the histogram which, is generated by CR and then applies the fuzzification technique via the membership plane to the CXR. Thirdly, the clipped histograms are performed in two ways, i.e. it is merged using bi-cubic interpolation techniques and it is modified with membership function. Finally, the resulting output from bi-cubic interpolation and membership function are fond of using upgrade contemplate standard methods for a richer CXR image.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43869871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Convolutional Neural Network based on UNet for Iris Segmentation
Pub Date: 2023-07-21 | DOI: 10.1142/s0219467824500426
A. Khaki
Nowadays, the iris recognition system is one of the most widely used and most accurate biometric systems. Iris segmentation is the most crucial stage of an iris recognition system, and accurate segmentation can improve the efficiency of recognition. The main objective of iris segmentation is to obtain the iris area. Recently, iris segmentation methods based on convolutional neural networks (CNNs) have grown in number and have greatly improved accuracy. Nevertheless, their accuracy decreases on low-quality images captured in uncontrolled conditions, so existing methods cannot segment such images precisely. To overcome this challenge, this paper proposes a robust convolutional neural network (R-Net), inspired by UNet, for iris segmentation. R-Net is divided into two parts: an encoder and a decoder. Several layers are added to ResNet-34, which is used in the encoder path; in the decoder path, four convolutions are applied at each level. Both choices help obtain suitable feature maps and increase the network's accuracy. The proposed network has been tested on four datasets: UBIRIS v2 (UBIRIS), CASIA iris v4.0 (CASIA) distance, CASIA interval, and IIT Delhi v1.0 (IITD); UBIRIS is the dataset used for low-quality images. The error rate (NICE1) of the proposed network is 0.0055 on UBIRIS, 0.0105 on CASIA interval, 0.0043 on CASIA distance, and 0.0154 on IITD. These results show better performance than other methods.
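As an illustration of the encoder/decoder layout described above, here is a minimal U-Net-style sketch with a ResNet-34 encoder in PyTorch. R-Net's exact added layers and decoder widths are not specified here, so all layer choices are assumptions.

```python
# Minimal ResNet-34 encoder + U-Net decoder sketch (not the paper's R-Net).
import torch
import torch.nn as nn
from torchvision.models import resnet34

class RNetSketch(nn.Module):
    def __init__(self, n_classes=1):
        super().__init__()
        base = resnet34(weights=None)
        # Encoder: reuse ResNet-34 stages as skip-connection sources.
        self.stem = nn.Sequential(base.conv1, base.bn1, base.relu)  # /2,  64 ch
        self.pool = base.maxpool                                    # /4
        self.enc1 = base.layer1                                     # /4,  64 ch
        self.enc2 = base.layer2                                     # /8,  128 ch
        self.enc3 = base.layer3                                     # /16, 256 ch
        self.enc4 = base.layer4                                     # /32, 512 ch
        # Decoder: upsample, concatenate the skip, then convolve.
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.dec3 = self._block(512 + 256, 256)
        self.dec2 = self._block(256 + 128, 128)
        self.dec1 = self._block(128 + 64, 64)
        self.head = nn.Conv2d(64, n_classes, kernel_size=1)

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        s0 = self.stem(x)
        e1 = self.enc1(self.pool(s0))
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        d3 = self.dec3(torch.cat([self.up(e4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        # Upsample back to input resolution and emit a 1-channel iris mask.
        mask = self.head(nn.functional.interpolate(
            d1, scale_factor=4, mode='bilinear', align_corners=False))
        return torch.sigmoid(mask)
```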
{"title":"Robust Convolutional Neural Network based on UNet for Iris Segmentation","authors":"A. Khaki","doi":"10.1142/s0219467824500426","DOIUrl":"https://doi.org/10.1142/s0219467824500426","url":null,"abstract":"Nowadays, the iris recognition system is one of the most widely used and most accurate biometric systems. The iris segmentation is the most crucial stage of iris recognition system. The accurate iris segmentation can improve the efficiency of iris recognition. The main objective of iris segmentation is to obtain the iris area. Recently, the iris segmentation methods based on convolutional neural networks (CNNs) have been grown, and they have improved the accuracy greatly. Nevertheless, their accuracy is decreased by low-quality images captured in uncontrolled conditions. Therefore, the existing methods cannot segment low-quality images precisely. To overcome the challenge, this paper proposes a robust convolutional neural network (R-Net) inspired by UNet for iris segmentation. R-Net is divided into two parts: encoder and decoder. In this network, several layers are added to ResNet-34, and used in the encoder path. In the decoder path, four convolutions are applied at each level. Both help to obtain suitable feature maps and increase the network accuracy. The proposed network has been tested on four datasets: UBIRIS v2 (UBIRIS), CASIA iris v4.0 (CASIA) distance, CASIA interval, and IIT Delhi v1.0 (IITD). UBIRIS is a dataset that is used for low-quality images. The error rate (NICE1) of proposed network is 0.0055 on UBIRIS, 0.0105 on CASIA interval, 0.0043 on CASIA distance, and 0.0154 on IITD. Results show better performance of the proposed network compared to other methods.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45410552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yoga Posture Recognition by Learning Spatial-Temporal Feature with Deep Learning Techniques
Pub Date: 2023-07-21 | DOI: 10.1142/s0219467824500554
J. Palanimeera, K. Ponmozhi
Yoga posture recognition remains a difficult problem because of crowded backgrounds, varied settings, occlusions, viewpoint alterations, and camera motion, despite recent promising advances in deep learning. This paper presents a method for accurately detecting various yoga poses using deep learning (DL) algorithms. Using a standard RGB camera, six yoga poses — Sukhasana, Kakasana, Naukasana, Dhanurasana, Tadasana, and Vrikshasana — were captured from ten people, five men and five women. A new DL model is presented for representing the spatio-temporal (ST) variation of skeleton-based yoga poses in videos. Multiple representation learners are used to exploit video-level temporal information, combining spatio-temporal sampling with long-range temporal learning to produce an effective and efficient training approach. A novel feature extraction method using OpenPose is described, together with a DenseBi-directional LSTM network that represents spatial-temporal links in both the forward and backward directions, increasing the efficacy and consistency of long-range action modeling. To improve temporal pattern modeling capability, the recurrent layers are stacked and combined with dense skip connections. To improve performance further, appearance and motion modalities are fused with a fusion module, and the model is compared to other LSTM-based deep learning models, including LSTM, Bi-LSTM, Res-LSTM, and Res-BiLSTM. Studies on real-time yoga-pose datasets show that the proposed DenseBi-LSTM model performs better than state-of-the-art techniques for yoga pose detection.
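A hedged sketch of a skeleton-sequence classifier of this kind: OpenPose-style 2D keypoints per frame fed to a stacked bidirectional LSTM. The joint count, hidden sizes, and six-class head follow common defaults and the abstract's pose list; the paper's dense skip connections and fusion module are omitted.

```python
# Bidirectional LSTM over per-frame pose keypoints (illustrative only).
import torch
import torch.nn as nn

class PoseBiLSTMSketch(nn.Module):
    def __init__(self, n_joints=18, n_classes=6, hidden=128, layers=2):
        super().__init__()
        # Each frame: (x, y) for every joint, flattened.
        self.lstm = nn.LSTM(input_size=n_joints * 2, hidden_size=hidden,
                            num_layers=layers, batch_first=True,
                            bidirectional=True, dropout=0.2)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, seq):          # seq: (batch, frames, n_joints * 2)
        out, _ = self.lstm(seq)      # (batch, frames, 2 * hidden)
        return self.fc(out[:, -1])   # classify from the last time step

# Usage: logits = PoseBiLSTMSketch()(torch.randn(4, 30, 36))
```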
{"title":"Yoga Posture Recognition by Learning Spatial-Temporal Feature with Deep Learning Techniques","authors":"J. Palanimeera, K. Ponmozhi","doi":"10.1142/s0219467824500554","DOIUrl":"https://doi.org/10.1142/s0219467824500554","url":null,"abstract":"Yoga posture recognition remains a difficult issue because of crowded backgrounds, varied settings, occlusions, viewpoint alterations, and camera motions, despite recent promising advances in deep learning. In this paper, the method for accurately detecting various yoga poses using DL (Deep Learning) algorithms is provided. Using a standard RGB camera, six yoga poses — Sukhasana, Kakasana, Naukasana, Dhanurasana, Tadasana, and Vrikshasana — were captured on ten people, five men and five women. In this study, a brand-new DL model is presented for representing the spatio-temporal (ST) variation of skeleton-based yoga poses in movies. It is advised to use a variety of representation learners to pry video-level temporal recordings, which combine spatio-temporal sampling with long-range time mastering to produce a successful and effective training approach. A novel feature extraction method using Open Pose is described, together with a DenceBi-directional LSTM network to represent spatial-temporal links in both the forward and backward directions. This will increase the efficacy and consistency of modeling long-range action detection. To improve temporal pattern modeling capability, they are stacked and combined with dense skip connections. To improve performance, two modalities from look and motion are fused with a fusion module and compared to other deep learning models are LSTMs including LSTM, Bi-LSTM, Res-LSTM, and Res-BiLSTM. Studies on real-time datasets of yoga poses show that the suggested DenseBi-LSTM model performs better and yields better results than state-of-the-art techniques for yoga pose detection.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49029323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Segmentation Approach for Tumors Detection in Brain Using Machine Learning Algorithms
Pub Date: 2023-07-21 | DOI: 10.1142/s0219467823400089
M. Praveena, M. Rao
Tumors are among the most dangerous conditions in humans and can cause death when not noticed in the early stages. Edema is a type of brain swelling caused by toxic particles in the human brain. In the brain, tumors are identified with magnetic resonance imaging (MRI) scanning, which plays a major role in locating the affected region in a given input image. Tumors may contain cancerous or non-cancerous cells, and many experts use the MRI report as the primary confirmation of whether tumors or edemas are cancerous. Brain tumor segmentation is a significant task used to distinguish normal tissue from tumor tissue. In this paper, a hybrid segmentation approach (HSA) is introduced to detect the exact regions of tumors and edemas in a given brain input image. HSA combines an advanced segmentation model with an edge detection technique to determine the state of the tumors or edemas. HSA is applied to the Kaggle brain image dataset, which consists of MRI scans; the edge detection technique improves detection of the tumor or edema region. The performance of HSA is compared with various algorithms, such as Fully Automatic Heterogeneous Segmentation using support vector machine (FAHS-SVM) and SVM with normal segmentation. Performance is measured using mean square error (MSE), peak signal-to-noise ratio (PSNR), and accuracy, and the proposed approach achieves better performance by improving accuracy.
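The following sketch illustrates the hybrid idea of pairing a coarse segmentation with edge detection, plus the MSE/PSNR evaluation. Otsu thresholding and Canny are stand-ins chosen for illustration; the paper's actual segmentation model is not specified here.

```python
# Coarse intensity segmentation refined by an edge map, with MSE/PSNR
# evaluation helpers (illustrative, not the paper's exact pipeline).
import cv2
import numpy as np

def hybrid_segment(mri_gray):
    # Coarse region: Otsu threshold separates bright tumor/edema candidates.
    _, region = cv2.threshold(mri_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Edge map sharpens the region boundary.
    edges = cv2.Canny(mri_gray, 100, 200)
    return cv2.bitwise_or(region, edges)

def mse_psnr(reference, result):
    mse = np.mean((reference.astype(np.float64) -
                   result.astype(np.float64)) ** 2)
    psnr = float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    return mse, psnr
```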
{"title":"Hybrid Segmentation Approach for Tumors Detection in Brain Using Machine Learning Algorithms","authors":"M. Praveena, M. Rao","doi":"10.1142/s0219467823400089","DOIUrl":"https://doi.org/10.1142/s0219467823400089","url":null,"abstract":"Tumors are most dangerous to humans and cause death when patient not noticed it in the early stages. Edema is one type of brain swelling that consists of toxic particles in the human brain. Especially in the brain, the tumors are identified with magnetic resonance imaging (MRI) scanning. This scanning plays a major role in detecting the area of the affected area in the given input image. Tumors may contain cancer or non-cancerous cells. Many experts have used this MRI report as the primary confirmation of the tumors or edemas as cancer cells. Brain tumor segmentation is a significant task that is used to classify the normal and tumor tissues. In this paper, a hybrid segmentation approach (HSA) is introduced to detect the accurate regions of tumors and edemas to the given brain input image. HSA is the combination of an advanced segmentation model and edge detection technique used to find the state of the tumors or edemas. HSA is applied on the Kaggle brain image dataset consisting of MRI scanning images. Edge detection technique improves the detection of tumor or edema region. The performance of the HSA is compared with various algorithms such as Fully Automatic Heterogeneous Segmentation using support vector machine (FAHS-SVM), SVM with Normal Segmentation, etc. Performance of proposed work is calculated using mean square error (MSE), peak signal noise ratio (PSNR), and accuracy. The proposed approach achieved better performance by improving accuracy.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42142265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Classification of Multiclass Brain Tumor Image Using Hybrid Artificial Intelligence with Honey Bee Optimization and Probabilistic U-RSNet
Pub Date: 2023-07-21 | DOI: 10.1142/s0219467824500591
Hariharan Ramamoorthy, Mohan Ramasundaram, S. Raja, Krunal Randive
Human life is considered most precious, yet the average lifetime is reported to have dropped from 75 to 50 years over the past two decades. This reduction is due to various health hazards, notably cancer. Brain tumors rank among the top ten most common causes of death. Although brain tumors are not the leading cause of death globally, 40% of other cancers (such as breast or lung cancers) metastasize to the brain and become brain tumors. Despite being the gold standard for tumor diagnosis, a biopsy has a number of drawbacks, including inferior sensitivity/specificity, risk during the procedure, and lengthy wait times for results. This work employs artificial intelligence integrated with honey bee optimization (HBO) to detect brain tumors, achieving a high level of performance in terms of accuracy, recall, precision, F1 score, and Jaccard index when compared to deep learning algorithms such as long short-term memory (LSTM) networks, convolutional neural networks, generative adversarial networks, recurrent neural networks, and deep belief networks. To enhance prediction, image segmentation is performed by the probabilistic U-RSNet. The work is analyzed on the BraTS 2020, BraTS 2021, and OASIS datasets using the vital parameters of accuracy, precision, recall, F1 score, Jaccard index, and PPV.
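For reference, the evaluation metrics listed above can be computed with scikit-learn as below; note that PPV is the same quantity as precision for binary masks.

```python
# Standard segmentation/classification metrics on binary tumor masks.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, jaccard_score)

def segmentation_metrics(y_true, y_pred):
    y_true, y_pred = y_true.ravel(), y_pred.ravel()
    return {
        'accuracy':        accuracy_score(y_true, y_pred),
        'precision (PPV)': precision_score(y_true, y_pred),
        'recall':          recall_score(y_true, y_pred),
        'f1':              f1_score(y_true, y_pred),
        'jaccard':         jaccard_score(y_true, y_pred),
    }
```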
{"title":"An Efficient Classification of Multiclass Brain Tumor Image Using Hybrid Artificial Intelligence with Honey Bee Optimization and Probabilistic U-RSNet","authors":"Hariharan Ramamoorthy, Mohan Ramasundaram, S. Raja, Krunal Randive","doi":"10.1142/s0219467824500591","DOIUrl":"https://doi.org/10.1142/s0219467824500591","url":null,"abstract":"The life of the human beings are considered as the most precious and the average life time has reduced from 75 to 50 age over the past two decades. This reduction of average life time is due to various health hazards namely cancer and many more. The brain tumor ranks among the top ten most common source of demise. Although brain tumors are not the leading cause of death globally, 40% of other cancers (such as breast or lung cancers) metastasize to the brain and become brain tumors. Despite being the gold norm for tumor diagnosis, a biopsy has a number of drawbacks, including inferior sensitivity/specificity, and menace when performing the biopsy, and lengthy wait times for the results. This work employs artificial intelligence integrated with the honey bee optimization (HBO) in detecting the brain tumor with high level of execution in terms of accuracy, recall, precision, F1 score and Jaccard index when compared to the deep learning algorithms of long short term memory networks (LSTM), convolutional neural networks, generative adversarial networks, recurrent neural networks, and deep belief networks. In this work, to enhance the level of prediction, the image segmentation methodology is performed by the probabilistic U-RSNet. This work is analyzed employing the BraTS 2020, BraTS 2021, and OASIS dataset for the vital parameters like accuracy, precision, recall, F1 score, Jaccard index and PPV.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46186203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improvement of Bounding Box and Instance Segmentation Accuracy Using ResNet-152 FPN with Modulated Deformable ConvNets v2 Backbone-based Mask Scoring R-CNN
Pub Date: 2023-07-21 | DOI: 10.1142/s0219467824500542
Suresh Shanmugasundaram, Natarajan Palaniappan
A challenging task is to make sure that a deep learning network learns its own prediction accuracy. The Intersection-over-Union (IoU) between the ground truth and the instance mask determines mask quality, yet there is no inherent relationship between the classification score and mask quality. The aim of this work is to investigate this problem and learn the accuracy of the predicted instance mask. The proposed network regresses the MaskIoU by comparing the predicted mask with the respective instance feature. The mask scoring strategy determines the mismatch between mask score and mask quality, then adjusts the parameters accordingly. The ability to adapt to an object's geometric variations determines a deformable convolutional network's performance. With increased modeling power and stronger training, a reformulated Deformable ConvNets improves the ability to focus on pertinent image regions. The introduction of a modulation technique, which broadens the scope of deformation modeling, and the comprehensive integration of deformable convolution within the network enhance the modeling power. With the help of the feature mimicking scheme of DCNv2, the network learns features that resemble the classification capability and object focus of region-based convolutional neural network (R-CNN) features; this scheme guides network training to efficiently exploit the enhanced modeling capability. The backbone of the proposed Mask Scoring R-CNN network is designed with a ResNet-152 FPN and the DCNv2 network, and the network is also tested with ResNet-50 and ResNet-101 backbones. Instance segmentation and object detection on the COCO benchmark and the Cityscapes dataset are achieved with top accuracy and improved performance using the proposed network.
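A small numeric sketch of the mask-scoring idea: the final mask score rescales the classification confidence by the (regressed) MaskIoU, so a confidently classified but poorly aligned mask is ranked down. The helper names are illustrative.

```python
# Mask scoring in miniature: score = classification confidence * MaskIoU.
import numpy as np

def mask_iou(pred_mask, gt_mask):
    # IoU between two boolean masks; used as the MaskIoU training target.
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 0.0

def mask_score(cls_score, predicted_maskiou):
    # At inference, the MaskIoU head's regression output replaces the
    # (unavailable) true IoU, calibrating the mask confidence.
    return cls_score * predicted_maskiou
```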
Detection of Fake Colorized Images based on Deep Learning
Pub Date: 2023-07-21 | DOI: 10.1142/s0219467825500020
Khalid A. Salman, Khalid Shaker, Sufyan T. Faraj Al-Janabi
Image editing technologies have advanced to the point where they can significantly enhance an image, but they can also be used maliciously. Colorization is an image editing technology that uses realistic colors to colorize grayscale photos. However, this strategy can also be applied to natural color images for malicious purposes (e.g. to confuse object recognition systems that depend on object colors for recognition). Image forensics is a well-developed field that examines photos under specified conditions to establish confidence and authenticity. This work proposes a new fake colorized image detection approach based on the Residual Network (ResNet) architecture, a kind of Convolutional Neural Network (CNN) that has been widely adopted for various tasks. First, the input image is reconstructed via a special image representation that combines color information from three separate color spaces (HSV, Lab, and YCbCr); the reconstructed images are then used to train the proposed ResNet model. Experimental results demonstrate that the proposed method generalizes well and is significantly robust for revealing fake colorized images generated by various colorization methods.
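A plausible reconstruction of the combined color representation described above: corresponding channels from HSV, Lab, and YCbCr stacked into one nine-channel array for the ResNet input. The exact channel selection and ordering are assumptions.

```python
# Stack three color-space conversions of one image into a 9-channel input.
import cv2
import numpy as np

def combined_color_representation(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    # OpenCV names this conversion YCrCb; it is the YCbCr family of spaces.
    ycc = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Result: (H, W, 9) -- three channels from each color space.
    return np.concatenate([hsv, lab, ycc], axis=2)
```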
{"title":"Detection of Fake Colorized Images based on Deep Learning","authors":"Khalid A. Salman, Khalid Shaker, Sufyan T. Faraj Al-Janabi","doi":"10.1142/s0219467825500020","DOIUrl":"https://doi.org/10.1142/s0219467825500020","url":null,"abstract":"Image editing technologies have been advanced that can significantly enhance the image, but can also be used maliciously. Colorization is a new image editing technology that uses realistic colors to colorize grayscale photos. However, this strategy can be used on natural color images for a malicious purpose (e.g. to confuse object recognition systems that depend on the colors of objects for recognition). Image forensics is a well-developed field that examines photos of specified conditions to build confidence and authenticity. This work proposes a new fake colorized image detection approach based on the special Residual Network (ResNet) architecture. ResNets are a kind of Convolutional Neural Networks (CNNs) architecture that has been widely adopted and applied for various tasks. At first, the input image is reconstructed via a special image representation that combines color information from three separate color spaces (HSV, Lab, and Ycbcr); then, the new reconstructed images have been used for training the proposed ResNet model. Experimental results have demonstrated that our proposed method is highly generalized and significantly robust for revealing fake colorized images generated by various colorization methods.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44758172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network
Pub Date: 2023-07-15 | DOI: 10.1142/s0219467824500402
Dhekra El Hamdi, Ines Elouedi, I. Slim
Lung cancer is the leading cause of cancer-related death worldwide, so early diagnosis remains essential for access to appropriate curative treatment strategies. This paper presents a novel approach for assessing the ability of Positron Emission Tomography/Computed Tomography (PET/CT) images to classify lung cancer in association with artificial intelligence techniques. In this work, we built a multi-output Convolutional Neural Network (CNN) as a tool to assist the staging of patients with lung cancer, adopting the TNM staging system and histologic subtype classification as references. The VGG-16 network is applied to the PET/CT images to extract the most relevant features, which are then passed to a three-branch classifier for Nodal (N) stage, Tumor (T) stage, and histologic subtype classification. Experimental results demonstrate that the CNN model achieves good results in TN staging and histology classification. The proposed architecture classified tumor size with a high accuracy of 0.94 and an area under the curve (AUC) of 0.97 when tested on the Lung-PET-CT-Dx dataset, and it yielded high performance for N staging with an accuracy of 0.98. Moreover, the approach achieves better accuracy than state-of-the-art methods in histologic classification.
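The three-branch design can be sketched as a shared VGG-16 trunk feeding separate T, N, and histology heads, as below. The class counts and head widths are placeholders, not the paper's values.

```python
# Shared VGG-16 feature extractor with three classification branches.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class MultiOutputVGGSketch(nn.Module):
    def __init__(self, n_t=4, n_n=3, n_hist=3):
        super().__init__()
        base = vgg16(weights=None)
        self.features = base.features            # shared convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d((7, 7))

        def head(n_out):                          # one small MLP per task
            return nn.Sequential(
                nn.Flatten(),
                nn.Linear(512 * 7 * 7, 256), nn.ReLU(inplace=True),
                nn.Linear(256, n_out))

        self.t_head = head(n_t)      # Tumor (T) stage
        self.n_head = head(n_n)      # Nodal (N) stage
        self.h_head = head(n_hist)   # histologic subtype

    def forward(self, x):
        z = self.pool(self.features(x))
        return self.t_head(z), self.n_head(z), self.h_head(z)
```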
{"title":"Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network","authors":"Dhekra El Hamdi, Ines Elouedi, I. Slim","doi":"10.1142/s0219467824500402","DOIUrl":"https://doi.org/10.1142/s0219467824500402","url":null,"abstract":"Lung cancer is the leading cause of cancer-related death worldwide. Therefore, early diagnosis remains essential to allow access to appropriate curative treatment strategies. This paper presents a novel approach to assess the ability of Positron Emission Tomography/Computed Tomography (PET/CT) images for the classification of lung cancer in association with artificial intelligence techniques. We have built, in this work, a multi output Convolutional Neural Network (CNN) as a tool to assist the staging of patients with lung cancer. The TNM staging system as well as histologic subtypes classification were adopted as a reference. The VGG 16 network is applied to the PET/CT images to extract the most relevant features from images. The obtained features are then transmitted to a three-branch classifier to specify Nodal (N), Tumor (T) and histologic subtypes classification. Experimental results demonstrated that our CNN model achieves good results in TN staging and histology classification. The proposed architecture classified the tumor size with a high accuracy of 0.94 and the area under the curve (AUC) of 0.97 when tested on the Lung-PET-CT-Dx dataset. It also has yielded high performance for N staging with an accuracy of 0.98. Besides, our approach has achieved better accuracy than state-of-the-art methods in histologic classification.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47610541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RDN-NET: A Deep Learning Framework for Asthma Prediction and Classification Using Recurrent Deep Neural Network
Pub Date: 2023-07-13 | DOI: 10.1142/s0219467824500505
Md. Asim Iqbal, K. Devarajan, S. M. Ahmed
Asthma is a critical disease that causes a huge number of deaths across all age groups around the world, so early detection and prevention of asthma can save numerous lives and assist the medical field. Conventional machine learning methods have failed to detect asthma from speech signals and have resulted in low accuracy. This paper therefore presents advanced deep learning-based asthma prediction and classification using a recurrent deep neural network (RDN-Net). Initially, speech signals are preprocessed using the minimum mean-square-error short-time spectral amplitude (MMSE-STSA) method, which removes noise and enhances speech properties. An improved Ripplet-II Transform (IR2T) is then used to extract disease-dependent and disease-specific features, and a modified gray wolf optimization (MGWO)-based bio-optimization approach selects the optimal features via a hunting process. Finally, the RDN-Net predicts whether asthma is present in the speech signal and classifies the type as wheeze, crackle, or normal. Simulations were carried out on the real-time COSWARA dataset, and the proposed method performs better on all metrics than state-of-the-art approaches.
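As an illustration of the feature-selection step, below is a compact sketch of standard gray wolf optimization adapted to binary feature masks (the paper's modified MGWO differs). The fitness function, wolf count, and 0.5 thresholding are assumptions.

```python
# Standard GWO for feature selection: wolves hold real-valued feature
# scores in [0, 1], thresholded to a 0/1 selection mask for evaluation.
import numpy as np

def gwo_feature_select(fitness, n_features, n_wolves=10, iters=50, seed=0):
    """fitness(mask) -> error to minimize; mask is a boolean vector."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(0.0, 1.0, (n_wolves, n_features))
    for t in range(iters):
        errors = np.array([fitness(w > 0.5) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(errors)[:3]]  # three leaders
        a = 2.0 - 2.0 * t / iters       # exploration coefficient decays to 0
        moves = np.zeros_like(wolves)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(wolves.shape), rng.random(wolves.shape)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            moves += leader - A * np.abs(C * leader - wolves)
        wolves = np.clip(moves / 3.0, 0.0, 1.0)  # average of the three pulls
    best = wolves[np.argmin([fitness(w > 0.5) for w in wolves])]
    return best > 0.5

# Usage: pass a fitness that returns, e.g., the validation error of a
# classifier trained only on the selected feature columns.
```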
{"title":"RDN-NET: A Deep Learning Framework for Asthma Prediction and Classification Using Recurrent Deep Neural Network","authors":"Md.ASIM Iqbal, K. Devarajan, S. M. Ahmed","doi":"10.1142/s0219467824500505","DOIUrl":"https://doi.org/10.1142/s0219467824500505","url":null,"abstract":"Asthma is the one of the crucial types of disease, which causes the huge deaths of all age groups around the world. So, early detection and prevention of asthma disease can save numerous lives and are also helpful to the medical field. But the conventional machine learning methods have failed to detect the asthma from the speech signals and resulted in low accuracy. Thus, this paper presented the advanced deep learning-based asthma prediction and classification using recurrent deep neural network (RDN-Net). Initially, speech signals are preprocessed by using minimum mean-square-error short-time spectral amplitude (MMSE-STSA) method, which is used to remove the noises and enhances the speech properties. Then, improved Ripplet-II Transform (IR2T) is used to extract disease-dependent and disease-specific features. Then, modified gray wolf optimization (MGWO)-based bio-optimization approach is used to select the optimal features by hunting process. Finally, RDN-Net is used to predict the asthma disease present from speech signal and classifies the type as either wheeze, crackle or normal. The simulations are carried out on real-time COSWARA dataset and the proposed method resulted in better performance for all metrics as compared to the state-of-the-art approaches.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46658873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-Attention-Based Convolutional GRU for Enhancement of Adversarial Speech Examples
Pub Date: 2023-07-08 | DOI: 10.1142/s0219467824500530
Chaitanya Jannu, S. Vanambathina
Recent research has identified adversarial examples that challenge DNN-based ASR systems. In this paper, we propose a new model based on a Convolutional GRU and a Self-attention U-Net, called [Formula: see text], to improve adversarial speech signals. To represent the correlation between neighboring noisy speech frames, a two-layer GRU is added in the bottleneck of the U-Net, and an attention gate is inserted in the up-sampling units to increase adversarial stability. The goal of using the GRU is to combine weight sharing with gates that control the flow of data across multiple feature maps; as a result, it outperforms the original 1D convolution used in [Formula: see text]. The performance of the model is evaluated with explainable speech recognition metrics and analyzed under improved adversarial training. We used adversarial audio attacks to perform experiments on automatic speech recognition (ASR) and observed that (i) the robustness of DNN-based ASR models can be improved using the temporal features grasped by the attention-based GRU network; and (ii) through adversarial training, including additive adversarial data augmentation, the generalization power of DNN-based ASR models can be improved. The word-error-rate (WER) metric confirms that the enhancement capabilities are better than the state-of-the-art [Formula: see text]; the reason for this improvement is the ability of GRU units to extract global information within the feature maps. Based on the conducted experiments, the proposed [Formula: see text] increases the Speech Transmission Index (STI), Perceptual Evaluation of Speech Quality (PESQ), and Short-Term Objective Intelligibility (STOI) scores on adversarial speech examples in speech enhancement.
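A minimal attention-gate module of the kind inserted in the up-sampling path, following the Attention U-Net formulation of Oktay et al. adapted to 1D speech features. Channel sizes and the equal-length assumption are placeholders.

```python
# 1D attention gate: the decoder (gating) feature decides which parts of
# the encoder skip connection to pass through.
import torch
import torch.nn as nn

class AttentionGate1D(nn.Module):
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv1d(gate_ch, inter_ch, kernel_size=1)
        self.wx = nn.Conv1d(skip_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv1d(inter_ch, 1, kernel_size=1)

    def forward(self, gate, skip):
        # gate: decoder feature; skip: encoder feature (same length assumed).
        attn = torch.sigmoid(self.psi(torch.relu(self.wg(gate) + self.wx(skip))))
        return skip * attn   # suppress irrelevant time regions in the skip
```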
{"title":"Self-Attention-Based Convolutional GRU for Enhancement of Adversarial Speech Examples","authors":"Chaitanya Jannu, S. Vanambathina","doi":"10.1142/s0219467824500530","DOIUrl":"https://doi.org/10.1142/s0219467824500530","url":null,"abstract":"Recent research has identified adversarial examples which are the challenges to DNN-based ASR systems. In this paper, we propose a new model based on Convolutional GRU and Self-attention U-Net called [Formula: see text] to improve adversarial speech signals. To represent the correlation between neighboring noisy speech frames, a two-Layer GRU is added in the bottleneck of U-Net and an attention gate is inserted in up-sampling units to increase the adversarial stability. The goal of using GRU is to combine the weights sharing technique with the use of gates to control the flow of data across multiple feature maps. As a result, it outperforms the original 1D convolution used in [Formula: see text]. Especially, the performance of the model is evaluated by explainable speech recognition metrics and its performance is analyzed by the improved adversarial training. We used adversarial audio attacks to perform experiments on automatic speech recognition (ASR). We saw (i) the robustness of ASR models which are based on DNN can be improved using the temporal features grasped by the attention-based GRU network; (ii) through adversarial training, including some additive adversarial data augmentation, we could improve the generalization power of Automatic Speech Recognition models which are based on DNN. The word-error-rate (WER) metric confirmed that the enhancement capabilities are better than the state-of-the-art [Formula: see text]. The reason for this enhancement is the ability of GRU units to extract global information within the feature maps. Based on the conducted experiments, the proposed [Formula: see text] increases the score of Speech Transmission Index (STI), Perceptual Evaluation of Speech Quality (PESQ), and the Short-term Objective Intelligibility (STOI) with adversarial speech examples in speech enhancement.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41729473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}