
International Journal of Image and Graphics: Latest Publications

Development of Trio Optimal Feature Extraction Model for Attention-Based Adaptive Weighted RNN-Based Lung and Colon Cancer Detection Framework Using Histopathological Images
Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-09-09 DOI: 10.1142/s0219467825500275
MD Azam Pasha, M. Narayana
Cancer, a fatal disease, arises from a combination of genetic disorders and a variety of biomedical abnormalities. Colon and lung cancer are regarded as two of the leading causes of disability and death. Histopathological identification of such malignancies is the most significant component in determining the best course of action. Therefore, to minimize cancer-related mortality, early detection of the ailment on both fronts is needed. Both deep learning and machine learning techniques have been utilized to speed up cancer detection, which also helps researchers study a large number of patients over a short period and at lower cost. Hence, it is highly essential to design a new lung and colon detection model based on deep learning approaches. Initially, a set of histopathological images is collected from benchmark resources for effective analysis. Then, to attain the first set of features, the collected image is offered to a dilated network that extracts deep image features with the help of the Visual Geometry Group network (VGG16) and the Residual Neural Network (ResNet). The second set of features is attained as follows. The collected image is given to the pre-processing phase, where it is pre-processed with the help of Contrast-Limited Adaptive Histogram Equalization (CLAHE) and a filtering technique. The pre-processed image is then segmented using adaptive binary thresholding and offered to a dilated network that holds VGG16 and ResNet to attain the second set of features. The parameters of adaptive binary thresholding are tuned with a developed hybrid approach called Sand Cat swarm JAya Optimization (SC-JAO), which combines Sand Cat swarm Optimization (SCO) and JAYA. Finally, the third set of features is attained by offering the image to the pre-processing phase, segmenting it with the SC-JAO-tuned segmentation stage, and extracting textural features such as the Gray-Level Co-Occurrence Matrix (GLCM) and the Local Weber Pattern (LWP). The three sets of features are then given to the optimal weighted feature phase, where the weights are optimized by the SC-JAO algorithm, and subsequently to the disease prediction phase. Disease prediction is made with the help of Attention-based Adaptive Weighted Recurrent Neural Networks (AAW-RNN), whose parameters are also tuned by the developed SC-JAO. Across multiple experimental analyses, the developed model achieved a more effective lung and colon detection rate than conventional approaches.
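The second feature branch described in this abstract combines CLAHE pre-processing, adaptive binary thresholding, and GLCM texture descriptors. The snippet below is a minimal illustrative sketch of that kind of branch using OpenCV and scikit-image; the clip limit, block size, and chosen GLCM properties are assumptions for illustration, not values taken from the paper, and the SC-JAO tuning and LWP features are omitted.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(path):
    """Illustrative CLAHE -> adaptive threshold -> GLCM branch (not the authors' exact method)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Contrast-Limited Adaptive Histogram Equalization (CLAHE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)

    # Adaptive binary thresholding; block size and offset are the kind of
    # parameters an optimizer such as SC-JAO could tune.
    mask = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, blockSize=31, C=5)

    # GLCM texture descriptors computed on the thresholded region of interest
    roi = cv2.bitwise_and(enhanced, enhanced, mask=mask)
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])
```

In the paper, the optimizer would be responsible for choosing the thresholding parameters that this sketch hard-codes.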
Citations: 0
Combined Shallow and Deep Learning Models for Malware Detection in Wsn
IF 1.6 Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-09-07 DOI: 10.1142/s0219467825500342
Madhavarapu Chandan, S. G. Santhi, T. Srinivasa Rao
Because of their severe operating constraints, ensuring security is a fundamental problem for Wireless Sensor Networks (WSNs). Owing to their inadequate security mechanisms, WSNs are an easy target for malware (worms, viruses, malicious code, etc.). Given the epidemic nature of worm propagation, it is critical to develop a worm defense mechanism in the network. This work aims to establish a novel malware detection scheme for WSNs that consists of several phases: "(i) preprocessing, (ii) feature extraction, and (iii) detection". At first, the input data is subjected to the preprocessing phase. Then feature extraction takes place, in which principal component analysis (PCA), improved linear discriminant analysis (LDA), and autoencoder-based characteristics are retrieved. The retrieved characteristics are then passed to the detection phase. Detection is performed employing combined shallow learning and deep learning (DL). The shallow learning includes decision tree (DT), logistic regression (LR), and Naive Bayes (NB); the DL includes a deep neural network (DNN), a convolutional neural network (CNN), and a recurrent neural network (RNN). Here, the DT output is given to the DNN, the LR output to the CNN, and the NB output to the RNN, respectively. Eventually, the DNN, CNN, and RNN outputs are averaged to generate the final outcome. The combination can be thought of as an ensemble classifier. The weight of the RNN is optimally tuned through the Self-Improved Shark Smell Optimization with Opposition Learning (SISSOOL) model to improve detection precision and accuracy. Lastly, the outcomes of the suggested approach are computed in terms of different measures.
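The detection stage averages the outputs of several classifiers built on extracted features. A minimal sketch of the shallow side of that idea with scikit-learn is shown below; the PCA dimensionality and classifier settings are illustrative assumptions, and the shallow-to-deep DNN/CNN/RNN pairing and SISSOOL weight tuning from the paper are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

def ensemble_detect(X_train, y_train, X_test):
    """Illustrative shallow stage: PCA features feed DT, LR and NB, whose
    predicted probabilities are averaged into a single ensemble decision."""
    pca = PCA(n_components=10).fit(X_train)      # assumes >= 10 input features;
    Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)  # LDA/autoencoder branches omitted

    models = [DecisionTreeClassifier(max_depth=8),   # in the paper these outputs feed a DNN / CNN / RNN;
              LogisticRegression(max_iter=1000),     # here we simply average class probabilities
              GaussianNB()]
    probs = [m.fit(Z_train, y_train).predict_proba(Z_test) for m in models]
    return np.mean(probs, axis=0).argmax(axis=1)     # averaged "ensemble" label per sample
```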
Citations: 0
Speech Enhancement: A Review of Different Deep Learning Methods
IF 1.6 Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-09-05 DOI: 10.1142/s021946782550024x
Sivaramakrishna Yechuri, Sunny Dayal Vanabathina
Speech enhancement methods differ depending on the degree of degradation and noise in the speech signal, so research in the field remains difficult, especially when dealing with residual and background noise, which is highly transient. Numerous deep learning networks have been developed that provide promising results for improving the perceptual quality and intelligibility of noisy speech. The power of deep learning techniques has opened up innovation and research in speech enhancement, with implications across a wide range of real-time applications. By reviewing the important datasets, feature extraction methods, deep learning models, training algorithms, and evaluation metrics for speech enhancement, this paper provides a comprehensive overview. We begin by tracing the evolution of speech enhancement research, from early approaches to recent advances in deep learning architectures. By analyzing and comparing the approaches to solving speech enhancement challenges, we categorize them according to their strengths and weaknesses. Moreover, we discuss the challenges and future directions of deep learning in speech enhancement, including the demand for parameter-efficient models. The purpose of this paper is to examine the development of the field, compare and contrast different approaches, and highlight future directions as well as challenges for further research.
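As a point of reference for the "early approaches" that the review contrasts with deep learning models, the following is a minimal magnitude spectral-subtraction baseline; treating the first few frames as noise-only is an assumption made purely for illustration and is not a method taken from this paper.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_frames=10):
    """Classic spectral-subtraction baseline: estimate an average noise magnitude
    from the first few frames, subtract it, keep the noisy phase, resynthesize."""
    f, t, Z = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, 0.0)            # half-wave rectification
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced
```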
Citations: 0
Time Image De-Noising Method Based on Sparse Regularization
Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-09-01 DOI: 10.1142/s0219467825500093
Xin Wang, Xiaogang Dong
The blurring of texture edges often occurs during image data transmission and acquisition. To ensure the detailed clarity of time images, we propose a time image de-noising method based on sparse regularization. First, the image pixel sparsity index is set, and an image de-noising model is established based on sparse regularization to obtain the neighborhood weights of similar image blocks. Second, a time image de-noising algorithm is designed to determine whether the coding coefficient reaches the standard value, yielding a new image de-noising method. Finally, images of electronic clocks and mechanical clocks are used as two kinds of time images to compare the different image de-noising methods. The results show that, for different noise standard deviations and both time images, the sparse regularization method has the highest peak signal-to-noise ratio among the six compared methods. Its image structure similarity also remains consistently high, which shows that the proposed method is better than the other five image de-noising methods.
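The comparison above is reported in terms of peak signal-to-noise ratio and image structure similarity. A minimal sketch of how those two metrics are commonly computed with scikit-image follows; the median filter is only a stand-in for the paper's sparse-regularization model, and the noise level is an illustrative choice.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.filters import median
from skimage.util import random_noise, img_as_float

def evaluate(clean, noise_sigma=0.05):
    """Add Gaussian noise to a 2-D grayscale image, denoise with a placeholder
    filter, and report (PSNR, SSIM) against the clean reference."""
    clean = img_as_float(clean)
    noisy = random_noise(clean, mode="gaussian", var=noise_sigma ** 2)
    denoised = median(noisy)                  # stand-in for the sparse-regularization model
    return (peak_signal_noise_ratio(clean, denoised, data_range=1.0),
            structural_similarity(clean, denoised, data_range=1.0))
```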
Citations: 0
A Hybrid Model for Classification of Skin Cancer Images After Segmentation
IF 1.6 Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-08-31 DOI: 10.1142/s0219467825500226
Rasmiranjan Mohakud, Rajashree Dash
For dermatoscopic skin lesion images, deep learning-based algorithms, particularly convolutional neural networks (CNNs), have demonstrated good classification and segmentation capabilities. The impact of utilizing lesion segmentation data on classification performance, however, is still subject to discussion. Driven in this direction, in this work we propose a hybrid deep learning-based model to classify skin cancer using segmented images. In the first stage, a fully convolutional encoder–decoder network (FCEDN) is employed to segment the skin cancer image, and in the second stage a CNN is applied to the segmented images for classification. As the model's success depends on its hyper-parameters and fine-tuning these hyper-parameters by hand is time-consuming, in this study the hyper-parameters of the hybrid model are optimized by utilizing an exponential neighborhood gray wolf optimization (ENGWO) technique. Extensive experiments are carried out using the International Skin Imaging Collaboration (ISIC) 2016 and ISIC 2017 datasets to show the efficacy of the model. The suggested model has been evaluated on both balanced and unbalanced datasets. With the balanced dataset, the proposed hybrid model achieves training accuracy up to 99.98%, validation accuracy up to 92.13%, and testing accuracy up to 89.75%. It is evident from the findings that the proposed hybrid model outperforms previously known models on balanced data.
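The two-stage design hands stage-1 segmentation masks to a stage-2 classifier. Below is a small sketch of that hand-off in TensorFlow/Keras, assuming the FCEDN masks are already available as 0/1 arrays; the network layout, epoch count, and the ENGWO hyper-parameter search are illustrative placeholders rather than the authors' configuration.

```python
import tensorflow as tf

def build_classifier(input_shape=(128, 128, 3), n_classes=2):
    """Small placeholder CNN for the classification stage."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def classify_segmented(images, masks, labels):
    """Stage 2 of a segment-then-classify pipeline: zero out non-lesion pixels
    using stage-1 masks (N, H, W, 1), then train the classifier on the result."""
    segmented = images * masks
    model = build_classifier(images.shape[1:], int(labels.max()) + 1)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(segmented, labels, epochs=5, validation_split=0.2)
    return model
```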
Citations: 0
An Efficient Brain Tumor Prediction Using Pteropus Unicinctus Optimization on Deep Neural Network
IF 1.6 Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-08-31 DOI: 10.1142/s0219467825500238
Sumit Chhabra, Khushboo Bansal
Brain tumors are among the most serious and dreaded human diseases and can be fatal. Over time, a brain tumor also makes the patient's life increasingly difficult. Thus, it is essential to find tumors early in order to safeguard and extend the patient's life, and new improvements in brain tumor detection techniques are highly essential in the medical field. To address this, this research introduces automatic brain tumor prediction using Pteropus unicinctus optimization on a deep neural network (PUO-deep NN). Initially, the data are gathered from the BraTS MICCAI brain tumor dataset, and preprocessing and ROI extraction are performed to remove noise from the data. The extracted ROI is then forwarded to fuzzy c-means (FCM) clustering to segment the brain image. The parameters of the FCM are tuned by the PUO algorithm so that the image is segmented into the tumor region and the non-tumor region. Feature extraction then takes place on ResNet. Finally, the deep NN classifier successfully predicted the brain tumor by utilizing the PUO method, which improved the classifier performance and produced highly accurate results. For dataset 1, the PUO-deep NN achieved 87.69% accuracy, 93.81% sensitivity, and 99.01% specificity. For dataset 2, the suggested PUO-deep NN attained 98.49%, 98.55%, and 95.60%, which is significantly more effective than the current approaches.
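Fuzzy c-means clustering is the segmentation core of this pipeline. The following is a compact, self-contained NumPy sketch of plain FCM on pixel intensities; the cluster count, fuzzifier, and random initialization are illustrative choices, and the PUO tuning of these parameters is omitted.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on a 1-D array of pixel intensities.
    Returns cluster centers and the membership matrix (n_pixels x n_clusters)."""
    rng = np.random.default_rng(seed)
    x = pixels.reshape(-1, 1).astype(float)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                     # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]    # membership-weighted cluster centers
        dist = np.abs(x - centers.T) + 1e-12              # distance of each pixel to each center
        new_u = 1.0 / (dist ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=1, keepdims=True)         # standard FCM membership update
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers.ravel(), u
```

A tumor/non-tumor segmentation would then assign each pixel to the cluster with the largest membership.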
Citations: 0
Abnormal Behavior Recognition for Human Motion Based on Improved Deep Reinforcement Learning
IF 1.6 Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-08-30 DOI: 10.1142/s0219467825500299
Xueying Duan
Abnormal behavior recognition (ABR) is an important part of social security work. To ensure social harmony and stability, it is of great significance to study identification methods for abnormal human motion behavior. Aiming at the low accuracy of existing human motion ABR methods, an ABR method for human motion based on improved deep reinforcement learning (DRL) is proposed. First, the background image is processed in combination with a Gaussian model; second, the background features and human motion trajectory features are extracted, respectively; finally, the improved DRL model is constructed and the feature information is input into it to further extract the abnormal behavior features, and the ABR of human motion is realized through the interaction between the agent and the environment. The different methods were examined on the UCF101 and HiEve datasets. The results show that the accuracy of human motion key point acquisition and posture estimation is high, the proposed method's sensitivity is good, and the recognition accuracy of abnormal human motion behavior is as high as 95.5%. It can realize ABR for human motion and lay a foundation for the further development of follow-up social security management.
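The Gaussian background model mentioned above can be illustrated with OpenCV's Gaussian-mixture background subtractor (MOG2); this is a generic stand-in for the paper's background processing, and the history and variance-threshold values are assumptions.

```python
import cv2

def extract_foreground(video_path, history=200, var_threshold=16):
    """Gaussian-mixture background modelling (MOG2): yields a foreground mask per
    frame, which downstream trajectory / key-point extraction could consume."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=history,
                                                    varThreshold=var_threshold,
                                                    detectShadows=True)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)                        # 0 = background, 255 = moving pixel
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,           # suppress isolated noise pixels
                              cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
        masks.append(fg)
    cap.release()
    return masks
```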
Citations: 0
Deep Learning-Based Magnetic Resonance Image Segmentation and Classification for Alzheimer’s Disease Diagnosis
IF 1.6 Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-08-29 DOI: 10.1142/s0219467825500263
Manochandar Thenralmanoharan, P. Kumaraguru Diderot
Accurate and rapid detection of Alzheimer’s disease (AD) using magnetic resonance imaging (MRI) has gained considerable attention among researchers because of the growing body of research driven by deep learning (DL) methods, which have accomplished outstanding outcomes in a variety of domains involving medical image analysis. In particular, the convolutional neural network (CNN) is widely applied to the analysis of image datasets owing to its capability of handling massive unstructured datasets and automatically extracting significant features. Early detection is central to successful intervention, and neuroimaging characterizes the potential regions for earlier diagnosis of AD. The study presents and develops a novel Deep Learning-based Magnetic Resonance Image Segmentation and Classification for AD Diagnosis (DLMRISC-ADD) model. The presented DLMRISC-ADD model mainly focuses on the segmentation of MRI images to detect AD. To accomplish this, the presented DLMRISC-ADD model follows a two-stage process, namely, skull stripping and image segmentation. At the preliminary stage, the presented DLMRISC-ADD model employs a U-Net-based skull stripping approach to remove skull regions from the input MRIs. Next, in the second stage, the DLMRISC-ADD model applies the QuickNAT model for MRI image segmentation, which identifies distinct parts such as white matter, gray matter, hippocampus, amygdala, and ventricles. Moreover, a densely connected network (DenseNet201) feature extractor with a sparse autoencoder (SAE) classifier is used for the AD detection process. A brief set of simulations is implemented on the ADNI dataset to demonstrate the improved performance of the DLMRISC-ADD method, and the outcomes are examined extensively. The experimental results exhibit the effectual segmentation results of the DLMRISC-ADD technique.
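For the DenseNet201 feature-extraction step, a minimal Keras sketch is given below; it assumes 2-D slices resized to 224 x 224 with three channels and uses ImageNet weights purely as a stand-in, with the SAE classifier stage omitted.

```python
import numpy as np
import tensorflow as tf

def densenet_features(slices):
    """Extract DenseNet201 bottleneck features from a batch of 2-D MRI slices
    shaped (N, 224, 224, 3). ImageNet weights serve only as an illustration."""
    base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                             pooling="avg", input_shape=(224, 224, 3))
    x = tf.keras.applications.densenet.preprocess_input(slices.astype("float32"))
    return base.predict(x, verbose=0)    # (N, 1920) vectors for a downstream SAE / classifier
```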
Citations: 0
An Enhanced Compression Method for Medical Images Using SPIHT Encoder for Fog Computing
IF 1.6 Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-08-28 DOI: 10.1142/s0219467825500251
Shabana Rai, Arif Ullah, Wong Lai Kuan, Rifat Mustafa
Fog computing is a natural fit for filtering and compressing data before sending it to a cloud server. It enables an alternative method to reduce the complexity of medical image processing and steadily improve its dependability. Medical images are produced by imaging modalities using X-rays, computed tomography (CT) scans, magnetic resonance imaging (MRI) scans, and ultrasound (US). These medical images are large and require a huge amount of storage. This problem is addressed by means of compression, an area in which a great deal of work has been done. However, before adding more techniques to fog, a high compression ratio (CR) must be obtained in a shorter time, thereby consuming less network traffic. A Le Gall 5/3 integer wavelet transform (IWT) and a set partitioning in hierarchical trees (SPIHT) encoder were used in this study's implementation of an image compression technique. MRI is used in the experiments. The suggested technique uses a modified CR and less compression time (CT) to compress the medical image. The proposed approach results in an average CR of 84.8895% and a peak signal-to-noise ratio (PSNR) of 40.92 dB. Using Huffman coding, the proposed approach reduces the compression time by 36.7434 s compared to the IWT. Regarding CR, the suggested technique outperforms IWT with Huffman coding by 12%; the latter approach has a 72.36% CR. A shortcoming of the suggested work is that the high CR caused a decline in the quality of the medical images. PSNR values can be raised, and further effort can be made to compress colored medical images and 3-dimensional medical images.
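The transform at the heart of this codec is the reversible Le Gall 5/3 integer wavelet. Below is a minimal 1-D lifting sketch of that transform (as standardized in JPEG 2000) together with a compression-ratio helper; the even-length requirement and simplified boundary handling are assumptions, and the SPIHT and Huffman coding stages are not included. The CR helper treats the ratio as the percentage of size saved, which appears to match how the figures above are reported.

```python
import numpy as np

def legall53_forward(x):
    """One level of the reversible Le Gall 5/3 integer lifting transform on a 1-D
    signal of even length, with simplified symmetric extension at the edges."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: high-pass (detail) coefficients
    right = np.append(even[1:], even[-1])                 # extension at the right edge
    d = odd - ((even + right) >> 1)
    # Update step: low-pass (approximation) coefficients
    left = np.append(d[0], d[:-1])                        # extension at the left edge
    s = even + ((left + d + 2) >> 2)
    return s, d

def compression_ratio(original_bytes, compressed_bytes):
    """CR expressed as the percentage of size saved, e.g. ~85 for an 85% reduction."""
    return 100.0 * (1.0 - compressed_bytes / original_bytes)
```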
Citations: 0
Black Gram Disease Classification via Deep Ensemble Model with Optimal Training
IF 1.6 Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-08-22 DOI: 10.1142/s0219467825500330
Neha Hajare, A. Rajawat
The black gram crop belongs to the Fabaceae family, and its scientific name is Vigna mungo. It has high nutritional content, improves the fertility of the soil, and provides atmospheric nitrogen fixation in the soil. The quality of the black gram crop is degraded by diseases such as yellow mosaic, anthracnose, powdery mildew, and leaf crinkle, which cause economic loss to farmers and reduced production. The agriculture sector needs to classify plant nutrient deficiencies in order to increase crop quality and yield. Computer vision and deep learning technologies play a crucial role in the agricultural and biological sectors in handling a variety of difficult challenges. The typical diagnostic procedure involves a pathologist visiting the site and inspecting each plant; however, manual crop disease assessment is limited due to lower accuracy and limited availability of personnel. To address these problems, it is necessary to develop automated methods that can quickly identify and classify a wide range of plant diseases. In this paper, black gram disease classification is done through a deep ensemble model with optimal training, and the procedure of this technique is as follows. Initially, the input dataset is processed to increase its size via data augmentation, using operations such as shifting, rotation, and shearing. The model then removes noise from the images using median filtering. Subsequent to the preprocessing, segmentation takes place via the proposed deep joint segmentation model to determine the ROI and non-ROI regions. The next step is the extraction of a feature set that includes improved multi-texton-based features, shape-based features, color-based features, and local Gabor X-OR pattern features. The model combines classifiers such as Deep Belief Networks, Recurrent Neural Networks, and Convolutional Neural Networks. For tuning the optimal weights of the model, a new algorithm termed the swarm intelligence-based Self-Improved Dwarf Mongoose Optimization algorithm (SIDMO) is introduced. Over the past two decades, nature-based metaheuristic algorithms have gained popularity because of their ability to solve various global optimization problems with near-optimal solutions. This training model ensures the enhancement of classification accuracy. The accuracy of SIDMO, around 94.82%, is substantially higher than that of the existing models: FPA at 88.86%, SSOA at 88.99%, GOA at 85.84%, SMA at 85.11%, SRSR at 85.32%, and DMOA at 88.99%, respectively.
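The augmentation (shifting, rotation, shearing) and median-filtering steps at the front of this pipeline can be sketched as follows in TensorFlow/Keras and OpenCV; the augmentation ranges and filter kernel size are illustrative assumptions, not the paper's settings, and RGB uint8 input images are assumed.

```python
import cv2
import numpy as np
import tensorflow as tf

def augment_and_denoise(images, batch_size=32):
    """Augmentation (shift / rotation / shear) followed by median filtering,
    mirroring the preprocessing steps described above."""
    augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
        rotation_range=20,          # rotation
        width_shift_range=0.1,      # horizontal shift
        height_shift_range=0.1,     # vertical shift
        shear_range=0.15)           # shearing
    batch = next(augmenter.flow(images, batch_size=batch_size, shuffle=True))
    # Median filtering to suppress noise before segmentation / feature extraction
    return np.stack([cv2.medianBlur(img.astype(np.uint8), ksize=3) for img in batch])
```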
Citations: 0