
Latest Publications: Neural Computing & Applications

A novel bio-inspired hybrid multi-filter wrapper gene selection method with ensemble classifier for microarray data.
IF 6 · CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-01-01 · DOI: 10.1007/s00521-021-06459-9
Babak Nouri-Moghaddam, Mehdi Ghazanfari, Mohammad Fathian

Microarray technology is known as one of the most important tools for collecting DNA expression data. This technology allows researchers to investigate and examine types of diseases and their origins. However, microarray data are often associated with a small sample size, a large number of genes, class imbalance, etc., making classification models inefficient. Thus, a new hybrid solution based on a multi-filter and an adaptive chaotic multi-objective forest optimization algorithm (AC-MOFOA) is presented to solve the gene selection problem and construct an ensemble classifier. In the proposed solution, a multi-filter model (i.e., an ensemble filter) is used as a preprocessing step to reduce the dataset's dimensionality, combining five filter methods to remove redundant and irrelevant genes. The results of the five filter methods are combined using a voting-based function. The results of the proposed multi-filter indicate that it is effective in reducing the gene subset size and selecting relevant genes. Then, an AC-MOFOA based on the concepts of non-dominated sorting, crowding distance, chaos theory, and adaptive operators is presented. As a wrapper method, AC-MOFOA simultaneously aims to reduce dataset dimensionality, optimize the kernel extreme learning machine (KELM), and increase classification accuracy. Next, an ensemble classifier model is built from the AC-MOFOA results to classify microarray data. The performance of the proposed algorithm was evaluated on nine public microarray datasets, and its results were compared, in terms of the number of selected genes, classification efficiency, execution time, time complexity, hypervolume indicator, and spacing metric, with five hybrid multi-objective methods and three hybrid single-objective methods. According to the results, the proposed hybrid method increased the accuracy of the KELM on most datasets by reducing the dataset's dimensionality, and achieved similar or superior performance compared to other multi-objective methods. Furthermore, the proposed ensemble classifier model provided better classification accuracy and generalizability on seven of the nine microarray datasets compared to conventional ensemble methods. Moreover, comparison of the ensemble classifier model with three state-of-the-art ensemble generation methods indicates competitive performance, with the proposed ensemble model achieving better results on five of the nine datasets.
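The voting-based combination of filter outputs described above can be sketched as follows. This is a minimal illustration, not the paper's exact voting function; `top_k` and `min_votes` are assumed parameters:

```python
from collections import Counter

def ensemble_filter(rankings, top_k, min_votes):
    """Combine several filter rankings by voting.

    rankings  : list of lists, each ordering gene indices from most
                to least relevant according to one filter method
    top_k     : how many genes each filter nominates
    min_votes : minimum number of filters that must nominate a gene
    """
    votes = Counter()
    for ranking in rankings:
        for gene in ranking[:top_k]:
            votes[gene] += 1
    # Keep genes nominated by at least `min_votes` filters,
    # ordered by vote count (ties broken by gene index).
    return sorted((g for g, v in votes.items() if v >= min_votes),
                  key=lambda g: (-votes[g], g))

# Toy example: three filters ranking five genes.
r1 = [0, 1, 2, 3, 4]
r2 = [1, 0, 4, 2, 3]
r3 = [1, 2, 0, 4, 3]
print(ensemble_filter([r1, r2, r3], top_k=3, min_votes=2))  # [0, 1, 2]
```

Raising `min_votes` makes the ensemble filter stricter and the surviving gene subset smaller.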

Supplementary information: The online version contains supplementary material available at 10.1007/s00521-021-06459-9.

Citations: 9
Building fuzzy time series model from unsupervised learning technique and genetic algorithm.
IF 4.5 · CAS Zone 3 (Computer Science) · Q2 Computer Science, Artificial Intelligence · Pub Date: 2023-01-01 · Epub Date: 2021-10-18 · DOI: 10.1007/s00521-021-06485-7
Dinh Phamtoan, Tai Vovan

This paper proposes a new model to interpolate a time series and forecast it effectively. The important contribution of this study is the combination of an optimization technique for the fuzzy clustering problem, based on a genetic algorithm, with a forecasting model for fuzzy time series. Firstly, the proposed model finds a suitable number of clusters for a series and optimizes the clustering by the genetic algorithm, using an improved Davies-Bouldin index as the objective function. Secondly, the study gives a method to establish the fuzzy relationship of each element to the established clusters. Finally, the developed model establishes the rule used to forecast the future. The steps of the proposed model are presented clearly and illustrated by a numerical example, and the model has been implemented as a MATLAB procedure. Evaluated on a large collection of series (3007 series) that differ in characteristics and application areas, the new model shows significant performance compared with existing models on several evaluation parameters. In addition, we present an application of the proposed model to forecasting COVID-19 victims in Vietnam; it can be applied similarly to other countries. The numerical examples and the application show the potential of this research in the forecasting area.
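The genetic algorithm's objective builds on the Davies-Bouldin index. A minimal sketch of the classical (unimproved) index for one-dimensional data, assuming cluster centroids are given; the paper optimizes an improved variant:

```python
def davies_bouldin(points, labels, centroids):
    """Classical Davies-Bouldin index for 1-D data; lower is better."""
    k = len(centroids)
    # Average intra-cluster distance (scatter) per cluster.
    scatter = []
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        scatter.append(sum(abs(p - centroids[c]) for p in members) / len(members))
    # For each cluster, take the worst ratio of combined scatter
    # to the distance between the two centroids, then average.
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / abs(centroids[i] - centroids[j])
                     for j in range(k) if j != i)
    return total / k

# Two tight, well-separated clusters give a small index value.
pts = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
labs = [0, 0, 0, 1, 1, 1]
print(davies_bouldin(pts, labs, [1.0, 5.0]))
```

Using this as a GA fitness (to be minimized) rewards chromosomes that encode compact, well-separated clusterings.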

Citations: 0
Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation.
IF 4.5 · CAS Zone 3 (Computer Science) · Q2 Computer Science, Artificial Intelligence · Pub Date: 2023-01-01 · Epub Date: 2022-11-17 · DOI: 10.1007/s00521-022-08016-4
Hao Li, Yang Nan, Javier Del Ser, Guang Yang

Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an efficient solution to this problem, as it provides a measure of confidence in the segmentation results. The current uncertainty estimation methods based on quantile regression, Bayesian neural networks, ensembles, and Monte Carlo dropout are limited by their high computational cost and inconsistency. To overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work, but primarily for natural image classification, and it showed inferior segmentation results. In this paper, we propose a region-based EDL segmentation framework that can generate reliable uncertainty maps and accurate segmentation results and is robust to noise and image corruption. We used the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence was parameterized as a Dirichlet distribution, and predicted probabilities were treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrated the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, our proposed new framework maintained the advantages of low computational cost and easy implementation and showed the potential for clinical application.
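The Subjective Logic mapping from evidence to a Dirichlet opinion described above can be sketched as follows. This is the standard EDL formulation, not the paper's region-based extension:

```python
def dirichlet_opinion(evidence):
    """Map non-negative per-class evidence to Dirichlet parameters,
    expected class probabilities, and a scalar uncertainty (vacuity),
    following the standard Subjective Logic construction."""
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]   # Dirichlet parameters: alpha = e + 1
    s = sum(alpha)                        # Dirichlet strength
    prob = [a / s for a in alpha]         # expected probability per class
    uncertainty = k / s                   # high when total evidence is low
    return alpha, prob, uncertainty

# Strong evidence for class 0 -> confident prediction, low uncertainty.
_, p, u = dirichlet_opinion([18.0, 0.0])
print(round(p[0], 2), round(u, 2))  # 0.95 0.1

# No evidence at all -> uniform prediction, maximal uncertainty.
_, p, u = dirichlet_opinion([0.0, 0.0])
print(p, u)  # [0.5, 0.5] 1.0
```

In a segmentation network, this mapping is applied per voxel, so the `uncertainty` values directly form the uncertainty map.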

Citations: 0
Deep Q networks-based optimization of emergency resource scheduling for urban public health events.
IF 6 · CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-01-01 · DOI: 10.1007/s00521-022-07696-2
Xianli Zhao, Guixin Wang

Amid the severe situation of the global COVID-19 pandemic, emergency resource scheduling still suffers from efficiency problems, and rescue standards remain deficient. For the happiness and well-being of people's lives, and adhering to the principle of a community with a shared future for mankind, the emergency resource scheduling system for urban public health emergencies needs to be improved and perfected. This paper studies an optimization model for urban emergency resource scheduling: a deep reinforcement learning algorithm is used to build the framework of the emergency resource distribution system, and a Deep Q Network path-planning algorithm is used to optimize the system, so as to upgrade the efficiency of emergency resource scheduling in the city. Finally, simulation experiments show that the studied deep learning algorithm benefits the emergency resource scheduling optimization system. However, as deep learning develops, some of its disadvantages are becoming increasingly obvious: an obvious flaw is that building a deep learning-based model generally requires a lot of CPU computing resources, making the cost too high.
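The core of a Deep Q Network update is the Bellman target used to regress the Q-function. A minimal sketch of the target computation for a batch of transitions, illustrating the general DQN mechanism rather than the paper's scheduling-specific model:

```python
def dqn_targets(rewards, next_q_values, dones, gamma=0.99):
    """Compute y_i = r_i + gamma * max_a Q_target(s'_i, a) for a batch.

    rewards       : list of scalar rewards r_i
    next_q_values : list of per-action Q_target values at the next state
    dones         : list of flags; the bootstrap term is dropped at
                    terminal states, so y_i = r_i there
    """
    targets = []
    for r, nq, done in zip(rewards, next_q_values, dones):
        bootstrap = 0.0 if done else gamma * max(nq)
        targets.append(r + bootstrap)
    return targets

# One mid-episode transition and one terminal transition.
print(dqn_targets([1.0, 2.0], [[0.5, 1.5], [3.0, 4.0]], [False, True]))
# [2.485, 2.0]
```

The network is then trained to minimize the squared error between its predicted Q-values for the taken actions and these targets, with a periodically updated target network supplying `next_q_values`.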

Citations: 2
Fuzzy-based hunger games search algorithm for global optimization and feature selection using medical data.
IF 6 · CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-01-01 · DOI: 10.1007/s00521-022-07916-9
Essam H Houssein, Mosa E Hosney, Waleed M Mohamed, Abdelmgeid A Ali, Eman M G Younis

Feature selection (FS) is one of the basic data preprocessing steps in data mining and machine learning. It is used to reduce the feature set size and increase model generalization. In addition to minimizing feature dimensionality, it also enhances classification accuracy and reduces model complexity, which is essential in several applications. Traditional feature selection methods often fail to reach the global optimum due to the large search space. Many hybrid techniques have been proposed that merge several search strategies previously used individually to solve the FS problem. This study proposes a modified hunger games search algorithm (mHGS) for solving optimization and FS problems. The main advantage of the proposed mHGS is that it resolves the following drawbacks of the original HGS: (1) getting trapped in local search, (2) premature convergence, and (3) imbalance between the exploitation and exploration phases. The mHGS has been evaluated on the IEEE Congress on Evolutionary Computation 2020 (CEC'20) benchmark for the optimization tests and on ten medical and chemical datasets. The data have dimensions of up to 20,000 features or more. The results of the proposed algorithm have been compared with a variety of well-known optimization methods, including the improved multi-operator differential evolution algorithm (IMODE), gravitational search algorithm, grey wolf optimization, Harris hawks optimization, whale optimization algorithm, slime mould algorithm, and the original hunger games search (HGS). The experimental results suggest that the proposed mHGS can generate effective search results without increasing the computational cost, while improving the convergence speed. It also improved SVM classification performance.
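A wrapper method of this kind typically scores a candidate feature subset by trading classification error against subset size. A minimal sketch of such a fitness function, using a common weighted formulation that is an assumption here, not the paper's exact objective:

```python
def wrapper_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Scalar fitness (to be minimized) for a candidate feature subset.

    error_rate : classification error of a model (e.g., SVM) trained
                 on the selected features only
    n_selected : number of features in the candidate subset
    n_total    : total number of available features
    alpha      : weight favoring accuracy over subset compactness
    """
    return alpha * error_rate + (1.0 - alpha) * (n_selected / n_total)

# A 40-feature subset with 5% error on a 20000-feature dataset:
# the size penalty is tiny compared with the error term.
print(wrapper_fitness(0.05, 40, 20000))
```

In the search loop, each candidate binary mask is decoded into a feature subset, a classifier is trained and validated on it, and this fitness guides the metaheuristic's updates.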

Citations: 18
EOS-3D-DCNN: Ebola optimization search-based 3D-dense convolutional neural network for corn leaf disease prediction.
IF 6 · CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-01-01 · DOI: 10.1007/s00521-023-08289-3
C Ashwini, V Sellam

Corn disease prediction is an essential part of agricultural productivity. This paper presents a novel 3D-dense convolutional neural network (3D-DCNN), optimized using the Ebola optimization search (EOS) algorithm, to predict corn disease with higher prediction accuracy than conventional AI methods. Since dataset samples are generally insufficient, the paper uses some preliminary pre-processing approaches to enlarge the sample set and improve the corn disease samples. The EOS technique is used to reduce the classification errors of the 3D-DCNN approach. As an outcome, corn disease is predicted and classified accurately and more effectively. The accuracy of the proposed 3D-DCNN-EOS model is improved, and some necessary baseline tests are performed to assess the efficacy of the proposed model. The simulation is performed in the MATLAB 2020a environment, and the outcomes demonstrate the significance of the proposed model over other approaches. The feature representation of the input data is learned effectively to boost the model's performance. When compared with other existing techniques, the proposed method outperforms them in terms of precision, area under the receiver operating characteristic curve (AUC), F1 score, Kappa statistic error (KSE), accuracy, root mean square error (RMSE), and recall.

Citations: 5
Learning from pseudo-lesion: a self-supervised framework for COVID-19 diagnosis.
IF 6 · CAS Zone 3 (Computer Science) · Q1 Computer Science · Pub Date: 2023-01-01 · DOI: 10.1007/s00521-023-08259-9
Zhongliang Li, Xuechen Li, Zhihao Jin, Linlin Shen

Coronavirus disease 2019 (COVID-19) has rapidly spread all over the world since its first report in December 2019, and thoracic computed tomography (CT) has become one of the main tools for its diagnosis. In recent years, deep learning-based approaches have shown impressive performance in myriad image recognition tasks. However, they usually require a large amount of annotated data for training. Inspired by ground-glass opacity, a common finding in COVID-19 patients' CT scans, we propose in this paper a novel self-supervised pretraining method based on pseudo-lesion generation and restoration for COVID-19 diagnosis. We used Perlin noise, a gradient-noise-based mathematical model, to generate lesion-like patterns, which were then randomly pasted onto the lung regions of normal CT images to generate pseudo-COVID-19 images. The pairs of normal and pseudo-COVID-19 images were then used to train an encoder-decoder architecture-based U-Net for image restoration, which does not require any labeled data. The pretrained encoder was then fine-tuned using labeled data for the COVID-19 diagnosis task. Two public COVID-19 diagnosis datasets made up of CT images were employed for evaluation. Comprehensive experimental results demonstrate that the proposed self-supervised learning approach extracts better feature representations for COVID-19 diagnosis, and its accuracy outperformed the supervised model pretrained on large-scale images by 6.57% and 3.03% on the SARS-CoV-2 dataset and the Jinan COVID-19 dataset, respectively.
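The noise-pasting idea can be sketched with a simplified value-noise stand-in for Perlin's gradient noise. The helper names, parameters, and the plain-list image representation are illustrative, not the authors' implementation:

```python
import random

def smooth_noise(width, height, scale=4, seed=0):
    """Simplified value noise: random values on a coarse grid,
    smoothly interpolated (a stand-in for Perlin's gradient noise)."""
    rng = random.Random(seed)
    gw, gh = width // scale + 2, height // scale + 2
    grid = [[rng.random() for _ in range(gw)] for _ in range(gh)]

    def fade(t):  # smoothstep easing, as in Perlin's construction
        return t * t * (3 - 2 * t)

    out = []
    for y in range(height):
        gy, fy = divmod(y / scale, 1)
        row = []
        for x in range(width):
            gx, fx = divmod(x / scale, 1)
            i, j = int(gy), int(gx)
            u, v = fade(fx), fade(fy)
            top = grid[i][j] * (1 - u) + grid[i][j + 1] * u
            bot = grid[i + 1][j] * (1 - u) + grid[i + 1][j + 1] * u
            row.append(top * (1 - v) + bot * v)
        out.append(row)
    return out

def paste_pseudo_lesion(image, noise, threshold=0.6, intensity=0.5):
    """Blend thresholded noise into an image (2-D list of floats in
    [0, 1]) to imitate a ground-glass-opacity-like pattern."""
    return [[min(1.0, px + intensity * n) if n > threshold else px
             for px, n in zip(img_row, noise_row)]
            for img_row, noise_row in zip(image, noise)]

lung = [[0.2] * 16 for _ in range(16)]
lesioned = paste_pseudo_lesion(lung, smooth_noise(16, 16, seed=42))
print(len(lesioned), len(lesioned[0]))  # 16 16
```

The restoration U-Net is then trained to map `lesioned` back to `lung`, so no manual annotation is needed for pretraining.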

Citations: 0
Res-CovNet: an internet of medical health things driven COVID-19 framework using transfer learning.
IF 6, CAS Zone 3 (Computer Science), Q1 Computer Science. Pub Date: 2023-01-01, Epub Date: 2021-06-09. DOI: 10.1007/s00521-021-06171-8
Mangena Venu Madhavan, Aditya Khamparia, Deepak Gupta, Sagar Pande, Prayag Tiwari, M Shamim Hossain

Major countries across the globe are facing difficult situations due to the pandemic disease COVID-19. There is a high chance of false positives and false negatives when identifying COVID-19 symptoms through existing medical tests such as PCR (polymerase chain reaction) and RT-PCR (reverse transcription polymerase chain reaction), which might lead to community spread of the disease. An alternative to these tests is CT (computed tomography) imaging or X-rays of the lungs, which can identify patients with COVID-19 symptoms more accurately. Furthermore, by using feasible and usable technology to automate the identification of COVID-19, these facilities can be improved. This notion became the basic framework, Res-CovNet, of the implemented methodology: a hybrid methodology that brings different platforms into a single platform. This basic framework is incorporated into an IoMT-based framework, a web-based service to identify and classify various forms of pneumonia or COVID-19 from chest X-ray images. The .NET framework with the C# language was used for the front end, MongoDB for storage, and Res-CovNet for processing. Combined with deep learning, this notion forms a comprehensive implementation of the framework, Res-CovNet, to distinguish COVID-19-affected patients from pneumonia-affected patients, as the two look similar to the naked eye in lung imaging. The implemented framework, Res-CovNet, was developed with transfer learning, in which ResNet-50 is used as a pretrained model and then extended with classification layers. The work was implemented using X-ray images collected from various trustworthy sources, covering normal, bacterial pneumonia, viral pneumonia, and COVID-19 cases, with an overall dataset size of about 5,856 images. The accuracy of the implemented model is about 98.4% in identifying COVID-19 against normal cases, and about 96.2% in identifying COVID-19 against all other cases.
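The transfer-learning recipe above (a frozen pretrained backbone extended with trainable classification layers) can be illustrated in miniature. Here a fixed random projection stands in for the ResNet-50 backbone, and the toy two-blob task stands in for the X-ray data, so this is a sketch of the training pattern only, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pretrained ResNet-50 backbone: a frozen random
# projection from 64-d inputs to 16-d features (illustrative only).
W_frozen = rng.normal(size=(64, 16))

def features(x):
    """Frozen forward pass -- these weights are never updated."""
    return np.maximum(x @ W_frozen, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_head(x, y, lr=0.1, steps=200):
    """Train only the appended classification layer (a logistic head),
    leaving the backbone untouched -- the transfer-learning pattern."""
    f = features(x)
    w = np.zeros(f.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(f @ w + b)
        grad = p - y                       # gradient of cross-entropy w.r.t. logits
        w -= lr * f.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy binary task standing in for COVID-19 vs. non-COVID X-rays.
x = np.vstack([rng.normal(-1, 1, (50, 64)), rng.normal(1, 1, (50, 64))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_head(x, y)
accuracy = ((sigmoid(features(x) @ w + b) > 0.5).astype(int) == y).mean()
```

In the actual framework, `features` would be the pretrained ResNet-50 body and the head would be the appended classification layers, optionally with the backbone fine-tuned at a lower learning rate.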

Citations: 27
Multiobjective problem modeling of the capacitated vehicle routing problem with urgency in a pandemic period.
IF 6, CAS Zone 3 (Computer Science), Q1 Computer Science. Pub Date: 2023-01-01. DOI: 10.1007/s00521-022-07921-y
Mehmet Altinoz, O Tolga Altinoz

This research addresses the capacitated vehicle routing problem with urgency, where each vertex corresponds to a medical facility with an urgency level and the traveling vehicle could be contaminated. Contamination is modeled as an infectiousness rate, defined for each vertex and each vehicle; at each visited vertex, the vehicle's rate increases. The two main issues in the problem are therefore time (the total distance, since it is desired to reach each vertex as fast as possible) and the infectiousness rate. In this research, the problem is solved with multiobjective optimization algorithms. As a multiobjective problem, two objectives are defined for this model, the time and the infectiousness, and the model is solved using the nondominated sorting genetic algorithm (NSGA-II), the grid-based evolutionary algorithm (GrEA), the hypervolume estimation algorithm (HypE), the strength Pareto evolutionary algorithm with shift-based density estimation (SPEA2-SDE), and a reference-points-based evolutionary algorithm.
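The two objectives described above can be evaluated for a single candidate route as follows. The additive infectiousness update (the vehicle's rate grows by each visited vertex's rate) is one plausible reading of the model, not the paper's exact formulation:

```python
def route_objectives(route, dist, vertex_infect, step_gain=1.0):
    """Evaluate one vehicle route (list of vertex indices; depot = 0).

    Returns (total_distance, final_infectiousness): the vehicle starts
    uncontaminated and its infectiousness grows at every visited vertex
    by that vertex's rate, scaled by `step_gain`.
    """
    total = 0.0
    infect = 0.0
    prev = 0
    for v in route:
        total += dist[prev][v]
        infect += step_gain * vertex_infect[v]
        prev = v
    total += dist[prev][0]            # return to the depot
    return total, infect

def dominates(a, b):
    """Pareto dominance for minimization of both objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Four vertices (0 = depot), symmetric distances, per-vertex rates.
dist = [[0, 2, 4, 3],
        [2, 0, 1, 5],
        [4, 1, 0, 2],
        [3, 5, 2, 0]]
vertex_infect = [0.0, 0.1, 0.5, 0.2]
objs = route_objectives([1, 2, 3], dist, vertex_infect)   # (8.0, 0.8)
```

The listed algorithms (NSGA-II, GrEA, HypE, SPEA2-SDE, and the reference-points method) would all use such a per-route evaluation together with a dominance check like `dominates` when ranking candidate solutions.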

Citations: 1
Multilevel thresholding satellite image segmentation using chaotic coronavirus optimization algorithm with hybrid fitness function.
IF 6, CAS Zone 3 (Computer Science), Q1 Computer Science. Pub Date: 2023-01-01. DOI: 10.1007/s00521-022-07718-z
Khalid M Hosny, Asmaa M Khalid, Hanaa M Hamza, Seyedali Mirjalili

Image segmentation is a critical step in digital image processing applications. One of the most preferred methods for image segmentation is multilevel thresholding, in which a set of threshold values is determined to divide an image into different classes. However, the computational complexity increases when the number of required thresholds is high. Therefore, this paper introduces a modified Coronavirus Optimization algorithm for image segmentation. In the proposed algorithm, the chaotic map concept is added to the initialization step of the naive algorithm to increase the diversity of solutions. A hybrid of the two commonly used methods, Otsu's method and Kapur's entropy, is applied to form a new fitness function that determines the optimum threshold values. The proposed algorithm is evaluated on two different datasets comprising six benchmark images and six satellite images. Various evaluation metrics are used to measure the quality of the segmented images, such as mean square error, peak signal-to-noise ratio, Structural Similarity Index, Feature Similarity Index, and Normalized Correlation Coefficient. Additionally, the best fitness values are calculated to demonstrate the proposed method's ability to find the optimum solution. The obtained results are compared to eleven powerful and recent metaheuristics and prove the superiority of the proposed algorithm on the image segmentation problem.
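The hybrid fitness described above, combining Otsu's between-class variance with Kapur's entropy, can be sketched on a gray-level histogram. The weighted sum and the logistic-map initialization below are illustrative assumptions, since the paper's exact mixing rule is not given here:

```python
import numpy as np

def class_ranges(thresholds, levels):
    """Split gray levels [0, levels) into classes at the thresholds."""
    bounds = [0] + sorted(thresholds) + [levels]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def otsu_variance(hist, thresholds):
    """Between-class variance of Otsu's method (to be maximized)."""
    p = hist / hist.sum()
    gray = np.arange(len(p))
    mu_total = (p * gray).sum()
    var = 0.0
    for lo, hi in class_ranges(thresholds, len(p)):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * gray[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def kapur_entropy(hist, thresholds):
    """Sum of per-class entropies of Kapur's method (to be maximized)."""
    p = hist / hist.sum()
    total = 0.0
    for lo, hi in class_ranges(thresholds, len(p)):
        w = p[lo:hi].sum()
        if w > 0:
            q = p[lo:hi][p[lo:hi] > 0] / w
            total += -(q * np.log(q)).sum()
    return total

def hybrid_fitness(hist, thresholds, alpha=0.5):
    """One plausible weighted combination of the two criteria."""
    return (alpha * otsu_variance(hist, thresholds)
            + (1 - alpha) * kapur_entropy(hist, thresholds))

def chaotic_thresholds(k, levels=256, x0=0.7, r=4.0):
    """Draw k initial thresholds from a logistic chaotic map,
    echoing the chaotic-initialization step in the abstract."""
    ts, x = [], x0
    for _ in range(k):
        x = r * x * (1 - x)
        ts.append(int(x * (levels - 1)))
    return sorted(ts)
```

On a bimodal histogram, a threshold placed between the two modes scores higher than one inside a mode under both criteria, which is what the optimizer exploits when searching threshold sets.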

Citations: 6