Pub Date: 2023-01-01 | DOI: 10.1007/s00521-021-06459-9
Babak Nouri-Moghaddam, Mehdi Ghazanfari, Mohammad Fathian
Microarray technology is known as one of the most important tools for collecting DNA expression data. This technology allows researchers to investigate and examine types of diseases and their origins. However, microarray data are often associated with a small sample size, a large number of genes, imbalanced data, etc., making classification models inefficient. Thus, a new hybrid solution based on a multi-filter and adaptive chaotic multi-objective forest optimization algorithm (AC-MOFOA) is presented to solve the gene selection problem and construct an ensemble classifier. In the proposed solution, a multi-filter model (i.e., ensemble filter) is proposed as a preprocessing step to reduce the dataset's dimensions, using a combination of five filter methods to remove redundant and irrelevant genes. The results of the five filter methods are then combined using a voting-based function. The results of the proposed multi-filter indicate that it has good capability in reducing the gene subset size and selecting relevant genes. Next, an AC-MOFOA based on the concepts of non-dominated sorting, crowding distance, chaos theory, and adaptive operators is presented. AC-MOFOA, as a wrapper method, aims to reduce dataset dimensions, optimize the kernel extreme learning machine (KELM), and increase classification accuracy simultaneously. An ensemble classifier model is then built from the AC-MOFOA results to classify microarray data. The performance of the proposed algorithm was evaluated on nine public microarray datasets, and its results were compared in terms of the number of selected genes, classification efficiency, execution time, time complexity, hypervolume indicator, and spacing metric with five hybrid multi-objective methods and three hybrid single-objective methods.
According to the results, the proposed hybrid method could increase the accuracy of the KELM in most datasets by reducing the dataset's dimensions, and it achieved similar or superior performance compared to other multi-objective methods. Furthermore, the proposed ensemble classifier model provided better classification accuracy and generalizability in seven of the nine microarray datasets compared to conventional ensemble methods. Moreover, comparison of the ensemble classifier model with three state-of-the-art ensemble generation methods indicates its competitive performance, with the proposed ensemble model achieving better results in five of the nine datasets.
Supplementary information: The online version contains supplementary material available at 10.1007/s00521-021-06459-9.
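The voting-based combination of filter results described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the five filter outputs and the vote threshold are hypothetical stand-ins.

```python
# Minimal sketch of a voting-based ensemble filter for gene selection.
# Assumes each filter returns its top-ranked gene indices; a gene is kept
# if at least `min_votes` of the filters select it.
from collections import Counter

def ensemble_filter(filter_selections, min_votes=3):
    """Combine per-filter gene selections by majority voting."""
    votes = Counter(g for sel in filter_selections for g in set(sel))
    return sorted(g for g, v in votes.items() if v >= min_votes)

# Example: five filters each vote for a subset of gene indices.
selections = [
    [0, 2, 5], [2, 5, 7], [1, 2, 5], [2, 4, 5], [2, 5, 9],
]
print(ensemble_filter(selections))  # genes chosen by at least 3 of 5 filters
```

Genes 2 and 5 survive because a majority of the filters agree on them; the rest are discarded as likely irrelevant or redundant.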
Title: A novel bio-inspired hybrid multi-filter wrapper gene selection method with ensemble classifier for microarray data. Neural Computing & Applications 35(16): 11531-11561.
Pub Date: 2023-01-01 | Epub Date: 2022-11-17 | DOI: 10.1007/s00521-022-08016-4
Hao Li, Yang Nan, Javier Del Ser, Guang Yang
Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an efficient solution to this problem, as it provides a measure of confidence in the segmentation results. Current uncertainty estimation methods based on quantile regression, Bayesian neural networks, ensembles, and Monte Carlo dropout are limited by their high computational cost and inconsistency. To overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work, but primarily for natural image classification, and it showed inferior segmentation results. In this paper, we propose a region-based EDL segmentation framework that can generate reliable uncertainty maps and accurate segmentation results and that is robust to noise and image corruption. We used the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence was parameterized as a Dirichlet distribution, and predicted probabilities were treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrated the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, the proposed framework maintained the advantages of low computational cost and easy implementation and showed potential for clinical application.
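The Dirichlet parameterization mentioned above can be illustrated with the standard Subjective Logic mapping from evidence to belief and uncertainty masses. This is a generic sketch of that mapping, not the paper's network or loss; the evidence vector is a hypothetical example.

```python
import numpy as np

def evidential_opinion(evidence):
    """Map non-negative per-class evidence to Dirichlet parameters,
    belief masses, an uncertainty mass, and expected probabilities
    (the usual Subjective Logic construction used in EDL)."""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size              # number of classes
    alpha = evidence + 1.0         # Dirichlet parameters
    S = alpha.sum()                # Dirichlet strength
    belief = evidence / S          # per-class belief masses
    uncertainty = K / S            # uncertainty mass; beliefs + u sum to 1
    prob = alpha / S               # expected class probabilities
    return alpha, belief, uncertainty, prob

# Strong evidence for class 0 -> low uncertainty for this voxel/region.
alpha, belief, u, prob = evidential_opinion([8.0, 1.0, 1.0])
print(belief, u)
```

Little total evidence yields a large uncertainty mass, which is what makes the uncertainty map cheap to compute: it falls out of a single forward pass rather than repeated sampling.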
Title: Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation. Neural Computing & Applications 35(30): 22071-22085.
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07797-y
Bhavani Devi Ravichandran, Pantea Keikhosrokiani
The spread of Covid-19 misinformation on social media has had significant real-world consequences and has raised fears among internet users since the pandemic began. Researchers from all over the world have shown an interest in developing deception classification methods to reduce the problem. Despite numerous obstacles that can thwart these efforts, researchers aim to create an automated, stable, accurate, and effective mechanism for misinformation classification. In this paper, a systematic literature review is conducted to analyse the state of the art in the classification of misinformation on social media. IEEE Xplore, SpringerLink, ScienceDirect, Scopus, Taylor & Francis, Wiley, and Google Scholar were used as databases to find relevant papers published from 2018 to 2021. Firstly, the study reviews the history of the issues surrounding Covid-19 misinformation and its effects on social media users. Secondly, various neuro-fuzzy and neural network classification methods are identified. Thirdly, the strengths, limitations, and challenges of neuro-fuzzy and neural network approaches are examined for misinformation classification, especially in the case of Covid-19. Finally, the most efficient hybrid combination of neuro-fuzzy and neural networks in terms of classification accuracy is identified. The study concludes by suggesting a hybrid ANFIS-DNN model for improving Covid-19 misinformation classification. The results of this study can serve as a roadmap for future research on misinformation classification.
Title: Classification of Covid-19 misinformation on social media based on neuro-fuzzy and neural network: A systematic review. Neural Computing & Applications 35(1): 699-717.
Pub Date: 2023-01-01 | Epub Date: 2021-09-21 | DOI: 10.1007/s00521-021-06412-w
Rajagopal Kumar, Fadi Al-Turjman, L N B Srinivas, M Braveen, Jothilakshmi Ramakrishnan
Corona Virus Disease 2019 (COVID-19) is an ongoing global event that has affected the health of several million people and sometimes leads to death. Predicting the outbreak and taking cautious steps are the only ways to prevent the spread of COVID-19. This paper presents an Adaptive Neuro-Fuzzy Inference System (ANFIS)-based machine learning technique to predict a possible outbreak in India. The proposed ANFIS-based prediction system tracks the growth of the epidemic based on previous data sets fetched from the cloud. The proposed ANFIS technique predicts the epidemic peak and the number of COVID-19 infected cases from these cloud data sets. ANFIS was chosen for this study because it incorporates both numerical and linguistic knowledge and has the ability to classify data and identify patterns. The proposed technique not only predicts the outbreak but also tracks the disease and suggests a measurable policy to manage the COVID-19 epidemic. The obtained predictions show that the proposed technique tracks the growth of the COVID-19 epidemic very effectively. The results show that the growth of the infection rate decreases at the end of 2020 and that the epidemic peak is delayed by 40-60 days. The prediction results using the proposed ANFIS technique show a low Mean Square Error (MSE) of 1.184 × 10^-3 with an accuracy of 86%. The study provides important information for public health providers and the government to control the COVID-19 epidemic.
Title: ANFIS for prediction of epidemic peak and infected cases for COVID-19 in India. Neural Computing & Applications 35(10): 7207-7220.
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07696-2
Xianli Zhao, Guixin Wang
With the novel coronavirus still raging globally, emergency resource scheduling still suffers from efficiency problems, and rescue standards remain deficient. For the happiness and well-being of people's lives, and adhering to the principle of a community with a shared future for mankind, the emergency resource scheduling system for urban public health emergencies needs to be improved and perfected. This paper studies an optimization model for urban emergency resource scheduling that uses a deep reinforcement learning algorithm to build the emergency resource distribution system framework and a Deep Q Network path-planning algorithm to optimize the system, with the aim of making the scheduling of emergency resources in the city more efficient. Finally, simulation experiments show that the deep learning algorithm studied here benefits the emergency resource scheduling optimization system. However, as deep learning develops, some of its disadvantages are becoming increasingly obvious. An obvious flaw is that building a deep learning-based model generally requires a lot of computing resources, making the cost high.
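The Q-learning machinery underlying a Deep Q Network scheduler can be sketched in tabular form. This is a simplified stand-in, not the paper's system: a real DQN replaces the Q-table with a neural network, and the states, actions, and reward values below are hypothetical (nodes as states, dispatch targets as actions, negative travel costs plus a positive reward for reaching a demand node).

```python
import numpy as np

# Tabular Q-learning stand-in for a DQN-based dispatch policy.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 5
Q = np.zeros((n_states, n_actions))
rewards = -rng.integers(1, 10, size=(n_states, n_actions)).astype(float)
rewards[:, 4] = 10.0  # dispatching to the demand node yields positive reward
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(2000):
    s = int(rng.integers(n_states))
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next = a  # moving to a node defines the next state
    # Standard temporal-difference (Q-learning) update.
    td_target = rewards[s, a] + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

print(Q.argmax(axis=1))  # greedy dispatch decision per state
```

After training, the greedy policy read off the table routes vehicles toward the high-reward node; the DQN version learns the same mapping but generalizes across states it has never visited.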
Title: Deep Q networks-based optimization of emergency resource scheduling for urban public health events. Neural Computing & Applications 35(12): 8823-8832.
Pub Date: 2023-01-01 | Epub Date: 2022-11-04 | DOI: 10.1007/s00521-022-07953-4
P Celard, E L Iglesias, J M Sorribes-Fdez, R Romero, A Seara Vieira, L Borrajo
Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies that apply some of the latest state-of-the-art models to medical images of different injured body areas or organs with an associated disease (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not familiar with deep learning may consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we provide a summary of the current state of generative models in medical imaging, including key features, current challenges, and future research paths.
Title: A survey on deep learning applied to medical images: from simple artificial neural networks to generative models. Neural Computing & Applications 35(3): 2291-2323.
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07921-y
Mehmet Altinoz, O Tolga Altinoz
This research is based on the capacitated vehicle routing problem with urgency, where each vertex corresponds to a medical facility with an urgency level and the traveling vehicle can be contaminated. This contamination is modeled as an infectiousness rate, defined for each vertex and each vehicle; at each visited vertex, the vehicle's rate increases. Therefore, time (total distance, since it is desired to reach each vertex as fast as possible) and the infectiousness rate are the main issues in the problem. In this research, the problem is solved with multiobjective optimization algorithms. Two objectives are defined for the model, time and infectiousness, and the problem is solved using the non-dominated sorting genetic algorithm (NSGA-II), the grid-based evolutionary algorithm (GrEA), the hypervolume estimation algorithm (HypE), the strength Pareto evolutionary algorithm with shift-based density estimation (SPEA2-SDE), and a reference-points-based evolutionary algorithm.
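The core ingredient shared by the algorithms listed above is Pareto dominance over the two objectives. A minimal sketch with toy route data (the (time, infectiousness) values are illustrative, both minimized):

```python
# Pareto dominance and first-front extraction, as used in non-dominated
# sorting (NSGA-II and related algorithms). Both objectives are minimized.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Return the non-dominated (Pareto-optimal) subset of `points`."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy candidate routes as (time, infectiousness) pairs.
routes = [(10, 0.3), (12, 0.1), (11, 0.5), (9, 0.6), (13, 0.2)]
print(first_front(routes))  # -> [(10, 0.3), (12, 0.1), (9, 0.6)]
```

Full non-dominated sorting repeats this extraction on the remaining points to build successive fronts, which is what the algorithms above use to rank candidate routing plans.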
Title: Multiobjective problem modeling of the capacitated vehicle routing problem with urgency in a pandemic period. Neural Computing & Applications 35(5): 3865-3882.
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-023-08289-3
C Ashwini, V Sellam
Corn disease prediction is an essential part of agricultural productivity. This paper presents a novel 3D-dense convolutional neural network (3D-DCNN) optimized using the Ebola optimization search (EOS) algorithm to predict corn disease, targeting higher prediction accuracy than conventional AI methods. Since dataset samples are generally insufficient, the paper uses some preliminary pre-processing approaches to enlarge the sample set and improve the samples for corn disease. The EOS technique is used to reduce the classification errors of the 3D-DCNN approach. As a result, corn disease is predicted and classified more accurately and effectively. The accuracy of the proposed 3D-DCNN-EOS model is improved, and some necessary baseline tests are performed to demonstrate the efficacy of the proposed model. The simulation is performed in the MATLAB 2020a environment, and the outcomes show the superiority of the proposed model over other approaches. The feature representation of the input data is learned effectively to boost the model's performance. When the proposed method is compared to other existing techniques, it outperforms them in terms of precision, area under the receiver operating characteristic curve (AUC), F1 score, Kappa statistic error (KSE), accuracy, root mean square error (RMSE), and recall.
Title: EOS-3D-DCNN: Ebola optimization search-based 3D-dense convolutional neural network for corn leaf disease prediction. Neural Computing & Applications 35(15): 11125-11139.
Coronavirus disease 2019 (COVID-19) has spread rapidly all over the world since its first report in December 2019, and thoracic computed tomography (CT) has become one of the main tools for its diagnosis. In recent years, deep learning-based approaches have shown impressive performance in myriad image recognition tasks. However, they usually require a large amount of annotated data for training. Inspired by ground-glass opacity, a common finding in the CT scans of COVID-19 patients, this paper proposes a novel self-supervised pretraining method based on pseudo-lesion generation and restoration for COVID-19 diagnosis. Perlin noise, a gradient-noise-based mathematical model, is used to generate lesion-like patterns, which are then randomly pasted onto the lung regions of normal CT images to produce pseudo-COVID-19 images. The pairs of normal and pseudo-COVID-19 images are used to train an encoder-decoder U-Net for image restoration, which requires no labeled data. The pretrained encoder is then fine-tuned with labeled data for the COVID-19 diagnosis task. Two public COVID-19 diagnosis datasets of CT images were employed for evaluation. Comprehensive experimental results demonstrate that the proposed self-supervised learning approach extracts better feature representations for COVID-19 diagnosis, and its accuracy outperformed a supervised model pretrained on large-scale images by 6.57% and 3.03% on the SARS-CoV-2 dataset and the Jinan COVID-19 dataset, respectively.
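The pseudo-lesion pipeline above (noise-like patterns blended into the lung regions of normal slices) can be sketched as follows. This is a minimal illustration, not the authors' code: it approximates Perlin gradient noise with smoothly interpolated value noise, and all function names, the threshold, and the blending scheme are illustrative assumptions.

```python
import numpy as np

def smooth_noise(shape, scale, rng):
    """Coarse random grid upsampled with bilinear interpolation and a
    smoothstep fade -- a simple stand-in for true Perlin gradient noise."""
    h, w = shape
    gh, gw = h // scale + 2, w // scale + 2
    grid = rng.random((gh, gw))
    ys = np.linspace(0, gh - 2, h)
    xs = np.linspace(0, gw - 2, w)
    y0, x0 = ys.astype(int), xs.astype(int)
    ty, tx = ys - y0, xs - x0
    ty = ty * ty * (3 - 2 * ty)   # smoothstep fade, as in Perlin noise
    tx = tx * tx * (3 - 2 * tx)
    top = grid[y0][:, x0] * (1 - tx) + grid[y0][:, x0 + 1] * tx
    bot = grid[y0 + 1][:, x0] * (1 - tx) + grid[y0 + 1][:, x0 + 1] * tx
    return top * (1 - ty[:, None]) + bot * ty[:, None]

def make_pseudo_covid(ct_slice, lung_mask, rng, n_octaves=3, thresh=0.6, alpha=0.5):
    """Blend thresholded multi-octave noise into the lung region to create a
    pseudo-COVID-19 image; returns the image and the pseudo-lesion mask."""
    h, w = ct_slice.shape
    noise, amp, scale, total = np.zeros((h, w)), 1.0, 32, 0.0
    for _ in range(n_octaves):                    # fractal sum of octaves
        noise += amp * smooth_noise((h, w), scale, rng)
        total += amp
        amp *= 0.5
        scale = max(scale // 2, 2)
    noise /= total
    lesion = (noise > thresh) & lung_mask         # lesion-like blobs, lung only
    out = ct_slice.copy()
    # ground-glass-like brightening inside the pseudo-lesion
    out[lesion] = (1 - alpha) * out[lesion] + alpha * noise[lesion]
    return out, lesion
```

The resulting (normal, pseudo-COVID-19) pairs would then feed the restoration network, which is trained to recover the normal slice from the corrupted one.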
{"title":"Learning from pseudo-lesion: a self-supervised framework for COVID-19 diagnosis.","authors":"Zhongliang Li, Xuechen Li, Zhihao Jin, Linlin Shen","doi":"10.1007/s00521-023-08259-9","DOIUrl":"https://doi.org/10.1007/s00521-023-08259-9","url":null,"abstract":"<p><p>Coronavirus disease 2019 (COVID-19) has spread rapidly all over the world since its first report in December 2019, and thoracic computed tomography (CT) has become one of the main tools for its diagnosis. In recent years, deep learning-based approaches have shown impressive performance in myriad image recognition tasks. However, they usually require a large amount of annotated data for training. Inspired by ground-glass opacity, a common finding in the CT scans of COVID-19 patients, this paper proposes a novel self-supervised pretraining method based on pseudo-lesion generation and restoration for COVID-19 diagnosis. Perlin noise, a gradient-noise-based mathematical model, is used to generate lesion-like patterns, which are then randomly pasted onto the lung regions of normal CT images to produce pseudo-COVID-19 images. The pairs of normal and pseudo-COVID-19 images are used to train an encoder-decoder U-Net for image restoration, which requires no labeled data. The pretrained encoder is then fine-tuned with labeled data for the COVID-19 diagnosis task. Two public COVID-19 diagnosis datasets of CT images were employed for evaluation. 
Comprehensive experimental results demonstrate that the proposed self-supervised learning approach extracts better feature representations for COVID-19 diagnosis, and its accuracy outperformed a supervised model pretrained on large-scale images by 6.57% and 3.03% on the SARS-CoV-2 dataset and the Jinan COVID-19 dataset, respectively.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 15","pages":"10717-10731"},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10038387/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9439693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-01-01Epub Date: 2022-11-04DOI: 10.1007/s00521-022-07967-y
Binrong Wu, Lin Wang, Rui Tao, Yu-Rong Zeng
This study proposes a novel interpretable framework to forecast the daily tourism volume of Jiuzhaigou Valley, Huangshan Mountain, and Siguniang Mountain in China under the impact of COVID-19, using multivariate time-series data, particularly historical tourism volume data, COVID-19 data, the Baidu index, and weather data. For the first time, epidemic-related search engine data are introduced for tourism demand forecasting. A new method, composition leading search index-variational mode decomposition, is proposed to process the search engine data. Meanwhile, to overcome the insufficient interpretability of existing tourism demand forecasting models, this study proposes a new interpretable forecasting model, DE-TFT, in which the hyperparameters of the temporal fusion transformer (TFT) are optimized intelligently and efficiently by the differential evolution algorithm. TFT is an attention-based deep learning model that combines high-performance forecasting with interpretable analysis of temporal dynamics and has shown excellent performance in forecasting research. The TFT model produces an interpretable tourism demand forecast, including an importance ranking of the input variables and attention analysis at different time steps. In addition, the validity of the proposed forecasting framework is verified on three cases. Interpretable experimental results show that epidemic-related search engine data effectively reflect tourists' concerns about tourism during the COVID-19 epidemic.
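The abstract describes the DE-TFT coupling only at a high level; the underlying differential evolution loop can be sketched as follows. This is a generic DE/rand/1/bin sketch, not the authors' implementation: the TFT training run is stood in for by an arbitrary objective, and all names and default parameters are illustrative assumptions.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, n_gen=50,
                           F=0.8, CR=0.9, seed=0):
    """Minimise `objective` over the box `bounds` with classic DE/rand/1/bin."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([objective(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # pick three distinct donor vectors, none equal to individual i
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            cross[rng.integers(dim)] = True                # keep >=1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = objective(trial)
            if f <= fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

In the DE-TFT setting, `objective` would train a TFT with the candidate hyperparameters (e.g. learning rate, hidden size, number of attention heads) and return the validation loss; here any cheap function serves for demonstration.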
{"title":"Interpretable tourism volume forecasting with multivariate time series under the impact of COVID-19.","authors":"Binrong Wu, Lin Wang, Rui Tao, Yu-Rong Zeng","doi":"10.1007/s00521-022-07967-y","DOIUrl":"10.1007/s00521-022-07967-y","url":null,"abstract":"<p><p>This study proposes a novel interpretable framework to forecast the daily tourism volume of Jiuzhaigou Valley, Huangshan Mountain, and Siguniang Mountain in China under the impact of COVID-19, using multivariate time-series data, particularly historical tourism volume data, COVID-19 data, the Baidu index, and weather data. For the first time, epidemic-related search engine data are introduced for tourism demand forecasting. A new method, composition leading search index-variational mode decomposition, is proposed to process the search engine data. Meanwhile, to overcome the insufficient interpretability of existing tourism demand forecasting models, this study proposes a new interpretable forecasting model, DE-TFT, in which the hyperparameters of the temporal fusion transformer (TFT) are optimized intelligently and efficiently by the differential evolution algorithm. TFT is an attention-based deep learning model that combines high-performance forecasting with interpretable analysis of temporal dynamics and has shown excellent performance in forecasting research. The TFT model produces an interpretable tourism demand forecast, including an importance ranking of the input variables and attention analysis at different time steps. In addition, the validity of the proposed forecasting framework is verified on three cases. 
Interpretable experimental results show that epidemic-related search engine data effectively reflect tourists' concerns about tourism during the COVID-19 epidemic.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 7","pages":"5437-5463"},"PeriodicalIF":4.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9638251/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10700857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}