
Neural Computing & Applications: Latest Publications

A novel bio-inspired hybrid multi-filter wrapper gene selection method with ensemble classifier for microarray data.
IF 6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | DOI: 10.1007/s00521-021-06459-9
Babak Nouri-Moghaddam, Mehdi Ghazanfari, Mohammad Fathian

Microarray technology is known as one of the most important tools for collecting DNA expression data. This technology allows researchers to investigate and examine types of diseases and their origins. However, microarray data are often associated with a small sample size, a large number of genes, and class imbalance, which make classification models inefficient. Thus, a new hybrid solution based on a multi-filter and adaptive chaotic multi-objective forest optimization algorithm (AC-MOFOA) is presented to solve the gene selection problem and construct an ensemble classifier. In the proposed solution, a multi-filter model (i.e., an ensemble filter) is used as a preprocessing step to reduce the dataset's dimensions, combining five filter methods to remove redundant and irrelevant genes. The results of the five filter methods are combined using a voting-based function. The results of the proposed multi-filter indicate that it is effective at reducing the gene subset size and selecting relevant genes. Then, an AC-MOFOA based on the concepts of non-dominated sorting, crowding distance, chaos theory, and adaptive operators is presented. AC-MOFOA, as a wrapper method, aims to simultaneously reduce dataset dimensions, optimize the kernel extreme learning machine (KELM), and increase classification accuracy. Next, an ensemble classifier model is built from the AC-MOFOA results to classify microarray data. The performance of the proposed algorithm was evaluated on nine public microarray datasets, and its results were compared, in terms of the number of selected genes, classification efficiency, execution time, time complexity, hypervolume indicator, and spacing metric, with five hybrid multi-objective methods and three hybrid single-objective methods. According to the results, the proposed hybrid method could increase the accuracy of the KELM on most datasets by reducing the dataset's dimensions, and it achieved similar or superior performance compared to other multi-objective methods. Furthermore, the proposed ensemble classifier model provided better classification accuracy and generalizability than conventional ensemble methods on seven of the nine microarray datasets. Moreover, the comparison of the ensemble classifier model with three state-of-the-art ensemble generation methods indicates its competitive performance, with the proposed ensemble model achieving better results on five of the nine datasets.

Supplementary information: The online version contains supplementary material available at 10.1007/s00521-021-06459-9.
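The abstract does not name the five filters or the exact voting rule, so the sketch below is only a rough illustration of the voting-based multi-filter pre-selection idea, assembled from standard scikit-learn filters (ANOVA F-score, mutual information, and chi-squared are stand-ins, and the synthetic data is a placeholder):

```python
# Hedged sketch of a voting-based multi-filter gene pre-selection step.
# The three filters and the top-k/min-votes rule are illustrative assumptions,
# not the paper's exact five filters or voting function.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import f_classif, mutual_info_classif, chi2

def multi_filter_vote(X, y, top_k=200, min_votes=2):
    """Keep genes ranked in the top_k by at least `min_votes` filters."""
    X_pos = MinMaxScaler().fit_transform(X)      # chi2 needs non-negative input
    scores = [
        f_classif(X, y)[0],                      # ANOVA F-score
        mutual_info_classif(X, y, random_state=0),
        chi2(X_pos, y)[0],
    ]
    votes = np.zeros(X.shape[1], dtype=int)
    for s in scores:
        top = np.argsort(s)[::-1][:top_k]        # top_k genes for this filter
        votes[top] += 1
    return np.where(votes >= min_votes)[0]       # surviving gene indices

# Synthetic "microarray-like" data: 60 samples, 2000 genes, binary labels.
X = np.random.rand(60, 2000)
y = np.random.randint(0, 2, size=60)
selected = multi_filter_vote(X, y)
print(len(selected), "genes kept")
```

A wrapper stage such as AC-MOFOA would then search only within the surviving gene subset.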

{"title":"A novel bio-inspired hybrid multi-filter wrapper gene selection method with ensemble classifier for microarray data.","authors":"Babak Nouri-Moghaddam,&nbsp;Mehdi Ghazanfari,&nbsp;Mohammad Fathian","doi":"10.1007/s00521-021-06459-9","DOIUrl":"https://doi.org/10.1007/s00521-021-06459-9","url":null,"abstract":"<p><p>Microarray technology is known as one of the most important tools for collecting DNA expression data. This technology allows researchers to investigate and examine types of diseases and their origins. However, microarray data are often associated with a small sample size, a significant number of genes, imbalanced data, etc., making classification models inefficient. Thus, a new hybrid solution based on a multi-filter and adaptive chaotic multi-objective forest optimization algorithm (AC-MOFOA) is presented to solve the gene selection problem and construct the Ensemble Classifier. In the proposed solution, a multi-filter model (i.e., ensemble filter) is proposed as preprocessing step to reduce the dataset's dimensions, using a combination of five filter methods to remove redundant and irrelevant genes. Accordingly, the results of the five filter methods are combined using a voting-based function. Additionally, the results of the proposed multi-filter indicate that it has good capability in reducing the gene subset size and selecting relevant genes. Then, an AC-MOFOA based on the concepts of non-dominated sorting, crowding distance, chaos theory, and adaptive operators is presented. AC-MOFOA as a wrapper method aimed at reducing dataset dimensions, optimizing KELM, and increasing the accuracy of the classification, simultaneously. Next, in this method, an ensemble classifier model is presented using AC-MOFOA results to classify microarray data. The performance of the proposed algorithm was evaluated on nine public microarray datasets, and its results were compared in terms of the number of selected genes, classification efficiency, execution time, time complexity, hypervolume indicator, and spacing metric with five hybrid multi-objective methods, and three hybrid single-objective methods. According to the results, the proposed hybrid method could increase the accuracy of the KELM in most datasets by reducing the dataset's dimensions and achieve similar or superior performance compared to other multi-objective methods. Furthermore, the proposed Ensemble Classifier model could provide better classification accuracy and generalizability in the seven of nine microarray datasets compared to conventional ensemble methods. 
Moreover, the comparison results of the Ensemble Classifier model with three state-of-the-art ensemble generation methods indicate its competitive performance in which the proposed ensemble model achieved better results in the five of nine datasets.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s00521-021-06459-9.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 16","pages":"11531-11561"},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8435304/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9854336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation.
IF 4.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | Epub Date: 2022-11-17 | DOI: 10.1007/s00521-022-08016-4
Hao Li, Yang Nan, Javier Del Ser, Guang Yang

Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an efficient solution to this problem, as it provides a measure of confidence in the segmentation results. Current uncertainty estimation methods based on quantile regression, Bayesian neural networks, ensembles, and Monte Carlo dropout are limited by their high computational cost and inconsistency. To overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work, but primarily for natural image classification, and it showed inferior segmentation results. In this paper, we propose a region-based EDL segmentation framework that can generate reliable uncertainty maps and accurate segmentation results and is robust to noise and image corruption. We use the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence is parameterized as a Dirichlet distribution, and predicted probabilities are treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrate the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, the proposed framework maintains the advantages of low computational cost and easy implementation and shows potential for clinical application.
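As a rough illustration of the evidential output described above (not the paper's region-based formulation or its loss), the sketch below maps raw per-voxel logits to Dirichlet parameters and a per-voxel uncertainty score using the standard EDL recipe; the backbone, tensor shapes, and class count are placeholders:

```python
# Minimal evidential output head: evidence -> Dirichlet -> probabilities and uncertainty.
# Assumes per-voxel logits from any segmentation backbone; shapes are illustrative.
import torch
import torch.nn.functional as F

def evidential_outputs(logits):
    """logits: (B, K, H, W) raw network outputs for K classes."""
    evidence = F.softplus(logits)          # non-negative evidence per class
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    S = alpha.sum(dim=1, keepdim=True)     # Dirichlet strength
    prob = alpha / S                       # expected class probabilities
    K = logits.shape[1]
    uncertainty = K / S                    # per-voxel uncertainty in (0, 1]
    return prob, uncertainty

logits = torch.randn(1, 4, 128, 128)       # e.g. 4 tumor sub-region classes
prob, unc = evidential_outputs(logits)
print(prob.shape, unc.shape)               # (1, 4, 128, 128), (1, 1, 128, 128)
```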

{"title":"Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation.","authors":"Hao Li, Yang Nan, Javier Del Ser, Guang Yang","doi":"10.1007/s00521-022-08016-4","DOIUrl":"10.1007/s00521-022-08016-4","url":null,"abstract":"<p><p>Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an efficient solution to this problem, as it provides a measure of confidence in the segmentation results. The current uncertainty estimation methods based on quantile regression, Bayesian neural network, ensemble, and Monte Carlo dropout are limited by their high computational cost and inconsistency. In order to overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work but primarily for natural image classification and showed inferior segmentation results. In this paper, we proposed a region-based EDL segmentation framework that can generate reliable uncertainty maps and accurate segmentation results, which is robust to noise and image corruption. We used the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence was parameterized as a Dirichlet distribution, and predicted probabilities were treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrated the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, our proposed new framework maintained the advantages of low computational cost and easy implementation and showed the potential for clinical application.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 30","pages":"22071-22085"},"PeriodicalIF":4.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10505106/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10309470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Classification of Covid-19 misinformation on social media based on neuro-fuzzy and neural network: A systematic review.
IF 6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07797-y
Bhavani Devi Ravichandran, Pantea Keikhosrokiani

The spread of Covid-19 misinformation on social media has had significant real-world consequences and has raised fears among internet users since the pandemic began. Researchers from all over the world have shown an interest in developing deception classification methods to reduce the issue. Despite numerous obstacles that can thwart these efforts, researchers aim to create an automated, stable, accurate, and effective mechanism for misinformation classification. In this paper, a systematic literature review is conducted to analyse the state of the art related to the classification of misinformation on social media. IEEE Xplore, SpringerLink, ScienceDirect, Scopus, Taylor & Francis, Wiley, and Google Scholar are used as databases to find relevant papers published from 2018 to 2021. Firstly, the study reviews the history of the issues surrounding Covid-19 misinformation and its effects on social media users. Secondly, various neuro-fuzzy and neural network classification methods are identified. Thirdly, the strengths, limitations, and challenges of neuro-fuzzy and neural network approaches are examined for misinformation classification, especially in the case of Covid-19. Finally, the most efficient hybrid method of neuro-fuzzy and neural networks in terms of classification accuracy is identified. The study concludes by suggesting a hybrid ANFIS-DNN model for improving Covid-19 misinformation classification. The results of this study can serve as a roadmap for future research on misinformation classification.

{"title":"Classification of Covid-19 misinformation on social media based on neuro-fuzzy and neural network: A systematic review.","authors":"Bhavani Devi Ravichandran,&nbsp;Pantea Keikhosrokiani","doi":"10.1007/s00521-022-07797-y","DOIUrl":"https://doi.org/10.1007/s00521-022-07797-y","url":null,"abstract":"<p><p>The spread of Covid-19 misinformation on social media had significant real-world consequences, and it raised fears among internet users since the pandemic has begun. Researchers from all over the world have shown an interest in developing deception classification methods to reduce the issue. Despite numerous obstacles that can thwart the efforts, the researchers aim to create an automated, stable, accurate, and effective mechanism for misinformation classification. In this paper, a systematic literature review is conducted to analyse the state-of-the-art related to the classification of misinformation on social media. IEEE Xplore, SpringerLink, ScienceDirect, Scopus, Taylor & Francis, Wiley, Google Scholar are used as databases to find relevant papers since 2018-2021. Firstly, the study begins by reviewing the history of the issues surrounding Covid-19 misinformation and its effects on social media users. Secondly, various neuro-fuzzy and neural network classification methods are identified. Thirdly, the strength, limitations, and challenges of neuro-fuzzy and neural network approaches are verified for the classification misinformation specially in case of Covid-19. Finally, the most efficient hybrid method of neuro-fuzzy and neural networks in terms of performance accuracy is discovered. This study is wrapped up by suggesting a hybrid ANFIS-DNN model for improving Covid-19 misinformation classification. The results of this study can be served as a roadmap for future research on misinformation classification.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 1","pages":"699-717"},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9488884/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10504775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
ANFIS for prediction of epidemic peak and infected cases for COVID-19 in India.
IF 4.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | Epub Date: 2021-09-21 | DOI: 10.1007/s00521-021-06412-w
Rajagopal Kumar, Fadi Al-Turjman, L N B Srinivas, M Braveen, Jothilakshmi Ramakrishnan

Coronavirus Disease 2019 (COVID-19) is an ongoing, widespread global event affecting the health of several million people and sometimes leading to death. Predicting outbreaks and taking cautious steps is the only way to prevent the spread of COVID-19. This paper presents an Adaptive Neuro-Fuzzy Inference System (ANFIS)-based machine learning technique to predict a possible outbreak in India. The proposed ANFIS-based prediction system tracks the growth of the epidemic based on previous data sets fetched from cloud computing. The proposed ANFIS technique predicts the epidemic peak and COVID-19 infected cases from the cloud data sets. ANFIS is chosen for this study because it combines numerical and linguistic knowledge and has the ability to classify data and identify patterns. The proposed technique not only predicts the outbreak but also tracks the disease and suggests a measurable policy to manage the COVID-19 epidemic. The obtained predictions show that the proposed technique tracks the growth of the COVID-19 epidemic very effectively. The results show that the growth of the infection rate decreases at the end of 2020 and that the epidemic peak is delayed by 40-60 days. The prediction results using the proposed ANFIS technique show a low Mean Square Error (MSE) of 1.184 × 10⁻³ with an accuracy of 86%. The study provides important information for public health providers and the government to control the COVID-19 epidemic.
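For readers unfamiliar with ANFIS, the sketch below shows the forward pass of a first-order Sugeno-type fuzzy inference system, the kind of computation an ANFIS performs at prediction time. The membership-function parameters and rule consequents are illustrative placeholders, not values fitted to the Indian COVID-19 data:

```python
# First-order Sugeno fuzzy inference forward pass (ANFIS prediction step).
# Centres, widths, and consequent coefficients below are made-up placeholders;
# in ANFIS they would be learned from historical case data.
import numpy as np

def gaussian_mf(x, c, sigma):
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_predict(x, centres, sigmas, consequents):
    """x: (n_inputs,); one fuzzy rule per row of `centres`."""
    # Layers 1-2: membership degrees and rule firing strengths (product T-norm)
    w = np.prod(gaussian_mf(x, centres, sigmas), axis=1)
    w_norm = w / w.sum()                            # Layer 3: normalisation
    # Layer 4: first-order consequents  f_i = a_i . x + b_i
    f = consequents[:, :-1] @ x + consequents[:, -1]
    return np.dot(w_norm, f)                        # Layer 5: weighted sum

x = np.array([0.4, 0.7])                            # e.g. scaled case counts on two prior days
centres = np.array([[0.2, 0.5], [0.8, 0.9]])        # 2 rules x 2 inputs
sigmas = np.full((2, 2), 0.3)
consequents = np.array([[0.5, 0.3, 0.1],            # a1, a2, b per rule
                        [0.9, 0.2, 0.0]])
print(anfis_predict(x, centres, sigmas, consequents))
```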

{"title":"ANFIS for prediction of epidemic peak and infected cases for COVID-19 in India.","authors":"Rajagopal Kumar, Fadi Al-Turjman, L N B Srinivas, M Braveen, Jothilakshmi Ramakrishnan","doi":"10.1007/s00521-021-06412-w","DOIUrl":"10.1007/s00521-021-06412-w","url":null,"abstract":"<p><p>Corona Virus Disease 2019 (COVID-19) is a continuing extensive incident globally affecting several million people's health and sometimes leading to death. The outbreak prediction and making cautious steps is the only way to prevent the spread of COVID-19. This paper presents an Adaptive Neuro-fuzzy Inference System (ANFIS)-based machine learning technique to predict the possible outbreak in India. The proposed ANFIS-based prediction system tracks the growth of epidemic based on the previous data sets fetched from cloud computing. The proposed ANFIS technique predicts the epidemic peak and COVID-19 infected cases through the cloud data sets. The ANFIS is chosen for this study as it has both numerical and linguistic knowledge, and also has ability to classify data and identify patterns. The proposed technique not only predicts the outbreak but also tracks the disease and suggests a measurable policy to manage the COVID-19 epidemic. The obtained prediction shows that the proposed technique very effectively tracks the growth of the COVID-19 epidemic. The result shows the growth of infection rate decreases at end of 2020 and also has delay epidemic peak by 40-60 days. The prediction result using the proposed ANFIS technique shows a low Mean Square Error (MSE) of 1.184 × 10<sup>-3</sup> with an accuracy of 86%. The study provides important information for public health providers and the government to control the COVID-19 epidemic.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 10","pages":"7207-7220"},"PeriodicalIF":4.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8452449/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9141726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Q networks-based optimization of emergency resource scheduling for urban public health events.
IF 6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07696-2
Xianli Zhao, Guixin Wang

Amid today's severe global situation caused by the novel coronavirus, emergency resource scheduling still suffers from efficiency problems, and rescue standards remain deficient. For the happiness and well-being of people's lives, and adhering to the principle of a community with a shared future for mankind, the emergency resource scheduling system for urban public health emergencies needs to be improved and perfected. This paper studies an optimization model of urban emergency resource scheduling that uses a deep reinforcement learning algorithm to build the emergency resource distribution system framework and a Deep Q Network path-planning algorithm to optimize the system, with the aim of making the scheduling of emergency resources in the city more efficient. Finally, simulation experiments show that the studied deep learning algorithm is helpful to the emergency resource scheduling optimization system. However, with the gradual development of deep learning, some of its disadvantages are becoming increasingly obvious. One obvious flaw is that building a deep learning-based model generally requires a large amount of CPU computing resources, making the cost too high.
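A minimal sketch of the Deep Q Network learning step that underlies such a scheduling and path-planning framework is shown below; the state dimension, action set, and random transition batch are placeholders, since the paper's dispatch environment is not reproduced here:

```python
# One temporal-difference update of a small Deep Q Network on a random
# mini-batch of transitions (s, a, r, s'). Environment details are assumptions.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, s):
        return self.net(s)

state_dim, n_actions, gamma = 8, 4, 0.99
q, q_target = QNet(state_dim, n_actions), QNet(state_dim, n_actions)
q_target.load_state_dict(q.state_dict())          # target network starts as a copy
opt = torch.optim.Adam(q.parameters(), lr=1e-3)

s = torch.randn(32, state_dim)                    # placeholder dispatch states
a = torch.randint(0, n_actions, (32,))            # chosen routing/allocation actions
r = torch.randn(32)                               # rewards, e.g. negative response time
s_next = torch.randn(32, state_dim)
with torch.no_grad():
    target = r + gamma * q_target(s_next).max(dim=1).values
pred = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(pred, target)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```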

{"title":"Deep Q networks-based optimization of emergency resource scheduling for urban public health events.","authors":"Xianli Zhao,&nbsp;Guixin Wang","doi":"10.1007/s00521-022-07696-2","DOIUrl":"https://doi.org/10.1007/s00521-022-07696-2","url":null,"abstract":"<p><p>In today's severe situation of the global new crown virus raging, there are still efficiency problems in emergency resource scheduling, and there are still deficiencies in rescue standards. For the happiness and well-being of people's lives, adhering to the principle of a community with a shared future for mankind, the emergency resource scheduling system for urban public health emergencies needs to be improved and perfected. This paper mainly studies the optimization model of urban emergency resource scheduling, which uses the deep reinforcement learning algorithm to build the emergency resource distribution system framework, and uses the Deep Q Network path planning algorithm to optimize the system, to achieve the purpose of optimizing and upgrading the efficient scheduling of emergency resources in the city. Finally, through simulation experiments, it is concluded that the deep learning algorithm studied is helpful to the emergency resource scheduling optimization system. However, with the gradual development of deep learning, some of its disadvantages are becoming increasingly obvious. An obvious flaw is that building a deep learning-based model generally requires a lot of CPU computing resources, making the cost too high.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 12","pages":"8823-8832"},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9401203/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9285301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A survey on deep learning applied to medical images: from simple artificial neural networks to generative models.
IF 4.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | Epub Date: 2022-11-04 | DOI: 10.1007/s00521-022-07953-4
P Celard, E L Iglesias, J M Sorribes-Fdez, R Romero, A Seara Vieira, L Borrajo

Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies that apply some of the latest state-of-the-art models from recent years to medical images of different body areas or organs associated with a disease (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not familiar with deep learning consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we summarize the current state of generative models in medical imaging, including key features, current challenges, and future research paths.

{"title":"A survey on deep learning applied to medical images: from simple artificial neural networks to generative models.","authors":"P Celard, E L Iglesias, J M Sorribes-Fdez, R Romero, A Seara Vieira, L Borrajo","doi":"10.1007/s00521-022-07953-4","DOIUrl":"10.1007/s00521-022-07953-4","url":null,"abstract":"<p><p>Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies which use some of the latest state-of-the-art models from last years applied to medical images of different injured body areas or organs that have a disease associated with (e.g., brain tumor and COVID-19 lungs pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so more groups and authors that are not familiar with deep learning take into consideration its use in medicine works. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we feature a summary of the current state of generative models in medical image including key features, current challenges, and future research paths.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 3","pages":"2291-2323"},"PeriodicalIF":4.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9638354/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10539766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multiobjective problem modeling of the capacitated vehicle routing problem with urgency in a pandemic period.
IF 6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07921-y
Mehmet Altinoz, O Tolga Altinoz

This research is based on the capacitated vehicle routing problem with urgency, where each vertex corresponds to a medical facility with an urgency level and the traveling vehicle can become contaminated. This contamination is expressed as an infectiousness rate, which is defined for each vertex and each vehicle. At each visited vertex, the vehicle's rate increases. Therefore, time/total distance (it is desired to reach each vertex as quickly as possible) and the infectiousness rate are the main issues in the problem. This problem is solved with multiobjective optimization algorithms in this research. Two objectives are defined for the model, time and infectiousness, and the problem is solved using the following multiobjective optimization algorithms: the nondominated sorting genetic algorithm (NSGA-II), the grid-based evolutionary algorithm (GrEA), the hypervolume estimation algorithm (HypE), the strength Pareto evolutionary algorithm with shift-based density estimation (SPEA2-SDE), and a reference-point-based evolutionary algorithm.
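As a rough illustration (not the paper's exact formulation), the sketch below evaluates one candidate route on the two objectives, total travel time and accumulated infectiousness, assuming the vehicle's rate grows additively at each visited vertex; an algorithm such as NSGA-II would then compare many such routes by non-dominated sorting:

```python
# Two-objective scoring of a single route: travel time and accumulated
# infectiousness. The additive per-vertex increment rule is an assumption
# made for illustration only.
import numpy as np

def evaluate_route(route, travel_time, vertex_rate):
    """route: vertex indices starting/ending at the depot (index 0)."""
    total_time, vehicle_rate, exposure = 0.0, 0.0, 0.0
    for frm, to in zip(route[:-1], route[1:]):
        total_time += travel_time[frm][to]
        vehicle_rate += vertex_rate[to]        # contamination picked up at `to`
        exposure += vehicle_rate               # carried rate accumulates along the route
    return total_time, exposure                # the two objectives to minimise

travel_time = np.array([[0, 4, 6], [4, 0, 3], [6, 3, 0]])
vertex_rate = [0.0, 0.2, 0.5]                  # depot has no infectiousness
print(evaluate_route([0, 1, 2, 0], travel_time, vertex_rate))
```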

{"title":"Multiobjective problem modeling of the capacitated vehicle routing problem with urgency in a pandemic period.","authors":"Mehmet Altinoz,&nbsp;O Tolga Altinoz","doi":"10.1007/s00521-022-07921-y","DOIUrl":"https://doi.org/10.1007/s00521-022-07921-y","url":null,"abstract":"<p><p>This research is based on the capacitated vehicle routing problem with urgency where each vertex corresponds to a medical facility with a urgency level and the traveling vehicle could be contaminated. This contamination is defined as the infectiousness rate, which is defined for each vertex and each vehicle. At each visited vertex, this rate for the vehicle will be increased. Therefore time-total distance it is desired to react to vertex as fast as possible- and infectiousness rate are main issues in the problem. This problem is solved with multiobjective optimization algorithms in this research. As a multiobjective problem, two objectives are defined for this model: the time and the infectiousness, and will be solved using multiobjective optimization algorithms which are nondominated sorting genetic algorithm (NSGAII), grid-based evolutionary algorithm GrEA, hypervolume estimation algorithm HypE, strength Pareto evolutionary algorithm shift-based density estimation SPEA2-SDE, and reference points-based evolutionary algorithm.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 5","pages":"3865-3882"},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9568933/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10632381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
EOS-3D-DCNN: Ebola optimization search-based 3D-dense convolutional neural network for corn leaf disease prediction.
IF 6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | DOI: 10.1007/s00521-023-08289-3
C Ashwini, V Sellam

Corn disease prediction is an essential part of agricultural productivity. This paper presents a novel 3D-dense convolutional neural network (3D-DCNN), optimized using the Ebola optimization search (EOS) algorithm, to predict corn disease with higher prediction accuracy than conventional AI methods. Since dataset samples are generally insufficient, the paper uses some preliminary pre-processing approaches to enlarge the sample set and improve the samples for corn disease. The Ebola optimization search (EOS) technique is used to reduce the classification errors of the 3D-CNN approach. As a result, corn disease is predicted and classified accurately and more effectively. The accuracy of the proposed 3D-DCNN-EOS model is improved, and some necessary baseline tests are performed to demonstrate the efficacy of the proposed model. The simulation is performed in the MATLAB 2020a environment, and the outcomes show the advantage of the proposed model over other approaches. The feature representation of the input data is learned effectively to boost the model's performance. When the proposed method is compared to other existing techniques, it outperforms them in terms of precision, the area under the receiver operating characteristic curve (AUC), F1 score, Kappa statistic error (KSE), accuracy, root-mean-square error (RMSE), and recall.

{"title":"EOS-3D-DCNN: Ebola optimization search-based 3D-dense convolutional neural network for corn leaf disease prediction.","authors":"C Ashwini,&nbsp;V Sellam","doi":"10.1007/s00521-023-08289-3","DOIUrl":"https://doi.org/10.1007/s00521-023-08289-3","url":null,"abstract":"<p><p>Corn disease prediction is an essential part of agricultural productivity. This paper presents a novel 3D-dense convolutional neural network (3D-DCNN) optimized using the Ebola optimization search (EOS) algorithm to predict corn disease targeting the increased prediction accuracy than the conventional AI methods. Since the dataset samples are generally insufficient, the paper uses some preliminary pre-processing approaches to increase the sample set and improve the samples for corn disease. The Ebola optimization search (EOS) technique is used to reduce the classification errors of the 3D-CNN approach. As an outcome, the corn disease is predicted and classified accurately and more effectually. The accuracy of the proposed 3D-DCNN-EOS model is improved, and some necessary baseline tests are performed to project the efficacy of the anticipated model. The simulation is performed in the MATLAB 2020a environment, and the outcomes specify the significance of the proposed model over other approaches. The feature representation of the input data is learned effectually to trigger the model's performance. When the proposed method is compared to other existing techniques, it outperforms them in terms of precision, the area under receiver operating characteristics (AUC), f1 score, Kappa statistic error (KSE), accuracy, root mean square error value (RMSE), and recall.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 15","pages":"11125-11139"},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10043543/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9439692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Learning from pseudo-lesion: a self-supervised framework for COVID-19 diagnosis.
IF 6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | DOI: 10.1007/s00521-023-08259-9
Zhongliang Li, Xuechen Li, Zhihao Jin, Linlin Shen

Coronavirus disease 2019 (COVID-19) has rapidly spread all over the world since its first report in December 2019, and thoracic computed tomography (CT) has become one of the main tools for its diagnosis. In recent years, deep learning-based approaches have shown impressive performance in myriad image recognition tasks. However, they usually require a large amount of annotated data for training. Inspired by ground-glass opacity, a common finding in COVID-19 patients' CT scans, we propose in this paper a novel self-supervised pretraining method based on pseudo-lesion generation and restoration for COVID-19 diagnosis. We use Perlin noise, a gradient-noise-based mathematical model, to generate lesion-like patterns, which are then randomly pasted onto the lung regions of normal CT images to generate pseudo-COVID-19 images. The pairs of normal and pseudo-COVID-19 images are then used to train an encoder-decoder architecture-based U-Net for image restoration, which does not require any labeled data. The pretrained encoder is then fine-tuned using labeled data for the COVID-19 diagnosis task. Two public COVID-19 diagnosis datasets made up of CT images were employed for evaluation. Comprehensive experimental results demonstrate that the proposed self-supervised learning approach extracts better feature representations for COVID-19 diagnosis, and the accuracy of the proposed method outperforms the supervised model pretrained on large-scale images by 6.57% and 3.03% on the SARS-CoV-2 dataset and the Jinan COVID-19 dataset, respectively.
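A minimal sketch of the pseudo-lesion idea is given below: it pastes a smooth noise patch into a lung region of a normal slice to form a (corrupted, clean) restoration pair for pretraining. The value-noise pattern, lung mask, patch size, and blending weight are illustrative assumptions standing in for the Perlin-noise procedure described in the paper:

```python
# Paste a smooth noise patch (a value-noise stand-in for Perlin noise) into a
# lung region of a normal CT slice; (corrupted, original) pairs would then
# train a U-Net restoration model without labels.
import numpy as np
from scipy.ndimage import zoom

def smooth_noise(size, coarse=8, seed=0):
    rng = np.random.default_rng(seed)
    base = rng.random((coarse, coarse))
    return zoom(base, size / coarse, order=3)[:size, :size]  # upsample -> smooth pattern

def add_pseudo_lesion(ct_slice, lung_mask, patch=48, alpha=0.6, seed=0):
    """Return a corrupted copy of `ct_slice` with a GGO-like patch inside the lung."""
    rng = np.random.default_rng(seed)
    ys, xs = np.where(lung_mask)
    cy, cx = ys[rng.integers(len(ys))], xs[rng.integers(len(xs))]
    y0, x0 = max(cy - patch // 2, 0), max(cx - patch // 2, 0)
    noise = smooth_noise(patch, seed=seed)
    out = ct_slice.copy()
    region = out[y0:y0 + patch, x0:x0 + patch]
    out[y0:y0 + patch, x0:x0 + patch] = (
        (1 - alpha) * region + alpha * noise[:region.shape[0], :region.shape[1]]
    )
    return out

ct = np.random.rand(256, 256)                      # placeholder normal CT slice
mask = np.zeros_like(ct, dtype=bool)
mask[64:192, 48:208] = True                        # crude placeholder lung mask
corrupted = add_pseudo_lesion(ct, mask)            # (corrupted, ct) is one training pair
```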

{"title":"Learning from pseudo-lesion: a self-supervised framework for COVID-19 diagnosis.","authors":"Zhongliang Li,&nbsp;Xuechen Li,&nbsp;Zhihao Jin,&nbsp;Linlin Shen","doi":"10.1007/s00521-023-08259-9","DOIUrl":"https://doi.org/10.1007/s00521-023-08259-9","url":null,"abstract":"<p><p>The Coronavirus disease 2019 (COVID-19) has rapidly spread all over the world since its first report in December 2019, and thoracic computed tomography (CT) has become one of the main tools for its diagnosis. In recent years, deep learning-based approaches have shown impressive performance in myriad image recognition tasks. However, they usually require a large number of annotated data for training. Inspired by ground glass opacity, a common finding in COIVD-19 patient's CT scans, we proposed in this paper a novel self-supervised pretraining method based on pseudo-lesion generation and restoration for COVID-19 diagnosis. We used Perlin noise, a gradient noise based mathematical model, to generate lesion-like patterns, which were then randomly pasted to the lung regions of normal CT images to generate pseudo-COVID-19 images. The pairs of normal and pseudo-COVID-19 images were then used to train an encoder-decoder architecture-based U-Net for image restoration, which does not require any labeled data. The pretrained encoder was then fine-tuned using labeled data for COVID-19 diagnosis task. Two public COVID-19 diagnosis datasets made up of CT images were employed for evaluation. Comprehensive experimental results demonstrated that the proposed self-supervised learning approach could extract better feature representation for COVID-19 diagnosis, and the accuracy of the proposed method outperformed the supervised model pretrained on large-scale images by 6.57% and 3.03% on SARS-CoV-2 dataset and Jinan COVID-19 dataset, respectively.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 15","pages":"10717-10731"},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10038387/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9439693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interpretable tourism volume forecasting with multivariate time series under the impact of COVID-19.
IF 4.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-01-01 | Epub Date: 2022-11-04 | DOI: 10.1007/s00521-022-07967-y
Binrong Wu, Lin Wang, Rui Tao, Yu-Rong Zeng

This study proposes a novel interpretable framework to forecast the daily tourism volume of Jiuzhaigou Valley, Huangshan Mountain, and Siguniang Mountain in China under the impact of COVID-19 by using multivariate time-series data, particularly historical tourism volume data, COVID-19 data, the Baidu index, and weather data. For the first time, epidemic-related search engine data is introduced for tourism demand forecasting. A new method named the composition leading search index-variational mode decomposition is proposed to process search engine data. Meanwhile, to overcome the problem of insufficient interpretability of existing tourism demand forecasting, a new model of DE-TFT interpretable tourism demand forecasting is proposed in this study, in which the hyperparameters of temporal fusion transformers (TFT) are optimized intelligently and efficiently based on the differential evolution algorithm. TFT is an attention-based deep learning model that combines high-performance forecasting with interpretable analysis of temporal dynamics, displaying excellent performance in forecasting research. The TFT model produces an interpretable tourism demand forecast output, including the importance ranking of different input variables and attention analysis at different time steps. Besides, the validity of the proposed forecasting framework is verified based on three cases. Interpretable experimental results show that the epidemic-related search engine data can well reflect the concerns of tourists about tourism during the COVID-19 epidemic.
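As an illustration of the differential-evolution hyperparameter search mentioned above, the sketch below uses SciPy's differential_evolution to tune three hypothetical TFT hyperparameters; validation_loss is a placeholder standing in for training the forecasting model and returning its validation error, not the paper's actual objective:

```python
# Differential-evolution hyperparameter search with a placeholder objective.
# In practice the objective would train a TFT with the candidate values and
# return its validation loss; here a smooth synthetic function stands in.
import numpy as np
from scipy.optimize import differential_evolution

def validation_loss(params):
    hidden_size, dropout, learning_rate = params
    return ((hidden_size - 64) ** 2 / 1e4
            + (dropout - 0.1) ** 2
            + (np.log10(learning_rate) + 3) ** 2)

bounds = [(16, 256),        # hidden_size (hypothetical search range)
          (0.0, 0.5),       # dropout
          (1e-4, 1e-2)]     # learning_rate
result = differential_evolution(validation_loss, bounds, maxiter=20, seed=0)
print(result.x, result.fun)  # best hyperparameters and their (placeholder) loss
```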

{"title":"Interpretable tourism volume forecasting with multivariate time series under the impact of COVID-19.","authors":"Binrong Wu, Lin Wang, Rui Tao, Yu-Rong Zeng","doi":"10.1007/s00521-022-07967-y","DOIUrl":"10.1007/s00521-022-07967-y","url":null,"abstract":"<p><p>This study proposes a novel interpretable framework to forecast the daily tourism volume of Jiuzhaigou Valley, Huangshan Mountain, and Siguniang Mountain in China under the impact of COVID-19 by using multivariate time-series data, particularly historical tourism volume data, COVID-19 data, the Baidu index, and weather data. For the first time, epidemic-related search engine data is introduced for tourism demand forecasting. A new method named the composition leading search index-variational mode decomposition is proposed to process search engine data. Meanwhile, to overcome the problem of insufficient interpretability of existing tourism demand forecasting, a new model of DE-TFT interpretable tourism demand forecasting is proposed in this study, in which the hyperparameters of temporal fusion transformers (TFT) are optimized intelligently and efficiently based on the differential evolution algorithm. TFT is an attention-based deep learning model that combines high-performance forecasting with interpretable analysis of temporal dynamics, displaying excellent performance in forecasting research. The TFT model produces an interpretable tourism demand forecast output, including the importance ranking of different input variables and attention analysis at different time steps. Besides, the validity of the proposed forecasting framework is verified based on three cases. Interpretable experimental results show that the epidemic-related search engine data can well reflect the concerns of tourists about tourism during the COVID-19 epidemic.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 7","pages":"5437-5463"},"PeriodicalIF":4.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9638251/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10700857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0