Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07889-9
R Rathipriya, Abdul Aziz Abdul Rahman, S Dhamodharavadhani, Abdelrhman Meero, G Yoganandan
Demand forecasting is a scientific and methodical assessment of future demand for a critical product. An effective Demand Forecast Model (DFM) enables pharmaceutical companies to be successful in the global market. The purpose of this research paper is to validate various shallow and deep neural network methods for demand forecasting, with the aim of recommending sales and marketing strategies based on the trend/seasonal effects of eight groups of pharmaceutical products with different characteristics. The root mean squared error (RMSE) is used to measure the predictive accuracy of the DFMs. The study found that the mean RMSE of the shallow neural network-based DFMs was 6.27 across all drug categories, lower than that of the deep neural network models. According to the findings, DFMs based on shallow neural networks can effectively estimate future demand for pharmaceutical products.
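The RMSE metric used to score the DFMs can be sketched in a few lines; the demand numbers below are purely illustrative, not data from the paper.

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean squared error between observed and forecast demand."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# hypothetical monthly demand for one drug category vs. a model's forecast
observed = [120, 135, 128, 140]
forecast = [118, 130, 131, 137]
error = rmse(observed, forecast)
```

A lower RMSE on held-out periods indicates a better DFM; a per-category figure like the paper's 6.27 mean would be computed this way for each drug group and then averaged.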
Title: Demand forecasting model for time-series pharmaceutical data using shallow and deep neural network model. Journal: Neural Computing & Applications. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9540101/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07938-3
Deepika Varshney, Dinesh Kumar Vishwakarma
The spread of misleading information on social web platforms has fuelled panic and confusion among the public regarding the coronavirus disease, so detecting such information is of paramount importance. To assess the credibility of a posted claim, we analyze possible evidence from the news articles in the Google search results. This paper proposes an intelligent and expert strategy to gather important clues from the top 10 Google search results related to the claim. N-gram, Levenshtein distance, and word-similarity-based features are used to identify clues in the news articles, which can automatically warn users against spreading false news if no significant supporting clues are found for the claim. The complete process consists of four steps. In the first step, we build a query from the posted claim, received as text or text-additive images. This query feeds the second, search step, where the top 10 Google results are processed. In the third step, important clues are extracted from the titles of the top 10 news articles. Lastly, useful pieces of evidence are extracted from the content of each news article. All the clues with respect to N-grams, Levenshtein distance, and word similarity are finally fed into a machine learning model for classification and performance evaluation. The proposed strategy gives promising experimental results and is quite effective in predicting misleading information. The work provides practical implications for policymakers and health practitioners that could be useful in protecting the world from the proliferation of misleading information during this pandemic.
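Two of the clue features named above, Levenshtein distance and word similarity, are standard and can be sketched as follows; `word_overlap` is a hypothetical stand-in for the paper's exact similarity measure.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def word_overlap(claim: str, title: str) -> float:
    """Fraction of claim words that also appear in a news-article title."""
    cw, tw = set(claim.lower().split()), set(title.lower().split())
    return len(cw & tw) / len(cw) if cw else 0.0
```

A claim whose top search-result titles show low edit distance and high word overlap has supporting clues; one with no such matches would trigger the warning described above.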
Title: Framework for detection of probable clues to predict misleading information proliferated during COVID-19 outbreak. Journal: Neural Computing & Applications. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9660173/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07847-5
Carmen Vidaurre, Vadim V Nikulin, Maria Herrojo Ruiz
Anxiety affects approximately 5-10% of the adult population worldwide, placing a large burden on health systems. Despite its omnipresence and impact on mental and physical health, most individuals affected by anxiety do not receive appropriate treatment. Current research in psychiatry emphasizes the need to identify and validate biological markers relevant to this condition. Neurophysiological preclinical studies are a prominent approach to determining brain rhythms that can serve as reliable markers of key features of anxiety. However, while neuroimaging research has consistently implicated the prefrontal cortex and subcortical structures, such as the amygdala and hippocampus, in anxiety, there is still a lack of consensus on the underlying neurophysiological processes contributing to this condition. Methods allowing non-invasive recording and assessment of cortical processing may help identify anxiety signatures that could be used as intervention targets. In this study, we apply Source-Power Comodulation (SPoC) to electroencephalography (EEG) recordings in a sample of participants with different levels of trait anxiety. SPoC finds spatial filters and patterns whose power comodulates with an external variable in individual participants; the obtained patterns can be interpreted neurophysiologically. Here, we extend SPoC to a multi-subject setting and test its validity on simulated data with a realistic head model. Next, we apply our SPoC framework to resting-state EEG of 43 human participants for whom trait anxiety scores were available. SPoC inter-subject analysis of narrow-band data reveals neurophysiologically meaningful spatial patterns in the theta band (4-7 Hz) that are negatively correlated with anxiety. The outcome is specific to the theta band and is not observed in the alpha (8-12 Hz) or beta (13-30 Hz) frequency range.
The theta-band spatial pattern is primarily localised to the superior frontal gyrus. We discuss the relevance of our spatial pattern results for the search of biomarkers for anxiety and their application in neurofeedback studies.
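Band-limited power of the kind SPoC operates on can be illustrated with a simple FFT-based estimate; the signal below is synthetic, and SPoC itself additionally learns spatial filters across EEG channels rather than scoring a single trace.

```python
import numpy as np

def band_power(eeg, fs, lo, hi):
    """Mean spectral power of a 1-D signal in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# hypothetical 2-s recording at 250 Hz dominated by a 6 Hz (theta) rhythm
fs = 250
t = np.arange(0, 2, 1.0 / fs)
sig = np.sin(2 * np.pi * 6 * t)
theta = band_power(sig, fs, 4, 7)    # strong
alpha = band_power(sig, fs, 8, 12)   # near zero
```

The study's theta-specific result corresponds to this 4-7 Hz band power, after spatial filtering, correlating negatively with trait anxiety scores.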
Title: Identification of spatial patterns with maximum association between power of resting state neural oscillations and trait anxiety. Journal: Neural Computing & Applications. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9525925/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-023-08276-8
Sanghyub John Lee, JongYoon Lim, Leo Paas, Ho Seok Ahn
Tactics to determine the emotions of the authors of texts such as Twitter messages often rely on multiple annotators who label relatively small data sets of text passages. An alternative method gathers large text databases that contain the authors' self-reported emotions, to which artificial intelligence, machine learning, and natural language processing tools can be applied. Both approaches have strengths and weaknesses. Emotions evaluated by a few human annotators are susceptible to idiosyncratic biases that reflect the characteristics of the annotators, but models based on large, self-reported emotion data sets may overlook subtle, social emotions that human annotators can recognize. Seeking to train emotion detection models that achieve good performance in different contexts, the current study proposes a novel transformer transfer learning approach that parallels human development stages: (1) detect emotions reported by the texts' authors and (2) synchronize the model with social emotions identified in annotator-rated emotion data sets. The analysis, based on a large, novel, self-reported emotion data set (n = 3,654,544) and applied to 10 previously published data sets, shows that the transfer learning emotion model achieves relatively strong performance.
Title: Transformer transfer learning emotion detection model: synchronizing socially agreed and self-reported emotions in big data. Journal: Neural Computing & Applications. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9879253/pdf/
The detection and localization of image splicing forgery is a challenging task in the field of image forensics: the goal is to determine whether an image contains a suspicious tampered region pasted from another image. In this paper, we propose a new image tamper localization method based on a dual-channel U-Net, called DCU-Net. The detection framework based on DCU-Net is divided into three parts: encoder, feature fusion, and decoder. First, high-pass filters are used to extract the residual of the tampered image and generate a residual image, which contains the edge information of the tampered area. Second, a dual-channel encoding network model is constructed, whose inputs are the original tampered image and the tampered residual image. The deep features extracted by the dual-channel encoding network are fused once; tampered features of different granularity are then extracted by dilated convolution and fused a second time. Finally, the fused feature map is fed into the decoder, which decodes the predicted image layer by layer. Experimental results on the CASIA 2.0 and Columbia datasets show that DCU-Net outperforms the latest algorithms and can accurately locate tampered areas. In addition, attack experiments show that the DCU-Net model is robust and can resist noise and JPEG recompression attacks.
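The first stage, extracting a high-pass residual that preserves tampering edges while suppressing smooth content, can be sketched with a single zero-sum kernel; the actual DCU-Net may use a different bank of filters, so treat this as illustrative.

```python
import numpy as np

# a common SRM-style high-pass kernel; zero-sum, so flat regions map to zero
KERNEL = np.array([[-1,  2, -1],
                   [ 2, -4,  2],
                   [-1,  2, -1]], dtype=float)

def residual_image(gray):
    """High-pass residual (valid region) of a 2-D grayscale image."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * KERNEL)
    return out
```

On an untouched smooth area the residual is near zero; around a pasted patch the boundary produces strong responses, which is the edge information the second encoder channel consumes.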
Title: DCU-Net: a dual-channel U-shaped network for image splicing forgery detection. Authors: Hongwei Ding, Leiyang Chen, Qi Tao, Zhongwang Fu, Liang Dong, Xiaohui Cui. DOI: 10.1007/s00521-021-06329-4. Journal: Neural Computing & Applications.
Pub Date: 2023-01-01 | Epub Date: 2022-10-12 | DOI: 10.1007/s00521-022-07867-1
Di Yuan, Xiu Shu, Qiao Liu, Xinming Zhang, Zhenyu He
When dealing with complex thermal infrared (TIR) tracking scenarios, a single category of feature is not sufficient to portray the appearance of the target, which drastically affects the accuracy of TIR target tracking methods. To address this problem, we propose an adaptive multi-feature fusion model (AMFT) for the TIR tracking task. Specifically, our AMFT tracking method adaptively integrates hand-crafted features and deep convolutional neural network (CNN) features, taking advantage of their complementarity to accurately locate the target position. Additionally, the model is updated with a simple but effective update strategy to adapt to changes in the target during tracking. Ablation studies show that the adaptive multi-feature fusion model in our AMFT tracking method is highly effective. Our AMFT tracker performs favorably against state-of-the-art trackers on the PTB-TIR and LSOTB-TIR benchmarks.
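The core fusion idea, combining response maps from hand-crafted and CNN features with adaptive weights and reading the target position off the fused peak, can be sketched as follows; the weighting rule here is illustrative, not the paper's exact scheme.

```python
import numpy as np

def fuse_responses(responses, weights):
    """Weighted sum of per-feature response maps; the argmax of the fused
    map is the predicted target position (row, col)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise the fusion weights
    fused = sum(wi * r for wi, r in zip(w, responses))
    return np.unravel_index(np.argmax(fused), fused.shape)
```

If the hand-crafted channel is currently more reliable (e.g. under CNN feature drift), its larger weight pulls the fused peak toward its response, which is the complementarity the abstract describes.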
Title: Robust thermal infrared tracking via an adaptively multi-feature fusion model. Journal: Neural Computing & Applications. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9553631/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-023-08450-y
M Emin Sahin, Hasan Ulutas, Esra Yuce, Mustafa Fatih Erkoc
The coronavirus (COVID-19) pandemic has had a devastating impact on people's daily lives and healthcare systems. The rapid spread of the virus should be stopped by early detection of infected patients through efficient screening. Artificial intelligence techniques are used for accurate disease detection in computed tomography (CT) images. This article aims to develop a process that can accurately diagnose COVID-19 using deep learning techniques on CT images. The presented method begins with the creation of an original dataset of 4000 CT images collected from Yozgat Bozok University. The faster R-CNN and mask R-CNN methods are trained and tested on this dataset to categorize patients with COVID-19 and pneumonia infections. The results are compared using a VGG-16 backbone for the faster R-CNN model and ResNet-50 and ResNet-101 backbones for the mask R-CNN model. The faster R-CNN model achieves an accuracy of 93.86%, with a region-of-interest (ROI) classification loss of 0.061 per ROI. At the end of training, the mask R-CNN model reaches mean average precision (mAP) values of 97.72% and 95.65% for ResNet-50 and ResNet-101, respectively. Results for five folds are obtained by applying cross-validation to the methods used. With training, our model performs better than the standard baselines and can help with automated quantification of COVID-19 severity in CT images.
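Detection models such as faster R-CNN and mask R-CNN are conventionally scored by matching predicted ROIs to ground-truth boxes via intersection-over-union (IoU), the building block behind mAP figures like those quoted above; a minimal IoU helper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A prediction typically counts as correct when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice); precision-recall over such matches yields average precision per class, and mAP is the mean across classes.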
Title: Detection and classification of COVID-19 by using faster R-CNN and mask R-CNN on CT images. Journal: Neural Computing & Applications. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10014413/pdf/
Rapid industrial development has brought about air pollution, which seriously affects human health, and PM2.5 concentration is one of its main drivers. To accurately predict PM2.5 concentrations, we propose a dendritic neuron model (DNM) trained by an improved state-of-matter heuristic algorithm (DSMS) based on STL-LOESS, namely DS-DNM. First, DS-DNM adopts STL-LOESS to preprocess the data, decomposing the original series into three components: seasonal, trend, and residual. Then, the DNM trained by DSMS predicts the residual values. Finally, the three components are summed to obtain the predicted values. In the performance tests, five real-world PM2.5 concentration data sets are used to evaluate DS-DNM. Four training algorithms and seven prediction models are selected for comparison to verify, respectively, the soundness of the training algorithm and the accuracy of the prediction model.
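The decompose-predict-recombine pipeline can be sketched with a simplified stand-in for STL-LOESS (a centered moving average for the trend and per-phase means for the seasonal part); the paper's actual decomposition and the DSMS-trained DNM residual predictor are more sophisticated.

```python
import numpy as np

def decompose(series, period):
    """Simplified STL-style split into trend, seasonal, residual components.
    A moving average + seasonal means is a rough stand-in for STL-LOESS."""
    series = np.asarray(series, dtype=float)
    # trend: centered moving average over one full period
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    # seasonal: average of detrended values at each phase of the cycle
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(phase_means, len(series) // period + 1)[: len(series)]
    # residual: whatever trend and seasonality do not explain
    resid = series - trend - seasonal
    return trend, seasonal, resid
```

In DS-DNM only the residual component goes to the neural model; the final forecast is trend + seasonal + predicted residual, so the decomposition must reconstruct the series exactly, as the residual definition above guarantees.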
The experimental results show that DS-DNM delivers highly competitive performance on the PM2.5 concentration prediction problem.
Title: Prediction of PM2.5 time series by seasonal trend decomposition-based dendritic neuron model. Authors: Zijing Yuan, Shangce Gao, Yirui Wang, Jiayi Li, Chunzhi Hou, Lijun Guo. DOI: 10.1007/s00521-023-08513-0. Journal: Neural Computing & Applications. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10107594/pdf/
Pub Date: 2023-01-01 | Epub Date: 2023-04-01 | DOI: 10.1007/s00521-023-08498-w
Yasmeen ELsayed, Ashraf ELSayed, Mohamed A Abdou
Automatic facial expression recognition (AFER), sometimes referred to as emotion recognition, is important for socializing. In the past two years, automatic methods have faced challenges due to COVID-19 and the widespread wearing of face masks. Machine learning techniques have tremendously increased the amount of data processed and have achieved good results in AFER; however, those techniques were not designed for masked faces and thus achieve poor recognition on them. This paper introduces a hybrid convolutional neural network aided by a local binary pattern to extract features accurately, especially for masked faces. Seven basic emotions are to be recognized: anger, happiness, sadness, surprise, contempt, disgust, and fear. The proposed method is applied to two datasets: the first comprises CK and CK+, while the second is M-LFW-FER. Results show that emotion recognition with a face mask achieved an accuracy of 70.76% on three emotions. Results are compared to existing techniques and show significant improvement.
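The local binary pattern used alongside the CNN encodes each pixel's neighbourhood texture as an 8-bit code; a minimal per-patch version of the basic operator (the paper's variant and parameters may differ):

```python
import numpy as np

def lbp_pixel(patch):
    """Basic 8-neighbour local binary pattern code for a 3x3 patch."""
    center = patch[1, 1]
    # neighbours taken clockwise from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # each neighbour >= center contributes one bit of the 8-bit code
    return sum(int(v >= center) << k for k, v in enumerate(neighbours))
```

A histogram of these codes over facial regions gives an illumination-robust texture descriptor, which is what the hybrid network fuses with CNN features, useful when the mask hides much of the face.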
{"title":"An automatic improved facial expression recognition for masked faces.","authors":"Yasmeen ELsayed, Ashraf ELSayed, Mohamed A Abdou","doi":"10.1007/s00521-023-08498-w","DOIUrl":"10.1007/s00521-023-08498-w","url":null,"abstract":"<p><p>Automatic facial expression recognition (AFER), sometimes referred to as emotion recognition, is important for socialization. Over the past two years, automatic methods have faced new challenges due to Covid-19 and mandatory mask wearing. Machine learning techniques have tremendously increased the amount of data that can be processed and have achieved good emotion-detection results in AFER; however, these techniques were not designed for masked faces and therefore achieve poor recognition on them. This paper introduces a hybrid convolutional neural network aided by a local binary pattern to extract features accurately, especially from masked faces. The seven basic emotions (anger, happiness, sadness, surprise, contempt, disgust, and fear) are to be recognized. The proposed method is applied to two datasets: the first comprises CK and CK+, while the second is M-LFW-FER. The results show that emotion recognition with a face mask achieves an accuracy of 70.76% on three emotions. Results are compared to existing techniques and show significant improvement.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":null,"pages":null},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10067009/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9576951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
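The local binary pattern (LBP) features mentioned in the abstract encode each pixel by comparing it with its eight neighbours. A minimal pure-NumPy sketch of the basic 3x3 operator is shown below; this is an illustration of the general technique, not the paper's implementation, which pairs LBP features with a convolutional network.

```python
import numpy as np

def lbp(image):
    """Basic 3x3 local binary pattern: each interior pixel becomes an
    8-bit code built from comparisons with its eight neighbours."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = image.shape
    centre = image[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Set the bit wherever the neighbour is at least as bright as the centre.
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << np.uint8(bit)
    return codes

# Toy 3x3 patch: every neighbour is brighter than the centre pixel,
# so all eight bits are set and the single output code is 255.
patch = np.array([[10, 10, 10],
                  [10,  5, 10],
                  [10, 10, 10]], dtype=np.uint8)
codes = lbp(patch)  # array([[255]], dtype=uint8)
```

Because the codes depend only on local intensity ordering, LBP features are robust to monotonic illumination changes, which is one reason they remain useful for partially occluded (e.g. masked) faces.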
Pub Date : 2023-01-01 | DOI : 10.1007/s00521-023-08200-0
Mughees Ahmad, Usama Ijaz Bajwa, Yasar Mehmood, Muhammad Waqas Anwar
COVID-19 emerged in the Chinese city of Wuhan in December 2019, and by January 2022 this deadly virus had infected 324 million people worldwide and caused 5.53 million deaths. Because of the rapid spread of the pandemic, many countries faced shortages of resources such as medical test kits and ventilators as case numbers rose uncontrollably. Developing a readily available, low-cost, automated approach to COVID-19 identification is therefore the need of the hour. The proposed study uses chest radiography images (CRIs), such as X-rays and computed tomography (CT) scans, to detect chest infections, as these modalities contain important information about them. This research introduces a novel hybrid deep learning model named Lightweight ResGRU that uses residual blocks and a bidirectional gated recurrent unit to diagnose non-COVID and COVID-19 infections from pre-processed CRIs. Lightweight ResGRU is used for multi-modal two-class classification (normal and COVID-19), three-class classification (normal, COVID-19, and viral pneumonia), four-class classification (normal, COVID-19, viral pneumonia, and bacterial pneumonia), and COVID-19 severity classification (atypical appearance, indeterminate appearance, typical appearance, and negative for pneumonia). The proposed architecture achieved F-measures of 99.0%, 98.4%, 91.0%, and 80.5% for the two-class, three-class, four-class, and severity-level classifications, respectively, on unseen data. A large dataset was created by combining and modifying several publicly available datasets. The results show that radiologists can adopt this method to screen chest infections where test kits are limited.
{"title":"Lightweight ResGRU: a deep learning-based prediction of SARS-CoV-2 (COVID-19) and its severity classification using multimodal chest radiography images.","authors":"Mughees Ahmad, Usama Ijaz Bajwa, Yasar Mehmood, Muhammad Waqas Anwar","doi":"10.1007/s00521-023-08200-0","DOIUrl":"https://doi.org/10.1007/s00521-023-08200-0","url":null,"abstract":"<p><p>COVID-19 emerged in the Chinese city of Wuhan in December 2019, and by January 2022 this deadly virus had infected 324 million people worldwide and caused 5.53 million deaths. Because of the rapid spread of the pandemic, many countries faced shortages of resources such as medical test kits and ventilators as case numbers rose uncontrollably. Developing a readily available, low-cost, automated approach to COVID-19 identification is therefore the need of the hour. The proposed study uses chest radiography images (CRIs), such as X-rays and computed tomography (CT) scans, to detect chest infections, as these modalities contain important information about them. This research introduces a novel hybrid deep learning model named <i>Lightweight ResGRU</i> that uses residual blocks and a bidirectional gated recurrent unit to diagnose non-COVID and COVID-19 infections from pre-processed CRIs. <i>Lightweight ResGRU</i> is used for multi-modal two-class classification (normal and COVID-19), three-class classification (normal, COVID-19, and viral pneumonia), four-class classification (normal, COVID-19, viral pneumonia, and bacterial pneumonia), and COVID-19 severity classification (atypical appearance, indeterminate appearance, typical appearance, and negative for pneumonia). The proposed architecture achieved F-measures of 99.0%, 98.4%, 91.0%, and 80.5% for the two-class, three-class, four-class, and severity-level classifications, respectively, on unseen data. A large dataset was created by combining and modifying several publicly available datasets. The results show that radiologists can adopt this method to screen chest infections where test kits are limited.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":null,"pages":null},"PeriodicalIF":6.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9873217/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9330135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
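The F-measure reported for these classification tasks is the harmonic mean of precision and recall. A small self-contained sketch of a macro-averaged multi-class version is below; the two-class labels used in the example are hypothetical, not taken from the paper's dataset.

```python
import numpy as np

def f_measure(y_true, y_pred, n_classes):
    """Macro-averaged F-measure: per-class harmonic mean of precision
    and recall, averaged over all classes."""
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return float(np.mean(scores))

# Hypothetical two-class labels (0 = normal, 1 = COVID-19).
y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
score = f_measure(y_true, y_pred, n_classes=2)  # 2/3 for both classes
```

Macro averaging weights every class equally, which matters for imbalanced medical datasets where the positive class is rare; a plain accuracy score can look high even when the minority class is missed entirely.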