A new Apache Spark-based framework for big data streaming forecasting in IoT networks.
Pub Date: 2023-01-01. DOI: 10.1007/s11227-023-05100-x
Antonio M Fernández-Gómez, David Gutiérrez-Avilés, Alicia Troncoso, Francisco Martínez-Álvarez
Analyzing time-dependent data acquired in a continuous flow is a major challenge for various fields, such as big data and machine learning. Being able to analyze a large volume of data from various sources, such as sensors, networks, and the internet, is essential for improving the efficiency of our society's production processes. Additionally, this vast amount of data is collected dynamically in a continuous stream. The goal of this research is to provide a comprehensive framework for forecasting big data streams from Internet of Things networks and to serve as a guide for designing and deploying other third-party solutions. Hence, a new framework for time series forecasting in a big data streaming scenario, using data collected from Internet of Things networks, is presented. This framework comprises five main modules: Internet of Things network design and deployment, big data streaming architecture, stream data modeling method, big data forecasting method, and a comprehensive real-world application scenario consisting of a physical Internet of Things network feeding the big data streaming architecture, with linear regression used as the forecasting algorithm for illustrative purposes. Comparison with other frameworks reveals that this is the first framework that incorporates and integrates all the aforementioned modules.
{"title":"A new Apache Spark-based framework for big data streaming forecasting in IoT networks.","authors":"Antonio M Fernández-Gómez, David Gutiérrez-Avilés, Alicia Troncoso, Francisco Martínez-Álvarez","doi":"10.1007/s11227-023-05100-x","DOIUrl":"https://doi.org/10.1007/s11227-023-05100-x","url":null,"abstract":"<p><p>Analyzing time-dependent data acquired in a continuous flow is a major challenge for various fields, such as big data and machine learning. Being able to analyze a large volume of data from various sources, such as sensors, networks, and the internet, is essential for improving the efficiency of our society's production processes. Additionally, this vast amount of data is collected dynamically in a continuous stream. The goal of this research is to provide a comprehensive framework for forecasting big data streams from Internet of Things networks and serve as a guide for designing and deploying other third-party solutions. Hence, a new framework for time series forecasting in a big data streaming scenario, using data collected from Internet of Things networks, is presented. This framework comprises of five main modules: Internet of Things network design and deployment, big data streaming architecture, stream data modeling method, big data forecasting method, and a comprehensive real-world application scenario, consisting of a physical Internet of Things network feeding the big data streaming architecture, being the linear regression the algorithm used for illustrative purposes. Comparison with other frameworks reveals that this is the first framework that incorporates and integrates all the aforementioned modules.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 10","pages":"11078-11100"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9942040/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9502933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anti-aliasing convolution neural network of finger vein recognition for virtual reality (VR) human-robot equipment of metaverse.
Pub Date: 2023-01-01. Epub Date: 2022-08-22. DOI: 10.1007/s11227-022-04680-4
Nghi C Tran, Jian-Hong Wang, Toan H Vu, Tzu-Chiang Tai, Jia-Ching Wang
Metaverse, which is anticipated to be the future of the internet, is a 3D virtual world in which users interact via highly customizable computer avatars. It is considerably promising for several industries, including gaming, education, and business. However, it still has drawbacks, particularly regarding privacy and identity threats. When a person joins the metaverse via virtual reality (VR) human-robot equipment, their avatar, digital assets, and private information may be compromised by cybercriminals. This paper introduces a specific finger vein recognition approach for the VR human-robot equipment of the Metaverse to prevent others from misappropriating it. The finger vein is a biometric feature hidden beneath our skin. It is considerably more secure for person verification than other hand-based biometric characteristics such as fingerprint and palm print since it is difficult to imitate. Most conventional finger vein recognition systems that use hand-crafted features are ineffective, especially for images with low quality, low contrast, scale variation, translation, and rotation. Deep learning methods have been demonstrated to be more successful than traditional methods in computer vision. This paper develops a finger vein recognition system based on a convolutional neural network and an anti-aliasing technique. We employ a contrast image enhancement algorithm in the preprocessing step to improve the performance of the system. The proposed approach is evaluated on three publicly available finger vein datasets. Experimental results show that our proposed method outperforms the current state-of-the-art methods, achieving 97.66% accuracy on the FVUSM dataset, 99.94% accuracy on the SDUMLA dataset, and 88.19% accuracy on the THUFV2 dataset.
{"title":"Anti-aliasing convolution neural network of finger vein recognition for virtual reality (VR) human-robot equipment of metaverse.","authors":"Nghi C Tran, Jian-Hong Wang, Toan H Vu, Tzu-Chiang Tai, Jia-Ching Wang","doi":"10.1007/s11227-022-04680-4","DOIUrl":"10.1007/s11227-022-04680-4","url":null,"abstract":"<p><p>Metaverse, which is anticipated to be the future of the internet, is a 3D virtual world in which users interact via highly customizable computer avatars. It is considerably promising for several industries, including gaming, education, and business. However, it still has drawbacks, particularly in the privacy and identity threads. When a person joins the metaverse via a virtual reality (VR) human-robot equipment, their avatar, digital assets, and private information may be compromised by cybercriminals. This paper introduces a specific Finger Vein Recognition approach for the virtual reality (VR) human-robot equipment of the metaverse of the Metaverse to prevent others from misappropriating it. Finger vein is a is a biometric feature hidden beneath our skin. It is considerably more secure in person verification than other hand-based biometric characteristics such as finger print and palm print since it is difficult to imitate. Most conventional finger vein recognition systems that use hand-crafted features are ineffective, especially for images with low quality, low contrast, scale variation, translation, and rotation. Deep learning methods have been demonstrated to be more successful than traditional methods in computer vision. This paper develops a finger vein recognition system based on a convolution neural network and anti-aliasing technique. We employ/ utilize a contrast image enhancement algorithm in the preprocessing step to improve performance of the system. The proposed approach is evaluated on three publicly available finger vein datasets. Experimental results show that our proposed method outperforms the current state-of-the-art methods, improvement of 97.66% accuracy on FVUSM dataset, 99.94% accuracy on SDUMLA dataset, and 88.19% accuracy on THUFV2 dataset.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 3","pages":"2767-2782"},"PeriodicalIF":2.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9395830/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9093328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid optimization and ontology-based semantic model for efficient text-based information retrieval.
Pub Date: 2023-01-01. DOI: 10.1007/s11227-022-04708-9
Ram Kumar, S C Sharma
Query expansion is an important approach used to improve the efficiency of data retrieval tasks. Numerous works have been carried out by researchers to generate fairly constructive results; however, they do not provide acceptable results for all kinds of queries, particularly phrase and individual queries. The use of identical data sources and weighting strategies for expanding such terms is the major cause of this issue, which leaves the model unable to capture the comprehensive relationship between the query terms. To tackle this issue, we developed a novel query expansion approach that analyzes different data sources, namely WordNet, Wikipedia, and the Text REtrieval Conference. This paper presents an Improved Aquila Optimization-based COOT (IAOCOOT) algorithm for query expansion, which retrieves the semantic aspects that match the query term. The semantic heterogeneity associated with document retrieval mainly impacts the relevance matching between the query and the document. The main cause of this issue is that the similarity among words is not evaluated correctly. To overcome this problem, we use a Modified Needleman-Wunsch algorithm to deal with the problems of uncertainty, imprecision in the information retrieval process, and semantic ambiguity of indexed terms from both local and global perspectives. The k most similar words are determined and returned from a candidate set through a top-k word selection technique, which is widely used in different tasks. The proposed IAOCOOT model is evaluated using standard Information Retrieval performance metrics and compared with other state-of-the-art techniques to validate the proposed work.
{"title":"Hybrid optimization and ontology-based semantic model for efficient text-based information retrieval.","authors":"Ram Kumar, S C Sharma","doi":"10.1007/s11227-022-04708-9","DOIUrl":"https://doi.org/10.1007/s11227-022-04708-9","url":null,"abstract":"<p><p>Query expansion is an important approach utilized to improve the efficiency of data retrieval tasks. Numerous works are carried out by the researchers to generate fair constructive results; however, they do not provide acceptable results for all kinds of queries particularly phrase and individual queries. The utilization of identical data sources and weighting strategies for expanding such terms are the major cause of this issue which leads the model unable to capture the comprehensive relationship between the query terms. In order to tackle this issue, we developed a novel approach for query expansion technique to analyze the different data sources namely WordNet, Wikipedia, and Text REtrieval Conference. This paper presents an Improved Aquila Optimization-based COOT(IAOCOOT) algorithm for query expansion which retrieves the semantic aspects that match the query term. The semantic heterogeneity associated with document retrieval mainly impacts the relevance matching between the query and the document. The main cause of this issue is that the similarity among the words is not evaluated correctly. To overcome this problem, we are using a Modified Needleman Wunsch algorithm algorithm to deal with the problems of uncertainty, imprecision in the information retrieval process, and semantic ambiguity of indexed terms in both the local and global perspectives. The k most similar word is determined and returned from a candidate set through the top-k words selection technique and it is widely utilized in different tasks. The proposed IAOCOOT model is evaluated using different standard Information Retrieval performance metrics to compute the validity of the proposed work by comparing it with other state-of-art techniques.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 2","pages":"2251-2280"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9364863/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10582958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoT-fog-based healthcare 4.0 system using blockchain technology.
Pub Date: 2023-01-01. DOI: 10.1007/s11227-022-04788-7
Israr Ahmad, Saima Abdullah, Adeel Ahmed
Real-time tracking and surveillance of patients' health have become ubiquitous in the healthcare sector as a result of the development of fog, cloud computing, and Internet of Things (IoT) technologies. Medical IoT (MIoT) equipment often transfers health data to a pharmaceutical data center, where it is saved, evaluated, and made available to relevant stakeholders or users. Fog layers have been used to increase the scalability and flexibility of IoT-based healthcare services by providing quick response times and low latency. Our proposed solution focuses on an electronic healthcare system that manages both critical and non-critical patients simultaneously. The fog layer is divided into two parts: a critical fog cluster and a non-critical fog cluster. Critical patients are handled at the critical fog cluster for quick response, while non-critical patients are handled using blockchain technology at the non-critical fog cluster, which protects the privacy of patient health records. The suggested solution requires little modification to the current IoT ecosystem while decreasing the response time for critical messages and offloading the cloud infrastructure. Reduced storage requirements for cloud data centers benefit users in addition to saving money on construction and operating expenses. In addition, we evaluated the proposed work in terms of recall, accuracy, precision, and F-score. The results show that the suggested approach succeeds in protecting privacy while retaining standard network settings. Moreover, the suggested system and a benchmark are evaluated in terms of system response time, drop rate, throughput, and fog and cloud utilization. The results clearly indicate that the proposed system performs better than the benchmark.
{"title":"IoT-fog-based healthcare 4.0 system using blockchain technology.","authors":"Israr Ahmad, Saima Abdullah, Adeel Ahmed","doi":"10.1007/s11227-022-04788-7","DOIUrl":"https://doi.org/10.1007/s11227-022-04788-7","url":null,"abstract":"<p><p>Real-time tracking and surveillance of patients' health has become ubiquitous in the healthcare sector as a result of the development of fog, cloud computing, and Internet of Things (IoT) technologies. Medical IoT (MIoT) equipment often transfers health data to a pharmaceutical data center, where it is saved, evaluated, and made available to relevant stakeholders or users. Fog layers have been utilized to increase the scalability and flexibility of IoT-based healthcare services, by providing quick response times and low latency. Our proposed solution focuses on an electronic healthcare system that manages both critical and non-critical patients simultaneously. Fog layer is distributed into two halves: critical fog cluster and non-critical fog cluster. Critical patients are handled at critical fog clusters for quick response, while non-critical patients are handled using blockchain technology at non-critical fog cluster, which protects the privacy of patient health records. The suggested solution requires little modification to the current IoT ecosystem while decrease the response time for critical messages and offloading the cloud infrastructure. Reduced storage requirements for cloud data centers benefit users in addition to saving money on construction and operating expenses. In addition, we examined the proposed work for recall, accuracy, precision, and F-score. The results show that the suggested approach is successful in protecting privacy while retaining standard network settings. Moreover, suggested system and benchmark are evaluated in terms of system response time, drop rate, throughput, fog, and cloud utilization. Evaluated results clearly indicate the performance of proposed system is better than benchmark.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 4","pages":"3999-4020"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9483278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10631897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving drug discovery through parallelism.
Pub Date: 2023-01-01. DOI: 10.1007/s11227-022-05014-0
Jerónimo S García, Savíns Puertas-Martín, Juana L Redondo, Juan José Moreno, Pilar M Ortigosa
Compound identification in ligand-based virtual screening is limited by two key issues: the quality of the predictions and the time needed to obtain them. In this sense, we designed OptiPharm, an algorithm that obtained excellent results, improving on the sequential methods in the literature. In this work, we go a step further and propose its parallelization. Specifically, we propose a two-layer parallelization: firstly, an automation of the process of distributing molecules between the available nodes in a cluster, and secondly, a parallelization of the internal methods (initialization, reproduction, selection, and optimization). This new software, called pOptiPharm, aims to improve the quality of predictions and reduce experimentation time. As the results show, the performance of the proposed methods is good: pOptiPharm finds better solutions than the sequential OptiPharm while reducing its computation time almost proportionally to the number of processing units considered.
{"title":"Improving drug discovery through parallelism.","authors":"Jerónimo S García, Savíns Puertas-Martín, Juana L Redondo, Juan José Moreno, Pilar M Ortigosa","doi":"10.1007/s11227-022-05014-0","DOIUrl":"https://doi.org/10.1007/s11227-022-05014-0","url":null,"abstract":"<p><p>Compound identification in ligand-based virtual screening is limited by two key issues: the quality and the time needed to obtain predictions. In this sense, we designed OptiPharm, an algorithm that obtained excellent results in improving the sequential methods in the literature. In this work, we go a step further and propose its parallelization. Specifically, we propose a two-layer parallelization. Firstly, an automation of the molecule distribution process between the available nodes in a cluster, and secondly, a parallelization of the internal methods (initialization, reproduction, selection and optimization). This new software, called pOptiPharm, aims to improve the quality of predictions and reduce experimentation time. As the results show, the performance of the proposed methods is good. It can find better solutions than the sequential OptiPharm, all while reducing its computation time almost proportionally to the number of processing units considered.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 9","pages":"9538-9557"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9842220/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9721072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On using affine sketches for multiple-response dynamic graph regression.
Pub Date: 2023-01-01. DOI: 10.1007/s11227-022-04865-x
M. H. Chehreghani
{"title":"On using affine sketches for multiple-response dynamic graph regression","authors":"M. H. Chehreghani","doi":"10.1007/s11227-022-04865-x","DOIUrl":"https://doi.org/10.1007/s11227-022-04865-x","url":null,"abstract":"","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"139 1","pages":"5139-5153"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73670183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction model of sparse autoencoder-based bidirectional LSTM for wastewater flow rate.
Pub Date: 2023-01-01. DOI: 10.1007/s11227-022-04827-3
Jianying Huang, Seunghyeok Yang, Jinhui Li, Jeill Oh, Hoon Kang
Sanitary sewer overflows caused by excessive rainfall-derived infiltration and inflow are the major challenge currently faced by municipal administrations; therefore, the ability to correctly predict the wastewater state of the sanitary sewage system in advance is especially significant. In this paper, we present the design of the Sparse Autoencoder-based Bidirectional Long Short-Term Memory (SAE-BLSTM) network model, built on Sparse Autoencoder (SAE) and Bidirectional Long Short-Term Memory (BLSTM) networks, to predict the wastewater flow rate in a sanitary sewer system. This network model consists of a data preprocessing segment, the SAE network segment, and the BLSTM network segment. The SAE performs dimensionality reduction on the high-dimensional original input feature data, extracting sparse latent features from it. The latent features extracted by the SAE hidden layer are concatenated with the smoothed historical wastewater flow rate features to create an augmented feature vector that more accurately predicts the wastewater flow rate. These augmented features are fed to the BLSTM network to predict the future wastewater flow rate. Thus, this network model combines two capabilities: the SAE's low-dimensional nonlinear representation of the original input features and the BLSTM's time series prediction of the wastewater flow rate. We then conducted extensive experiments on the SAE-BLSTM network model using real-world hydrological time series datasets and employing SVM, FCN, GRU, LSTM, and BLSTM models as comparison algorithms. The experimental results show that our proposed SAE-BLSTM model consistently outperforms the comparison models. Specifically, we selected a three-month training period from our dataset to train and test the SAE-BLSTM network model. It yielded the lowest RMSE (242.55), the lowest MAE (179.05), and the highest R² (0.99626).
{"title":"Prediction model of sparse autoencoder-based bidirectional LSTM for wastewater flow rate.","authors":"Jianying Huang, Seunghyeok Yang, Jinhui Li, Jeill Oh, Hoon Kang","doi":"10.1007/s11227-022-04827-3","DOIUrl":"https://doi.org/10.1007/s11227-022-04827-3","url":null,"abstract":"<p><p>Sanitary sewer overflows caused by excessive rainfall derived infiltration and inflow is the major challenge currently faced by municipal administrations, and therefore, the ability to correctly predict the wastewater state of the sanitary sewage system in advance is especially significant. In this paper, we present the design of the Sparse Autoencoder-based Bidirectional long short-term memory (SAE-BLSTM) network model, a model built on Sparse Autoencoder (SAE) and Bidirectional long short-term memory (BLSTM) networks to predict the wastewater flow rate in a sanitary sewer system. This network model consists of a data preprocessing segment, the SAE network segment, and the BLSTM network segment. The SAE is capable of performing data dimensionality reduction on high-dimensional original input feature data from which it can extract sparse potential features from the aforementioned high-dimensional original input feature data. The potential features extracted by the SAE hidden layer are concatenated with the smooth historical wastewater flow rate features to create an augmented previous feature vector that more accurately predicts the wastewater flow rate. These augmented previous features are applied to the BLSTM network to predict the future wastewater flow rate. Thus, this network model combines two kinds of abilities, SAE's low-dimensional nonlinear representation for original input feature data and BLSTM's time series prediction for wastewater flow rate. Then, we conducted extensive experiments on the SAE-BLSTM network model utilizing the real-world hydrological time series datasets and employing advanced SVM, FCN, GRU, LSTM, and BLSTM models as comparison algorithms. The experimental results show that our proposed SAE-BLSTM model consistently outperforms the advanced comparison models. Specifically, we selected a 3 months period training dataset in our dataset to train and test the SAE-BLSTM network model. The SAE-BLSTM network model yielded the lowest RMSE, MAE, and highest <i>R</i> <sup>2</sup>, which are 242.55, 179.05, and 0.99626, respectively.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 4","pages":"4412-4435"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9511464/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10623977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparative analysis of meta-heuristic optimization algorithms for feature selection on ML-based classification of heart-related diseases.
Pub Date: 2023-01-01. Epub Date: 2023-03-03. DOI: 10.1007/s11227-023-05132-3
Şevket Ay, Ekin Ekinci, Zeynep Garip
This study aims to use a machine learning (ML)-based enhanced diagnosis and survival model to predict heart disease and survival in heart failure by combining the cuckoo search (CS), flower pollination algorithm (FPA), whale optimization algorithm (WOA), and Harris hawks optimization (HHO) algorithms, which are meta-heuristic feature selection algorithms. To achieve this, experiments are conducted on the Cleveland heart disease dataset and the heart failure dataset collected from the Faisalabad Institute of Cardiology, published at UCI. The CS, FPA, WOA, and HHO feature selection algorithms are applied for different population sizes and evaluated based on the best fitness values. For the original heart disease dataset, the maximum prediction F-score of 88% is obtained using K-nearest neighbours (KNN), compared to logistic regression (LR), support vector machine (SVM), Gaussian Naive Bayes (GNB), and random forest (RF). With the proposed approach, a heart disease prediction F-score of 99.72% is obtained using KNN for a population size of 60 with FPA, selecting eight features. For the original heart failure dataset, the maximum prediction F-score of 70% is obtained using LR and RF, compared to SVM, GNB, and KNN. With the proposed approach, a heart failure prediction F-score of 97.45% is obtained using KNN for a population size of 10 with HHO, selecting five features. Experimental findings show that the applied meta-heuristic algorithms combined with ML algorithms significantly improve prediction performance compared to that obtained on the original datasets. The motivation of this paper is to select the most critical and informative feature subset through meta-heuristic algorithms to improve classification accuracy.
{"title":"A comparative analysis of meta-heuristic optimization algorithms for feature selection on ML-based classification of heart-related diseases.","authors":"Şevket Ay, Ekin Ekinci, Zeynep Garip","doi":"10.1007/s11227-023-05132-3","DOIUrl":"10.1007/s11227-023-05132-3","url":null,"abstract":"<p><p>This study aims to use a machine learning (ML)-based enhanced diagnosis and survival model to predict heart disease and survival in heart failure by combining the cuckoo search (CS), flower pollination algorithm (FPA), whale optimization algorithm (WOA), and Harris hawks optimization (HHO) algorithms, which are meta-heuristic feature selection algorithms. To achieve this, experiments are conducted on the Cleveland heart disease dataset and the heart failure dataset collected from the Faisalabad Institute of Cardiology published at UCI. CS, FPA, WOA, and HHO algorithms for feature selection are applied for different population sizes and are realized based on the best fitness values. For the original dataset of heart disease, the maximum prediction F-score of 88% is obtained using K-nearest neighbour (KNN) when compared to logistic regression (LR), support vector machine (SVM), Gaussian Naive Bayes (GNB), and random forest (RF). With the proposed approach, the heart disease prediction F-score of 99.72% is obtained using KNN for population sizes 60 with FPA by selecting eight features. For the original dataset of heart failure, the maximum prediction F-score of 70% is obtained using LR and RF compared to SVM, GNB, and KNN. With the proposed approach, the heart failure prediction F-score of 97.45% is obtained using KNN for population sizes 10 with HHO by selecting five features. Experimental findings show that the applied meta-heuristic algorithms with ML algorithms significantly improve prediction performances compared to performances obtained from the original datasets. The motivation of this paper is to select the most critical and informative feature subset through meta-heuristic algorithms to improve classification accuracy.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 11","pages":"11797-11826"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9983547/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10644968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of edge computing combined with deep learning model in the dynamic evolution of network public opinion in emergencies.
Pub Date: 2023-01-01. DOI: 10.1007/s11227-022-04733-8
Min Chen, Lili Zhang
The aim of this work is to clarify the evolution mechanism of Network Public Opinion (NPO) in public emergencies. This work compensates for the insufficient semantic understanding in NPO-oriented emotion analysis and aims to help maintain social harmony and stability. A combination of Edge Computing (EC) and a Deep Learning (DL) model is applied to the NPO-oriented Emotion Recognition Model (ERM). Firstly, NPO on public emergencies is introduced. Secondly, three types of NPO emergencies are selected as research cases. An emotional rule system based on the One-Class Classification (OCC) model is established as the emotional standard. The preprocessed Weibo text data are represented using word embeddings. A Convolutional Neural Network (CNN) is used as the classifier. The NPO-oriented ERM is implemented on the CNN and verified through comparative experiments after the CNN's hyperparameters are tuned. The results show that text annotation of NPO based on the OCC emotion rules achieves better recognition performance. Additionally, the recognition performance of the improved CNN is significantly higher than that of the Support Vector Machine (SVM) used in traditional Machine Learning (ML). This work realizes technological innovation in automatic emotion recognition of NPO groups and provides a basis for relevant government agencies to handle NPO in public emergencies scientifically.
{"title":"Application of edge computing combined with deep learning model in the dynamic evolution of network public opinion in emergencies.","authors":"Min Chen, Lili Zhang","doi":"10.1007/s11227-022-04733-8","DOIUrl":"https://doi.org/10.1007/s11227-022-04733-8","url":null,"abstract":"<p><p>The aim is to clarify the evolution mechanism of Network Public Opinion (NPO) in public emergencies. This work makes up for the insufficient semantic understanding in NPO-oriented emotion analysis and tries to maintain social harmony and stability. The combination of the Edge Computing (EC) and Deep Learning (DL) model is applied to the NPO-oriented Emotion Recognition Model (ERM). Firstly, the NPO on public emergencies is introduced. Secondly, three types of NPO emergencies are selected as research cases. An emotional rule system is established based on the One-Class Classification (OCC) model as emotional standards. The word embedding representation method represents the preprocessed Weibo text data. Convolutional Neural Network (CNN) is used as the classifier. The NPO-oriented ERM is implemented on CNN and verified through comparative experiments after the CNN's hyperparameters are adjusted. The research results show that the text annotation of the NPO based on OCC emotion rules can obtain better recognition performance. Additionally, the recognition effect of the improved CNN is significantly higher than the Support Vector Machine (SVM) in traditional Machine Learning (ML). This work realizes the technological innovation of automatic emotion recognition of NPO groups and provides a basis for the relevant government agencies to handle the NPO in public emergencies scientifically.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 2","pages":"1526-1543"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9330939/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10534038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A secured internet of robotic things (IoRT) for long-term care services in a smart building.
Pub Date: 2023-01-01. DOI: 10.1007/s11227-022-04845-1
Shih-Hao Chang, Chih-Hsien Hsia, Wei-Zhi Hong
Long-term care refers to any support, both medical and non-medical, provided to the elderly with a chronic illness or disability due to physical or mental conditions. Since the cost of long-term care insurance is high, low-cost devices and sensors can be used to create medical assistance systems that reduce human maintenance costs. Security and privacy under healthcare information protection are critical requirements for internet of medical things (IoMT) data transmission. In this paper, we design an IoMT security robot for a long-term care system. The goal of this IoMT security robot is to provide secure transmission of residents' private information. It is composed of three layers, namely collection, encryption, and transmission. The IoMT security robot first collects data from the patient or the elderly, then provides efficient data encryption, and finally delivers secured data transmission mechanisms to send the valuable data to the cloud. The robot also has a server authentication mechanism and supports an IoT and IoMT device inspection function. Our evaluation results show that even on a low-power device like the Raspberry Pi, the AES algorithm encrypts and decrypts 100 bytes to 100 KB of data in under 9 ms, far better than ECC, which takes about 104 ms. Furthermore, AES takes only 0.00015 s to decrypt 100 bytes of data, far faster than the ECC algorithm, which takes 0.09 s.
{"title":"A secured internet of robotic things (IoRT) for long-term care services in a smart building.","authors":"Shih-Hao Chang, Chih-Hsien Hsia, Wei-Zhi Hong","doi":"10.1007/s11227-022-04845-1","DOIUrl":"https://doi.org/10.1007/s11227-022-04845-1","url":null,"abstract":"<p><p>Long-term care refers to any support, both medical and non-medical, provided to the elderly with a chronic illness or disability due to physical or mental conditions. Since the cost of long-term care insurance is not inexpensive, low-cost devices and sensors can be used to create medical assistance systems to reduce human maintenance costs. The requirement of security and privacy under healthcare information protection is a critical issue for internet of medical things (IoMT) data transmission. In this paper, we designed an IoMT security robot for a long-term care system. The goal of this IoMT security robot is to provide secure transmission of the residents' private information. It is composed of three layers, namely, collection, encryption, and transmission. The function of the IoMT security robot is to first collect data from the patient or the elderly, then provide efficient data encryption, and deliver secured data transmission mechanisms to send the valuable data to the cloud. This IoMT security robot also has a server authentication mechanism, and a support IoT and IoMT devices inspection function. Our evaluation results showed that even when we utilized a low power consumption device like Raspberry Pi, AES algorithm achieved an encrypt and decrypt of 100-100 K bytes under 9 ms, which is a lot better than ECC, which takes about 104 ms. Further, we found that the AES only takes 0.00015 s to decrypt 100 Bytes data, which is way faster than the ECC algorithm, which takes 0.09 s.</p>","PeriodicalId":50034,"journal":{"name":"Journal of Supercomputing","volume":"79 5","pages":"5276-5290"},"PeriodicalIF":3.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9559120/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10761235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}