Yilin Wu, Jinghua Liu, Xiehua Yu, Yaojin Lin, Shaozi Li
Neighborhood rough set (NRS) is considered an effective tool for feature selection and has been widely used in processing high‐dimensional data. However, most existing methods struggle to handle multi‐label data and neglect label correlation (LC), which is an important issue in multi‐label learning. Therefore, in this article, we introduce a new NRS model that takes LC into account. First, we explore LC by calculating the similarity relation between labels and divide the related labels into several label subsets. Then, a new neighborhood relation is proposed, which solves the problem of neighborhood granularity selection by using the nearest‐neighbor information distribution of instances under the related labels. On this basis, the NRS model is reconstructed by embedding LC information, and the related properties of the model are discussed. Moreover, we design a new feature significance function to evaluate the quality of features, which captures the specific relationship between features and labels. Finally, a greedy forward feature selection algorithm is designed. Extensive experiments conducted on different types of datasets verify the effectiveness of the proposed algorithm.
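The greedy forward selection loop described above can be sketched as follows. This is a minimal illustration of classical NRS dependency-based selection; the function names, the Euclidean neighborhood, and the fixed radius `delta` are illustrative assumptions, and the paper's actual significance function additionally embeds label-correlation information, which is omitted here.

```python
from math import dist

def neighborhood(i, X, feats, delta):
    """Indices of samples within radius delta of sample i under features feats."""
    xi = [X[i][f] for f in feats]
    return [j for j, x in enumerate(X)
            if dist(xi, [x[f] for f in feats]) <= delta]

def dependency(X, y, feats, delta):
    """Fraction of samples whose whole neighborhood shares their label
    (size of the lower approximation over the number of samples)."""
    pure = sum(all(y[j] == y[i] for j in neighborhood(i, X, feats, delta))
               for i in range(len(X)))
    return pure / len(X)

def greedy_forward_select(X, y, delta=0.3):
    """Add, at each step, the feature that most increases dependency."""
    remaining, selected, best = set(range(len(X[0]))), [], 0.0
    while remaining:
        f, score = max(((f, dependency(X, y, selected + [f], delta))
                        for f in remaining), key=lambda t: t[1])
        if score <= best:  # no candidate improves dependency: stop
            break
        selected.append(f)
        remaining.discard(f)
        best = score
    return selected
```

On a toy dataset where one feature cleanly separates the labels, the loop selects that feature and then stops, since no second feature raises the dependency further.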
{"title":"Neighborhood rough set based multi‐label feature selection with label correlation","authors":"Yilin Wu, Jinghua Liu, Xiehua Yu, Yaojin Lin, Shaozi Li","doi":"10.1002/cpe.7162","DOIUrl":"https://doi.org/10.1002/cpe.7162","url":null,"abstract":"Neighborhood rough set (NRS) is considered as an effective tool for feature selection and has been widely used in processing high‐dimensional data. However, most of the existing methods are difficult to deal with multi‐label data and are lack of considering label correlation (LC), which is an important issue in multi‐label learning. Therefore, in this article, we introduce a new NRS model with considering LC. First, we explore LC by calculating the similarity relation between labels and divide the related labels into several label subsets. Then, a new neighborhood relation is proposed, which can solve the problem of neighborhood granularity selection by using the nearest neighbor information distribution of instances under the related labels. On this basis, the NRS model is reconstructed by embedding LC information, and the related properties of the model are discussed. Moreover, we design a new feature significance function to evaluate the quality of features, which can well capture the specific relationship between features and labels. Finally, a greedy forward feature selection algorithm is designed. 
Extensive experiments which are conducted on different types of datasets verify the effectiveness of the proposed algorithm.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86228103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the Internet of Things environment, intrusion detection involves identifying distributed denial of service attacks in network traffic, with the aim of improving network security. Recently, several methods have been developed for network anomaly detection, generally based on conventional machine learning techniques. The existing methods rely entirely on manually engineered traffic features, which increases system complexity and results in a lower detection rate on large traffic datasets. To overcome these issues, a new intrusion detection system is proposed based on the enhanced flower pollination algorithm (EFPA) and an ensemble classification technique. First, the optimal set of features is selected from the UNSW‐NB15 and NSL‐KDD datasets using the EFPA, which adds a scaling factor to the conventional FPA for optimal feature selection and better convergence; the selected features are then fed to the ensemble classifier for network attack detection. The ensemble classifier learns a set of classifiers, namely random forest, decision tree (ID3), and support vector machine, and votes on the best results. In the results section, the proposed ensemble‐based EFPA model attained 99.32% and 99.67% accuracy on the UNSW‐NB15 and NSL‐KDD datasets, respectively, which is superior to traditional network intrusion detection models. The proposed and existing models are validated in an Anaconda Navigator and Python 3.6 software environment.
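The voting step of the ensemble described above can be sketched as a plain hard majority vote. This is a generic stand-in, not the paper's implementation: the base models (random forest, ID3, SVM) are assumed to have already produced per-sample label predictions.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of predicted labels per base model.
    Returns the per-sample majority label across the models."""
    return [Counter(sample).most_common(1)[0][0]
            for sample in zip(*predictions)]
```

For example, with three base models voting on three samples, each sample takes the label that at least two models agree on.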
{"title":"Network intrusion detection system for Internet of Things based on enhanced flower pollination algorithm and ensemble classifier","authors":"Rekha Gangula, M. V, R. M","doi":"10.1002/cpe.7103","DOIUrl":"https://doi.org/10.1002/cpe.7103","url":null,"abstract":"In the Internet of Things environment, the intrusion detection involves identification of distributed denial of service attacks in the network traffic which is aimed at improving network security. Recently, several methods have been developed for network anomaly detection which is generally based on the conventional machine learning techniques. The existing methods completely rely on manual traffic features which increases the system complexity and results in a lower detection rate on large traffic datasets. To overcome these issues, a new intrusion detection system is proposed based on the enhanced flower pollination algorithm (EFPA) and ensemble classification technique. First, the optimal set of features is selected from the UNSW‐NB15 and NSL‐KDD datasets by using EFPA. In the EFPA, a scaling factor is used in the conventional FPA for optimal feature selection and better convergence, and the selected features are fed to the ensemble classifier for network attack detection. The ensemble classifier aims to learn a set of classifiers such as random forest, decision tree (ID3), and support vector machine classifiers and then votes the best results. In the resulting section, the proposed ensemble‐based EFPA model attained 99.32% and 99.67% of accuracy on UNSW‐NB15 and NSL‐KDD datasets, respectively, and these obtained results are more superior compared to the traditional network intrusion detection models. 
The proposed and the existing models are validated on the anaconda‐navigator and Python 3.6 software environment.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89855126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soura Boulaares, S. Sassi, D. Benslimane, Sami Faïz
In the era of Internet technology, uncertainty management is a challenge in many fields, including e‐commerce, social and sensor networks, scientific data production and mining, object tracking, data integration, geo‐located services, and, more recently, the Internet and Web of Things. Because of the uncertain data published on the web, web resources are diverse: identical resources may be available from heterogeneous platforms, and heterogeneous resources may represent the same objects. These resources are highly heterogeneous and can be conflicting, inconsistent, or in incompatible formats. This uncertainty is inherently related to many processes, such as information extraction and integration. Hence, with the proliferation of resources on the web, referencing through the uncertain web has become increasingly difficult. The traditional techniques used for the classical web cannot handle uncertain navigation; uncertainty is generally represented implicitly, decided randomly, or simply neglected. Harnessing these uncertain resources to their full potential in order to handle uncertain navigation raises major challenges at each phase of their life cycle: creation, representation, and navigation. In this article, we establish a probabilistic approach to model and interpret uncertain web resources. We present operators to compute response uncertainty. Finally, we create algorithms to validate resources and achieve uncertain hypertext navigation.
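Operators for response uncertainty of the kind mentioned above might, in the simplest setting, look like the following. This is a heavily simplified sketch under an independence assumption between resources; the article's actual operators are defined over its probabilistic resource model, and the function names here are purely illustrative.

```python
from math import prod

def p_path(probs):
    """Probability a navigation path succeeds: every hop's
    resource must resolve (conjunction, assuming independence)."""
    return prod(probs)

def p_any(probs):
    """Probability at least one of several candidate resources for the
    same object resolves (disjunction, assuming independence)."""
    return 1 - prod(1 - p for p in probs)
```

For instance, a two-hop path through resources resolving with probabilities 0.9 and 0.8 succeeds with probability 0.72, while two redundant 0.5-probability candidates for the same object give 0.75.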
{"title":"A probabilistic approach: Uncertain navigation of the uncertain web","authors":"Soura Boulaares, S. Sassi, D. Benslimane, Sami Faïz","doi":"10.1002/cpe.7194","DOIUrl":"https://doi.org/10.1002/cpe.7194","url":null,"abstract":"In the era of Internet Technology (IT), uncertainty management is a challenge in many fields. These include e‐commerce, social and sensor networks, scientific data production and mining, object tracking, data integration, geo‐located services, and recently Internet and Web of Things. 3$$ {}^3 $$ Due to the uncertain data published on the web, web resources are diverse. Hence, identical resources could be available from heterogeneous platforms and heterogeneous resources could represent the same objects. These resources are hugely heterogeneous, conflict, inconsistent, or have incompatible formats. This uncertainty is inherently related to many facts, such as information extraction and integration. Hence, with web resources proliferation on the web, referencing through the uncertain web has become increasingly difficult. The traditional techniques used for the classical web could not handle uncertain navigation. Generally, it's implicitly represented, decided randomly, or even neglected. Harnessing these uncertain resources to their full potential in order to handle the uncertain navigation, raises major challenges that relate to each phase of their life cycle: creation, representation, and navigation. In this article, we establish a probabilistic approach to model and interpret uncertain web resources. We present operators to compute response uncertainty. 
Finally, we create algorithms in order to validate resources and achieve uncertain hypertext navigation.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"145 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86213070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computation offloading for multi‐party computation (MPC) in the Industrial Internet of Things (IIoT) involves transferring resource‐intensive industrial computations from resource‐constrained industrial nodes (sourcers) to idle and powerful nodes (workers) such as hardware accelerator grids, IIoT gateways, and cloud servers. However, verifying the results of the offloaded computations and solving the ensuing security and privacy problems have been the drawbacks of MPC in IIoT. Although the morphism approach is currently used to ensure the correctness of the results of outsourced computations, its overhead has been shown to grow as the number of computations increases. In this article, we formulate a secure offloading scheme capable of achieving perfect verification of the results using reputation and morphism, while providing the security requirements for effective MPC. Performance and security analyses show that the scheme is not only secure but also ensures privacy preservation, fairness, and perfect verification of the results at a low cost.
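The reputation side of such a scheme can be sketched as reputation-weighted agreement on the returned results. This is an assumed, simplified model (the scheme's actual verification combines reputation with morphism checks, and the update constants here are arbitrary): a result is accepted only when the reputation mass behind it crosses a threshold, and workers are rewarded or penalized accordingly.

```python
def verify(results, reputation, threshold=0.5):
    """results: {worker: returned value}; reputation: {worker: weight}.
    Returns the accepted value, or None if no trusted consensus forms."""
    total = sum(reputation[w] for w in results)
    tally = {}
    for w, v in results.items():
        tally[v] = tally.get(v, 0.0) + reputation[w]
    value, mass = max(tally.items(), key=lambda kv: kv[1])
    if mass / total < threshold:
        return None  # not enough reputation behind any single result
    for w, v in results.items():  # simple reputation update
        reputation[w] = (min(1.0, reputation[w] + 0.05) if v == value
                         else max(0.0, reputation[w] - 0.1))
    return value
```

Two highly reputed workers agreeing outvote one low-reputation outlier, whose reputation then drops for returning a deviating result.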
{"title":"Secure reputation and morphism‐based offloading scheme: A veritable tool for multi‐party computation in Industrial Internet of Things","authors":"O. Olakanmi, K. Odeyemi","doi":"10.1002/cpe.7116","DOIUrl":"https://doi.org/10.1002/cpe.7116","url":null,"abstract":"Computations offload for multi‐party computation (MPC) in the Industrial Internet of Things (IIoT) involves the transfer of resource‐intensive industrial computations of resource‐constraint industrial nodes (sourcers) to idle and powerful nodes (workers) such as hardware accelerator grids, IIoT gateways, and cloud servers. However, verifying the results of the offloaded computations and solving the ensuing security and privacy problems have been the drawbacks of MPC in IIoT. Although the morphism approach is currently being used for ensuring the correctness of the results of the outsource computations, it has been proved that its overhead increases as the number of the computations increases. In this article, we formulate a secure offloading scheme capable of achieving perfect verification of the results using reputation and morphism and providing security requirements for effective MPC. Performance and security analyses show that the scheme is not only secure, but also ensures privacy preservation, fairness, and perfect verification of the result at a low cost.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79137157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aditya Gupta, I. S. Rajput, Gunjan, Vibha Jain, Soni Chaurasia
Diabetes is one of the most prevalent causes of death in the modern world. Early diagnosis of diabetes is the most promising way to increase patients' chances of survival. With the ever‐growing technology of the current era, machine learning‐based algorithms open doors in the healthcare industry by delivering efficient decision support services in real time. However, the high dimensionality of data obtained from multiple sources increases computation time and significantly impacts a model's classification efficiency. Feature selection improves learning performance and reduces computational cost by selecting subsets of features and eliminating unnecessary and irrelevant ones. In this article, an attempt has been made to develop a hybrid machine learning model based on the non‐dominated sorting genetic algorithm (NSGA‐II) and ensemble learning for the efficient categorization of diabetes. The proposed work applies various data preprocessing techniques, such as missing‐data handling and normalization, prior to model training. The most prominent and salient features are selected by exploiting the potential of NSGA‐II on the diabetes dataset. Finally, an ensemble learning‐based extreme gradient boosting (XGBoost) model is trained on the features selected by NSGA‐II to classify patients as diabetic or non‐diabetic. The proposed methodology is experimentally validated using a hybridized dataset comprising 23 features, with 1288 instances of male and female patients between the ages of 21 and 65. In addition, for performance evaluation, the statistical results are compared with several state‐of‐the‐art decision‐making models in the current domain. Experimental findings show that the proposed NSGA‐II‐XGB approach gives better classification results, with an average accuracy of 98.86%. Furthermore, the statistical results for specificity (88.6%), sensitivity (96.36%), and F‐score (97.84%) also support the utility of the proposed methodology in the early diagnosis of diabetes.
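The core of NSGA‐II is its fast non-dominated sorting, which can be sketched independently of the XGBoost stage. In this hypothetical setting each candidate feature subset is scored on two objectives to be minimized, for example (classification error, number of selected features); the objective pairs below are made-up values, not results from the paper.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one
    (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Partition solutions into Pareto fronts; returns lists of indices,
    best front first."""
    fronts, remaining = [], set(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts
```

NSGA‐II then fills the next population front by front (breaking ties with crowding distance, which this sketch omits).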
{"title":"NSGA‐II‐XGB: Meta‐heuristic feature selection with XGBoost framework for diabetes prediction","authors":"Aditya Gupta, I. S. Rajput, Gunjan, Vibha Jain, Soni Chaurasia","doi":"10.1002/cpe.7123","DOIUrl":"https://doi.org/10.1002/cpe.7123","url":null,"abstract":"Diabetes is one of the most prevalent causes of casualties in the modern world. Early diagnosis of diabetes is the most promising way for increasing the chances of patients' survival. The ever‐growing technology of the current era, machine learning‐based algorithms pave the door in the healthcare industry by delivering efficient decision support services in real‐time. However, high‐dimensionality of the data obtained using multiple sources increases the computation time and significantly impacts the models' efficiency in classifying the results. Feature selection improves learning performance and reduces the computational cost by selecting subsets of features and eliminating unnecessary and irrelevant features. In this article, an attempt has been made to develop a hybrid machine learning model based on non‐dominated sorting genetic algorithm (NSGA‐II) and ensemble learning for the efficient categorization of diabetes. The proposed work uses various data preprocessing techniques, such as missing data handling and normalization, prior to model training. The most prominent and salient features are selected by exploiting the potential of the NSGA‐II in the diabetes dataset. Finally, an ensemble learning‐based extreme gradient boosting (XGBoost) model is modeled using features selected by NSGA‐II to classify patients as diabetic or non‐diabetic. The proposed methodology is experimentally validated using a hybridized dataset comprising 23 features, with 1288 instances of both male and female patients between the ages of 21 and 65. In addition, for performance evaluation, the results of statistical parameters are compared with several state‐of‐the‐art decision‐making models in the current domain. 
Experiment findings exemplify that the proposed NSGA‐II‐XGB approach gives better classification results with an average accuracy of 98.86%. Furthermore, the statistical results of specificity (88.6%), sensitivity (96.36%), and F‐score (97.84%) also support the utility of the proposed methodology in the early diagnosis of diabetes.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83459325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource allocation in the cloud is becoming more complicated and challenging due to the rising demands of cloud services. Effective management of virtual resources in the cloud is of great significance, since it strongly affects both the operational cost and the scalability of the cloud environment. Containers are becoming increasingly popular in this regard owing to characteristics such as reduced overhead and portability. Conventional resource allocation schemes are usually modeled for the migration and allocation of virtual machines (VMs); as a result, the question arises of how these strategies can be adapted for the management of a containerized cloud. This work addresses that question by introducing a new fitness‐oriented moth flame algorithm (F‐MFA) for optimizing the allocation of containers. The optimal allocation further relies on constraints such as balanced cluster use, system failure, total network distance (TND), security and threshold distance, and a credibility factor. Finally, the superiority of the presented model over conventional models is evaluated in terms of cost and convergence analysis.
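A fitness function over the constraints listed above could plausibly take the form of a normalized weighted sum of penalty terms. This is an assumption for illustration only: the metric names and equal default weights are invented, and the paper's F‐MFA may combine its constraints quite differently.

```python
def fitness(metrics, weights=None):
    """metrics: penalty terms such as 'imbalance', 'failure', 'tnd',
    'threshold_violation', each pre-scaled to [0, 1]. Lower is better."""
    weights = weights or {k: 1.0 for k in metrics}
    return sum(weights[k] * v for k, v in metrics.items()) / sum(weights.values())
```

A candidate allocation with low imbalance and no failures or threshold violations then scores close to zero, making it preferable to the moth flame search.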
{"title":"Self‐improved moth flame for optimal container resource allocation in cloud","authors":"K. Vhatkar, G. Bhole","doi":"10.1002/cpe.7200","DOIUrl":"https://doi.org/10.1002/cpe.7200","url":null,"abstract":"Resource allocation in the cloud is becoming more complicated and challenging due to the rising necessities of cloud services. Effective management of virtual resources in the cloud is of large significance since it has a great impact on both the operational cost and scalability of the cloud environment. Nowadays, containers are becoming more popular in this regard due to their characteristics like reduced overhead and portability. Conventional resource allocation schemes are usually modeled for the migration and allocation of virtual machines (VM), as a result; the question may arise on, “how these strategies can be adapted for the management of a containerized cloud”. This work evolves the solution to this issue by introducing a new fitness oriented moth flame algorithm (F‐MFA) for optimizing the allocation of containers. Further in this work, the optimal allocation relies on certain constraints like balanced cluster use, system failure, total network distance (TND), security and threshold distance, and credibility factor as well. In the end, the supremacy of the presented model is computed to the conventional models in terms of cost and convergence analysis.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"172 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74346066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While recent studies on automated multilabel chest X‐ray (CXR) image classification have shown remarkable progress in leveraging complicated networks and attention mechanisms, automated detection on chest radiographs remains challenging because the pathological patterns are usually highly diverse in size and location. A CNN model suffers from the complicated background and the high diversity of diseases, which reduce its generalization and performance. To solve these problems, we propose a dual‐distribution consistency (DDC) model, which enforces consistency at two levels: feature level and label level. The model integrates two novel loss functions: a multilabel response consistency (MRC) loss and a distribution consistency (DC) loss. Specifically, we use an original image and its transformed counterpart as inputs to imitate different views of CXR images. The MRC loss encourages the multilabel‐wise attention maps to be consistent between the original CXR image and its transformed counterpart, while the DC loss forces their output probability distributions to agree. In this manner, the model learns discriminative features from different views of CXR images. Experiments conducted on the ChestX‐ray14 dataset show the effectiveness of the proposed method.
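The two consistency terms can be illustrated with simple stand-in formulations: an L2 penalty between the two views' attention responses, and a symmetric KL divergence between their output distributions. These exact forms are assumptions for illustration; the paper's actual loss definitions may differ.

```python
from math import log

def mrc_loss(attn_a, attn_b):
    """Mean squared difference between the flattened multilabel attention
    responses of the original and transformed views."""
    return sum((a - b) ** 2 for a, b in zip(attn_a, attn_b)) / len(attn_a)

def dc_loss(p, q, eps=1e-12):
    """Symmetric KL divergence between the two views' output
    probability distributions (eps guards against log(0))."""
    kl = lambda u, v: sum(ui * log((ui + eps) / (vi + eps))
                          for ui, vi in zip(u, v))
    return 0.5 * (kl(p, q) + kl(q, p))
```

Both terms vanish when the two views respond identically and grow as the views disagree, which is the behavior the consistency training relies on.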
{"title":"Consistent response for automated multilabel thoracic disease classification","authors":"Jiawei Su, Zhiming Luo, Shaozi Li","doi":"10.1002/cpe.7201","DOIUrl":"https://doi.org/10.1002/cpe.7201","url":null,"abstract":"While recent studies on automated multilabel chest X‐ray (CXR) images classification have shown remarkable progress in leveraging complicated network and attention mechanisms, the automated detection on chest radiographs is still challenging because the pathological patterns are usually highly diverse in their sizes and locations. The CNN model will suffer from the complicated background and high diversity of diseases, which reduce the generalization and performance of the model. To solve these problems, we propose a dual‐distribution consistency (DDC) model, which increases the consistency from two aspects, that is, feature‐level and label‐level. This model integrates two novel loss functions: multilabel response consistency (MRC) loss and distribution consistency (DC) loss. Specifically, we use the original image and its transformed image as inputs to imitate different views of CXR images. The MRC loss encourages the multilabel‐wise attention maps to be consistent between the original CXR image and its transformed counterpart. And the DC loss can force their output probability distributions to be uniform. In this manner, we can make sure that the model can learn discriminative features by using a different view of CXR images. 
Experiments conducted on the ChestX‐ray14 dataset show the effectiveness of the proposed method.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84308160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Centralized healthcare Internet of Things (HIoT) ecosystems are challenged by high latency, single‐point failures, and privacy attacks due to data exchange over open channels. To address these challenges, the shift has progressed toward decentralized HIoT setups that move computation closer to the patient node via edge services. As HIoT data are critical and sensitive, trust among stakeholders is a prime concern, and researchers have integrated blockchain (BCH) into edge‐based HIoT models. However, lightweight BCH must be integrated with the edge for proper interplay and to leverage effective, scalable, and energy‐efficient computation for constrained HIoT applications. Owing to this gap, this article proposes MobEdge, a scheme that fuses lightweight BCH and edge computing to secure HIoT. A local BCH client model is set up that forwards data to edge sensor gateways. The shared data are secured through an access tree control lock scheme that preserves the privacy of health records. For security and signing, we use signcryption, and the meta‐information of validated records is stored in an on‐chain structure. The scheme is evaluated on two grounds: security and simulation. On the security front, we perform a cost evaluation and present a formal analysis using the Automated Validation of Internet Security Protocols and Applications tool. An edge‐based BCH use case is presented, considering parameters such as mining cost, storage cost, edge servicing latency, energy consumption, BCH network usage, and transaction signing cost. In the simulation, the mining cost is 0.6675 USD, storage cost is improved by 18.34%, edge‐servicing latency is 384 ms, and signcryption improves the signing cost by 36.78% against similar schemes, indicating the scheme's viability in HIoT setups.
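The access-tree idea can be sketched as a recursive policy evaluation over a requester's attribute set, with AND/OR gates as internal nodes and attributes as leaves. This is a generic access-tree evaluator, not MobEdge's construction: the scheme's access tree control lock additionally binds such a policy to cryptographic keys via signcryption, which is out of scope here.

```python
def satisfies(node, attrs):
    """node: ('AND', [children]) | ('OR', [children]) | ('ATTR', name).
    Returns True when the attribute set attrs satisfies the policy tree."""
    kind, payload = node
    if kind == "ATTR":
        return payload in attrs
    results = (satisfies(child, attrs) for child in payload)
    return all(results) if kind == "AND" else any(results)
```

For example, a policy "doctor AND (cardiology OR icu)" grants access to a doctor assigned to the ICU but denies a nurse with the same ward attribute.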
{"title":"MobEdge: Mobile blockchain‐based privacy‐edge scheme for healthcare Internet of Things‐based ecosystems","authors":"Varun Deshmukh, Sunil Pathak, S. Bothe","doi":"10.1002/cpe.7210","DOIUrl":"https://doi.org/10.1002/cpe.7210","url":null,"abstract":"Centralized healthcare Internet of Things (HIoT)‐based ecosystems are challenged by high latency, single‐point failures, and privacy‐based attacks due to data exchange over open channels. To address the challenges, the shift has progressed toward decentralized HIoT setups that infuse computation closer to a patient node via edge services. As HIoT data are critical and sensitive, trust among stakeholders is a prime concern. To address the challenges, researchers integrated blockchain (BCH) into edge‐based HIoT models. However, the integration of lightweight BCH is required with an edge for proper interplay and leverage effective, scalable, and energy‐efficient computational processes for constrained HIoT applications. Owing to the existing gap, this article proposes a scheme MobEdge, that fuses lightweight BCH, and edge computing to secure HIoT. A local BCH client model is set up that forwards data to edge sensor gateways. The shared data are secured through an access tree control lock scheme that preserves the privacy of health records. For security and signing purposes, we have considered signcryption, and the validated records meta‐information are stored in an on‐chain structure. The scheme is compared on two grounds, security and simulation grounds. On the security front, we do cost evaluation and present a formal analysis model using the Automated Validation of Internet Security Protocols and Applications tool. An edge‐based BCH setup use‐case is presented, and parameters like mining cost, storage cost, edge servicing latency, energy consumption, BCH network usage, and transaction signing costs are considered. 
In the simulation, the mining cost is 0.6675 USD, and improvement of storage costs are improved by 18.34%, edge‐servicing latency is 384 ms, and signcryption improves the signing cost by 36.78% against similar schemes, that indicates the scheme viability in HIoT setups.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76571923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatiotemporal solar radiation forecasting is extremely challenging due to its dependence on meteorological and environmental factors. Chaotic time‐varying behavior and non‐linearity make the forecasting model more complex. To address this crucial issue, the paper provides a comprehensive investigation of a deep learning framework for predicting the two components of solar irradiation, that is, Diffuse Horizontal Irradiance (DHI) and Direct Normal Irradiance (DNI). Through exploratory data analysis, the three most prominent recent deep learning (DL) architectures have been developed and compared with classical machine learning (ML) models in terms of statistical performance accuracy. In our study, the DL architectures include the convolutional neural network (CNN) and recurrent neural network (RNN), whereas the classical ML models include Random Forest (RF), Support Vector Regression (SVR), Multilayer Perceptron (MLP), Extreme Gradient Boosting (XGB), and K‐Nearest Neighbor (KNN). Additionally, three optimization techniques, Grid Search (GS), Random Search (RS), and Bayesian Optimization (BO), are incorporated for tuning the hyperparameters of the classical ML models to obtain the best results. Based on the rigorous comparative analysis, the CNN model outperformed all classical ML and DL models, with the lowest mean squared error, the highest R‐squared value, and the least computational time.
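Of the three tuning techniques named above, random search is the simplest to sketch. This is a generic illustration, not the study's code: the grid contents and the scoring function are stand-ins (in the study, the score would be a validation metric such as R‐squared for, say, SVR).

```python
import random

def random_search(grid, score_fn, n_iter=20, seed=0):
    """grid: {param: list of candidate values}. Samples n_iter random
    configurations and returns the best (params, score) found."""
    rng = random.Random(seed)  # seeded for reproducibility
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {k: rng.choice(v) for k, v in grid.items()}
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score
```

Grid search would instead enumerate every combination exhaustively; random search trades that guarantee for a fixed evaluation budget, which matters when each evaluation retrains a model.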
{"title":"Hyper‐parametric improved machine learning models for solar radiation forecasting","authors":"Mantosh Kumar, K. Namrata, N. Kumari","doi":"10.1002/cpe.7190","DOIUrl":"https://doi.org/10.1002/cpe.7190","url":null,"abstract":"Spatiotemporal solar radiation forecasting is extremely challenging due to its dependence on metrological and environmental factors. Chaotic time‐varying and non‐linearity make the forecasting model more complex. To cater this crucial issue, the paper provides a comprehensive investigation of the deep learning framework for the prediction of the two components of solar irradiation, that is, Diffuse Horizontal Irradiance (DHI) and Direct Normal Irradiance (DNI). Through exploratory data analysis the three recent most prominent deep learning (DL) architecture have been developed and compared with the other classical machine learning (ML) models in terms of the statistical performance accuracy. In our study, DL architecture includes convolutional neural network (CNN) and recurrent neural network (RNN) whereas classical ML models include Random Forest (RF), Support Vector Regression (SVR), Multilayer Perceptron (MLP), Extreme Gradient Boosting (XGB), and K‐Nearest Neighbor (KNN). Additionally, three optimization techniques Grid Search (GS), Random Search (RS), and Bayesian Optimization (BO) have been incorporated for tuning the hyper parameters of the classical ML models to obtain the best results. 
Based on the rigorous comparative analysis it was found that the CNN model has outperformed all classical machine learning and DL models having lowest mean squared error and highest R‐Squared value with least computational time.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81249392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gesture recognition is a foremost need in building intelligent human‐computer interaction systems to solve many day‐to‐day problems and simplify human life in this digital world. Traditional machine learning (ML) algorithms, which try to capture specific handcrafted features, fail in some real‐world environments. Deep learning (DL) techniques have become a sensation among researchers in recent years, making the traditional ML approaches quite obsolete. However, existing reviews consider only a few datasets to which DL algorithms have been applied, and their categorization of the DL algorithms is vague. This study provides a precise categorization of DL algorithms and considers around 15 gesture datasets to which these techniques have been applied. This study also provides a brief overview of the numerous challenging datasets available to the research community and insight into the various challenges and limitations of DL algorithms in vision‐based dynamic gesture recognition.
{"title":"Literature review of vision‐based dynamic gesture recognition using deep learning techniques","authors":"Rahul Jain, R. Karsh, Abul Abbas Barbhuiya","doi":"10.1002/cpe.7159","DOIUrl":"https://doi.org/10.1002/cpe.7159","url":null,"abstract":"Gesture recognition is the foremost need in building intelligent human‐computer interaction systems to solve many day‐to‐day problems and simplify human life in this digital world. The traditional machine learning (ML) algorithm tried to capture specific handcrafted features, failed miserably in some real‐world environments. Deep learning (DL) techniques have become a sensation among researchers in recent years, making the traditional ML approaches quite obsolete. However, existing reviews consider only a few datasets on which DL algorithm has been applied, and the categorization of the DL algorithms is vague in their review. This study provides the precise categorization of DL algorithms and considers around 15 gesture datasets on which these techniques have been applied. This study also provides a brief overview of the numerous challenging dataset available among the research community and insight into various challenges and limitations of a DL algorithm in vision‐based dynamic gesture recognition.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"86 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85315487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}