Pub Date: 2020-07-01 | DOI: 10.4018/ijghpc.2020070105
Rajakumar Ramalingam, K. Dinesh, A. Dumka, L. Jayakumar
Intrusion detection systems (IDSs) play a vital role in network security by preventing the unauthorized use of data over networks. Feature selection is an important paradigm for strengthening IDSs. In this article, a reinforced firefly-based feature selection model is proposed. The model uses a firefly-inspired optimizer to select features and combines filter-based and wrapper-based approaches to guide the optimizer toward a significant feature subset. In addition, novel classifiers are used to validate the efficiency of the selected subset. The proposed work is tested on the KDD Cup99 dataset, which includes 41 features. Experimental results show that the proposed work achieves better detection accuracy, false positive rate (FPR), and F-score, along with higher classification accuracy and lower computational complexity than competing algorithms.
{"title":"RFA Reinforced Firefly Algorithm to Identify Optimal Feature Subsets for Network IDS","authors":"Rajakumar Ramalingam, K. Dinesh, A. Dumka, L. Jayakumar","doi":"10.4018/ijghpc.2020070105","DOIUrl":"https://doi.org/10.4018/ijghpc.2020070105","url":null,"abstract":"Intrusion detection systems (IDS's) play a vital role in network security to prevent the unauthorized use of data over networks. The feature selection approach is an important paradigm to strengthen IDS systems. In this article, a reinforced firefly-based feature selection model is proposed. This model utilizes the firefly inspired optimizer to select the features and it combines filter-based and wrapper-based approaches to boost the optimizer approach of the significant feature subset. In addition to that, novel classifiers are used to validate the efficiency of the selected subset. The proposed work is tested on the KDD Cup99 data sets which include 41 different features. Experimental results convey that the proposed work outperforms in terms of better detection accuracy, FPR and F-score. Also, it achieves better classification accuracy and less computational complexity compared to other algorithms.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"42 1","pages":"68-87"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91165961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.4018/ijghpc.2020070102
M. Mihoubi, Abdellatif Rahmoun, Meriem Zerkouk, P. Lorenz, Lotfi Baidar
Over the last decade, there has been intensive research in the area of wireless sensor networks (WSNs), driven mainly by their growing use in Internet of Things (IoT) applications. Several issues are discussed in this context, such as node localization, a capability that is highly desirable for performance evaluation in monitoring applications. The aim of localization is to determine the precise geographical positions of sensors. Recently, swarm intelligence techniques have been suggested to address the localization challenge by treating it as an optimization problem. In this article, an Enhanced Fruit Fly Optimization Algorithm (EFFOA) is proposed to solve the localization problem. EFFOA has a strong capacity to calculate the positions of unknown nodes and converges iteratively to the best solution. Since distributing and exploiting nodes is a chief challenge when testing scalability, EFFOA is simulated under a variety of studies and scenarios. In addition, a comparative experimental study shows that EFFOA outperforms several well-known optimization algorithms.
{"title":"Intelligent Technique Based on Enhanced Metaheuristic for Optimization Problem in Internet of Things and Wireless Sensor Network","authors":"M. Mihoubi, Abdellatif Rahmoun, Meriem Zerkouk, P. Lorenz, Lotfi Baidar","doi":"10.4018/ijghpc.2020070102","DOIUrl":"https://doi.org/10.4018/ijghpc.2020070102","url":null,"abstract":"For the last decade, there has been an intensive research development in the area of wireless sensor networks (WSN). This is mainly due to their growing interest in several applications of the Internet of Things (IoT). Several issues are thus discussed such as node localization, a capability that is highly desirable for performance evaluation in monitoring applications. The localization aim is to look for precise geographical positions of sensors. Recently, swarm intelligence techniques are suggested to deal with localization challenge and localization is seen as an optimization problem. In this article, an Enhanced Fruit Fly Optimization Algorithm (EFFOA) is proposed to solve the localization. EFFOA has a strong capacity to calculate the position of the unknown nodes and converges iteratively to the best solution. Distributing and exploiting nodes is a chief challenge to testing the scalability performance. the EFFOA is simulated under variant studies and scenarios. in addition, a comparative experimental study proves that EFFOA outperforms some of the well-known optimization algorithms.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"138 1","pages":"17-42"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80100523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.4018/ijghpc.2020070104
Chetna Dabas, Aditya Agarwal, Naman Gupta, Vaibhav Jain, S. Pathak
Music genre classification has its own popularity index in the present times, and machine learning can play an important role in the music streaming task. This research article proposes a machine learning based model for the classification of music genre. The proposed model is evaluated on several music genres: blues, metal, pop, country, classical, disco, jazz, and hip-hop. The audio features utilized in this study include MFCCs (Mel-frequency cepstral coefficients), delta, delta-delta, and temporal aspects of the data. The proposed model has been implemented in the Python language. The results reveal an SVM accuracy of 95%. The proposed algorithm has been compared with existing algorithms and performs better than them in terms of accuracy.
{"title":"Machine Learning Evaluation for Music Genre Classification of Audio Signals","authors":"Chetna Dabas, Aditya Agarwal, Naman Gupta, Vaibhav Jain, S. Pathak","doi":"10.4018/ijghpc.2020070104","DOIUrl":"https://doi.org/10.4018/ijghpc.2020070104","url":null,"abstract":"Music genre classification has its own popularity index in the present times. Machine learning can play an important role in the music streaming task. This research article proposes a machine learning based model for the classification of music genre. The evaluation of the proposed model is carried out while considering different music genres as in blues, metal, pop, country, classical, disco, jazz and hip-hop. Different audio features utilized in this study include MFCC (Mel Frequency Spectral Coefficients), Delta, Delta-Delta and temporal aspects for processing the data. The implementation of the proposed model has been done in the Python language. The results of the proposed model reveal an accuracy SVM accuracy of 95%. The proposed algorithm has been compared with existing algorithms and the proposed algorithm performs better than the existing ones in terms of accuracy.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"13 1","pages":"57-67"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86937628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.4018/ijghpc.2020070106
Manimaran Aridoss, Chandramohan Dhasarathan, A. Dumka, L. Jayakumar
Classification of underwater images is a challenging task because wavelength-dependent light propagation, absorption, and dispersion distort the visibility of images, producing low-contrast, degraded images in difficult operating environments. Deep learning algorithms are well suited to classifying such turbid images; a softmax activation function is used for classification and cross-entropy loss is minimized. The proposed deep underwater image classification model (DUICM) uses a convolutional neural network (CNN), a machine learning algorithm, for automatic underwater image classification. The model is trained and then used to categorize turbid images based on features selected from the Benchmark Turbid Image Dataset. The proposed system was trained with several underwater images based on CNN models, which are independent of each type of underwater image formation. Experimental results show that DUICM provides better classification accuracy on turbid underwater images. The proposed neural network model is validated using turbid images with different characteristics to demonstrate its generalization capabilities.
{"title":"DUICM Deep Underwater Image Classification Mobdel using Convolutional Neural Networks","authors":"Manimaran Aridoss, Chandramohan Dhasarathan, A. Dumka, L. Jayakumar","doi":"10.4018/ijghpc.2020070106","DOIUrl":"https://doi.org/10.4018/ijghpc.2020070106","url":null,"abstract":"Classification of underwater images is a challenging task due to wavelength-dependent light propagation, absorption, and dispersion distort the visibility of images, which produces low contrast and degraded images in difficult operating environments. Deep learning algorithms are suitable to classify the turbid images, for that softmax activation function used for classification and minimize cross-entropy loss. The proposed deep underwater image classification model (DUICM) uses a convolutional neural network (CNN), a machine learning algorithm, for automatic underwater image classification. It helps to train the image and apply the classification techniques to categorise the turbid images for the selected features from the Benchmark Turbid Image Dataset. The proposed system was trained with several underwater images based on CNN models, which are independent to each sort of underwater image formation. Experimental results show that DUICM provides better classification accuracy against turbid underwater images. The proposed neural network model is validated using turbid images with different characteristics to prove the generalization capabilities.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"26 1","pages":"88-100"},"PeriodicalIF":1.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82102057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-01 | DOI: 10.4018/ijghpc.2020040104
A. Negi, Saurabh Sharma
Breast cancer is one of the main health issues for women, and the disease can be cured only if it is detected at an early stage. Digital mammography is used to detect malignant cells early. This article designs a methodology to detect malignant tumors. The methodology comprises preprocessing, feature extraction using Gabor and Laws' features, feature reduction by ant-lion optimization, and a classification step using an SVM classifier; it is implemented on a live dataset prepared at Rajindra Hospital Patiala along with the MIAS and DDSM datasets. The results of the proposed technique have been compared with three state-of-the-art techniques: SVM-based classification without feature reduction; PSOWNN, i.e., PSO-based reduction with a neural network classifier; and binary grey wolf-based feature reduction with an SVM classifier. The performance analysis proves the significance of the technique.
{"title":"The Optimized Classification of Mammograms Based on the Antlion Technique","authors":"A. Negi, Saurabh Sharma","doi":"10.4018/ijghpc.2020040104","DOIUrl":"https://doi.org/10.4018/ijghpc.2020040104","url":null,"abstract":"Breast cancer is one of the main health issues for women. This disease can be cured only if detected at early stages. Digital mammography is used to detect the malignant cells at an early stage. This article designs a methodology to detect the malignant tumors. The methodology is comprised of preprocessing feature extraction by Gabor and Law's feature extraction, and feature reduction by ant-lion optimization as well as a classification step using a SVM classifier which is implemented on the live dataset prepared through the Rajindra Hospital Patiala along with MIAS and DDSM datasets. The results of proposed techniques have been compared with three states of art techniques SVM based classification without feature reduction, PSOWNN i.e. PSO based reduction with a neural network as a classifier and binary gray wolf-based feature reduction with SVM classifier. The performance analysis proves the significance of the technique.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"18 1","pages":"64-86"},"PeriodicalIF":1.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78002502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-01 | DOI: 10.4018/ijghpc.2020040105
Meenu Vijarania, Vivek Jaglan, B. K. Mishra
In wireless ad-hoc networks, nodes may be placed in remote areas with no fixed infrastructure. Wireless nodes have limited energy resources, which act as a key factor in estimating node lifetime. Most research is based on power-aware schemes that take advantage of the remaining energy of wireless nodes. Existing schemes estimate remaining energy from current consumption and voltage alone, leading to erroneous estimates and early power exhaustion of nodes in real-world deployments, because the residual energy of real batteries is also affected by temperature, charge cycles, aging, self-discharge, and various other factors. A lifetime estimation model examines battery characteristics to investigate performance under varying operational conditions more precisely. In this article, a lifetime estimation model is proposed that takes into account the varying environmental temperatures affecting battery performance, and an experimental approach is proposed to determine the actual capacity of ad-hoc nodes under varying temperatures.
{"title":"The Modelling of an Energy Efficient Algorithm Considering the Temperature Effect on the Lifetime of a Node in a Wireless Network","authors":"Meenu Vijarania, Vivek Jaglan, B. K. Mishra","doi":"10.4018/ijghpc.2020040105","DOIUrl":"https://doi.org/10.4018/ijghpc.2020040105","url":null,"abstract":"In wireless ad-hoc networks the nodes may be placed in a remote area and with no fixed infrastructure. Wireless nodes have limited energy resources which act as a key factor to estimate the node lifetime. Most research is based on the power aware schemes, which takes advantage of the remaining energy of wireless nodes. Existing schemes estimate remaining energy based on only current consumption and voltage, leading to erroneous estimations that result in early power exhaustion of nodes that affects real world deployment, because the residual energy in real batteries is also affected by temperature, charge cycle, aging, self-discharge, and various other factors. A lifetime estimation model examines the battery characteristics to investigate their performance under varying operational conditions more precisely. In this article a lifetime estimation model is proposed, it takes into account the varying environmental temperatures effecting battery performance. An experimental approach is proposed to determine the actual capacity of ad-hoc nodes under varying temperatures.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"18 1","pages":"87-101"},"PeriodicalIF":1.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76844846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-01 | DOI: 10.4018/ijghpc.2020040106
Mohammad M. Alshammari, A. Alwan, Azlin Nordin, A. Abualkishik
Cloud computing has become a desirable choice for storing and sharing large amounts of data among several users. The two main concerns with cloud storage are data recovery and the cost of storage. This article discusses the issue of data recovery after a disaster in a multi-cloud environment. The research proposes a preventive approach for data backup and recovery that aims to minimize the number of replicas while ensuring high data reliability during disasters. The approach, named Preventive Disaster Recovery Plan with Minimum Replica (PDRPMR), reduces the number of replicas in the cloud without compromising data reliability. PDRPMR takes preventive action by checking the availability of replicas and monitoring for denial-of-service attacks to maintain data reliability. Several experiments were conducted to evaluate the effectiveness of PDRPMR, and the results demonstrated that it uses only one-third to two-thirds of the storage space required by typical three-replica replication strategies.
{"title":"Data Backup and Recovery With a Minimum Replica Plan in a Multi-Cloud Environment","authors":"Mohammad M. Alshammari, A. Alwan, Azlin Nordin, A. Abualkishik","doi":"10.4018/ijghpc.2020040106","DOIUrl":"https://doi.org/10.4018/ijghpc.2020040106","url":null,"abstract":"Cloud computing has become a desirable choice to store and share large amounts of data among several users. The two main concerns with cloud storage are data recovery and cost of storage. This article discusses the issue of data recovery in case of a disaster in a multi-cloud environment. This research proposes a preventive approach for data backup and recovery aiming at minimizing the number of replicas and ensuring high data reliability during disasters. This approach named Preventive Disaster Recovery Plan with Minimum Replica (PDRPMR) aims at reducing the number of replications in the cloud without compromising the data reliability. PDRPMR means preventive action checking of the availability of replicas and monitoring of denial of service attacks to maintain data reliability. Several experiments were conducted to evaluate the effectiveness of PDRPMR and the results demonstrated that the storage space used one-third to two-thirds compared to typical 3-replicas replication strategies.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"183 1","pages":"102-120"},"PeriodicalIF":1.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80457109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-01 | DOI: 10.4018/ijghpc.2020040103
E. Sathiyamoorthy, P. Karthikeyan
Cloud computing is a trending area of information technology (IT). In a cloud environment, the cloud service provider (CSP) delivers all functionality to users or customers in the form of services. With the rapid development of cloud computing, the performance of any cloud environment relies on the quality of service (QoS) delivered at the time the services are provided. A service level agreement (SLA) increases the confidence of the user or customer in using cloud services, and there should be negotiation between the CSP and users to achieve a strong SLA. Many SLA models have already been developed; however, these models do not focus on maintaining quality over long durations. To address this issue, a novel SLA model based on fuzzy logic is proposed in this article. Both the theoretical and simulation results show the proficiency of the proposed scheme over existing schemes in a cloud computing environment.
{"title":"An Adaptive Service Monitoring System in a Cloud Computing Environment","authors":"E. Sathiyamoorthy, P. Karthikeyan","doi":"10.4018/ijghpc.2020040103","DOIUrl":"https://doi.org/10.4018/ijghpc.2020040103","url":null,"abstract":"Cloud computing is a trending area of information technology (IT). In a cloud environment, the Cloud service provider (CSP) provides all the functionalities to the users or customers in terms of services. With the rapid development of cloud computing, the performance of any cloud environment relies on the quality of services (QoS) at the time of providing the services. A service level agreement (SLA) increases the confidence of the user or customer to use the cloud services in a cloud environment. There should be negotiation between the CSP and users to achieve a strong SLA. Many existing SLA models are already developed. However, these models do not concentrate to maintain the quality in a long-time duration period. To solve this issue, a novel SLA model has been proposed in this article by using Fuzzy logic. Both the theoretical and simulation results show the proficiency of the proposed scheme over the existing schemes in a cloud computing environment.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"33 1","pages":"47-63"},"PeriodicalIF":1.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77576510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-01 | DOI: 10.4018/ijghpc.2020040102
T. Srinivas, S. Manivannan
Workers or laborers on construction sites are exposed to severe risks such as death and injuries caused by accidents, falls, and being caught between objects. Internet of things (IoT) based sensors can be used to monitor the behavior of workers when they are in danger zones or areas. To safeguard site workers, supervisors or site managers should monitor them and alert them when they are in danger. Data is routed from the site worker to the supervisor, and because the transmission is wireless, the data is subject to routing attacks such as the black hole attack during this process. This article addresses the black hole attack that occurs during wireless transmission between nodes and the base station (BS) in IoT-based civil construction. The proposed solution, the Collaborative Black Hole Attack – Ad Hoc On-Demand Distance Vector routing protocol (CBHA-AODV), prevents the collaborative black hole attack by 87.72%.
{"title":"Preventing Collaborative Black Hole Attack in IoT Construction Using a CBHA-AODV Routing Protocol","authors":"T. Srinivas, S. Manivannan","doi":"10.4018/ijghpc.2020040102","DOIUrl":"https://doi.org/10.4018/ijghpc.2020040102","url":null,"abstract":"Workers or labors who are working in construction sites are prone to severe risks such as death, injuries happened due to accidents, falls and stuck in between objects. Internet of things (IoT) based sensors can be utilized to monitor the behavior of workers when they are in danger zones/areas. To safeguard site workers, supervisors or site managers should monitor and alert them when they are in danger. Data will be routed from site worker to supervisor, during this routing process data is subjected to routing attacks such as black hole attack and so on, due to wireless transmission. This article addresses the problem of black hole attack that happens during the wireless transmission between nodes and the base station (BS) of IoT-based civil construction. The proposed solution Collaborative Black Hole Attack – Ad Hoc On-Demand Distance Vector routing protocol (CBHA-AODV) prevents the collaborative black hole attack by 87.72%.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"1 1","pages":"25-46"},"PeriodicalIF":1.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91187114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-02-21 | DOI: 10.4018/IJGHPC.2020040101
Neelu Khare, Xia Xie, Jin Huang, Song Wu, Hai Jin, Melvin Koh, Jie Song, Shanshan Yu, Jindian Su, Pengfei Li
The tremendous growth of social networking systems enables the active participation of a wide variety of users, which has increased the likelihood of security and privacy concerns. To address this issue, the article defines a secure and privacy-preserving approach to protect user data across cloud-based online social networks. The proposed approach models a social network as a directed graph, such that a user can share sensitive information with another user only if there is a directed edge from the first user to the second. Data is then shared efficiently among connected users using attribute-based encryption (ABE) with different data access levels. The proposed ABE technique makes use of a trapdoor function to re-encrypt the data without relying on proxy re-encryption. Experimental evaluation shows that the proposed approach provides comparatively better results than existing techniques.
{"title":"A Secure and Privacy-Preserving Approach to Protect User Data across Cloud based Online Social Networks","authors":"Neelu Khare, Xia Xie, Jin Huang, Song Wu, Hai Jin, Melvin Koh, Jie Song, Shanshan Yu, Jindian Su, Pengfei Li","doi":"10.4018/IJGHPC.2020040101","DOIUrl":"https://doi.org/10.4018/IJGHPC.2020040101","url":null,"abstract":"The tremendous growth of social networking systems enables the active participation of a wide variety of users. This has led to an increased probability of security and privacy concerns. In order to solve the issue, the article defines a secure and privacy-preserving approach to protect user data across Cloud-based online social networks. The proposed approach models social networks as a directed graph, such that a user can share sensitive information with other users only if there exists a directed edge from one user to another. The connectivity between data users data is efficiently shared using an attribute-based encryption (ABE) with different data access levels. The proposed ABE technique makes use of a trapdoor function to re-encrypt the data without the use of proxy re-encryption techniques. Experimental evaluation states that the proposed approach provides comparatively better results than the existing techniques.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"1 1","pages":"1-24"},"PeriodicalIF":1.0,"publicationDate":"2020-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77596728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}