Pub Date: 2021-04-01. DOI: 10.4018/IJGHPC.2021040105
Munish Khanna, Abhishek Toofani, Siddharth Bansal, M. Asif
Producing software of high quality is challenging in view of the large volume, size, and complexity of the software being developed. Checking the software for faults in the early phases helps reduce the testing resources required. This empirical study explores the performance of different machine learning models and fuzzy logic algorithms on the problem of predicting software fault proneness. The work experiments on the public domain KC1 NASA data set. The performance of the different fault prediction methods is evaluated using parameters such as receiver operating characteristic (ROC) analysis and root mean squared error (RMSE), and the resulting comparison among the algorithms/models is presented in this paper.
Title: Performance Comparison of Various Algorithms During Software Fault Prediction. International Journal of Grid and High Performance Computing, pp. 70-94.
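The two evaluation measures named in this abstract can be sketched in a few lines of Python; the labels and scores below are illustrative toys, not values from the KC1 data set:

```python
import math

def rmse(y_true, y_score):
    """Root mean squared error between 0/1 labels and predicted scores."""
    n = len(y_true)
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(y_true, y_score)) / n)

def roc_auc(y_true, y_score):
    """ROC AUC via the rank-sum formulation: the probability that a
    randomly chosen fault-prone module scores higher than a randomly
    chosen fault-free one (ties count half)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy modules: 1 = fault-prone, 0 = fault-free.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.7, 0.2, 0.6, 0.5]
auc = roc_auc(labels, scores)   # 6 of the 9 pos/neg pairs are ranked correctly
```

An AUC of 1.0 would mean every fault-prone module outranks every fault-free one; 0.5 is chance level.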
Pub Date: 2021-04-01. DOI: 10.4018/IJGHPC.2021040102
P. Sarwesh, K. Chandrasekaran, S. Thamizharasan
In the modern communication and computation era, the internet of things (IoT) is developing as a key technology that satisfies the requirements of various applications. Prolonging device lifetime and maintaining network reliability are evident objectives for an IoT network. The authors therefore propose a network architecture that integrates a node placement technique with a routing technique. In the architecture, node placement is implemented by varying the density, battery level, and transmission range of nodes, while energy-efficient and reliable path computation is addressed by the routing technique. Enhancing both techniques and integrating them in one architecture can thus efficiently prolong the network lifetime. From the results, the authors observed that the proposed architecture yields a network lifetime roughly twice that of the standard model, outperforms the EQSR protocol, and maintains reliable data transfer.
Title: Network Blueprint for Maximizing the Lifetime of Smart Devices in Low Power IoT Networks. International Journal of Grid and High Performance Computing, pp. 21-38.
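As a rough illustration of how a routing rule might trade residual battery against link quality (the scoring function, weights, and neighbor tuples below are hypothetical, not the paper's actual routing metric):

```python
def next_hop(neighbors, alpha=0.5):
    """Pick the neighbor maximizing a weighted mix of residual battery
    (favours network lifetime) and inverse link cost (favours reliable
    paths).  Each neighbor is (id, residual_energy in 0..1, link_cost > 0);
    alpha trades energy awareness against link quality."""
    def score(n):
        _id, energy, cost = n
        return alpha * energy + (1 - alpha) / cost
    return max(neighbors, key=score)[0]

# With equal weighting the well-charged neighbor wins even over a
# cheaper link; with alpha=0 only the link cost matters.
```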
Pub Date: 2021-04-01. DOI: 10.4018/IJGHPC.2021040107
Gokulnath Chandra Babu, Shantharajah S. Periyasamy
This paper presents a heart disease prediction model. Among recent technologies, internet of things (IoT)-enabled healthcare plays a vital role. The medical sensors used in healthcare continuously produce a huge volume of medical data; because the speed of data generation in IoT healthcare is high, the volume of data is also high. To address this problem, the proposed model is a novel three-step process for storing and analyzing large volumes of data. Step 1 focuses on collecting data from the sensor devices. In Step 2, HBase is used to store the large volume of medical sensor data streamed from wearable devices to the cloud. Step 3 uses Mahout to develop a logistic regression-based prediction model. Finally, an ROC curve is used to identify the parameters that contribute to heart disease.
Title: Remote Health Patient Monitoring System for Early Detection of Heart Disease. International Journal of Grid and High Performance Computing, pp. 118-130.
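Step 3's logistic regression can be sketched without Mahout in plain gradient-descent Python; the one-feature toy data below stands in for normalised sensor readings and is purely illustrative:

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Fit w, b of a logistic model p = sigmoid(w.x + b) by plain
    stochastic gradient descent on the log loss."""
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                       # dLoss/dz for the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Predicted probability of heart disease for feature vector x."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy: a single normalised vital sign; label 1 = heart disease present.
w, b = train_logistic([[0.1], [0.2], [0.8], [0.9]], [0, 0, 1, 1])
```

Sweeping the decision threshold over `predict` outputs is what traces the ROC curve mentioned above.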
Pub Date: 2021-04-01. DOI: 10.4018/IJGHPC.2021040106
M. M. Al-Qutt, H. Khaled, Rania El-Gohary
Deciding the number of processors that can efficiently speed up the solution of a computationally intensive problem while keeping power consumption efficient constitutes a major challenge for researchers in the high performance computing (HPC) realm. This paper exploits machine learning techniques to propose and implement a recommender system that suggests the optimal HPC architecture for a given problem size. An approach for multi-objective function optimization based on neural networks (neural network inversion) is employed, with the inversion used to optimize over the forward problem. The objective functions of concern are maximizing the speedup and minimizing the power consumption. The recommendations of the proposed prediction system achieved more than 89% accuracy on both the validation and testing sets. The experiments were conducted on the 2,500 CUDA cores of a Tesla K20 Kepler GPU accelerator and an Intel(R) Xeon(R) CPU E5-2695 v2.
Title: Neural Network Inversion-Based Model for Predicting an Optimal Hardware Configuration: Solving Computationally Intensive Problems. International Journal of Grid and High Performance Computing, pp. 95-117.
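The core of neural network inversion is to freeze the trained forward model and run gradient ascent on its *input*. The sketch below substitutes a hypothetical analytic surrogate (saturating speedup minus linear power) for the trained network and uses finite differences in place of backpropagated input gradients:

```python
def invert(f, x0, lr=100.0, steps=2000, h=1e-4):
    """Gradient-ascend the scalarized objective f over its input: the
    model f stays fixed, only the input (here, a core count) moves --
    the essence of network inversion."""
    x = x0
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2 * h)   # finite-difference gradient
        x += lr * grad
    return x

def objective(cores):
    """Hypothetical frozen surrogate: speedup saturates with core count
    while power grows linearly, so the combined objective rewards
    speedup and penalises power draw."""
    speedup = cores / (1.0 + 0.01 * cores)
    power = 0.05 * cores
    return speedup - power

best = invert(objective, x0=100.0)   # settles near ~347 cores
```

The learning rate and surrogate constants are tuned for this toy only; a real inversion would backpropagate through the trained network instead of differencing.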
As the application of network encryption technology expands, malicious attacks are also protected by the encryption mechanism, increasing the difficulty of detection. This paper analyzes encrypted network traffic by collecting long-duration encrypted traffic and combining a weighting algorithm commonly used in information retrieval with SSL/TLS fingerprinting to detect malicious encrypted links. The experimental results show that the proposed system can identify potentially malicious SSL/TLS fingerprints and malicious IP addresses that other external threat intelligence providers fail to recognize. No decryption of network packets is required, which helps clarify the full picture of a security incident and provides a basis for digital identification. Finally, the new threat intelligence obtained from the correlation analysis can be applied to regional joint defense or to intelligence exchange between organizations. In addition, the framework adopts the Google Cloud Platform and microservice technology to form an integrated serverless computing architecture.
Title: Cloud Computing for Malicious Encrypted Traffic Analysis and Collaboration, by Tzung-Han Jeng, Wen-Yang Luo, Chuan-Chiang Huang, Chien-Chih Chen, Kuang-Hung Chang, and Yi-Ming Chen. Pub Date: 2021-01-01. DOI: 10.4018/IJGHPC.2021070102. International Journal of Grid and High Performance Computing, pp. 12-29.
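The information-retrieval weighting can be pictured as TF-IDF over observed fingerprints: a fingerprint that is rare across hosts but frequent on one host stands out. The hosts and fingerprint hashes below are made up:

```python
import math
from collections import Counter

def tfidf_scores(host_fps):
    """host_fps maps host -> list of SSL/TLS fingerprints seen on the
    wire.  TF weighs how often a host presents a fingerprint; IDF
    weighs how rare it is across all hosts, so a ubiquitous browser
    fingerprint scores ~0 while an odd one concentrated on a single
    host scores high -- a candidate malicious link."""
    n_hosts = len(host_fps)
    df = Counter()                      # document frequency per fingerprint
    for fps in host_fps.values():
        for fp in set(fps):
            df[fp] += 1
    scores = {}
    for host, fps in host_fps.items():
        for fp, c in Counter(fps).items():
            scores[(host, fp)] = (c / len(fps)) * math.log(n_hosts / df[fp])
    return scores

traffic = {"h1": ["aaa", "aaa"],
           "h2": ["aaa"],
           "h3": ["aaa", "bad", "bad", "bad"]}
s = tfidf_scores(traffic)              # ("h3", "bad") dominates
```

Note that no payload decryption is involved: only the handshake fingerprints are scored, consistent with the abstract's claim.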
Pub Date: 2021-01-01. DOI: 10.4018/IJGHPC.2021070103
Yu-sheng Lin, Chi-Lung Wang, Chao-Tang Lee
NVMe SSDs are deployed in data centers for high-performance applications, but their capacity and bandwidth are often underutilized. Remote access to NVMe SSDs enables flexible scaling and high utilization of flash capacity and bandwidth within data centers; current remote access approaches, however, carry significant performance overheads. This research focuses on remote access to NVMe SSDs via a non-transparent bridge (NTB), a PCI-Express feature whose memory mapping technology allows access to memory belonging to peer servers. NVMe SSDs support multiple I/O queues to maximize the parallel processing of flash I/O, which is why they deliver far higher performance than traditional hard drives. The research proposes a novel design that combines NTB memory mapping with the multiple I/O queues of NVMe SSDs, allowing remote and local servers to access the same NVMe SSD concurrently. The experimental results show that the performance of remote NVMe SSD access can approach that of local access, demonstrating that the design is feasible.
Title: Remote Access NVMe SSD via NTB. International Journal of Grid and High Performance Computing, pp. 30-42.
Pub Date: 2021-01-01. DOI: 10.4018/ijghpc.2021010104
Rahaf Maher Ghazal, S. Jafar, M. Alhamad
As high-performance computing platforms grow in size, maintaining system reliability becomes more challenging, and the system mean time between failures (MTBF) may be too short to allow a completely fault-free run. To gain greater benefit from these systems, applications must therefore include fault tolerance mechanisms that satisfy the required reliability. This manuscript focuses on grid computing platforms exposed to two types of faults that cause application failure: crashes and silent data corruption. It also addresses the problem of modeling resource availability and aims to minimize the overhead of checkpoint/recovery fault tolerance techniques. Resource faults have commonly been modeled with an exponential distribution, but that is not fully realistic for transient errors, which appear randomly. The authors instead use a Weibull distribution to express these random faults and derive the optimal times at which to save checkpoints.
Title: Modeling of Two-Level Checkpointing With Silent and Fail-Stop Errors in Grid Computing Systems. International Journal of Grid and High Performance Computing, pp. 65-81.
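For intuition, a classic first-order baseline for the checkpoint interval is Young's formula, which a Weibull-based schedule such as the paper's refines; `weibull_hazard` shows why the exponential (constant-hazard) assumption fails for transient faults. The numbers are illustrative:

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's approximation of the optimal checkpoint period:
    sqrt(2 * C * MTBF), with C the cost of writing one checkpoint."""
    return math.sqrt(2 * checkpoint_cost * mtbf)

def weibull_hazard(t, shape, scale):
    """Instantaneous failure rate of Weibull(shape, scale).  With
    shape < 1 the hazard *decreases* over time, so a fixed interval
    derived from a constant-rate (exponential) model checkpoints
    too often late in the run."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# 60 s checkpoints and a 24 h MTBF give a period of roughly 54 minutes.
period = young_interval(60, 24 * 3600)
```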
Pub Date: 2021-01-01. DOI: 10.4018/ijghpc.2021010103
B. Dewangan, M. Venkatadri, A. Agarwal, Ashutosh Pasricha, T. Choudhury
In cloud computing, applications, services, and assets belong to different organizations with different goals, and entities in the cloud are self-sufficient and self-adjusting. In such a collaborative environment, the scheduling decision over available resources is a challenge given the decentralized nature of the environment, and fault tolerance is a central difficulty in task scheduling. In this paper, a self-healing fault tolerance technique is introduced that detects faulty resources and measures the value of each resource through its CPU, RAM, and bandwidth utilization. In the self-healing method, resources scoring below a threshold are treated as faulty and separated from the resource pool, and the workloads submitted by users are assigned to the best available resource. The proposed method is simulated in CloudSim and its multi-objective performance metrics are compared with existing methods; the results show that the proposed method performs best.
Title: An Automated Self-Healing Cloud Computing Framework for Resource Scheduling. International Journal of Grid and High Performance Computing, pp. 47-64.
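The detect-then-assign loop described above might look like the following; the field names, thresholds, and equal-weight scoring are illustrative assumptions, not the paper's exact formulation:

```python
def schedule(resources, thresholds):
    """Self-healing step: drop any resource whose CPU, RAM, or
    bandwidth score falls below its threshold (treated as faulty and
    removed from the pool), then assign the workload to the best
    survivor by combined score.  Returns the chosen id, or None when
    no healthy resource remains."""
    healthy = [r for r in resources
               if all(r[k] >= thresholds[k] for k in ("cpu", "ram", "bw"))]
    if not healthy:
        return None
    return max(healthy, key=lambda r: r["cpu"] + r["ram"] + r["bw"])["id"]

pool = [{"id": "r1", "cpu": 0.2, "ram": 0.9, "bw": 0.9},   # faulty: low CPU
        {"id": "r2", "cpu": 0.7, "ram": 0.6, "bw": 0.8},
        {"id": "r3", "cpu": 0.9, "ram": 0.5, "bw": 0.6}]
limits = {"cpu": 0.3, "ram": 0.3, "bw": 0.3}
```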
Pub Date: 2021-01-01. DOI: 10.4018/ijghpc.2021010102
N. Baskaran, R. Eswari
Cloud computing has grown exponentially in recent years. Data volumes are increasing day by day, which raises the demand for cloud storage and leads to the setting up of cloud data centers. These, however, consume enormous amounts of power, use resources inefficiently, and can violate service-level agreements. In this paper, an adaptive fuzzy-based VM selection algorithm (AFT_FS) is proposed to address these problems. The algorithm uses four thresholds to detect overloaded hosts and a fuzzy-based approach to select VMs for migration. It is experimentally tested on real-world data, and its performance is compared with existing algorithms across various metrics. The simulation results show that the proposed AFT_FS method is the most energy efficient and minimizes the SLA violation rate compared to the other algorithms.
Title: An Efficient Threshold-Fuzzy-Based Algorithm for VM Consolidation in Cloud Datacenter. International Journal of Grid and High Performance Computing, pp. 18-46.
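A minimal flavour of the fuzzy selection step, with made-up triangular membership shapes (the paper's actual membership functions and inputs may differ):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pick_vm(vms):
    """Fuzzify each candidate VM's CPU share and memory footprint and
    pick the VM with the highest combined degree of 'good to migrate'
    (moderate CPU load, small memory, so migration is cheap)."""
    def degree(vm):
        cpu_moderate = tri(vm["cpu"], 0.2, 0.5, 0.8)
        mem_small = max(0.0, 1.0 - vm["mem"])
        return min(cpu_moderate, mem_small)     # fuzzy AND
    return max(vms, key=degree)["id"]

vms = [{"id": "v1", "cpu": 0.5, "mem": 0.2},
       {"id": "v2", "cpu": 0.9, "mem": 0.1}]   # CPU outside the moderate band
```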
Pub Date: 2021-01-01. DOI: 10.4018/ijghpc.2021010101
Shamik Tiwari
Anonymous network communication using onion routing networks such as Tor guards the privacy of senders by encrypting all messages in the overlay network. These days onion-routed communications are not used only for legitimate purposes; cyber offenders also abuse onion routing for port scanning, hacking, exfiltration of stolen data, and other types of online fraud. Such cyber-crime attempts pose a serious threat to cloud security. Deep learning is a highly effective machine learning method for prediction and classification, and ensembling multiple models is an influential approach to increasing the efficiency of learning models. In this work, an ensemble deep learning-based classification model is proposed to distinguish Tor traffic from non-Tor traffic. Three different deep learning models are combined to form the ensemble, and the proposed model is also compared with other machine learning models. The classification results show the superiority of the proposed model over the other models.
Title: An Ensemble Deep Neural Network Model for Onion-Routed Traffic Detection to Boost Cloud Security. International Journal of Grid and High Performance Computing, pp. 1-17.
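The combination step can be as simple as soft voting over the base models' probability outputs; the stub models below are stand-ins for the three trained deep networks, which the abstract does not specify:

```python
def ensemble_predict(models, x, threshold=0.5):
    """Soft voting: average the probability each base model assigns to
    the 'Tor' class and threshold the mean (1 = Tor traffic)."""
    p = sum(m(x) for m in models) / len(models)
    return 1 if p >= threshold else 0

# Stub base models returning a fixed P(Tor) for any input.
leaning_tor = [lambda x: 0.9, lambda x: 0.6, lambda x: 0.2]
leaning_non = [lambda x: 0.1, lambda x: 0.2, lambda x: 0.4]
```

Averaging lets one confident model outvote two lukewarm ones, which is one reason ensembles often beat their individual members.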