
International Journal of Grid and High Performance Computing — Latest Publications

Performance Comparison of Various Algorithms During Software Fault Prediction
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-04-01 DOI: 10.4018/IJGHPC.2021040105
Munish Khanna, Abhishek Toofani, Siddharth Bansal, M. Asif
Producing high-quality software is challenging in view of the large volume, size, and complexity of the software being developed. Checking software for faults in the early phases helps to reduce testing resources. This empirical study explores the performance of different machine learning models and fuzzy logic algorithms on the problem of predicting software fault proneness. The experiments are conducted on the public-domain NASA KC1 data set. The performance of the different fault prediction methods is evaluated using measures such as receiver operating characteristic (ROC) analysis and root mean squared error (RMSE), and the algorithms/models are compared using the results presented in this paper.
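The evaluation measures named in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's code: the classifier is replaced by a single synthetic "complexity" score, and all data values are made up stand-ins for KC1-style module metrics.

```python
# Hedged sketch: scoring a fault-proneness predictor with ROC AUC and RMSE.
# The "scores" below are a synthetic stand-in for a trained model's outputs.
import math
import random

random.seed(0)

def roc_auc(scores, labels):
    """Probability that a random faulty module outranks a random clean one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def rmse(scores, labels):
    """Root mean squared error between predicted scores and 0/1 fault labels."""
    return math.sqrt(sum((s - y) ** 2 for s, y in zip(scores, labels)) / len(labels))

# Synthetic data: faulty modules (label 1) tend to receive higher scores.
labels = [1] * 30 + [0] * 70
scores = [min(1.0, max(0.0, random.gauss(0.7 if y else 0.3, 0.15))) for y in labels]

print(round(roc_auc(scores, labels), 3), round(rmse(scores, labels), 3))
```

A perfect ranker gives AUC 1.0 and a perfect scorer RMSE 0.0; comparing several models on the same held-out set, as the paper does, reduces to comparing these two numbers.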
Citations: 0
Network Blueprint for Maximizing the Lifetime of Smart Devices in Low Power IoT Networks
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-04-01 DOI: 10.4018/IJGHPC.2021040102
P. Sarwesh, K. Chandrasekaran, S. Thamizharasan
In the modern communication and computation era, the internet of things (IoT) is developing into the key technology that satisfies the requirements of diverse applications. Prolonging device lifetime and maintaining network reliability are the evident objectives of an IoT network. The authors therefore propose a network architecture that integrates a node placement technique with a routing technique. In the architecture, node placement is implemented by varying the density, battery level, and transmission range of nodes, while energy-efficient and reliable path computation is handled by the routing technique. Enhancing these two techniques and integrating them in one network architecture can thus efficiently prolong the network lifetime. From the results, the authors observed that the proposed architecture prolongs the network lifetime by a factor of two over the standard model, outperforms the EQSR protocol, and maintains reliable data transfer.
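The intuition behind battery-aware node placement can be shown with a toy model. This is purely illustrative and not the paper's simulation: nodes nearer the sink relay more traffic, so placing larger batteries where the relay load is high delays the first node death. All loads and battery sizes below are invented numbers.

```python
# Hedged sketch: network lifetime = rounds until the first node dies.
# Giving high-load (near-sink) nodes bigger batteries extends that lifetime
# without increasing the total energy budget. Numbers are illustrative only.

def lifetime(batteries, loads, cost_per_msg=1.0):
    """Rounds survived before the first node exhausts its battery."""
    return min(int(b // (load * cost_per_msg)) for b, load in zip(batteries, loads))

loads = [5, 3, 1, 1]            # messages relayed per round; node 0 is nearest the sink
uniform = [100, 100, 100, 100]  # homogeneous battery placement
weighted = [200, 120, 40, 40]   # same total energy, matched to relay load

assert sum(uniform) == sum(weighted)
print(lifetime(uniform, loads), lifetime(weighted, loads))  # → 20 40
```

With the same energy budget, load-aware placement here doubles the lifetime, which mirrors the kind of gain the abstract reports for the integrated architecture.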
Citations: 0
Remote Health Patient Monitoring System for Early Detection of Heart Disease
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-04-01 DOI: 10.4018/IJGHPC.2021040107
Gokulnath Chandra Babu, Shantharajah S. Periyasamy
This paper presents a heart disease prediction model. Among recent technologies, internet-of-things-enabled healthcare plays a vital role. The medical sensors used in healthcare continuously produce a huge volume of medical data, and because the rate at which IoT healthcare data are generated is high, the data volume is also high. To handle this, the proposed model is a novel three-step process for storing and analyzing large volumes of data. The first step collects data from sensor devices. In step 2, HBase is used to store the large volume of medical sensor data from wearable devices in the cloud. Step 3 uses Mahout to develop a logistic regression-based prediction model. Finally, an ROC curve is used to identify the parameters that cause heart disease.
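Step 3's predictor can be sketched without the Hadoop stack. The paper uses Mahout; the stand-in below is a minimal pure-Python logistic regression trained by stochastic gradient descent on synthetic, normalized vitals, so every feature value and constant is an assumption for illustration.

```python
# Hedged sketch of a logistic-regression heart-disease predictor
# (Mahout stand-in; synthetic two-feature data, illustrative values only).
import math
import random

random.seed(1)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(X, y, lr=0.1, epochs=500):
    """Plain SGD on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Features: (normalized resting heart rate, normalized blood pressure).
X = [(random.gauss(0.3, 0.1), random.gauss(0.3, 0.1)) for _ in range(40)] + \
    [(random.gauss(0.7, 0.1), random.gauss(0.7, 0.1)) for _ in range(40)]
y = [0] * 40 + [1] * 40

w, b = train(X, y)
acc = sum((sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5) == yi
          for xi, yi in zip(X, y)) / len(y)
print(round(acc, 2))
```

Scoring such a model's outputs across thresholds is exactly what the paper's final ROC-curve step does.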
Citations: 7
Neural Network Inversion-Based Model for Predicting an Optimal Hardware Configuration: Solving Computationally Intensive Problems
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-04-01 DOI: 10.4018/IJGHPC.2021040106
M. M. Al-Qutt, H. Khaled, Rania El-Gohary
Deciding the number of processors that can efficiently speed up the solution of a computationally intensive problem while maintaining efficient power consumption constitutes a major challenge for researchers in the high-performance computing (HPC) realm. This paper exploits machine learning techniques to propose and implement a recommender system that suggests the optimal HPC architecture for a given problem size. An approach to multi-objective function optimization based on neural network inversion is employed for the forward problem, where the objective functions of concern are maximizing speedup and minimizing power consumption. The recommendations of the proposed prediction system achieved more than 89% accuracy on both the validation and test sets. The experiments were conducted on 2,500 CUDA cores of a Tesla K20 Kepler GPU accelerator and an Intel(R) Xeon(R) CPU E5-2695 v2.
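The inversion idea — run a forward model (inputs → speedup, power) backwards to find the best input — can be illustrated with a toy forward model. The paper inverts a trained neural network; here an Amdahl-style surrogate stands in for it so the input search itself is the point, and every constant (serial fraction, per-core wattage, penalty weight) is invented.

```python
# Hedged sketch of forward-model inversion for processor-count selection.
# forward() is a toy Amdahl-style surrogate, not the paper's trained network.

def forward(n, p, serial_frac=0.05):
    """Toy forward model: (problem size n, processors p) -> (speedup, power in W)."""
    speedup = 1 / (serial_frac + (1 - serial_frac) / p)
    power = 10 + 0.9 * p          # illustrative: idle draw + per-core draw
    return speedup, power

def invert(n, weight=0.05, candidates=range(1, 257)):
    """Inversion by search: pick the p maximizing speedup minus a power penalty."""
    def score(p):
        s, w = forward(n, p)
        return s - weight * w
    return max(candidates, key=score)

best = invert(n=10**6)
print(best, forward(10**6, best))
```

With a neural network as the forward model, the same search (or a gradient-based variant) over the input space yields the recommended configuration.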
Citations: 1
Cloud Computing for Malicious Encrypted Traffic Analysis and Collaboration
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-01-01 DOI: 10.4018/IJGHPC.2021070102
Tzung-Han Jeng, Wen-Yang Luo, Chuan-Chiang Huang, Chien-Chih Chen, Kuang-Hung Chang, Yi-Ming Chen
As the application of network encryption technology expands, malicious attacks are also protected by encryption mechanisms, increasing the difficulty of detection. This paper focuses on analyzing encrypted network traffic collected over long periods, coupling a weighting algorithm commonly used in information retrieval with SSL/TLS fingerprints to detect malicious encrypted links. The experimental results show that the proposed system can identify potentially malicious SSL/TLS fingerprints and malicious IPs that other external threat intelligence providers fail to recognize. No packet decryption is required to clarify the full picture of a security incident and provide a basis for digital identification. Finally, the new threat intelligence obtained from the correlation analysis can be applied to regional joint defense or intelligence exchange between organizations. In addition, the framework adopts the Google cloud platform and microservice technology to form an integrated serverless computing architecture.
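The information-retrieval weighting idea can be sketched as an IDF-style score over TLS fingerprints: a fingerprint seen on almost every host is ordinary client software, while one concentrated on a few hosts stands out. The fingerprint strings and addresses below are made-up placeholders, not real JA3 values, and this is an illustration of the weighting principle rather than the paper's algorithm.

```python
# Hedged sketch: IDF-style rarity weighting of TLS fingerprints.
# Fingerprints and hosts are invented placeholders.
import math
from collections import defaultdict

flows = [  # (host, TLS fingerprint) observations
    ("10.0.0.1", "fp_chrome"), ("10.0.0.2", "fp_chrome"), ("10.0.0.3", "fp_chrome"),
    ("10.0.0.4", "fp_chrome"), ("10.0.0.9", "fp_malware"), ("10.0.0.9", "fp_malware"),
]

hosts_by_fp = defaultdict(set)
for host, fp in flows:
    hosts_by_fp[fp].add(host)

n_hosts = len({h for h, _ in flows})

def idf(fp):
    """Rarer fingerprints (seen on fewer hosts) receive a higher weight."""
    return math.log(n_hosts / len(hosts_by_fp[fp]))

scores = {fp: idf(fp) for fp in hosts_by_fp}
suspicious = max(scores, key=scores.get)
print(suspicious)  # → fp_malware
```

Because the score uses only fingerprints, no payload decryption is needed, which matches the abstract's claim.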
Citations: 3
Remote Access NVMe SSD via NTB
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-01-01 DOI: 10.4018/IJGHPC.2021070103
Yu-sheng Lin, Chi-Lung Wang, Chao-Tang Lee
NVMe SSDs are deployed in data centers for high-performance applications, but their capacity and bandwidth are often underutilized. Remote access to NVMe SSDs enables flexible scaling and high utilization of flash capacity and bandwidth within data centers; the current issue is that remote access carries significant performance overheads. This research focuses on remote access to NVMe SSDs via a non-transparent bridge (NTB), a type of PCI Express interconnect whose memory mapping technology allows access to memory belonging to peer servers. NVMe SSDs support multiple I/O queues to maximize the parallel processing of flash I/O; hence, they deliver far higher performance than traditional hard drives. The research proposes a novel design based on NTB memory mapping and the multiple I/O queues of NVMe SSDs, under which remote and local servers can access the same NVMe SSD concurrently. The experimental results show that the performance of remote access approaches that of local access, demonstrating that the design is both effective and feasible.
Citations: 0
Modeling of Two-Level Checkpointing With Silent and Fail-Stop Errors in Grid Computing Systems
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-01-01 DOI: 10.4018/ijghpc.2021010104
Rahaf Maher Ghazal, S. Jafar, M. Alhamad
As high-performance computing platforms grow in size, system reliability becomes more challenging, and the system mean time between failures (MTBF) may be too short to allow a completely fault-free run. To benefit fully from these systems, applications must therefore include fault tolerance mechanisms that satisfy the required reliability. This manuscript focuses on grid computing platforms exposed to two types of threats that cause application failure: crashes and silent data corruption. It also addresses the problem of modeling resource availability and aims to minimize the overhead of checkpoint/recovery fault tolerance techniques. Resource faults have commonly been modeled with an exponential distribution, but that is not fully realistic for transient errors, which appear randomly. The authors instead use a Weibull distribution to express these random faults and derive the optimal times at which to save checkpoints.
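Two ingredients of the abstract can be sketched side by side: the classic Young/Daly first-order checkpoint interval (the standard baseline for this kind of analysis, under an exponential fault assumption) and inverse-CDF sampling of Weibull-distributed fault times, the distribution the manuscript favors. The checkpoint cost and MTBF below are illustrative numbers, not the paper's.

```python
# Hedged sketch: Young/Daly interval plus Weibull fault sampling.
# C and MTBF are invented for illustration.
import math
import random

random.seed(2)

def young_interval(checkpoint_cost, mtbf):
    """First-order optimal time between checkpoints: sqrt(2 * C * MTBF)."""
    return math.sqrt(2 * checkpoint_cost * mtbf)

def weibull_failure(scale, shape):
    """Inverse-CDF sample of a Weibull failure time; shape=1 reduces to exponential."""
    return scale * (-math.log(1 - random.random())) ** (1 / shape)

C, MTBF = 60.0, 24 * 3600.0   # 60 s per checkpoint, 24 h mean time between failures
tau = young_interval(C, MTBF)
samples = [weibull_failure(scale=MTBF, shape=0.7) for _ in range(10_000)]
print(round(tau, 1), round(sum(samples) / len(samples), 1))
```

A shape parameter below 1 makes early failures more likely than the exponential model predicts, which is why the authors argue the exponential assumption misrepresents transient errors.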
Citations: 0
An Automated Self-Healing Cloud Computing Framework for Resource Scheduling
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-01-01 DOI: 10.4018/ijghpc.2021010103
B. Dewangan, M. Venkatadri, A. Agarwal, Ashutosh Pasricha, T. Choudhury
In cloud computing, applications, services, and resources belong to different organizations with different goals, and elements in the cloud are self-sufficient and self-adjusting. In such a collaborative environment, scheduling decisions over the available resources are a challenge given the decentralized nature of the environment, and fault tolerance is a central difficulty in task scheduling. In this paper, self-healing fault tolerance techniques are introduced to detect faulty resources, measuring the quality of each resource through its CPU, RAM, and bandwidth utilization. Under the self-healing method, resources scoring below a threshold are treated as faulty and separated from the resource pool, and the workloads submitted by users are assigned to the best available resource. The proposed method has been simulated in CloudSim and its multi-objective performance metrics compared with those of existing methods, and it is observed that the proposed method performs best.
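The self-healing step described above reduces to: score each resource from its CPU, RAM, and bandwidth availability, quarantine anything below a threshold, and schedule onto the best survivor. The equal weighting, the threshold value, and all utilization figures below are illustrative assumptions, not the paper's.

```python
# Hedged sketch of threshold-based self-healing resource selection.
# Weights, threshold, and utilization numbers are invented for illustration.

def health(cpu_free, ram_free, bw_free):
    """Composite availability score in [0, 1], equal weights assumed."""
    return (cpu_free + ram_free + bw_free) / 3

resources = {
    "vm-a": health(0.9, 0.8, 0.7),
    "vm-b": health(0.1, 0.2, 0.1),   # degraded resource
    "vm-c": health(0.6, 0.5, 0.9),
}
THRESHOLD = 0.3

pool = {r: s for r, s in resources.items() if s >= THRESHOLD}  # self-heal: drop faulty
best = max(pool, key=pool.get)                                  # schedule to best survivor
print(sorted(pool), best)
```

Re-running the scoring loop periodically is what makes the pool "self-healing": a resource that recovers above the threshold simply reappears in the pool.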
Citations: 7
An Efficient Threshold-Fuzzy-Based Algorithm for VM Consolidation in Cloud Datacenter
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-01-01 DOI: 10.4018/ijghpc.2021010102
N. Baskaran, R. Eswari
Cloud computing has grown exponentially in recent years. Data volumes increase day by day, raising the demand for cloud storage and leading to the construction of cloud data centers, but these centers consume enormous amounts of power, use resources inefficiently, and violate service-level agreements (SLAs). In this paper, an adaptive fuzzy-based VM selection algorithm (AFT_FS) is proposed to address these problems. The algorithm uses four thresholds to detect overloaded hosts and a fuzzy-based approach to select VMs for migration. It is experimentally tested on real-world data, and its performance is compared with existing algorithms across various metrics. The simulation results show that the proposed AFT_FS method is the most energy efficient and minimizes the SLA violation rate compared to the other algorithms.
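The two mechanisms the abstract names — multi-threshold overload detection and fuzzy VM selection — can be sketched together. The four threshold values, the choice of metrics, and the triangular membership shape are all illustrative assumptions; the paper's actual thresholds and fuzzy rules may differ.

```python
# Hedged sketch: four-threshold overload detection plus a triangular fuzzy
# membership for picking the VM to migrate. All constants are invented.

THRESHOLDS = {"cpu": 0.8, "ram": 0.85, "bw": 0.9, "disk": 0.9}

def overloaded(host_util):
    """A host is overloaded when any of the four utilizations crosses its threshold."""
    return any(host_util[k] > t for k, t in THRESHOLDS.items())

def fuzzy_membership(x, a=0.2, b=0.5, c=0.8):
    """Triangular membership peaking at b: prefer moderately utilized VMs,
    which are cheap to move yet meaningfully relieve the host."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

host = {"cpu": 0.95, "ram": 0.6, "bw": 0.4, "disk": 0.5}
vms = {"vm1": 0.15, "vm2": 0.5, "vm3": 0.75}   # per-VM utilization

if overloaded(host):
    migrate = max(vms, key=lambda v: fuzzy_membership(vms[v]))
    print(migrate)  # → vm2
```

The fuzzy score replaces a hard rule such as "always migrate the largest VM" with a graded preference, which is the usual motivation for fuzzy selection in consolidation.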
Citations: 1
An Ensemble Deep Neural Network Model for Onion-Routed Traffic Detection to Boost Cloud Security
IF 1 Q4 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-01-01 DOI: 10.4018/ijghpc.2021010101
Shamik Tiwari
Anonymous network communication using onion routing networks such as Tor guards the privacy of the sender by encrypting all messages in the overlay network. These days, onion-routed communication is not only used for legitimate purposes: cyber offenders also misuse onion routing for port scanning, hacking, exfiltration of stolen data, and other kinds of online fraud, and these cyber-crime attempts pose a serious threat to cloud security. Deep learning is a highly effective machine learning method for prediction and classification, and ensembling multiple models is an influential approach to increasing the efficiency of learning models. In this work, an ensemble deep learning-based classification model is proposed to distinguish Tor from non-Tor network traffic. Three different deep learning models are combined to form the ensemble, and the proposed model is also compared with other machine learning models. The classification results show the superiority of the proposed model over the other models.
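Only the ensembling mechanism is sketched here: three members vote by averaging their probability estimates. The paper combines three deep neural networks; the one-rule classifiers below are deliberately trivial placeholders, and the flow features, thresholds, and probabilities are all invented.

```python
# Hedged sketch of soft-voting ensembling for Tor / non-Tor classification.
# The three one-rule "models" are placeholders for the paper's deep networks.

def clf_pkt_size(flow):
    return 0.9 if flow["pkt_size"] > 500 else 0.2

def clf_duration(flow):
    return 0.8 if flow["duration"] > 30 else 0.3

def clf_port(flow):
    return 0.7 if flow["port"] in (9001, 9030) else 0.1

MODELS = (clf_pkt_size, clf_duration, clf_port)

def ensemble_predict(flow):
    """Soft voting: mean of member probabilities, thresholded at 0.5."""
    p = sum(m(flow) for m in MODELS) / len(MODELS)
    return ("tor" if p >= 0.5 else "non-tor"), round(p, 2)

print(ensemble_predict({"pkt_size": 600, "duration": 45, "port": 9001}))
print(ensemble_predict({"pkt_size": 200, "duration": 5, "port": 443}))
```

Averaging smooths out individual members' mistakes, which is the usual reason an ensemble beats its constituent models, as the abstract reports.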
Citations: 4