Pub Date: 2022-01-01 | DOI: 10.4018/ijcac.2022010102
T. Nisha, Amit Khandebharad
The DevOps development strategy is based on lean and agile principles and was developed to ensure faster delivery. It fosters collaboration among all stakeholders in the software development process and incorporates user feedback more quickly. The strategy aims to guarantee customer satisfaction, increase business value, and reduce the time needed to gather feedback and adjust deliverables. Organizations have identified a need to prioritize security in DevOps and have begun discussing how security can be embedded in it. This raises a mission-critical issue in many organizations, as it requires breaking down the barriers between the operations and security teams and reviewing many of the security policies in place. The challenge is to find the best way for DevOps to continue performing Continuous Integration and Continuous Delivery after security is embedded in the DevOps environment. This paper introduces a complete migration framework from DevOps to DevSecOps and identifies the attributes on which the migration framework can be evaluated.
{"title":"Migration From DevOps to DevSecOps: A Complete Migration Framework, Challenges, and Evaluation","authors":"T. Nisha, Amit Khandebharad","doi":"10.4018/ijcac.2022010102","DOIUrl":"https://doi.org/10.4018/ijcac.2022010102","url":null,"abstract":"DevOps development strategy is based on lean and agile principles and developed to ensure faster delivery. It ensures the collaboration of all stakeholders in the software development process and incorporates user’s feedback in a faster manner. This strategy is developed to guarantee customer satisfaction, increased business value, reduced time for bagging the feedback and adjusting the deliverables. They identified a requirement of prioritizing security in DevOps and started conferring about security to be embedded in DevOps. This introduced a mission-critical issue in many organizations as it requires breaking down of the barriers of operations and security team and review of many security policies in place. The challenge is to find the best way in DevOps can still perform Continuous Integration and Continuous Delivery after implanting security in a DevOps environment. This paper introduces a complete migration framework from DevOps to DevSecOps.This paper also identifies the attributes on which the migration framework can be evaluated.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123810452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Jain, Aakash Yadav, Manish Kumar, F. García-Peñalvo, Kwok Tai Chui, Domenico Santaniello
This paper proposes an efficient cloud-based approach to detecting and predicting driver drowsiness from facial expressions and activities. The work considers both the behavior and the facial expressions of the driver. Four different models with distinct features were evaluated: two VGG models, a CNN, and a ResNet. The VGG models were used to detect lip movement (yawning) and facial behavior, the CNN model captured details of the eyes, and the ResNet detected head nodding. The proposed approach exceeds the results of the benchmark model and provides an accurate, easy-to-use framework for real-time drowsiness detection on embedded devices. The authors trained the proposed models on the National Tsing Hua University (NTHU) Driver Drowsiness data set. The overall accuracy of the proposed approach is 90.1%.
{"title":"A Cloud-Based Model for Driver Drowsiness Detection and Prediction Based on Facial Expressions and Activities","authors":"A. Jain, Aakash Yadav, Manish Kumar, F. García-Peñalvo, Kwok Tai Chui, Domenico Santaniello","doi":"10.4018/ijcac.312565","DOIUrl":"https://doi.org/10.4018/ijcac.312565","url":null,"abstract":"This paper proposes an efficient approach to detecting and predicting drivers' drowsiness based on the cloud. This work focuses on the behavioral as well as facial expressions of the driver to detect drowsiness. This paper proposes an efficient approach to predicting drivers' drowsiness based on facial expressions and activities. Four different models with distinct features were experimented upon. Of these, two were VGG and the others were CNN and ResNet. VGG models were used to detect the movement of lips (yawning) and to detect facial behavior. A CNN model was used to capture the details of the eyes. ResNet detects the nodding of the driver. The proposed approach also exceeds the results set by the benchmark mode and provides high accuracy, an easy-to-use framework for embedded devices in real-time drowsiness detection. To train the proposed model, the authors have used the National Tsing Hua University (NTHU) Drivers Drowsiness data set. The overall accuracy of the proposed approach is 90.1%.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133858548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing has emerged as a basic infrastructure technology of the fourth industrial revolution and is being used in various fields. The authors analyze the requirements of academic and administrative affairs, educational affairs, and affiliated institutions in a university. The proposed cloud computing service models for universities are implemented using OpenStack on CentOS 7.3 and Windows. To evaluate performance, the authors compare factors including launch time, boot time, ping time, and software installation time. The economic benefits for the university are evaluated by comparing the adoption cost, maintenance cost, and total cost of the existing client-server system and the cloud computing system. Future work aims to implement a cloud computing system that can be applied to the entire university and, ultimately, a regional base cloud system that connects all universities in the region through one cloud computing system.
{"title":"A Case Study of Cloud Computing Service Models for the General Computing Environment in a University","authors":"Ho Yeon Kang, Jong Yun Lee, S. Noh","doi":"10.4018/ijcac.297096","DOIUrl":"https://doi.org/10.4018/ijcac.297096","url":null,"abstract":"Cloud computing technology has emerged as the basic infrastructure technology in the 4thindustrial revolution and has been utilized in various fields. Therefore, the authors analyze the requirements of academic and administrative affairs, education affairs, and affiliated institutions in the university. Proposed cloud computing service models in the universities are implemented using an OpenStack in CentOS 7.3 and Windows. To evaluate the performance, the author compares various performance factors including launching time, booting time, ping time, and software installation time. Lastly, the economic benefits in the university will be evaluated by comparing the adoption cost, maintenance cost, and total cost between the existing client-server system and cloud computing system. Future work aims to implement a cloud computing system that can be applied to the entire university. Finally, based on these studies, the authors will implement a regional base cloud system that can connect all universities in the region with one cloud computing system.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133988965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile ad hoc networks (MANETs) are composed of wireless mobile communication nodes. Communication between these nodes does not rely on a centralized system: a MANET is a network of randomly moving nodes that self-configure and self-organize. Routing is a fundamental topic in MANETs, and performance analysis of routing protocols is the focus of this study. Three routing protocols, AODV, DSR, and WRP, are compared in this article, using GloMoSim for simulation. The throughput, average end-to-end latency, and packet delivery ratio (PDR) of the routing protocols are examined in two scenarios based on mobility and node density. As node density rises, PDR and throughput rise with it, while low node density results in the shortest delay. AODV achieves a higher packet delivery ratio and throughput in both scenarios, while WRP has the shortest delay. The authors also analyze the average energy consumption of the best-performing routing protocol and draw conclusions about the efficiency of the ad hoc network.
{"title":"Performance Optimization of Multi-Hop Routing Protocols With Clustering-Based Hybrid Networking Architecture in Mobile Adhoc Cloud Networks","authors":"Deepak Srivastava, Ajay Kumar, Anupama Mishra, Varsa Arya, Ammar Almomani, Ching-Hsien Hsu, Domenico Santaniello","doi":"10.4018/ijcac.309932","DOIUrl":"https://doi.org/10.4018/ijcac.309932","url":null,"abstract":"Mobile networks, in particular, are composed of wireless cellular communication nodes (MANET). Communication between these mobile nodes is not under centric systems. MANET is a network of randomly traveling nodes that self-configure and self-organize. Routing is a fundamental topic of MANET, and performance analysis of routing protocols is the focus of this study. AODV, DSR, and WRP are three routing protocols that are compared in this article. Glomosim will be used for simulation. The throughput, average end-to-end latency, and packet delivery ratio of various routing systems are all examined. Two scenarios depending on mobility and node density are considered in this research. As node density rises, PDR and throughput rise with it. Low node density resulted in the shortest delay. AODV has a higher packet delivery ratio and throughput in both scenarios, while WRP has the shortest delay. The authors also analyzed the average energy consumption with a best routing protocol that was decided by the result and conclude the efficiency of the ad-hoc network.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126367470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Ammi, Oluwasegun A. Adedugbe, Fahad M. Al-Harby, E. Benkhelifa
As attackers continue to devise new means of exploiting vulnerabilities in computer systems, security personnel are doing their best to identify loopholes and threats. Analyzing threats in order to come up with effective mitigation techniques requires comprehensive information about them. Security analysts can represent and share cyber threat information through semantic knowledge graphs within the cybersecurity space. However, because the response to threats must be immediate, there should be no conflicting information. This calls for a standardized taxonomy, generally accepted within the cybersecurity space, for representing such information, ultimately making cyber threat intelligence (CTI) credible. This review examines existing CTI-based ontologies, taxonomies, and knowledge graphs. The absence of a standardized taxonomy identified here could be responsible for limited taxonomy encoding and integration among existing CTI-based ontologies, as well as missing interconnections between taxonomies and existing ontologies. Hence, the development of a standardized taxonomy will enhance CTI effectiveness.
{"title":"Taxonomical Challenges for Cyber Incident Response Threat Intelligence: A Review","authors":"M. Ammi, Oluwasegun A. Adedugbe, Fahad M. Al-Harby, E. Benkhelifa","doi":"10.4018/ijcac.300770","DOIUrl":"https://doi.org/10.4018/ijcac.300770","url":null,"abstract":"As attackers continue to devise new means of exploiting vulnerabilities in computer systems,security personnel are doing their best to identify loopholes and threats.Analysis of threats to come up with effective mitigation techniques requires all-encompassing information about them.Security analysts can represent and share cyber threat information with semantic knowledge graphs within cyber security space to access. However, there should be no conflicting information because the response to threats must be immediate.This calls for a standardized taxonomy that is generally accepted within the cybersecurity space to represent information,ultimately making cyber threat intelligence (CTI) credible.This review looks into existing CTI-based ontologies,taxonomies,and knowledge graphs.The absence of standardized taxonomy identified could be responsible for limited taxonomy encoding and integration among existing CTI-based ontologies, as well as missing interconnections between taxonomies and existing ontologies. Hence, the development of a standardized taxonomy will enhance CTI effectiveness","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130985834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abderraziq Semmoud, M. Hakem, B. Benmammar, Jean-Claude Charr
Cloud computing is a promising paradigm that offers users advantages in terms of cost, flexibility, and availability. Nevertheless, with potentially thousands of connected machines, faults become more frequent. Consequently, fault-tolerant load balancing becomes necessary in order to optimize resource utilization while ensuring the reliability of the system. Several fault tolerance techniques for cloud computing have been proposed in the literature, but they suffer from shortcomings: some rely on checkpoint-recovery, which increases the average waiting time and thus the mean response time, while others rely on task replication, which reduces the cloud's efficiency in terms of resource utilization under variable loads. To address these deficiencies, an efficient and adaptive fault-tolerant load-balancing algorithm is proposed. Using the CloudSim simulator, a series of test-bed scenarios is considered to assess the behavior of the proposed algorithm.
{"title":"A New Fault-Tolerant Algorithm Based on Replication and Preemptive Migration in Cloud Computing","authors":"Abderraziq Semmoud, M. Hakem, B. Benmammar, Jean-Claude Charr","doi":"10.4018/ijcac.305214","DOIUrl":"https://doi.org/10.4018/ijcac.305214","url":null,"abstract":"Cloud computing is a promising paradigm that provides users higher computation advantages in terms of cost, flexibility, and availability. Nevertheless, with potentially thousands of connected machines, faults become more frequent. Consequently, fault-tolerant load balancing becomes necessary in order to optimize resources utilization while ensuring the reliability of the system. Common fault tolerance techniques in cloud computing have been proposed in the literature. However, they suffer from several shortcomings: some fault tolerance techniques use checkpoint-recovery which increases the average waiting time and thus the mean response time. While other models rely on task replication which reduces the cloud's efficiency in terms of resource utilization under variable loads. To address these deficiencies, an efficient and adaptive fault tolerant algorithm for load balancing is proposed. Based on the CloudSim simulator, some series of test-bed scenarios are considered to assess the behavior of the proposed algorithm.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131591436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. A. Issa, Mustafa Hamzeh Al-Jarah, Ammar Almomani, Ahmad Al Nawasrah
Cloud computing provides very large storage space that can be accessed via an internet connection. The concept emerged to make it easier to store and share personal and corporate data, which can then be accessed from anywhere in the world as long as there is an internet connection; at the same time, large gaps have emerged around data theft and unauthorized viewing. Researchers have developed algorithms and methods to protect this data, but attempts to penetrate it have not stopped. In this research, the authors develop a method that combines XOR operations and a genetic algorithm to protect data on the cloud through encryption, while keeping the key from being lost or stolen. Data uploaded to cloud computing may be important, and no party should be allowed to see or steal it; it has therefore become imperative to protect and encrypt this data. The proposed algorithm uses XOR and genetic algorithms in the encryption process.
{"title":"Encryption and Decryption Cloud Computing Data Based on XOR and Genetic Algorithm","authors":"H. A. Issa, Mustafa Hamzeh Al-Jarah, Ammar Almomani, Ahmad Al Nawasrah","doi":"10.4018/ijcac.297101","DOIUrl":"https://doi.org/10.4018/ijcac.297101","url":null,"abstract":"Cloud computing is a very large storage space, can be accessed via an internet connection, this concept has appeared to facilitate the preservation of personal and corporate data and the easily of sharing, and this data can also be accessed from anywhere in the world as long as it is on the Internet, large gaps have emerged around data theft and viewing. Accordingly, researchers have developed algorithms and methods to protect this data, but the attempts to penetrate the data did not stop. In this research, we developed a method that combines XOR and Genetic algorithm to protect the data on the cloud through encryption operations and keep the key from being lost or stolen. The data that is uploaded to cloud computing may be important and we should not allow any party to see it or steal it. Therefore, it became imperative to protect this data and encrypt it. We have developed an algorithm that uses XOR and genetic algorithms in the encryption process.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124412858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet has become one of the most significant technologies, with a tremendous impact on everyday human life, and the Internet of Things (IoT) is a major technology that uses internet applications to make everything around us 'smart', providing easy access to physical devices and reducing human effort. In this paper, the authors analyze the role of artificial intelligence (AI) in the field of the Internet of Things. AI makes the network more 'intelligent', increasing the scope of its connectivity and its vast data streams. The paper analyzes the critical impact of artificial intelligence deployed in IoT, provides a quick overview of current IoT approaches and the challenges they face, and then discusses how IoT and AI together will play a vital role in the coming years with the emergence of the Internet of Intelligent Things (IoIT), which makes these devices even smarter.
{"title":"Future of Internet of Things: Enhancing Cloud-Based IoT Using Artificial Intelligence","authors":"Sana Khanam, Safdar Tanweer, S. Khalid","doi":"10.4018/ijcac.297094","DOIUrl":"https://doi.org/10.4018/ijcac.297094","url":null,"abstract":"Internet has become the most significant technology with tremendous impact on everyday human life, where IOT is a major technology which uses internet applications to make everything around us ‘Smart’ providing ease of accessibility of physical devices and reducing human effort. In this paper we analyze the role of Artificial Intelligence in field of Internet of Things or the IOT. .AI makes the network more ‘intelligent’ increasing the scope of its connectivity and vast data streams. In this paper, we will analyze the critical impact of Artificial Intelligence deployed in IOT. We will also provide a quick overview on the current approaches of IOT and the challenges faced. Later we discuss how IOT and Artificial Intelligence (AI) together play a vital role in the coming years with the emergence of Internet of Intelligent Things (IoIT) that make these devices even smarter.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115227207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The cloud is used to store and process data at a very high rate, and nearly everyone in the world uses it. However, a problem arises because data centers are not always positioned close to users: data reach the cloud by passing through various links, which introduces additional delay. This is why the world is now moving toward fog computing, which provides the capability to process data nearer to the IoT devices. During the past decade, IoT devices have grown rapidly, producing a tremendous amount of data every day. Processing this ever-growing data requires efficient algorithms that reduce the load on the cloud and return results faster and more precisely, and this processing should be done on fog nodes. In this paper, the authors study load balancing on fog nodes with a novel technique that distributes the load among different fog nodes so that none of them remains idle while others take longer to process the data.
{"title":"Fog Computing for Delay Minimization and Load Balancing","authors":"W. Akram, Z. Najar, A. Sarwar, Iraq Ahmad Reshi","doi":"10.4018/ijcac.312563","DOIUrl":"https://doi.org/10.4018/ijcac.312563","url":null,"abstract":"Cloud is used to store and process data at a very high rate. Moreover, nearly everyone in this world is using the cloud. However, the problem arises that the data centers are not positioned well. The data reach the cloud by passing through the various links, due to which more delays occur. So this world is now moving into fog. Fog computing provides us the capability to process data nearer to the IoT devices. During the past decade, IoT devices have been growing rapidly, resulting in the production of a tremendous amount of data every day. For the processing of this ever-growing data, efficient algorithms are required to reduce the load on the cloud and give the results in a faster and more precise manner. The processing should be done on the fog node to handle this issue. In this paper, the authors study load balancing on fog nodes with a novel technique so that they distribute the load among different fog nodes so that none of the fog nodes remains idle while other takes time for processing the data.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115411967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cryptographic hash functions and HMACs are used to achieve various security goals such as message integrity, message authentication, digital signatures, and entity authentication. This article proposes (1) a new hash function (QGMD5-384) and (2) an efficient message authentication code (QGMAC-384) based on a quasigroup. A quasigroup is a non-associative algebraic structure, and the number of quasigroups grows exponentially with their order. Note that the existing hash functions and HMACs that use quasigroups are vulnerable to prefix and suffix attacks. The security of the proposed hash function is analyzed by comparing it with MD5 and SHA-384, and the proposed QGMD5-384 is found to be more secure. QGMAC-384 is also analyzed against brute-force and forgery attacks and found to be resistant to them. The performance of the new schemes is compared with their counterparts, SHA-384 and HMAC-SHA-384: QGMD5-384 and QGMAC-384 are slightly slower than MD5 and HMAC-MD5, respectively, but faster than SHA-384 and HMAC-SHA-384.
{"title":"An Efficient Message Authentication Code Based on Modified MD5-384 Bits Hash Function and Quasigroup","authors":"Umesh Kumar, V. Venkaiah","doi":"10.4018/ijcac.308275","DOIUrl":"https://doi.org/10.4018/ijcac.308275","url":null,"abstract":"Cryptographic hash functions and HMACs are used to achieve various security goals such as message integrity, message authentication, digital signatures, and entity authentication. This article proposes (1) a new hash function (QGMD5-384) and (2) an efficient message authentication code (QGMAC-384) based on a quasigroup. A quasigroup is a non-associative algebraic structure and its number grows exponentially with its order. Note that the existing hash functions and HMACs that use quasigroups are vulnerable to prefix and suffix attacks. The security of the proposed hash function is analyzed by comparing it with the MD5 and SHA-384. It is found that the proposed QGMD5-384 is more secure. Also, QGMAC-384 is analyzed against brute force and forgery attacks and it is found to be resistant to these attacks. The performance of the new schemes is compared with their counterparts, such as SHA-384 and HMAC-SHA-384. It is observed that QGMD5-384 and QGMAC-384 are slightly slower than MD5 and HMAC-MD5, respectively, but faster than both the SHA-384 and the HMAC-SHA-384.","PeriodicalId":442336,"journal":{"name":"Int. J. Cloud Appl. Comput.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127764095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}