{"title":"A novel similarity measure of link prediction in bipartite social networks based on neighborhood structure","authors":"Fariba Sarhangnia, Shima Mahjoobi, Samaneh Jamshidi","doi":"10.1515/comp-2022-0233","DOIUrl":"https://doi.org/10.1515/comp-2022-0233","url":null,"abstract":"Abstract Link prediction is one of the methods of social network analysis. Bipartite networks are a type of complex network that can model many natural events. In this study, a novel similarity measure for link prediction in bipartite networks is presented. Because classical social network link prediction methods are less efficient and effective when applied to bipartite networks, it is necessary to use bipartite-specific methods to solve this problem. The purpose of this study is to provide a centralized and comprehensive method, based on the neighborhood structure, that performs better than the existing classical methods. The proposed method combines several criteria based on the neighborhood structure. Here, the classical link prediction criteria are redefined by adapting them to the bipartite network. These modified criteria constitute the main component of the proposed similarity measure. In addition to its simplicity and low complexity, this method is highly efficient. The simulation results show that the proposed method performs best, outperforming MetaPath by 0.5%, FriendLink by 1.32%, and Katz by 1.8% on the F-measure criterion.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"112 - 122"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41516809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
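The abstract does not give the measure's formula, but the core idea it names (adapting classical neighborhood criteria to the bipartite case) can be illustrated. A minimal sketch, assuming a hypothetical length-3 path count rather than the paper's actual measure: the two endpoints of a candidate cross-side link in a bipartite graph never share a neighbor, so plain common-neighbor counting always returns zero and is commonly replaced by counting paths of length 3.

```python
# Hypothetical sketch: in a bipartite graph, classical common-neighbour
# scores are always zero across the two sides, so a standard modification
# counts length-3 paths between the candidate endpoints instead.
def l3_score(adj, u, v):
    """Number of length-3 paths between u and v in a bipartite graph.

    adj maps each node to the set of its neighbours (all on the other side).
    For each neighbour w of v (same side as u), count neighbours shared
    between u and w.
    """
    return sum(len(adj[u] & adj[w]) for w in adj[v])

# Toy user-item graph: users u1, u2 on one side, items i1-i3 on the other.
adj = {
    "u1": {"i1", "i2"},
    "u2": {"i2", "i3"},
    "i1": {"u1"},
    "i2": {"u1", "u2"},
    "i3": {"u2"},
}
# Candidate link u1-i3 has one length-3 path: u1-i2-u2-i3.
print(l3_score(adj, "u1", "i3"))  # 1
```

Higher scores suggest more plausible missing links; real measures typically also normalize by node degrees.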
{"title":"Greatest-common-divisor dependency of juggling sequence rotation efficient performance","authors":"Joseph A. Erho, J. I. Consul, B. R. Japheth","doi":"10.1515/comp-2022-0234","DOIUrl":"https://doi.org/10.1515/comp-2022-0234","url":null,"abstract":"Abstract In a previous experimental study of the three-way-reversal and juggling sequence rotation algorithms, using 20,000,000 elements of type LONG in Java, the average execution times were shown to be 49.66761 ms and 246.4394 ms, respectively. These results revealed appreciably low performance in the juggling algorithm despite its proven optimality. However, the juggling algorithm has also exhibited efficiency for some offset ranges. Because of this pattern, the current study focuses on investigating the source of the inefficiency in the average performance. Samples were extracted from the previous experimental data, presented differently, and analyzed both graphically and in tabular form. Greatest-common-divisor values from the data that equal the offsets were used. As in the previous study, the Java rotation was used to simulate the ordering of tasks for safety and efficiency in the context of real-time task scheduling. The outcome of the investigation shows that juggling rotation competes favorably with three-way-reversal rotation (and is even better in a few cases) for certain offsets, but performs poorly for the rest. This study identifies the poorest performances around offsets in the neighborhood of the square root of the sequence size. From this outcome, the study strongly advises application developers (especially for real-time systems) to be mindful of where and how they use juggling rotation.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"92 - 102"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46679192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
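The gcd dependency the study analyzes comes from the structure of the juggling algorithm itself: a rotation by offset d moves the elements along gcd(n, d) disjoint cycles, writing each element exactly once (the source of its move-optimality), but striding through memory by d, which hurts cache locality for unfavorable offsets. A minimal sketch (the study's code was Java; this Python port is illustrative only):

```python
from math import gcd

def juggle_rotate(a, d):
    """Left-rotate list a by d positions in place using the juggling scheme.

    Elements move along gcd(n, d) independent cycles; each element is
    written exactly once, which is why the algorithm is move-optimal,
    but successive accesses are d apart, which can defeat the cache.
    """
    n = len(a)
    if n == 0:
        return a
    d %= n
    for start in range(gcd(n, d)):  # one pass per cycle
        tmp = a[start]
        i = start
        while True:
            j = (i + d) % n  # next element of this cycle
            if j == start:
                break
            a[i] = a[j]
            i = j
        a[i] = tmp
    return a

print(juggle_rotate(list(range(8)), 3))  # [3, 4, 5, 6, 7, 0, 1, 2]
```

With n = 8 and d = 3, gcd(8, 3) = 1, so a single cycle visits all eight slots; with d = 2 there would be two cycles of four.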
{"title":"Designing of fault-tolerant computer system structures using residue number systems","authors":"V. Krasnobayev, A. Kuznetsov, A. Kiian","doi":"10.1515/comp-2020-0171","DOIUrl":"https://doi.org/10.1515/comp-2020-0171","url":null,"abstract":"Abstract This article discusses computing systems that operate in residue number systems (RNSs). The main direction of improving computer systems (CSs) is increasing the speed of arithmetic operations and the reliability of their functioning. Encoding data in RNS solves the problem of optimal redundancy, i.e., such computing systems provide maximum reliability under restrictions on weight and size characteristics. This article proposes new structures of fault-tolerant CSs operating in RNS when an active fault-tolerance method (dynamic redundancy) is applied. The use of the active fault-tolerance method in RNSs provides higher reliability. In addition, as the number of digits in a CS increases, the efficiency of using the proposed structures increases.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"66 - 74"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45898286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
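The RNS encoding the article builds on can be sketched briefly. A toy example with an arbitrary three-modulus base (not the article's parameters): each integer is represented by its residues modulo pairwise-coprime moduli, addition and multiplication work digit-wise without carries, and decoding uses the Chinese Remainder Theorem. The fault-tolerance machinery (redundant residue channels) is not shown here.

```python
from math import prod

MODULI = (7, 11, 13)  # pairwise-coprime base; dynamic range M = 7*11*13 = 1001

def to_rns(x, moduli=MODULI):
    """Encode an integer as its tuple of residues; arithmetic is then carry-free."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI):
    """Decode a residue tuple with the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) is the modular inverse
    return x % M

# Carry-free addition: add residues channel by channel, then decode.
a, b = 29, 15
s = tuple((ra + rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI))
print(from_rns(s))  # 44
```

Because each residue channel is independent, a faulty channel can be detected and masked when extra (redundant) moduli are added, which is the basis of the fault-tolerant structures the article proposes.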
{"title":"Security and privacy issues in federated healthcare – An overview","authors":"Jansi Rani Amalraj, Robert Lourdusamy","doi":"10.1515/comp-2022-0230","DOIUrl":"https://doi.org/10.1515/comp-2022-0230","url":null,"abstract":"Abstract Securing medical records is a significant task in healthcare communication. The major setback during the transfer of medical data in the electronic medium is the inherent difficulty in preserving data confidentiality and patients’ privacy. Innovation in technology and improvements in the medical field have brought numerous advancements in transferring medical data with foolproof security. In today’s healthcare industry, federated network operation is gaining significance for dealing with distributed network resources because it handles privacy issues efficiently. The design of a federated security system for healthcare services is an intense research topic. This article highlights the importance of federated learning in healthcare and discusses the privacy and security issues in communicating e-health data.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"57 - 65"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43797937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-modal biometric fusion intelligent traffic recognition system combined with real-time data operation","authors":"Wei Xu, Yujin Zhai","doi":"10.1515/comp-2022-0252","DOIUrl":"https://doi.org/10.1515/comp-2022-0252","url":null,"abstract":"Abstract Intelligent traffic recognition systems are the development direction of future traffic systems. Such a system effectively integrates advanced information technology, data communication and transmission technology, electronic sensing technology, control technology, and computer technology into the entire ground traffic management system, establishing a real-time, accurate, and efficient integrated transportation management system that operates over a wide range and in all directions. The aim of this article is to integrate cross-modal biometrics into an intelligent traffic recognition system combined with real-time data operations. Based on the cross-modal recognition algorithm, the system can better re-identify vehicles across modalities by building a model. First, this article presents a general introduction to the cross-modal recognition method. Then, experimental analysis is conducted on the classification of vehicle images recognized by the intelligent transportation system, the complexity of vehicle logo recognition, and the recognition of vehicle images under different lighting. Finally, the cross-modal recognition algorithm is introduced into the dynamic analysis of the intelligent traffic recognition system, and the cross-modal traffic recognition system experiment is carried out. The experimental results show that the intraclass distribution loss function can improve the Rank 1 recognition rate and mAP value by 6–7 percentage points over the baseline method. This shows that improving modal-invariant features by reducing the distribution difference between different modal images of the same vehicle can effectively deal with the feature information imbalance caused by modal changes.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"332 - 344"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41451055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A student-based central exam scheduling model using A* algorithm","authors":"M. S. Başar, Sinan Kul","doi":"10.1515/comp-2022-0237","DOIUrl":"https://doi.org/10.1515/comp-2022-0237","url":null,"abstract":"Abstract In this study, a student-based placement model using the A* algorithm is proposed and applied to the problem of placing courses in exam sessions. The model is applied to the midterm and final exams conducted by the Open Education Faculty. Open education exams were chosen for the application because they are administered across the country and more than 100,000 students participate. The main problem is to obtain a suitable distribution that satisfies many constraints simultaneously. In the current system, courses are placed into sessions once, using curriculum information, and this placement plan is applied in all exams. When placement is done according to curriculum information, the courses cannot be placed into sessions effectively and efficiently because of the large number of common courses and the large number of students taking the exam. This makes the booklets more expensive and the organization more prone to errors. Both the opening of new programs and the steady increase in the number of students make it necessary to place courses into sessions dynamically each semester. In addition, to prevent conflicts with the calendars of other central exams, all exams must be conducted in three sessions. In this study, a better solution was obtained by using a different model from the one currently in use. With this solution, the courses of successful students with few courses are distributed across all sessions, while the difficult courses of unsuccessful students with many courses are gathered in the same session. This study can support future work on two issues. The first is the approach of selecting the course to be placed in the booklet by the number of students who will take it, instead of by the number of departments in which it is taught. The second is to seek the most suitable solution by running performance tests on the many algorithms whose performance has been established in the academic literature.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"181 - 190"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48049071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
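The abstract does not specify the paper's state space or heuristic, so the following is only a generic A* sketch over a toy weighted graph, with a hypothetical zero heuristic (which reduces A* to Dijkstra's algorithm); an admissible, informative heuristic is what makes A* effective on a real scheduling search.

```python
import heapq

def a_star(start, goal, neighbours, h):
    """Minimal A*.

    neighbours(n) yields (next_node, step_cost) pairs; h(n) is an admissible
    heuristic estimate of the remaining cost from n to the goal.
    Returns (total_cost, path) or None if the goal is unreachable.
    """
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbours(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# Toy weighted graph: the direct edge A-C costs more than the A-B-C route.
edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
cost, path = a_star("A", "C", lambda n: edges[n], lambda n: 0)
print(cost, path)  # 2 ['A', 'B', 'C']
```

In an exam-scheduling setting, a node would instead encode a partial assignment of courses to sessions and the heuristic would lower-bound the remaining conflict or booklet cost; those details are the paper's contribution, not shown here.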
{"title":"Word2Vec: Optimal hyperparameters and their impact on natural language processing downstream tasks","authors":"Tosin P. Adewumi, F. Liwicki, M. Liwicki","doi":"10.1515/comp-2022-0236","DOIUrl":"https://doi.org/10.1515/comp-2022-0236","url":null,"abstract":"Abstract Word2Vec is a prominent model for natural language processing tasks. Similar inspiration is found in the distributed embeddings (word vectors) of recent state-of-the-art deep neural networks. However, the wrong combination of hyperparameters can produce poor-quality embeddings. The objective of this work is to show empirically that an optimal combination of Word2Vec hyperparameters exists, and to evaluate various combinations. We compare them with the publicly released, original Word2Vec embedding. Both intrinsic and extrinsic (downstream) evaluations are carried out, including named entity recognition and sentiment analysis. Our main contributions include showing that the best model is usually task-specific, that high analogy scores do not necessarily correlate positively with F1 scores, and that performance does not depend on data size alone. If ethical considerations to save time, energy, and the environment are made, then relatively smaller corpora may do just as well, or even better, in some cases. Increasing the embedding dimension beyond a point leads to poor quality or performance. In addition, using a relatively small corpus, we obtain better WordSim scores and corresponding Spearman correlation, and better downstream performance (with significance tests), compared to the original model, which was trained on a 100 billion-word corpus.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"134 - 141"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42899205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
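The intrinsic (WordSim-style) evaluation mentioned above scores an embedding by the Spearman rank correlation between the model's cosine similarities for word pairs and human similarity judgements. A self-contained sketch with made-up toy vectors and scores (not the paper's data or models):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def spearman(xs, ys):
    """Spearman rank correlation (assumes no ties), as used for WordSim scoring."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy embeddings and invented human similarity judgements for three pairs.
emb = {"king": (0.9, 0.1), "queen": (0.85, 0.2), "apple": (0.1, 0.9), "fruit": (0.2, 0.8)}
pairs = [("king", "queen"), ("apple", "fruit"), ("king", "apple")]
human = [9.0, 8.5, 1.0]
model = [cosine(emb[a], emb[b]) for a, b in pairs]
print(round(spearman(model, human), 2))  # 1.0 when the rankings agree perfectly
```

A hyperparameter sweep then repeats this scoring for each trained embedding (varying, e.g., dimension, window size, and architecture) and compares the correlations.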
{"title":"Research on the virtual simulation experiment evaluation model of e-commerce logistics smart warehousing based on multidimensional weighting","authors":"Ganglong Fan, Bo Fan, Hongsheng Xu, Chuqiao Wang","doi":"10.1515/comp-2022-0249","DOIUrl":"https://doi.org/10.1515/comp-2022-0249","url":null,"abstract":"Abstract Through an analysis of the current state of research at home and abroad, this article finds that evaluation standards and methods are lacking in virtual simulation experiments for e-commerce logistics smart warehousing, which seriously affects the standardization and rationality of such experiments. To solve the problems in evaluating current virtual simulation experiments, this article proposes a virtual simulation experiment evaluation model for e-commerce logistics smart warehousing based on multidimensional weighting. The article first sorts out the basic process of e-commerce logistics smart warehousing experiment activities and establishes the evaluation object. Then, based on the duality degree of the output of the experimental steps, it proposes a method that conforms to the corresponding operation steps. Thus, a three-dimensional evaluation model covering the completion degree, the reasonableness, and the completion time of the operation steps is constructed. An automatic scoring model is then proposed based on the combined three-dimensional weighted evaluation of the experimental steps. Finally, the feasibility and convenience of the evaluation model are verified through experimental analysis.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"314 - 322"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43001277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Big data network security defense mode of deep learning algorithm","authors":"Ying Yu","doi":"10.1515/comp-2022-0257","DOIUrl":"https://doi.org/10.1515/comp-2022-0257","url":null,"abstract":"Abstract With the rapid development and progress of big data technology, people can already use big data to judge the transmission and distribution of network information and make better decisions in time, but big data also faces major network threats such as Trojan horses and viruses. Traditional network security functions generally do not start until a network attack has developed to a certain extent, making it difficult to ensure the security of big data networks. To protect the network security of big data and improve its ability to defend against attacks, this article introduces deep learning algorithms into the study of big data network security defense modes. The test results show that introducing deep learning algorithms into the network security model can enhance the network's security defense capability by 5.12% and proactively detect and block threatening cyber attacks. At the same time, the security defense mode evaluates the network security of big data and analyzes potential network security risks in detail, preventing risks before they occur and effectively protecting network security in the context of big data.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"345 - 356"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48869891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Post-quantum cryptography-driven security framework for cloud computing","authors":"H. C. Ukwuoma, A. J. Gabriel, A. Thompson, B. Alese","doi":"10.1515/comp-2022-0235","DOIUrl":"https://doi.org/10.1515/comp-2022-0235","url":null,"abstract":"Abstract Data security in the cloud has been a major issue since the inception and adoption of cloud computing. Various frameworks have been proposed, and yet data breaches prevail. With encryption being the dominant method of cloud data security, the advent of quantum computing implies an urgent need for a model that provides adequate data security for both classical and quantum computing, as most cryptosystems will be rendered susceptible and obsolete, though some will stand the test of quantum computing. The article proposes a model that applies a variant of the McEliece cryptosystem, which has been tipped to replace Rivest–Shamir–Adleman (RSA) in the quantum computing era, to secure access control data, and a variant of the N-th degree truncated polynomial ring units (NTRU) cryptosystem to secure cloud user data. The simulation of the proposed McEliece algorithm showed that it has a better time complexity than the existing McEliece cryptosystem, and the novel tweaking of parameters S and P further improves the security of the proposed algorithms. However, the simulation of the proposed NTRU algorithm revealed that the existing NTRU cryptosystem has a superior time complexity to the proposed one.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"142 - 153"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49345890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
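The NTRU variant itself is not specified in the abstract, but the operation at the core of any NTRU-style scheme, cyclic convolution in the ring Z_q[x]/(x^N - 1), is easy to illustrate. A toy sketch (parameters are illustrative and far too small for any security):

```python
def conv_mul(f, g, q):
    """Cyclic convolution product in Z_q[x]/(x^N - 1), the ring underlying NTRU.

    f and g are coefficient lists of length N (index k holds the x^k
    coefficient); exponents wrap around modulo N because x^N = 1 in this
    ring, and all coefficient arithmetic is mod q.
    """
    n = len(f)
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[(i + j) % n] = (h[(i + j) % n] + fi * gj) % q
    return h

# (1 + x) * (1 + x + x^2) = 1 + 2x + 2x^2 + x^3, and x^3 wraps to 1,
# giving 2 + 2x + 2x^2 (coefficients mod q = 32).
print(conv_mul([1, 1, 0], [1, 1, 1], 32))  # [2, 2, 2]
```

Key generation, encryption, and decryption in NTRU are all built from this product with small-coefficient polynomials and two moduli; those steps, and the paper's parameter tweaks, are beyond this sketch.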