Pub Date: 2022-08-10 DOI: 10.1108/ijpcc-03-2022-0123
Shoayee Alotaibi
Purpose Blockchain (BC) is computationally expensive, has limited scalability and incurs significant bandwidth overhead and delays, which make it a poor fit for the Internet of Things (IoT) setting. The authors propose a lightweight scalable blockchain (LSB) optimized for IoT requirements and investigate LSB in a smart home setting as a representative case study to enable wider IoT applications. Low-resource devices in the smart home benefit from a centralized manager that establishes shared units for communication and processes all incoming and outgoing requests. Design/methodology/approach Federated learning (FL) and blockchain have attracted considerable attention because of blockchain's immutability and the associated security and privacy benefits. The security challenges of FL and the IoT can potentially be overcome by BC. Findings LSB achieves decentralization by forming an overlay network in which higher-resource devices jointly manage a public BC, while federated learning helps ensure privacy and security. Originality/value The overlay is organized into distinct clusters to reduce overhead, and each cluster head is responsible for managing the public BC. LSB incorporates several optimizations, including a lightweight consensus algorithm, distributed trust and a throughput management mechanism.
"A novel Internet of Things and federated learning-based privacy protection in blockchain technology" (International Journal of Pervasive Computing and Communications)
Purpose With the advent of technology, a huge amount of data is transmitted and received over the internet. Large bandwidth and storage are required to exchange and store this data, so compression of the data to be transmitted over the channel is unavoidable. The main purpose of the proposed system is to use bandwidth effectively: videos are compressed at the transmitter’s end and reconstructed at the receiver’s end. Compression also reduces storage requirements. Design/methodology/approach The paper proposes a novel compression technique for three-dimensional (3D) videos using a zig-zag 3D discrete cosine transform (DCT). The method applies a 3D DCT to the videos, followed by a zig-zag scanning process; finally, a run-length encoding technique converts the data into a single bit stream for transmission. The videos are reconstructed using the inverse 3D DCT, inverse zig-zag scanning (quantization) and inverse run-length coding. The proposed method is simple and less complex than convolutional techniques. Findings Coding reduction, code word reduction, peak signal-to-noise ratio (PSNR), mean square error, compression percentage and compression ratio are calculated, demonstrating the advantage of the proposed method over convolutional methods. Originality/value Zig-zag quantization and run-length encoding with the 3D DCT give compression of up to 90% with a PSNR of 41.98 dB for 3D video. The proposed method can be used in multimedia applications where bandwidth, storage and data costs are the major issues.
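The zig-zag scan and run-length stages can be illustrated compactly. The sketch below is a 2D illustration of the idea (the paper's method adds a third, temporal axis); the block values are made-up quantized DCT coefficients, chosen so that zeros cluster at the high-frequency corner as they typically do after quantization:

```python
def zigzag_indices(n):
    """Visiting order for an n x n block: walk each anti-diagonal
    (constant row + col), alternating direction, as in JPEG."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 == 0 else reversed(diag))
    return order

def run_length_encode(seq):
    """Collapse runs of equal values into (value, count) pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

# made-up quantized DCT coefficients: the energy sits in the
# low-frequency corner, so the zig-zag order yields one long zero run
block = [[9, 4, 0, 0],
         [3, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
scanned = [block[r][c] for r, c in zigzag_indices(4)]
print(run_length_encode(scanned))  # [(9, 1), (3, 1), (4, 1), (0, 1), (1, 1), (0, 11)]
```

Grouping the trailing zeros into a single `(0, 11)` pair is what makes the scan order pay off: the decoder inverts the run-length pairs, then scatters values back through the same index order.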
"Video compression based on zig-zag 3D DCT and run-length encoding for multimedia communication systems", Sravanthi Chutke, Nandhitha N.M., Praveen Kumar Lendale. Pub Date: 2022-07-25 DOI: 10.1108/ijpcc-01-2022-0012
Pub Date: 2022-07-21 DOI: 10.1108/ijpcc-03-2022-0127
Sanjiv Rao Godla, J. Haro, Suresh Ch, R. Krishna
Purpose The purpose of the study is to develop a cloud-based supporting model for green computing. Information technology (IT) plays a significant role in today's world, and because of the rapid growth of the IT business and its high greenhouse gas emissions, data centers are increasingly considering green IT techniques to reduce their environmental impact. Both developing and underdeveloped countries are widely adopting green infrastructure and services over the cloud because of its cost-effectiveness, scalability and guaranteed high uptime. Several studies have observed that cloud computing provides more than green information and communication technology (ICT) services and solutions; therefore, anything offered over the cloud also needs to be green to reduce its adverse influence on the environment. Design/methodology/approach This paper examines the rationale for the use of green ICT in higher education and identifies crucial success factors for implementing green ICT, based on an analysis of selected educational organizations and interviews with key academic experts from universities in Ethiopia in general and Bule Hora University in particular. Findings The paper describes the design and development of a green cloud selection supporting model for green ICT in higher educational institutions that helps cloud service customers choose the greenest cloud-based ICT products and services. Originality/value This study may be a significant source of new information for green ICT design and implementation in higher education institutions seeking to preserve the environment and limit its impact on human life.
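The abstract does not disclose the model's actual selection criteria or weights; a generic weighted-criteria sketch of how a cloud selection supporting model could rank services, with hypothetical criteria (`energy_efficiency`, `cost_saving`, `uptime`) and hypothetical weights, might look like:

```python
def green_score(service, weights):
    """Weighted sum of normalized criteria in [0, 1]; higher is greener.
    The criteria and weights are illustrative, not from the paper."""
    return sum(weights[k] * service[k] for k in weights)

def select_greenest(services, weights):
    """Return the service with the highest green score."""
    return max(services, key=lambda s: green_score(s, weights))

weights = {"energy_efficiency": 0.5, "cost_saving": 0.2, "uptime": 0.3}
services = [
    {"name": "A", "energy_efficiency": 0.9, "cost_saving": 0.4, "uptime": 0.95},
    {"name": "B", "energy_efficiency": 0.6, "cost_saving": 0.9, "uptime": 0.99},
]
print(select_greenest(services, weights)["name"])  # prints "A"
```

Here service A wins (score 0.815 vs 0.777) because the hypothetical weighting favours energy efficiency over cost, which is the kind of trade-off such a supporting model would surface to customers.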
"Development of cloud selection supporting model for green information and communication technology services"
Pub Date: 2022-07-14 DOI: 10.1108/ijpcc-03-2022-0113
P. Tripathy, Anurag Shrivastava, Varsha Agarwal, Devangkumar Umakant Shah, C. L, S. .. Akilandeeswari
Purpose This paper aims to provide security and privacy for Byzantine clients against different types of attacks. Design/methodology/approach The authors use a federated learning algorithm based on matrix mapping for data privacy over edge computing. Findings By using the Softmax layer's probability distribution of the model, Byzantine tolerance can be increased from 40% to 45% under the blocking-convergence attack, and the edge backdoor attack can be stopped. Originality/value In the tests, the aggregation method using the Softmax layer's probability distribution can defend against at least 30% Byzantine clients.
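The abstract does not spell out how the Softmax layer's probability distribution is used. One plausible reading, sketched below purely as an assumption, is that each client's softmax output on a shared probe input is compared against the coordinate-wise median distribution, and distant clients are excluded from aggregation; the function names and the tolerance `tol` are illustrative, not from the paper:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def filter_byzantine(client_logits, tol=0.5):
    """Keep indices of clients whose softmax output on a shared probe
    input stays close (L1 distance) to the coordinate-wise median
    distribution; distant clients are treated as Byzantine."""
    probs = [softmax(z) for z in client_logits]
    k = len(probs[0])
    median = [sorted(p[i] for p in probs)[len(probs) // 2] for i in range(k)]
    return [i for i, p in enumerate(probs)
            if sum(abs(a - b) for a, b in zip(p, median)) < tol]

# three honest clients agree on class 0; the fourth reports the opposite
logits = [[2.0, 0.1], [1.9, 0.2], [2.1, 0.0], [0.0, 3.0]]
print(filter_byzantine(logits))  # the outlier (index 3) is dropped
```

The median is robust as long as honest clients form a majority, which is consistent with the tolerance thresholds (30% to 45% Byzantine clients) the abstract reports.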
"Federated learning algorithm based on matrix mapping for data privacy over edge computing"
Pub Date: 2022-07-12 DOI: 10.1108/ijpcc-02-2022-0056
V. Sriram, M. Sikandar, Eti Khatri, S. Choubey, Ity Patni, L. K., K. Gulati
Purpose The young population of the globe is defined as individuals aged 15 to 24 years. According to statistics from the Instituto Brasileiro de Geografia e Estatística (IBGE), in 2017 the female population aged 15 to 19 years was the second largest cohort, behind only the 35-to-39-year group; at that time, the Brazilian male population was larger. The difficulties of the young generation affect the preceding generation and promote social dynamism. Worldwide data show that the young generation and the digital world are constantly linked, yet approximately one-third of the population in 2017 had no access to the internet. Design/methodology/approach The worldwide movement around topics such as strategy on its threefold basis and Industry 4.0 makes it possible to establish a link to corporate responsibility towards society. The present study was produced from 1 March 2020 to 2 September 2020 through human resources and a literature evaluation relating to the ideas of strategy, Industry 4.0, social responsibility and youth development; its motive is the global development of youth. After studying the literature and gathering information, two recommendations are made for "analyzing corporate social responsibility and Industry 4.0 with a focus on youth development: a strategic framework for human resource management". Findings The adoption of defensible practices and the technology brought forth by the industrial revolution is emphasized worldwide. Originality/value A focus on the use of these ideas is essential so that young people can be absorbed into the labour market workforce. To achieve this, the corporate social responsibility (CSR) idea is combined in this theoretically triple-grounded recent study.
"Federate learning of corporate social authority and industry 4.0 that focus on young people: a strategic management framework for human resources"
Pub Date: 2022-07-08 DOI: 10.1108/ijpcc-03-2022-0106
Mukesh Soni, N. Nayak, Ashima Kalra, S. Degadwala, Nikhil Kumar Singh, Shweta Singh
Purpose The purpose of this paper is to improve the existing edge computing paradigm to maintain balanced energy usage. Design/methodology/approach A new greedy algorithm is proposed to balance energy consumption in edge computing. Findings The new greedy algorithm balances energy more efficiently than the random approach by an average of 66.59%. Originality/value The results shown in this paper are better than those of existing algorithms.
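The abstract gives no pseudocode for the greedy algorithm; a standard greedy load-balancing heuristic of the kind it describes, assigning each task (largest first) to the currently least-loaded node, might look like the sketch below, where the task names and energy costs are made up:

```python
import heapq

def greedy_balance(task_costs, n_nodes):
    """Assign each task, largest energy cost first, to the node with
    the lowest accumulated energy use (longest-processing-time greedy)."""
    heap = [(0.0, node) for node in range(n_nodes)]  # (energy used, node id)
    heapq.heapify(heap)
    assignment = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        energy, node = heapq.heappop(heap)  # least-loaded node so far
        assignment[task] = node
        heapq.heappush(heap, (energy + cost, node))
    return assignment

# hypothetical per-task energy costs spread over two edge nodes
tasks = {"t1": 5.0, "t2": 3.0, "t3": 3.0, "t4": 2.0}
print(greedy_balance(tasks, 2))  # node loads end up 7.0 vs 6.0
```

Against a random assignment, which can easily pile the expensive tasks onto one node, this sorted-greedy policy keeps the per-node energy totals close, which is the balancing behaviour the paper measures.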
"Energy efficient multi-tasking for edge computing using federated learning"
Pub Date: 2022-06-24 DOI: 10.1108/ijpcc-02-2022-0073
Maitri Patel, Rajan Patel, Nimisha Patel, P. Shah, K. Gulati
Purpose In the field of cryptography, authentication, secrecy and identification can be accomplished by the use of secret keys in any computer-based system. The need to acquire certificates endorsed by a certificate authority (CA) to authenticate users for the exchange of encrypted communications is one of the most significant constraints on the wide adoption of public key cryptography (PKC), as the process is time-consuming and susceptible to error. Identity-based cryptography (IBC) reduces PKC's certificate and key management operating costs, and identity-based encryption (IBE) is a crucial primitive in IBC. The IBE scheme was introduced to diminish the complexity of certificate and key management, but it gives rise to the key escrow and key revocation problems, which can expose encrypted information to unauthorised users. Design/methodology/approach This paper compares the results of IIBES with the existing system and provides a security analysis; the proposed system can be used for security in federated learning. Findings Furthermore, IIBES can be implemented with other encryption/decryption algorithms, such as elliptic curve cryptography (ECC), to compare execution efficiency. The proposed system can be used for security in federated learning. Originality/value A novel enhanced IBE scheme, IIBES, is suggested and implemented in the Java programming language using the RSA algorithm. It eradicates the key escrow problem by eliminating the need for a single key generation centre (KGC), and the key revocation problem by using a sub-KGC (SKGC) and a shared secret with a nonce. IIBES also provides authentication through identity-based signatures (IBS) and can be used for securing data in federated learning.
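As a toy illustration of the escrow-avoidance idea only, not the paper's actual RSA-based IIBES construction, the sketch below derives a user key from partial secrets held by two separate authorities plus a user-held nonce, so that neither the KGC nor the SKGC alone can reconstruct it; the hash-based combination and all names are hypothetical:

```python
import hashlib
import secrets

def partial_key(master_secret: bytes, identity: str) -> bytes:
    """Each authority derives a partial key for an identity from its
    own master secret (hypothetical hash-based stand-in for IBE)."""
    return hashlib.sha256(master_secret + identity.encode()).digest()

def user_key(kgc_part: bytes, skgc_part: bytes, nonce: bytes) -> bytes:
    """Combine the KGC and SKGC partials with a user-held nonce, so no
    single authority can reconstruct the final key (no key escrow)."""
    mixed = bytes(a ^ b for a, b in zip(kgc_part, skgc_part))
    return hashlib.sha256(mixed + nonce).digest()

kgc_master = secrets.token_bytes(32)   # held only by the KGC
skgc_master = secrets.token_bytes(32)  # held only by the sub-KGC
nonce = secrets.token_bytes(16)        # held only by the user
key = user_key(partial_key(kgc_master, "alice@example.com"),
               partial_key(skgc_master, "alice@example.com"), nonce)
print(len(key))  # a 32-byte symmetric key
```

Revocation fits the same shape: rotating the SKGC's partial (or the shared nonce) invalidates old keys without reissuing every identity, which is the role the abstract assigns to the SKGC and nonce.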
"IIBES: a proposed framework to improve the identity-based encryption system for securing federated learning"
Pub Date: 2022-06-22 DOI: 10.1108/ijpcc-02-2022-0047
Shubangini Patil, Rekha Patil
Purpose Until now, a great deal of research has been done and applied to provide security and data integrity from one user to another, such as third-party auditing and several schemes for securing data, such as key generation with the help of encryption algorithms like Rivest–Shamir–Adleman (RSA). Some of the related previous work follows. The remote damage control resuscitation (RDCR) scheme by Yan et al. (2017) is based on minimum bandwidth and enables a third party to perform public integrity verification. Although it supports repair management for corrupted data and tries to recover the original data, in practice it fails to do so, and thus it incurs more computation and communication cost than the proposed system. In a paper by Chen et al. (2015), an idea for cloud storage data sharing was developed using broadcast encryption. This technique aims to accomplish both broadcast data and dynamic sharing, allowing users to join and leave a group without affecting the electronic press kit (EPK). The theoretical notion was sound and new, but the system's practicality and efficiency were not acceptable, and its security was also jeopardised because it proposed adding a member without altering any keys. In other research, an identity-based encryption strategy for data sharing was investigated, along with key management and metadata techniques to improve model security (Jiang and Guo, 2017). Forward and reverse ciphertext security is supplied there; however, it is more difficult to put into practice, and one of its limitations is that it applies only to very large amounts of cloud storage, though it extends support for dynamic data modification through batch auditing. The important feature of the secure and efficient privacy-preserving provable data possession scheme for cloud storage was to support every important feature, including data dynamics, privacy preservation, batch auditing and blocker verification, for an untrusted, outsourced storage model (Pathare and Chouragadec, 2017). A homomorphic signature mechanism based on a new ID was devised to avoid the use of public key certificates; this signature system was shown to be resistant to ID attacks on the random oracle model and to forged-message attacks (Nayak and Tripathy, 2018; Lin et al., 2017). When storing data in a public cloud, one issue is that the data owner must give an enormous number of keys to users for them to access the files. Here, the key-aggregate searchable encryption (KASE) scheme was publicly unveiled for the first time: while sharing a huge number of documents, the data owner simply has to supply a single key to the user, and the user only needs to provide a single trapdoor. Although the concept is innovative, the KASE technique does not apply to the increasingly common manufactured cloud. Cui et al. (2016) claim that as the volume of data grows, the distribution management system (DMS) becomes unable to handle it. Various provable data possession (PDP) schemes have therefore been developed, yet practically all of them lack security; hence, bilinear pairing-based PDP was introduced, which is mainly suitable for DMS because of its robustness and efficiency. The main objective of this research is to design and implement a secure cloud infrastructure for sharing group data; the study provides an efficient and secure protocol for multiuser data in the cloud, allowing many users to share data easily. Design/methodology/approach The selection scheme design (SSD) comprises two algorithms: Algorithm 1 is designed for a limited number of users and Algorithm 2 is redesigned for multiple users. The authors also designed the SSD security protocol, a three-phase model consisting of Phase 1, Phase 2 and Phase 3: Phase 1 generates the parameters and distributes private keys, Phase 2 generates a common key for all available users and Phase 3 aims to prevent dishonest users from cheating during data sharing. Findings Data sharing in cloud computing provides unlimited computational resources and storage to enterprises and individuals, but it also raises privacy and security issues such as fault tolerance, reliability, confidentiality and data integrity. The key consensus mechanism is a fundamental cryptographic primitive for secure communication; motivated by this, the authors developed the SSD mechanism, which includes multiple users in the data-sharing model. Originality/value For security, files shared in the cloud should be encrypted and later decrypted for users to access them. For the evaluation of the SSD approach, the authors considered an ideal environment: the proposed model was evaluated using Java as the programming language and Eclipse as the integrated development environment, on hardware with 4 GB of RAM and an i7 processor, with the PBC library used for pairing operations (PBC library, 2022). The number of users was varied for comparison with the existing RDIC method (Li et al., 2020). For the purposes of the SSD security protocol, a prime number is chosen as the number of users.
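A minimal sketch of the three-phase SSD flow described above, with toy parameters: the modulus `P`, generator `G` and the dealer-side combination in Phase 2 are illustrative assumptions, not the paper's actual pairing-based protocol:

```python
import secrets

P = 2**127 - 1  # toy prime modulus; far too small for real use
G = 3           # toy generator

def phase1(n_users):
    """Phase 1: generate parameters and one private key per user."""
    return [secrets.randbelow(P - 2) + 1 for _ in range(n_users)]

def phase2(private_keys):
    """Phase 2: derive a single common key from every user's share.
    A dealer combines the shares here; a real group key agreement
    would avoid any one party seeing all of them."""
    return pow(G, sum(private_keys) % (P - 1), P)

def phase3(announced_public_share, private_key):
    """Phase 3: cheating check, a user's announced public share must
    be consistent with the private key they actually hold."""
    return announced_public_share == pow(G, private_key, P)

keys = phase1(3)
group_key = phase2(keys)
assert all(phase3(pow(G, k, P), k) for k in keys)   # honest users pass
assert not phase3(pow(G, keys[0], P), keys[0] + 1)  # a mismatch is caught
```

The phase split mirrors the abstract: setup and key distribution (Phase 1), one common key for all users (Phase 2), and a consistency check that stops a dishonest user from announcing a share they do not hold (Phase 3).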
"An optimized and efficient multiuser data sharing using the selection scheme design secure approach and federated learning in cloud environment"
Pub Date : 2022-06-22DOI: 10.1108/ijpcc-12-2021-0297
Suvarna Patil, Prasad Gokhale
Purpose With the advent of AI-federated technologies, it is feasible to perform complex tasks in an industrial Internet of Things (IIoT) environment by enhancing the throughput of the network and by reducing the latency of transmitted data. Communications in IIoT and Industry 4.0 require the handshaking of multiple technologies to support heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications are responsible for generating a vast set of data based on diverse characteristics. Obtaining optimum throughput in an IIoT environment requires efficient processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with the least amount of delay is what IIoT applications need. Although some existing scheduling strategies address delay concerns, faster transmission of data and optimal throughput should also be addressed along with the handling of transmission delay. Hence, this study aims to provide a fair mechanism that handles throughput, transmission delay and faster transmission of data. The proposed work provides a link-scheduling algorithm, termed delay-aware resource allocation, that allocates computing resources to computation-sensitive tasks by reducing overall latency and increasing the overall throughput of the network. First, a multi-hop delay model is developed with multistep delay prediction using an AI-federated long short-term memory (LSTM) neural network, which serves as a foundation for the subsequent design. Then, a link-scheduling algorithm is designed for routing data in an efficient manner. 
Findings The extensive experimental results reveal that the average end-to-end delay, considering processing, propagation, queueing and transmission delays, is minimized with the proposed strategy. Experiments show that advances in machine learning have led to a smart, collaborative link-scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of the AI-federated LSTM is compared with existing approaches, and it outperforms the other techniques by achieving 98.2% accuracy. Design/methodology/approach With the increase in IoT devices, the demand for IoT gateways has grown, which increases the cost of network infrastructure. As a result, the proposed system uses low-cost intermediate gateways in this study. Each gateway may use a different communication technology for data transmission within an IoT network. Gateways are therefore heterogeneous, with hardware support limited to the technologies associated with the wireless sensor networks. Data communication fairness at each gateway is achieved in an IoT network by considering dynamic IoT traffic and link-scheduling problems to achi
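The four delay components named above (processing, propagation, queueing and transmission) combine into a simple multi-hop end-to-end delay model, which can be sketched as follows. The per-hop figures, units and names (`Hop`, `end_to_end_delay`) are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Hop:
    """One link on the path, with the four classic delay components."""
    processing_s: float    # time to examine the packet at the gateway
    queueing_s: float      # time waiting in the output buffer
    distance_m: float      # physical link length
    bandwidth_bps: float   # link transmission rate

    def delay(self, packet_bits: float, prop_speed: float = 2e8) -> float:
        transmission = packet_bits / self.bandwidth_bps   # push bits onto link
        propagation = self.distance_m / prop_speed        # signal travel time
        return self.processing_s + self.queueing_s + transmission + propagation

def end_to_end_delay(path: list, packet_bits: float) -> float:
    """End-to-end delay is the sum of per-hop delays along the path."""
    return sum(hop.delay(packet_bits) for hop in path)

# Example: a 12 000-bit packet over two heterogeneous gateway links.
path = [
    Hop(processing_s=1e-4, queueing_s=2e-3, distance_m=100, bandwidth_bps=1e6),
    Hop(processing_s=1e-4, queueing_s=5e-3, distance_m=2_000, bandwidth_bps=250e3),
]
print(f"{end_to_end_delay(path, 12_000):.4f} s")  # → 0.0672 s
```

In this toy path the slower second link's transmission delay dominates, which is exactly the kind of bottleneck a delay-aware link scheduler would route around.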
{"title":"AI-federated novel delay-aware link-scheduling for Industry 4.0 applications in IoT networks","authors":"Suvarna Patil, Prasad Gokhale","doi":"10.1108/ijpcc-12-2021-0297","DOIUrl":"https://doi.org/10.1108/ijpcc-12-2021-0297","url":null,"abstract":"\u0000Purpose\u0000With the advent of AI-federated technologies, it is feasible to perform complex tasks in industrial Internet of Things (IIoT) environment by enhancing throughput of the network and by reducing the latency of transmitted data. The communications in IIoT and Industry 4.0 requires handshaking of multiple technologies for supporting heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications are responsible for generating a vast set of data based on diverse characteristics. To obtain an optimum throughput in an IIoT environment requires efficiently processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with the least amount of delay is the need of the IIoT applications. Although some existing scheduling strategies address delay concerns, faster transmission of data and optimal throughput should also be addressed along with the handling of transmission delay. Hence, this study aims to focus on a fair mechanism to handle throughput, transmission delay and faster transmission of data. The proposed work provides a link-scheduling algorithm termed as delay-aware resource allocation that allocates computing resources to computational-sensitive tasks by reducing overall latency and by increasing the overall throughput of the network. First of all, a multi-hop delay model is developed with multistep delay prediction using AI-federated neural network long–short-term memory (LSTM), which serves as a foundation for future design. 
Then, link-scheduling algorithm is designed for data routing in an efficient manner. The extensive experimental results reveal that the average end-to-end delay by considering processing, propagation, queueing and transmission delays is minimized with the proposed strategy. Experiments show that advances in machine learning have led to developing a smart, collaborative link scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of AI-federated LSTM is compared with the existing approaches and it outperforms over other techniques by achieving 98.2% accuracy.\u0000\u0000\u0000Design/methodology/approach\u0000With an increase of IoT devices, the demand for more IoT gateways has increased, which increases the cost of network infrastructure. As a result, the proposed system uses low-cost intermediate gateways in this study. Each gateway may use a different communication technology for data transmission within an IoT network. As a result, gateways are heterogeneous, with hardware support limited to the technologies associated with the wireless sensor networks. Data communication fairness at each gateway is achieved in an IoT network by considering dynamic IoT traffic and link-scheduling problems to achi","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43806530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-06-17DOI: 10.1108/ijpcc-02-2022-0045
A. I., K. Selvakumar
Purpose Localization of the nodes is crucial for gaining access to different nodes that provide provisioning in extreme areas where networks are unreachable. Localization of nodes has become a significant study in which multiple features of the distance model are implicated in a predictive and heuristic model for each set of localization parameters governing the design of energy minimization with the proposed adaptive threshold gradient feature (ATGF) model. A received signal strength indicator (RSSI) model with node-estimated features is applied to the localization problem and enhanced with a hybrid cumulative approach (HCA) algorithm for node optimization with distance prediction. Design/methodology/approach Using a theoretical or empirical signal propagation model, the RSSI (known transmitting power) is converted to distance, the received power (measured at the receiving node) is converted to distance and the distance is converted to RSSI (known receiving power). As a result, the approximate distance between the transceiver node and the receiver may be determined by measuring the intensity of the received signal. After acquiring information on the distance between the anchor node and the unknown node, the location of the unknown node may be determined using either the trilateral technique or the maximum likelihood estimation approach, depending on the circumstances, using federated learning. Findings Improvement of localization for wireless sensor networks has become one of the prime design features for estimating different conditional changes, external and internal. One such improvement is observed in this paper via HCA, where each localization feature is depicted with machine learning algorithms addressing the energy reduction problem for each newly localized node in Section 5. 
All parametric features affecting the energy levels and the localization problem for new and extinct nodes are handled with the hybrid cumulative approach, as in Section 4. The proposed algorithm (HCA with ATGF) produces a significant change in the energy levels of newly generated nodes and of nodes that are non-active for a stipulated time, as reported in the figures and tables of Section 6. Originality/value Localization of the nodes is crucial for gaining access to different nodes that provide provisioning in extreme areas where networks are unreachable. Localization of nodes has become a significant study in which multiple features of the distance model are implicated in a predictive and heuristic model for each set of localization parameters governing the design of energy minimization with the proposed ATGF model. An RSSI model with node-estimated features is applied to the localization problem and enhanced with the HCA algorithm for node optimization with distance prediction.
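The RSSI-to-distance conversion and trilateral localization steps described above can be sketched as follows. The log-distance path-loss model and least-squares trilateration are the standard techniques consistent with this description; the specific constants (reference power at 1 m, path-loss exponent) and function names are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.7):
    """Log-distance path-loss model: RSSI = P0 - 10*n*log10(d),
    where P0 is the received power at 1 m and n the path-loss exponent."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Least-squares position from >= 3 anchors with estimated distances.
    Subtracting the last anchor's circle equation from each other one
    linearises the system: 2(a_i - a_m).x = |a_i|^2 - |a_m|^2 - d_i^2 + d_m^2."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2)
         - d[:-1] ** 2 + d[-1] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Unknown node at (3, 4); anchors at known positions with exact distances.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [float(np.linalg.norm(true_pos - np.array(a))) for a in anchors]
print(trilaterate(anchors, dists))  # ≈ [3. 4.]
```

With noise-free distances the linear system is exact; with real RSSI-derived distances the least-squares solve absorbs the measurement error, which is why more than three anchors help.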
{"title":"Hybrid cumulative approach for localization of nodes with adaptive threshold gradient feature on energy minimization using federated learning","authors":"A. I., K. Selvakumar","doi":"10.1108/ijpcc-02-2022-0045","DOIUrl":"https://doi.org/10.1108/ijpcc-02-2022-0045","url":null,"abstract":"\u0000Purpose\u0000Localization of the nodes is crucial for gaining access of different nodes which would provision in extreme areas where networks are unreachable. The feature of localization of nodes has become a significant study where multiple features on distance model are implicated on predictive and heuristic model for each set of localization parameters that govern the design on energy minimization with proposed adaptive threshold gradient feature (ATGF) model. A received signal strength indicator (RSSI) model with node estimated features is implicated with localization problem and enhanced with hybrid cumulative approach (HCA) algorithm for node optimizations with distance predicting.\u0000\u0000\u0000Design/methodology/approach\u0000Using a theoretical or empirical signal propagation model, the RSSI (known transmitting power) is converted to distance, the received power (measured at the receiving node) is converted to distance and the distance is converted to RSSI (known receiving power). As a result, the approximate distance between the transceiver node and the receiver may be determined by measuring the intensity of the received signal. After acquiring information on the distance between the anchor node and the unknown node, the location of the unknown node may be determined using either the trilateral technique or the maximum probability estimate approach, depending on the circumstances using federated learning.\u0000\u0000\u0000Findings\u0000Improvisation of localization for wireless sensor network has become one of the prime design features for estimating the different conditional changes externally and internally. 
One such feature of improvement is observed in this paper, via HCA where each feature of localization is depicted with machine learning algorithms imparting the energy reduction problem for each newer localized nodes in Section 5. All affected parametric features on energy levels and localization problem for newer and extinct nodes are implicated with hybrid cumulative approach as in Section 4. The proposed algorithm (HCA with AGTF) has implicated with significant change in energy levels of nodes which are generated newly and which are non-active for a stipulated time which are mentioned and tabulated in figures and tables in Section 6.\u0000\u0000\u0000Originality/value\u0000Localization of the nodes is crucial for gaining access of different nodes which would provision in extreme areas where networks are unreachable. The feature of localization of nodes has become a significant study where multiple features on distance model are implicated on predictive and heuristic model for each set of localization parameters that govern the design on energy minimization with proposed ATGF model. An RSSI model with node estimated features is implicated with localization problem and enhanced with HCA algorithm for node optimizations with distance predicting.\u0000","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42247435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}