The development of modern digital economies requires trusted digital asset management (DAM), for which blockchain technology is increasingly being adopted. However, the current architecture for constructing blockchain-based DAM systems (BDAMSs) is inadequate. Existing BDAMSs adopt a modified layered architecture, which enriches the database layer with a blockchain platform that acts as a third party to process DAM business logic through pre-written smart contracts. This architecture faces four issues that make it neither credible nor customizable to DAM demands: 1) pseudo decentralization, 2) lack of asset orientation, 3) contract dependency, and 4) heavy chain load. To overcome these issues, we propose the One-Network-Multi-Chain Architecture (ONMCA), which allows multiple heterogeneous chains to be established within the same network. ONMCA enables diverse digital assets to be managed in a customizable way through the following features: 1) asset stakeholders are allowed to join the blockchain network, eliminating third parties; 2) transactions are designed to portray changes in asset states, making the system asset-oriented; and 3) a control layer is added to take over business logic, while smart contracts are dedicated full-time to regulating asset transactions. We formalize ONMCA and analyze it comprehensively; the results show that ONMCA meets the requirements of DAM and is well suited to building credible and adaptive BDAMSs.
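The asset-oriented transaction design in point 2 can be sketched as a per-asset state chain, where each transaction records a state transition rather than a value transfer. The class and field names below are illustrative assumptions, not the paper's actual data model:

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass(frozen=True)
class AssetTransaction:
    """Toy asset-oriented transaction: it portrays a state change of one
    asset and links to the previous transaction of that same asset."""
    asset_id: str
    prev_state: str
    new_state: str
    prev_tx_hash: str  # ties transactions of the same asset into a chain

    def tx_hash(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# Each asset evolves along its own chain within one network.
genesis = AssetTransaction("asset-42", "none", "registered", "0" * 64)
update = AssetTransaction("asset-42", "registered", "licensed", genesis.tx_hash())

# A valid successor must continue from the predecessor's state and hash.
valid = (update.prev_state == genesis.new_state
         and update.prev_tx_hash == genesis.tx_hash())
```

A verifier can thus replay an asset's full history by walking its hash-linked state transitions, without any third-party platform mediating the business logic.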
{"title":"ONMCA: One-Network-Multi-Chain Architecture for customizable asset-oriented blockchain systems","authors":"Liang Wang, Wenying Zhou, Lina Zuo, Haibo Liu, Wenchi Ying","doi":"10.1007/s12083-024-01698-8","DOIUrl":"https://doi.org/10.1007/s12083-024-01698-8","url":null,"abstract":"<p>The development of modern digital economies requires trusted digital asset management (DAM), for which blockchain technology is increasingly being adopted. However, the current architecture for constructing blockchain-based DAM systems (BDAMSs) is inadequate. Existing BDAMSs adopt a modified layered architecture, which enriches the database layer by adding a blockchain platform that acts as a third party to process DAM business logic with pre-written smart contracts. This architecture faces four issues that make it non-credible and non-customizable to DAM demands: 1) pseudo decentralization, 2) not asset-oriented, 3) contract dependency, and 4) high load of chains. To overcome these issues, we propose the One-Network-Multi-Chain Architecture (ONMCA), which allows multiple heterogeneous chains to be established within a same network. ONMCA enables diverse digital assets to be managed in a customizable way through the following features: 1) asset stakeholders are allowed to join the blockchain network, eliminating third parties; 2) transactions are designed to portray the changes in asset states, making the system asset-oriented; and 3) a control layer is added to take over business logic, and smart contracts are forced to regulate asset transactions full-time. 
We formalize ONMCA and analyze it comprehensively, and the results show that ONMCA meets the requirements of DAM and is qualified to build credible and adaptive BDAMSs.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"94 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-11, DOI: 10.1007/s12083-024-01696-w
S. Kishore Verma, K. Lokeshwaran, J. Martin Sahayaraj, J. S. Adeline Johnsana
In the design of wireless sensor networks (WSNs), maximizing network lifetime and sustaining energy stability is a challenging problem, since such networks comprise compact, energy-limited sensor nodes that cooperate during data routing. Existing clustering-based routing mechanisms achieve energy efficiency and attempt to minimize the distance between the cluster head (CH) and the sink node to improve network lifetime. Swarm intelligence algorithms and fuzzy logic are well-suited computational intelligence techniques for NP-hard problems such as multi-hop route selection. In this paper, a Modified Dingo Optimization Algorithm-based Clustering Mechanism (MDOACM) is proposed to address the limitations of clustering protocols with respect to cluster head (CH) lifetime and cluster quality. The MDOACM-based clustering protocol uses Interval Type-2 Fuzzy Logic (IT2FL) to determine the trust level of each sensor node, since an untrustworthy node adversely affects data quality and reliability. It specifically uses MDOA to achieve better clustering with a balanced trade-off between exploration and exploitation, so that frequent re-clustering is prevented. It effectively excludes malicious nodes while minimizing energy consumption and enhancing network lifetime. It also adopts a communication system that supports the sensors in attaining the objective with reduced energy and a maximized confidence level during the transmission of full exploration data. The results of MDOACM confirm an average improvement in network lifetime of 23.18% and 25.16% with respect to different energy levels and densities of sensor nodes.
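The CH-selection step described above can be illustrated with a simplified fitness function that rewards residual energy and trust and penalizes distance to the sink, with untrusted nodes filtered out first. The weights, thresholds, and field names are assumptions for illustration, not the paper's actual MDOA fitness:

```python
# Simplified stand-in for the CH-selection fitness that MDOA would optimize.
# Weights and the trust threshold are illustrative assumptions.
def ch_fitness(node, w_energy=0.5, w_trust=0.3, w_dist=0.2, max_dist=100.0):
    # Higher residual energy and trust are better; distance to sink is penalized.
    return (w_energy * node["energy"]
            + w_trust * node["trust"]
            - w_dist * node["dist_to_sink"] / max_dist)

nodes = [
    {"id": 1, "energy": 0.9, "trust": 0.80, "dist_to_sink": 40.0},
    {"id": 2, "energy": 0.7, "trust": 0.95, "dist_to_sink": 20.0},
    {"id": 3, "energy": 0.4, "trust": 0.50, "dist_to_sink": 10.0},
]

# Untrustworthy nodes (per the IT2FL trust evaluation) are excluded up front.
candidates = [n for n in nodes if n["trust"] >= 0.6]
cluster_head = max(candidates, key=ch_fitness)
```

In the full scheme, MDOA would search over many such candidate assignments instead of this one-shot greedy pick, balancing exploration and exploitation to avoid frequent re-clustering.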
{"title":"Energy efficient multi-objective cluster-based routing protocol for WSN using Interval Type-2 Fuzzy Logic modified dingo optimization","authors":"S. Kishore Verma, K. Lokeshwaran, J. Martin Sahayaraj, J. S. Adeline Johnsana","doi":"10.1007/s12083-024-01696-w","DOIUrl":"https://doi.org/10.1007/s12083-024-01696-w","url":null,"abstract":"<p>In the design of Wireless sensor networks (WSNs), maximizing network lifetime and sustaining energy stability is identified as the challenging problem since it comprises of compact sized and energy limited sensor nodes that cooperates during data routing. The existing clustering-based routing mechanisms accomplished energy efficiency and attempted to minimize the distance between the cluster head (CH) and the sink node for network lifetime improvement. The adoption of swarm intelligence algorithms and fuzzy logic is determined to the ideal computational intelligence techniques which are suitable for NP-hard problem like the multi-hop route selection process. In this paper, A Modified Dingo Optimization Algorithm-based Clustering Mechanism (MDOACM) is proposed for addressing the limitations of the clustering protocol with respect to cluster head (CH) lifetime and cluster quality. This MDOACM-based clustering protocol utilized Interval Type-2 Fuzzy Logic (IT2FL) for determining the trust level of each sensor node since the existence of an untrustworthy node introduces adverse impact on the data quality and reliability. It specifically used MDOA for achieving better clustering with balanced trade-off between the rate of exploration and exploitation such that frequent re-clustering is prevented. It effectively prevented malicious nodes with minimized energy consumption and enhanced network lifetime. It also adopted a communication system that supports the sensors in attaining the objective with reduced energy and maximized confidence level during the transmission of full exploration data. 
The results of MDOACM confirmed an average improvement in network lifetime of 23.18% and 25.16% with respect to different energy levels and density of sensor nodes.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"5 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-11, DOI: 10.1007/s12083-024-01700-3
Guangfu Wu, Xin Lai, Daojing He, Sammy Chan, Xiaoyan Fu
In distributed systems, Byzantine fault tolerance plays a critical role in ensuring normal system operation, particularly in the presence of malicious nodes. However, challenges remain in enhancing the security and reliability of Byzantine fault-tolerant systems. This paper addresses these challenges with an improved Byzantine fault-tolerant approach based on stake evaluation and improved consistent hashing. We propose a method that leverages node stakes to enhance system security and reliability by allocating different trust values. Additionally, we improve the consistent hashing technique so that it operates effectively in a Byzantine fault-tolerant environment. By introducing redundant nodes on the hash ring to mitigate the impact of malicious nodes, we enhance system fault tolerance and scalability. Experimental results demonstrate a significant improvement in system security and performance using this approach. These findings suggest that our method holds considerable potential for widespread application in Byzantine fault tolerance, supporting the development of more reliable blockchain systems.
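The redundant-nodes-on-the-hash-ring idea can be sketched with a standard consistent-hash ring that places several virtual points per physical node and returns multiple distinct owners for each key, so a request can be cross-checked rather than trusted to one possibly Byzantine node. This is a minimal sketch; the paper's stake-weighted variant would presumably vary the number of virtual points with each node's trust value:

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class HashRing:
    """Consistent-hash ring with redundant virtual points per physical node,
    so no single node exclusively owns a contiguous arc of the keyspace."""
    def __init__(self, nodes, replicas=3):
        self.ring = sorted((h(f"{n}#{i}"), n)
                           for n in nodes for i in range(replicas))
        self.keys = [k for k, _ in self.ring]

    def owners(self, key, count=2):
        """Return `count` distinct physical nodes clockwise from the key's
        position, enabling cross-checking of a possibly faulty owner."""
        idx = bisect_right(self.keys, h(key)) % len(self.ring)
        found, i = [], idx
        while len(found) < count:
            node = self.ring[i % len(self.ring)][1]
            if node not in found:
                found.append(node)
            i += 1
        return found

ring = HashRing(["n1", "n2", "n3", "n4"])
owners = ring.owners("tx-1001", count=2)
```

Querying two (or more) distinct owners lets the client detect a divergent answer from a malicious node, trading extra lookups for fault tolerance.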
{"title":"Improving byzantine fault tolerance based on stake evaluation and consistent hashing","authors":"Guangfu Wu, Xin Lai, Daojing He, Sammy Chan, Xiaoyan Fu","doi":"10.1007/s12083-024-01700-3","DOIUrl":"https://doi.org/10.1007/s12083-024-01700-3","url":null,"abstract":"<p>In the context of distributed systems, Byzantine fault tolerance plays a critical role in ensuring the normal operation of the system, particularly when facing with malicious nodes. However, challenges remain in enhancing the security and reliability of Byzantine fault-tolerant systems. This paper addresses these challenges by improving a Byzantine fault-tolerant approach based on stake evaluation and improved consistency hashing. We propose a method that leverages node stakes to enhance system security and reliability by allocating different trust values. Additionally, we introduce improvements to the consistency hashing technique, enabling its effective operation in a Byzantine fault-tolerant environment. By introducing redundant nodes on the hash ring to mitigate the impact of malicious nodes, we enhance system fault tolerance and scalability. Experimental results demonstrate a significant improvement in system security and performance using this approach. 
These findings suggest that our method holds considerable potential for widespread application in the field of Byzantine fault tolerance, supporting the development of more reliable blockchain systems.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"4 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-10, DOI: 10.1007/s12083-024-01647-5
Xingguang Zhou, Weihan Li, Lin Zhong
Data security is a crucial issue for aviation business transactions. To prevent privacy leakage for the business participants, it is essential to construct a credible transaction environment. For this purpose, we construct a strict cryptographic scheme based on a consortium blockchain that addresses two key perspectives. First, an identity-based homomorphic scheme is proposed so that the encapsulated transaction result can be correctly calculated and verified for the flowing amount. Second, a supervised function is incorporated into the homomorphic scheme: an additional public key is added to the encryption trapdoor and assigned to the supervisor. Consequently, both the airline and the supervisor, acting as the two recipients, can independently decrypt the ciphertext without time-consuming interaction. Experimental results show that the system achieves an encryption time of 15 ms, with decryption times of 15 ms and 15.45 ms for the two recipients. Compared to popular cryptocurrency schemes, the system achieves the supervised function without compromising efficiency. Finally, an application network architecture is put forward for the supervised privacy-preservation transaction system.
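The "calculated under encryption, then verified" property rests on additive homomorphism. The toy Paillier sketch below demonstrates only that building block, with deliberately tiny primes; it is not the paper's identity-based two-recipient scheme, whose trapdoor construction is different:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic): multiplying two
# ciphertexts yields an encryption of the sum of the plaintext amounts.
# Toy-sized primes for illustration only; NOT secure parameters.
p, q = 47, 59
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)            # valid because gcd(lam, n) == 1 here

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    # g = n + 1 simplification: (1+n)^m * r^n mod n^2
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(120), encrypt(35)
total = decrypt((c1 * c2) % n2)   # ciphertext product -> plaintext sum
```

A verifier holding only ciphertexts can thus check that flowing amounts balance, while decryption is reserved to the key holders (in the paper, both the airline and the supervisor).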
{"title":"A supervised privacy preservation transaction system for aviation business","authors":"Xingguang Zhou, Weihan Li, Lin Zhong","doi":"10.1007/s12083-024-01647-5","DOIUrl":"https://doi.org/10.1007/s12083-024-01647-5","url":null,"abstract":"<p>Data security is a crucial issue for aviation business transaction. In order to prevent privacy leakage for the business participants, it is essential to construct a credible transaction environment. For this purpose, we construct a strict cryptographic scheme based on consortium blockchain that addresses two key perspectives. Firstly, an identity-based homomorphic scheme is proposed, so that the encapsulated transaction result can be correctly calculated and verified for the flowing amount. Secondly, the supervised function is incorporated to the homomorphic scheme. One more public key is added into the encryption trapdoor, and this key is assigned to the supervisor. Consequently, both the airline and the supervisor who work as the two recipients can independently decrypt the ciphertext without time-consuming interaction. Experimental results show that the system achieves encryption time of 15 ms, and decryption time is 15 ms and 15.45 ms for both recipients. Compared to the popular cryptocurrency schemes, the system significantly achieves the supervised function without compromising efficiency. 
Finally, the application network architecture is put forward for the supervised privacy preservation transaction system.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"214 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140596182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-04, DOI: 10.1007/s12083-024-01670-6
R. Shanmugapriya, Santhosh Kumar SVN
The Internet of Things (IoT) is a collection of physical objects with integrated technologies that sense, interact, and collaborate with other smart objects to collect data from the deployed environment and send it to the base station. Data dissemination is a network management service provided to IoT devices by the base station to monitor and manage device-related configuration parameters in the network. In data dissemination, it is essential to identify the legitimate nodes that require reprogramming and reconfiguration during data transmission, in order to ensure the security and reliability of the network. Since a growing number of devices are being reprogrammed to exchange data and commands autonomously in IoT, securing the disseminated configuration parameters is essential; an efficient authentication mechanism is therefore required to prevent the various types of attacks that occur during data dissemination. In this paper, an energy-efficient Swan Intelligent based Clustering Technique (SICT) is proposed to provide efficient clustering of nodes in the network. Moreover, a trust-based secured lightweight authentication protocol is proposed to provide better authentication and secure data dissemination to IoT devices. Additionally, the proposed protocol employs fuzzy logic to discover an optimal route by selecting only trusted nodes during the routing process. The advantages of the proposed system are that it improves security during data dissemination and optimizes energy by identifying the relevant devices that require configuration parameters. The proposed system is implemented in NS3 with realistic simulation parameters, namely energy efficiency, network lifetime, throughput, computational cost, communication cost, average signing time, average verification time, packet delivery ratio, and network delay. The simulation results show that the proposed protocol reduces average energy consumption by 34%, computational cost by 41.85%, communication cost by 36.83%, network delay by 31.66%, signing time by 26.25%, and verification time by 33.46%. Moreover, the proposed system improves the packet delivery ratio by 30% and provides efficient authentication to mitigate various types of attacks during data dissemination, compared with other existing protocols in the IoT environment.
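The fuzzy next-hop selection over trusted nodes can be sketched with triangular membership functions over trust and residual energy, combined with a min (AND) rule. The membership shapes and the single rule are illustrative assumptions, not the paper's actual fuzzy rule base:

```python
# Illustrative fuzzy scoring of candidate next hops; only trusted,
# energetic neighbors score well. Shapes/rules are assumptions.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def route_preference(trust, energy):
    # Rule: preferred IF trust is high AND energy is high,
    # suppressed in proportion to "trust is low".
    strength = min(tri(trust, 0.4, 1.0, 1.6), tri(energy, 0.3, 1.0, 1.7))
    return strength * (1.0 - tri(trust, -0.6, 0.0, 0.6))

# (trust, residual energy) per neighbor node.
neighbors = {"a": (0.9, 0.80), "b": (0.3, 0.95), "c": (0.7, 0.40)}
next_hop = max(neighbors, key=lambda k: route_preference(*neighbors[k]))
```

Note how neighbor "b", despite its high residual energy, is excluded outright because its trust value falls below the "high trust" support, matching the protocol's trusted-nodes-only routing.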
{"title":"An energy efficient Swan Intelligent based Clustering Technique (SICT) with fuzzy based secure routing protocol in IoT","authors":"R. Shanmugapriya, Santhosh Kumar SVN","doi":"10.1007/s12083-024-01670-6","DOIUrl":"https://doi.org/10.1007/s12083-024-01670-6","url":null,"abstract":"<p>Internet of Things (IoT) is the collection of physical objects which consists of integrated technologies to sense, interact and collaborate with other smart objects to collect the data from the deployed environment and send it to the base station. Data dissemination is a network management service which is provided to devices of IoT, by the base station to monitor and manage the device related configuration parameter in the network. In data dissemination, it is very much essential to identify the legitimate nodes which are required for reprogramming and reconfiguring for device configuration during data transmission in order to ensure the security and reliability of the network. Since a greater number of devices are being reprogrammed to exchange data and commands autonomously in IoT, providing security to the disseminated configuration parameters is very essential. Therefore, efficient security authentication mechanism is required to prevent the various types of attacks which occurs during data dissemination. In this paper, an energy efficient Swan Intelligent based Clustering Technique (SICT) has been proposed to provide efficient clustering of nodes in the network. Moreover, trust based secured lightweight authentication protocol is proposed to provide better authentication and secure data dissemination to the devices of IoT. Additionally, the proposed protocol employs fuzzy logic to discover optimal route by selecting only trusted nodes during routing process. 
The advantages of the proposed system are it improves the security during data dissemination and optimizes the energy by identifying the relevant devices which are required for configuration parameters during data dissemination. The proposed system is implemented in NS3 simulation with realistic simulation parameters namely energy efficiency, network lifetime, throughput, computational cost, communication cost, average signing time, average verification time, packet delivery ratio and network delay. The simulation results justifies that the proposed protocol improves average energy consumption by 34%, computational cost by 41.85%, communication cost by 36.83%, network delay by 31.66%, signing time by 26.25% and verification time by 33.46%. Moreover, the proposed system improves packet delivery ratio by 30% and provides efficient authentication to mitigate various types of attacks during data dissemination when it is compared with other existing protocols in IoT environment.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"213 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-04, DOI: 10.1007/s12083-024-01691-1
Jinsung Kim, Eunsam Kim
Peer-to-peer cloud storage has emerged as an alternative that addresses the high installation and maintenance costs of conventional cloud storage based on client/server architectures. Since P2P cloud storage must guarantee the same level of data availability as conventional cloud storage, it employs replication and erasure coding to redundantly store data among peers in P2P environments, where the peer churn rate is high. However, most studies using these two techniques have focused only on increasing data availability. For video files stored in P2P cloud storage especially, in addition to guaranteeing their availability, it is critical but challenging to ensure that they can be played back in real time by video player applications as if they were being read from local storage. To address this challenge, we propose a novel hybrid redundancy scheme that supports efficient video file streaming while ensuring the availability of video files in P2P cloud storage. The main contributions of our work are threefold. First, we achieve higher storage efficiency and better streaming performance by employing erasure coding and replication simultaneously. Second, we maximize the number of concurrent playback requests supported while minimizing the decrease in file availability by dynamically adjusting the redundancy degree of each video file according to its popularity. Third, we further improve performance by using storage space efficiently with our proposed two-phase replacement policy. Finally, we demonstrate through extensive experiments that our scheme outperforms other techniques by utilizing the benefits of both replication and erasure coding.
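The popularity-driven redundancy adjustment can be sketched as a simple policy: hot videos are fully replicated (cheap sequential reads for streaming), while cold ones fall back to erasure coding (better storage efficiency). The popularity threshold and the (k, m) coding parameters below are illustrative assumptions, not the paper's tuned values:

```python
# Sketch of a popularity-driven redundancy policy. Thresholds and (k, m)
# are illustrative assumptions.
def redundancy_plan(requests_per_hour: float) -> dict:
    if requests_per_hour >= 50:
        # Hot file: whole-file replicas favor real-time sequential playback.
        return {"scheme": "replication", "copies": 3, "overhead": 3.0}
    # Cold file: k data fragments + m parity fragments; any k suffice
    # to rebuild, at (k + m) / k storage overhead.
    k, m = 4, 2
    return {"scheme": "erasure", "k": k, "m": m, "overhead": (k + m) / k}

hot, cold = redundancy_plan(120.0), redundancy_plan(2.0)
```

The trade-off is visible in the overhead figures: replication triples storage but serves streams from any single replica, whereas erasure coding stores only 1.5x the data but must gather k fragments before playback can proceed.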
{"title":"Supporting efficient video file streaming in P2P cloud storage","authors":"Jinsung Kim, Eunsam Kim","doi":"10.1007/s12083-024-01691-1","DOIUrl":"https://doi.org/10.1007/s12083-024-01691-1","url":null,"abstract":"<p>Peer-to-Peer cloud storage has emerged as an alternative to address the high installation and maintenance costs in conventional cloud storage based on client/server architectures. Since P2P cloud storage must guarantee the same level of data availability as conventional cloud storage, it has employed replication and erasure coding to redundantly store data among peers in P2P environments where the peer churn rate is high. However, most studies using two techniques have focused only on increasing data availability. Especially for video files stored in P2P cloud storage, in addition to guaranteeing their availability, it is critical but challenging to ensure that they are played back in real time by video player applications as if they were being read from local storage. To address this challenge in this paper, we propose a novel hybrid redundancy scheme to support efficient video file streaming while ensuring the availability of video files in P2P cloud storage. The main contributions of our work are threefold. First, we can achieve higher storage efficiency and better streaming performance by employing both erasure coding and replication simultaneously. Second, we can maximize the number of concurrent playback requests supported while minimizing the decrease in file availability by dynamically adjusting the redundancy degree of each video file according to its popularity. Third, we can further improve the performance by efficiently using storage space with our proposed two-phase replacement policy. 
Finally, we demonstrate through extensive experiments that our scheme outperforms other techniques by utilizing the benefits of both replication and erasure coding.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"1 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-04, DOI: 10.1007/s12083-024-01667-1
Abstract
With the evolution of the Internet of Things (IoT), many users take part in different applications via sensors. The foremost challenge here lies in selecting the most trustworthy users or sensors in the IoT edge computing system, since both the end-users and the edge servers may be malicious or compromised. Several works have contributed to identifying and isolating malicious end-users or edge servers. Our work concentrates on the security aspects of IoT edge servers. The Frank-Wolfe Optimal Service Requests (FWOSR) algorithm is utilized to evaluate the boundaries of the logistic regression model, in which a convex problem under a linear approximation is solved for weight sparsity (i.e., several user requests competing for the closest edge server) to avoid over-fitting in the supervised machine learning process. We design a Frank-Wolfe Supervised Machine Learning (FWSL) technique to choose an optimal edge server and further minimize the computational and communication costs between the user requests and the edge server. Next, a Dirichlet Gaussian Blocked Gibbs Vicinity-based Authentication model for location-based services in cloud networks is proposed. Here, vicinity-based authentication is implemented based on Received Signal Strength Indicators (RSSI), MAC addresses, and packet arrival times. Authentication accuracy is improved by introducing a Gaussian function into the vicinity test, and flexible vicinity-range control is provided by taking multiple locations into account. Simulations and experiments are conducted to validate the computational cost, communication cost, time complexity, and detection error rate.
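The "convex problem under a linear approximation solved for weight sparsity" step is the classic Frank-Wolfe iteration: at each step, the linear minimization oracle over an l1 ball returns a single signed coordinate vertex, which keeps the iterate sparse. The sketch below runs Frank-Wolfe on a tiny synthetic logistic-regression problem; the data and the radius `tau` are assumptions, and this is not the paper's FWOSR algorithm itself:

```python
import math

# Synthetic 2-class data; feature 0 is the informative coordinate.
X = [[1.0, 0.2, -0.5], [0.8, -1.0, 0.3], [-1.2, 0.4, 0.9], [-0.7, -0.6, 1.1]]
y = [1.0, 1.0, -1.0, -1.0]
tau, d = 2.0, 3          # l1-ball constraint: ||w||_1 <= tau
w = [0.0] * d

def grad(w):
    """Gradient of the mean logistic loss (1/n) * sum log(1 + exp(-y w.x))."""
    g = [0.0] * d
    for xi, yi in zip(X, y):
        margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
        coef = -yi / (1.0 + math.exp(margin))
        for j in range(d):
            g[j] += coef * xi[j] / len(X)
    return g

for t in range(200):
    g = grad(w)
    j = max(range(d), key=lambda k: abs(g[k]))   # linear minimization oracle:
    s = [0.0] * d                                # best vertex of the l1 ball
    s[j] = -tau if g[j] > 0 else tau             # is +/- tau * e_j
    gamma = 2.0 / (t + 2.0)                      # standard FW step size
    w = [(1 - gamma) * wj + gamma * sj for wj, sj in zip(w, s)]

train_acc = sum((sum(wj * xj for wj, xj in zip(w, xi)) > 0) == (yi > 0)
                for xi, yi in zip(X, y)) / len(X)
l1_norm = sum(abs(wj) for wj in w)
```

Because every iterate is a convex combination of l1-ball vertices, the constraint is satisfied by construction, and most of the weight concentrates on the informative coordinate, which is the sparsity effect the abstract appeals to.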
{"title":"Secured Frank Wolfe learning and Dirichlet Gaussian Vicinity based authentication for IoT edge computing","authors":"","doi":"10.1007/s12083-024-01667-1","DOIUrl":"https://doi.org/10.1007/s12083-024-01667-1","url":null,"abstract":"<h3>Abstract</h3> <p>With the evolution of the Internet of Things (IoT) several users take part in different applications via sensors. The foremost confront here remains in selecting the most confidential users or sensors in the edge computing system of the IoT. Here, both the end-users and the edge servers are likely to be malicious or compromised sensors. Several works have been contributed to identifying and isolating the malicious end-users or edge servers. Our work concentrates on the security aspects of edge servers of IoT. The Frank-Wolfe Optimal Service Requests (FWOSR) algorithm is utilized to evaluate the boundaries or limits of the logistic regression model, in which the convex problem under a linear approximation is solved for weight sparsity (i.e. several user requests competing for closest edge server) to avoid over-fitting in the supervised machine learning process. We design a Frank Wolfe Supervised Machine Learning (FWSL) technique to choose an optimal edge server and further minimize the computational and communication costs between the user requests and the edge server. Next, Dirichlet Gaussian Blocked Gibbs Vicinity-based Authentication model for location-based services in Cloud networks is proposed. Here, the vicinity-based authentication is implemented based on Received Signal Strength Indicators (RSSI), MAC address and packet arrival time. With this, the authentication accuracy is improved by introducing the Gaussian function in the vicinity test and provides flexible vicinity range control by taking into account multiple locations. 
Simulation and experiment are also conducted to validate the computational cost, communication cost, time complexity and detection error rate.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"55 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-02, DOI: 10.1007/s12083-024-01680-4
Malle Gopal, T. Velmurugan
The device-to-device (D2D) communication concept allows direct communication between nearby devices without a base station, while cellular resources are reused, significantly reducing the end-to-end delay of active D2D users. Most traditional methods allocate resources over the downlink or uplink alone. The present study considers a novel hybrid approach that allocates resources jointly over the downlink and uplink, maximizing network throughput. Further, it minimally restricts interference between cellular and D2D pairs and ensures smooth D2D communication. The challenge is that power control and quality-of-service constraints are seriously degraded by strong intra-cell and inter-cell interference due to spectrum reuse and deployment, so a hybrid structure that exploits efficient resource allocation is needed. The optimization problem is formulated as a mixed-integer non-linear problem, which is usually NP-hard, and is divided into two stages: channel assignment and power allocation. The factors considered in the resource allocation objective are the transmission powers of the cellular user, the active D2D user, and the base station, the connection distance, and quality-of-service constraints. The proposed hybrid scheme improves network throughput and spectrum efficiency. Numerical results show that the proposed hybrid method functions efficiently, as verified by comparison with existing joint resource allocation methods.
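The two-stage decomposition of the MINLP can be sketched as: stage 1 greedily assigns each D2D pair the reusable channel with the best gain-to-interference ratio, and stage 2 sets the minimum transmit power that meets a target SINR on the assigned channel. All gains, noise, and the SINR target below are synthetic assumptions, not values from the paper:

```python
# Toy two-stage decomposition of the joint allocation MINLP.
NOISE, SINR_TARGET = 1e-9, 2.0   # linear scale, not dB (assumed values)

# gain[d][c]: direct link gain of D2D pair d when reusing cellular channel c.
gain = {"d1": {"c1": 0.8, "c2": 0.5}, "d2": {"c1": 0.3, "c2": 0.7}}
# interf[d][c]: interference power pair d receives from the cellular user on c.
interf = {"d1": {"c1": 4e-10, "c2": 9e-10}, "d2": {"c1": 8e-10, "c2": 2e-10}}

# Stage 1: channel assignment, at most one D2D pair per cellular channel.
assignment, taken = {}, set()
for d in gain:
    best = max((c for c in gain[d] if c not in taken),
               key=lambda c: gain[d][c] / (NOISE + interf[d][c]))
    assignment[d] = best
    taken.add(best)

# Stage 2: lowest transmit power achieving the SINR target on that channel,
# from SINR = p * gain / (noise + interference) solved for p.
power = {d: SINR_TARGET * (NOISE + interf[d][c]) / gain[d][c]
         for d, c in assignment.items()}
```

Solving the integer part (channel assignment) before the continuous part (power) is what turns the NP-hard joint problem into two tractable subproblems, at the cost of global optimality.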
{"title":"Efficient hybrid resource allocation for uplink and downlink device-to-device underlay communication in 5G and beyond wireless networks","authors":"Malle Gopal, T. Velmurugan","doi":"10.1007/s12083-024-01680-4","DOIUrl":"https://doi.org/10.1007/s12083-024-01680-4","url":null,"abstract":"<p>Device-to-device (D2D) communication allows nearby devices to communicate directly without a base station while reusing cellular resources, which significantly reduces the end-to-end delay of active D2D users. Most traditional methods allocate resources over the downlink or the uplink alone. The present study proposes a novel hybrid approach that allocates resources jointly over the downlink and uplink, maximizing network throughput. It also keeps the interference between cellular users and D2D pairs to a minimum and ensures smooth D2D communication. The challenge is that power control and Quality of Service (QoS) constraints are seriously degraded by strong intra-cell and inter-cell interference caused by spectrum reuse and dense deployment; a hybrid structure that exploits efficient resource allocation is needed to tackle this situation. The optimization problem is formulated as a mixed-integer non-linear program, which is usually NP-hard, and is divided into two stages: channel assignment and power allocation. The factors considered in the resource allocation objective are the transmission powers of the cellular user, the active D2D user, and the base station, the connection distance, and the QoS constraints. The proposed hybrid scheme improves both network throughput and spectrum efficiency. Numerical results show that the proposed hybrid method performs efficiently, as verified by comparison with existing joint resource allocation methods.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"41 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140595780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
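The two-stage decomposition described in the abstract (channel assignment, then power allocation) can be illustrated with a toy first stage. The sketch below is a hypothetical greedy channel assignment under a simplified Shannon-rate interference model; it is not the paper's algorithm, and all function names, gains, and parameters are illustrative assumptions.

```python
import math

def rate(signal, interference, noise=1e-9):
    # Shannon rate (bits/s/Hz) under interference-plus-noise.
    return math.log2(1 + signal / (interference + noise))

def greedy_channel_assignment(g_c, g_d, g_cd, g_dc, p_c, p_d, sinr_min_c):
    """Assign each D2D pair d one cellular channel c, greedily maximizing the
    D2D rate while keeping every reused cellular user's SINR above sinr_min_c.
    g_c[c]: cellular link gain, g_d[d]: D2D link gain,
    g_cd[c][d]: cellular-to-D2D interference gain, g_dc[d][c]: D2D-to-cellular."""
    assignment, used = {}, set()
    for d in range(len(g_d)):
        best, best_gain = None, 0.0
        for c in range(len(g_c)):
            if c in used:
                continue  # at most one D2D pair per channel in this sketch
            sinr_c = p_c * g_c[c] / (p_d * g_dc[d][c] + 1e-9)
            if sinr_c < sinr_min_c:
                continue  # reuse would violate the cellular QoS constraint
            gain = rate(p_d * g_d[d], p_c * g_cd[c][d])
            if gain > best_gain:
                best, best_gain = c, gain
        if best is not None:
            assignment[d] = best
            used.add(best)
    return assignment
```

A second stage would then tune `p_d` per admitted pair (e.g., by bisection on the power budget); the greedy stage here only shows how the QoS constraint prunes reuse candidates.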
Pub Date: 2024-03-28 DOI: 10.1007/s12083-024-01642-w
Abstract
Fog computing, a technology that offers adaptable and scalable computing resources, faces a significant difficulty in task scheduling that affects system performance and customer satisfaction. The task scheduling problem is challenging to solve because it is NP-complete. This work suggests a hybrid approach that combines the Grey Wolf Optimization algorithm (GWO) and the Heterogeneous Earliest Finish Time (HEFT) heuristic to address this problem. The hybrid IGWOA (Improved Grey Wolf Optimization Algorithm) method focuses on multi-objective resource scheduling in fog computing, seeking to minimize makespan and maximize throughput. The proposed algorithm improves the exploration and exploitation phases of the traditional grey wolf algorithm. Furthermore, the HEFT-based GWO algorithm converges faster on larger scheduling problems. The effectiveness of the suggested algorithm relative to existing techniques has been evaluated using the iFogSim toolkit, on both real data sets and pseudo workloads, and the statistical method Analysis of Variance (ANOVA) is used to confirm the results. Experimental results on 200–1000 tasks demonstrate its effectiveness in reducing makespan and improving throughput. In particular, the proposed approach outperforms the peer techniques AEOSSA, HHO, PSO, and FA in makespan and throughput: makespan improves by up to 9.34% over AEOSSA and up to 72.56% over the other optimization techniques on the pseudo workload, and by up to 6.89% over AEOSSA and up to 69.73% over the other optimization techniques on the NASA iPSC and HPC2N real data sets, while throughput improves by 62.4%, 52.8%, and 41.6% on the pseudo workload, NASA iPSC, and HPC2N data sets, respectively. These results show that the proposed approach solves the resource scheduling issue in fog computing settings.
{"title":"IGWOA: Improved Grey Wolf optimization algorithm for resource scheduling in cloud-fog environment for delay-sensitive applications","authors":"","doi":"10.1007/s12083-024-01642-w","DOIUrl":"https://doi.org/10.1007/s12083-024-01642-w","url":null,"abstract":"<h3>Abstract</h3> <p>Fog computing, a technology that offers adaptable and scalable computing resources, faces a significant difficulty in task scheduling that affects system performance and customer satisfaction. The task scheduling problem is challenging to solve because it is NP-complete. This work suggests a hybrid approach that combines the Grey Wolf Optimization algorithm (GWO) and the Heterogeneous Earliest Finish Time (HEFT) heuristic to address this problem. The hybrid IGWOA (Improved Grey Wolf Optimization Algorithm) method focuses on multi-objective resource scheduling in fog computing, seeking to minimize makespan and maximize throughput. The proposed algorithm improves the exploration and exploitation phases of the traditional grey wolf algorithm. Furthermore, the HEFT-based GWO algorithm converges faster on larger scheduling problems. The effectiveness of the suggested algorithm relative to existing techniques has been evaluated using the iFogSim toolkit, on both real data sets and pseudo workloads, and the statistical method Analysis of Variance (ANOVA) is used to confirm the results. Experimental results on 200–1000 tasks demonstrate its effectiveness in reducing makespan and improving throughput. In particular, the proposed approach outperforms the peer techniques AEOSSA, HHO, PSO, and FA in makespan and throughput: makespan improves by up to 9.34% over AEOSSA and up to 72.56% over the other optimization techniques on the pseudo workload, and by up to 6.89% over AEOSSA and up to 69.73% over the other optimization techniques on the NASA iPSC and HPC2N real data sets, while throughput improves by 62.4%, 52.8%, and 41.6% on the pseudo workload, NASA iPSC, and HPC2N data sets, respectively. These results show that the proposed approach solves the resource scheduling issue in fog computing settings.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"30 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140325996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
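To make the GWO mechanics behind this abstract concrete, here is a minimal, self-contained sketch of grey wolf optimization applied to a task-to-node assignment with a makespan objective. It is a generic GWO, not the paper's IGWOA (no HEFT seeding and no improved exploration phase); all names, weights, and parameters are illustrative assumptions.

```python
import random

def gwo_schedule(task_len, node_speed, wolves=20, iters=100, seed=1):
    """Minimal continuous GWO for task->node assignment. A wolf is a real
    vector; dimension i rounds to the node index chosen for task i."""
    rng = random.Random(seed)
    n_t, n_n = len(task_len), len(node_speed)

    def makespan(pos):
        # Makespan = heaviest node load under the rounded assignment.
        load = [0.0] * n_n
        for i, x in enumerate(pos):
            node = min(n_n - 1, max(0, int(round(x))))
            load[node] += task_len[i] / node_speed[node]
        return max(load)

    pack = [[rng.uniform(0, n_n - 1) for _ in range(n_t)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=makespan)
        alpha, beta, delta = pack[0], pack[1], pack[2]  # best three wolves
        a = 2 - 2 * t / iters  # control parameter decays 2 -> 0
        for w in range(3, wolves):  # keep the three leaders elitist
            new = []
            for i in range(n_t):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    # Standard GWO position update toward each leader.
                    x += leader[i] - A * abs(C * leader[i] - pack[w][i])
                new.append(min(n_n - 1, max(0.0, x / 3)))
            pack[w] = new
    best = min(pack, key=makespan)
    return makespan(best), [int(round(x)) for x in best]
```

For example, `gwo_schedule([4, 4, 2, 2], [1.0, 1.0])` searches assignments of four tasks to two equal-speed nodes, where the ideal makespan is 6 (total work 12 split evenly).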
In high-speed, dynamic Vehicular Ad-hoc Networks (VANETs), the cooperative transmission mechanism is a promising scheme for ensuring sustainable data transmission. However, due to the possible malicious behavior of vehicles and the dynamic network topology of VANETs, not all vehicles are trustworthy enough to serve as relays and perform the cooperative transmission task reliably. How to ensure the security and reliability of the selected vehicles therefore remains an urgent open problem. In this paper, we propose ARSL-V, a risk-aware relay selection scheme using reinforcement learning in VANETs. Specifically, we design a multi-parameter risk assessment mechanism that dynamically evaluates the potential risk of relay vehicles by considering their reputation variability, abnormal behavior, and environmental impact. We then model relay selection as an assignment problem solved by an improved Kuhn-Munkres algorithm based on the risk assessment, realizing relay selection in multi-relay, multi-target-vehicle scenarios. In addition, we use a reinforcement learning algorithm combined with feedback data to dynamically adjust the parameter weights. Simulation results show that, compared with existing schemes, ARSL-V improves the malicious-behavior detection rate and the cooperative transmission success rate by about 25% and 6%, respectively.
{"title":"ARSL-V: A risk-aware relay selection scheme using reinforcement learning in VANETs","authors":"Xuejiao Liu, Chuanhua Wang, Lingfeng Huang, Yingjie Xia","doi":"10.1007/s12083-023-01589-4","DOIUrl":"https://doi.org/10.1007/s12083-023-01589-4","url":null,"abstract":"<p>In high-speed, dynamic Vehicular Ad-hoc Networks (VANETs), the cooperative transmission mechanism is a promising scheme for ensuring sustainable data transmission. However, due to the possible malicious behavior of vehicles and the dynamic network topology of VANETs, not all vehicles are trustworthy enough to serve as relays and perform the cooperative transmission task reliably. How to ensure the security and reliability of the selected vehicles therefore remains an urgent open problem. In this paper, we propose ARSL-V, a risk-aware relay selection scheme using reinforcement learning in VANETs. Specifically, we design a multi-parameter risk assessment mechanism that dynamically evaluates the potential risk of relay vehicles by considering their reputation variability, abnormal behavior, and environmental impact. We then model relay selection as an assignment problem solved by an improved Kuhn-Munkres algorithm based on the risk assessment, realizing relay selection in multi-relay, multi-target-vehicle scenarios. In addition, we use a reinforcement learning algorithm combined with feedback data to dynamically adjust the parameter weights. Simulation results show that, compared with existing schemes, ARSL-V improves the malicious-behavior detection rate and the cooperative transmission success rate by about 25% and 6%, respectively.</p>","PeriodicalId":49313,"journal":{"name":"Peer-To-Peer Networking and Applications","volume":"33 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140311138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
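The two building blocks of this kind of scheme, a weighted multi-parameter risk score and a minimum-risk one-to-one relay-to-target matching, can be sketched as follows. The weights and the brute-force matcher (standing in for the improved Kuhn-Munkres algorithm, and only practical for small instances) are illustrative assumptions, not the paper's implementation.

```python
from itertools import permutations

def risk(reputation_var, abnormal, environment, w=(0.4, 0.4, 0.2)):
    """Composite relay risk in [0, 1]: a weighted sum of reputation
    variability, abnormal-behavior frequency, and an environmental factor.
    The weights here are fixed for illustration; the paper instead adjusts
    them dynamically from reinforcement-learning feedback."""
    return w[0] * reputation_var + w[1] * abnormal + w[2] * environment

def select_relays(risk_matrix):
    """risk_matrix[r][t]: risk of relay r serving target vehicle t.
    Returns (total_risk, {target: relay}) for the min-total-risk one-to-one
    matching, found by exhaustive search over relay permutations."""
    n_r, n_t = len(risk_matrix), len(risk_matrix[0])
    best_cost, best_match = float("inf"), None
    for perm in permutations(range(n_r), n_t):
        cost = sum(risk_matrix[perm[t]][t] for t in range(n_t))
        if cost < best_cost:
            best_cost, best_match = cost, {t: perm[t] for t in range(n_t)}
    return best_cost, best_match
```

With `risk_matrix = [[0.2, 0.9], [0.8, 0.1]]`, the matcher pairs relay 0 with target 0 and relay 1 with target 1, for a total risk of 0.3; a production system would replace the exhaustive search with the O(n^3) Kuhn-Munkres algorithm.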