Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.10.001
Hongjiang Lei , Chen Zhu , Ki-Hong Park , Imran Shafique Ansari , Weijia Lei , Hong Tang , Kyeong Jin Kim
In this paper, we analyze the outage performance of Unmanned Aerial Vehicle (UAV)-enabled downlink Non-Orthogonal Multiple Access (NOMA) communication systems with the Semi-Grant-Free (SGF) transmission scheme. A UAV provides coverage for a Grant-Based (GB) user, while one Grant-Free (GF) user is allowed to utilize the same channel resource opportunistically. Analytical expressions for the exact and asymptotic Outage Probability (OP) of the GF user are derived. The results demonstrate that a non-zero diversity order can be achieved only under stringent conditions on the users' quality-of-service requirements. Subsequently, an efficient Dynamic Power Allocation (DPA) scheme is proposed to relax these data-rate constraints, and analytical expressions for the exact and asymptotic OP of the GF user under the DPA scheme are derived. Finally, Monte Carlo simulation results validate the correctness of the derived analytical expressions and demonstrate the effects of the UAV's location and altitude on the OP of the GF user.
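The Monte Carlo validation step can be sketched generically: estimate the OP as the fraction of fading realizations whose achievable rate falls below the target. The power split `alpha`, the SNR values, and the Rayleigh channel below are illustrative assumptions, not the paper's exact SGF/UAV system model.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_probability(snr_db, rate_threshold, alpha=0.2, trials=100_000):
    """Estimate P(achievable GF-user rate < rate_threshold) by Monte Carlo.

    alpha is the (assumed) power fraction allocated to the GF user; the
    GB user's signal is treated as interference before SIC.
    """
    snr = 10 ** (snr_db / 10)
    h = rng.exponential(scale=1.0, size=trials)   # |h|^2 under Rayleigh fading
    sinr = (alpha * snr * h) / ((1 - alpha) * snr * h + 1)
    rate = np.log2(1 + sinr)
    return float(np.mean(rate < rate_threshold))  # empirical outage probability

op = outage_probability(snr_db=20, rate_threshold=0.1)
```

Sweeping `snr_db` reproduces the familiar OP-versus-SNR curves that the analytical expressions are checked against.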
"Outage analysis of aerial semi-grant-free NOMA systems," Digital Communications and Networks, vol. 10, no. 5, pp. 1529–1541.
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2022.06.013
Zhichao Liu , Jinhua Yang , Juan Wang , Lin Yue
Intelligent assembly of large-scale, complex structures using an intelligent manufacturing platform represents the future direction of industrial manufacturing. During large-scale structural assembly, several bottlenecks arise in existing auxiliary assembly technology. First, traditional LiDAR-based assembly is often limited by the openness of the manufacturing environment, which contains blind spots, so continuous online assembly adjustment cannot be realized. Second, for large structures a single-station LiDAR system cannot achieve complete coverage, so a multi-station combination must be used to acquire the complete three-dimensional data; far more data error is introduced by the transfer between stations than by the measurement accuracy of a single station, which greatly increases the overall system's measurement and adjustment errors. Third, because a large assembly contains many structural components, accumulated errors may lead to assembly interference; since the LiDAR-assisted assembly process has no feedback perception capability, component loss can easily be caused when assembly interference occurs. Therefore, this paper proposes to combine an optical fiber sensor network with digital twin technology, allowing test data on the assembly's real-world state to drive the “twin” model in the virtual world, thereby solving the problems of test openness and data transfer; addressing the station and perception-feedback problem is the main innovation of this work. The system uses the optical fiber sensor network as a flexible sensing medium to monitor the strain-field distribution within complex areas in real time and then adjusts the parameters of the virtual assembly in real time based on the distributed data. Complex areas include areas that are laser-unreachable, areas with complex contact surfaces, and areas with large-scale bending deformations. An assembly condition monitoring system is designed around the optical fiber sensor network, and an assembly condition monitoring algorithm based on multiple physical quantities is proposed. The feasibility of using the optical fiber sensor network as the real-state parameter acquisition module of the digital twin intelligent assembly system is discussed. The offset of any position in the test area is calculated using a convolutional neural network with residual modules to provide the compensation parameters required by the virtual model of the assembly structure. In the model-optimization parameter module, a correction data table is obtained through iterative learning to realize state prediction from the test data. The experiment simulates a large-scale structure assembly process and performs virtual-and-real mapping for a variety of situations w
"Design of modified model of intelligent assembly digital twins based on optical fiber sensor network," Digital Communications and Networks, vol. 10, no. 5, pp. 1542–1552.
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2022.11.011
Zufan Zhang , Yang Li , Xiaoqin Yan , Zonghua Ouyang
Signal detection plays an essential role in massive Multiple-Input Multiple-Output (MIMO) systems. However, existing detection methods have not achieved a good tradeoff between Bit Error Rate (BER) and computational complexity, resulting in slow convergence or high complexity. To address this issue, a low-complexity Approximate Message Passing (AMP) detection algorithm with a Deep Neural Network (DNN), denoted AMP-DNN, is investigated in this paper. First, an efficient AMP detection algorithm is derived by scalarizing a simplification of the Belief Propagation (BP) algorithm. Second, by unfolding the obtained AMP detection algorithm, a DNN is specifically designed for optimal performance gain. In the proposed AMP-DNN, the number of trainable parameters depends only on the number of layers, regardless of the modulation scheme, antenna number, or matrix calculation, which facilitates fast and stable training of the network. In addition, the AMP-DNN can detect different channels under the same distribution with a single training. The superior performance of the AMP-DNN is verified by theoretical analysis and experiments: the proposed algorithm reduces the BER without prior signal information, especially over spatially correlated channels, and has lower computational complexity than existing state-of-the-art methods.
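As a rough illustration of the AMP iteration that such a network unfolds (one scalar denoising step plus an Onsager-corrected residual per layer), here is a minimal sketch for a generic sparse linear model; the soft-threshold denoiser, fixed threshold, and dimensions are assumptions, not the paper's MIMO detector.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft(u, t):
    """Soft-threshold denoiser (the scalar step an unfolded layer learns)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, iters=30, theta=0.3):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                        # pseudo-data for the denoiser
        x_new = soft(r, theta)
        onsager = z * np.count_nonzero(x_new) / m   # Onsager correction term
        z = y - A @ x_new + onsager            # corrected residual
        x = x_new
    return x

# Toy noiseless sparse-recovery instance
m, n, k = 100, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true
x_hat = amp(y, A)
```

Unfolding replaces the fixed `theta` (and implicitly the damping of each step) with a small set of per-layer trainable parameters, which is why the parameter count depends only on the number of layers.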
"A low-complexity AMP detection algorithm with deep neural network for massive MIMO systems," Digital Communications and Networks, vol. 10, no. 5, pp. 1375–1386.
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.08.002
Ruiyu Wang , Yao Sun , Chao Zhang , Bowen Yang , Muhammad Imran , Lei Zhang
Millimeter-Wave (mmWave) communication, with its abundant bandwidth and immunity to interference, has been deemed a promising technology for greatly improving network capacity. However, due to characteristics of mmWave such as short transmission distance, high sensitivity to blockage, and large propagation path loss, handover issues (including the trigger condition and target beam selection) become much more complicated. In this paper, we design a novel handover scheme that optimizes the overall system throughput and the total system delay while guaranteeing the Quality of Service (QoS) of each User Equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a Reinforcement Learning (RL) algorithm with optimization theory. The RL algorithm, Multi-Agent Proximal Policy Optimization (MAPPO), determines the handover trigger conditions, and an optimization problem formulated in conjunction with MAPPO selects the target base station. The aim is to evaluate and optimize the total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made. Numerical results show that the overall system throughput and delay achieved by our method are slightly worse than those of an exhaustive search but much better than those of another typical RL algorithm, Deep Deterministic Policy Gradient (DDPG).
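The target-selection step can be caricatured as a constrained scoring problem: among candidate base stations that satisfy the UE's QoS, pick the one maximizing a weighted throughput-minus-delay objective. The candidate list, weights, and QoS threshold below are hypothetical; the paper couples this selection with MAPPO-learned trigger decisions.

```python
# Toy sketch of the target-selection step after a handover is triggered.
def select_target(candidates, min_rate, w_tp=1.0, w_delay=0.5):
    """candidates: list of (name, throughput_mbps, delay_ms) tuples.

    Returns the QoS-feasible candidate with the best weighted score,
    or None when no candidate meets the minimum-rate constraint.
    """
    feasible = [c for c in candidates if c[1] >= min_rate]
    if not feasible:
        return None  # no target meets QoS; stay on the serving cell
    return max(feasible, key=lambda c: w_tp * c[1] - w_delay * c[2])

cands = [("bs1", 120.0, 8.0), ("bs2", 150.0, 30.0), ("bs3", 90.0, 2.0)]
best = select_target(cands, min_rate=100.0)   # picks "bs2" under these weights
```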
"A novel handover scheme for millimeter wave network: An approach of integrating reinforcement learning and optimization," Digital Communications and Networks, vol. 10, no. 5, pp. 1493–1502.
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2022.10.020
Yuhan Liu , He Yan , Qilie Liu , Wei Zhang , Junbin Huang
Efficient Convolution Operator (ECO) algorithms have achieved impressive performance in visual tracking. However, ECO's feature extraction network is poorly suited to capturing the correlated features of occluded and blurred targets across long-range, complex scene frames, and its fixed-weight fusion strategy does not exploit the complementary properties of deep and shallow features. In this paper, we propose a new target tracking method, ECO++, that applies adaptive deep-feature fusion in complex scenes in two respects. First, we construct a new temporal convolution mode and use it to replace the underlying convolution layer in the Conformer network, obtaining an improved Conformer network. Second, we adaptively fuse the deep features output by the improved Conformer network by combining the Peak-to-Sidelobe Ratio (PSR), frame-smoothness scores, and adaptively adjusted weights. Extensive experiments on the OTB-2013, OTB-2015, UAV123, and VOT2019 benchmarks demonstrate that the proposed approach outperforms state-of-the-art algorithms in tracking accuracy and robustness in complex scenes with occluded, blurred, and fast-moving targets.
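PSR, one of the fusion cues above, measures how sharply a correlation response map peaks relative to its sidelobe: a confident, unoccluded target yields a high PSR, a flat or multi-modal response a low one. A minimal sketch, with an assumed exclusion window around the peak:

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-Sidelobe Ratio of a 2-D correlation response map.

    The sidelobe is the map with a (2*exclude+1)^2 window around the
    peak removed; the window size is an assumption.
    """
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0 = max(peak_idx[0] - exclude, 0)
    c0 = max(peak_idx[1] - exclude, 0)
    mask[r0:peak_idx[0] + exclude + 1, c0:peak_idx[1] + exclude + 1] = False
    side = response[mask]
    return float((peak - side.mean()) / (side.std() + 1e-12))

rng = np.random.default_rng(2)
resp = rng.normal(0, 0.1, size=(50, 50))
resp[25, 25] = 1.0                       # sharp, confident peak
sharp = psr(resp)
flat = psr(rng.normal(0, 0.1, size=(50, 50)))   # no real peak
```

In an adaptive-fusion scheme, a feature channel whose response gives a higher PSR would receive a larger fusion weight for that frame.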
"ECO++: Adaptive deep feature fusion target tracking method in complex scene," Digital Communications and Networks, vol. 10, no. 5, pp. 1352–1364.
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2022.10.018
Peng Shixin, Chen Kai, Tian Tian, Chen Jingying
Although speech emotion recognition is challenging, it has broad application prospects in human-computer interaction: a system that can accurately and stably recognize emotions from human language can provide a better user experience. However, current unimodal emotion feature representations are not distinctive enough for reliable recognition, and they do not effectively model the inter-modality dynamics of speech emotion recognition tasks. This paper proposes a multimodal method that utilizes both audio and semantic content for speech emotion recognition. The proposed method consists of three parts: two high-level feature extractors for the text and audio modalities, and an autoencoder-based feature fusion. For the audio modality, we propose a structure called the Temporal Global Feature Extractor (TGFE) to extract high-level features of the time-frequency-domain relationships from the original speech signal. Because text lacks frequency information, we use only a Bidirectional Long Short-Term Memory (BLSTM) network and an attention mechanism to model intra-modal dynamics. The high-level text and audio features are then fed to the autoencoder in parallel to learn their shared representation for final emotion classification. We conducted extensive experiments on three public benchmark datasets: the results on Interactive Emotional Dyadic Motion Capture (IEMOCAP) and the Multimodal EmotionLines Dataset (MELD) outperform existing methods, and the results on CMU Multimodal Opinion-level Sentiment Intensity (CMU-MOSI) are competitive. Furthermore, the experiments show that, compared to unimodal information, joint multimodal information (audio and text) improves overall performance, and autoencoder-based feature-level fusion achieves greater accuracy than simple feature concatenation.
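The fusion idea can be sketched as concatenating the two modality features and learning a shared bottleneck representation with an autoencoder. The linear autoencoder, feature dimensions, and tiny gradient loop below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed feature dimensions: 64-d audio, 32-d text, 16-d shared bottleneck
d_audio, d_text, d_shared, n = 64, 32, 16, 200
X = np.hstack([rng.normal(size=(n, d_audio)),    # high-level audio features
               rng.normal(size=(n, d_text))])    # high-level text features
W_enc = rng.normal(0, 0.1, size=(d_audio + d_text, d_shared))
W_dec = rng.normal(0, 0.1, size=(d_shared, d_audio + d_text))

def mse(A, B):
    return float(np.mean((A - B) ** 2))

lr = 0.5
loss0 = mse(X @ W_enc @ W_dec, X)
for _ in range(300):                 # plain gradient descent on reconstruction MSE
    Z = X @ W_enc                    # shared representation (bottleneck)
    X_hat = Z @ W_dec
    G = 2 * (X_hat - X) / X.size     # dLoss/dX_hat
    g_dec = Z.T @ G
    g_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
loss1 = mse(X @ W_enc @ W_dec, X)    # reconstruction improves as Z becomes shared
```

After training, `Z` (one 16-d vector per utterance) would be passed to the emotion classifier instead of the raw 96-d concatenation.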
"An autoencoder-based feature level fusion for speech emotion recognition," Digital Communications and Networks, vol. 10, no. 5, pp. 1341–1351.
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.01.021
Fei Tang , Chunliang Ma , Kefei Cheng
Zero trust architecture is an end-to-end approach to securing server resources and data that encompasses identity authentication, access control, dynamic evaluation, and more. This work focuses on authentication technology in zero trust networks. In this paper, a Traceable Universal Designated Verifier Signature (TUDVS) is used to construct a privacy-preserving authentication scheme for zero trust architecture. Specifically, when a client requests access to server resources, we aim to protect the client's access privacy, meaning that the server administrator cannot disclose the client's access behavior to any third party. The security of the proposed scheme is proved and its efficiency is analyzed. Finally, TUDVS is applied to the single-packet-authorization scenario of zero trust architecture to demonstrate the practicability of the proposed scheme.
"Privacy-preserving authentication scheme based on zero trust architecture," Digital Communications and Networks, vol. 10, no. 5, pp. 1211–1220.
Pub Date: 2024-10-01 | DOI: 10.1016/j.dcan.2023.11.009
Sampa Sahoo , Kshira Sagar Sahoo , Bibhudatta Sahoo , Amir H. Gandomi
The development of Internet of Things (IoT) technology is leading to a new era of smart applications, such as smart transportation, smart buildings, and smart homes, which act as the building blocks of IoT-enabled smart cities. The high-volume, high-velocity data generated by various smart city applications are sent to flexible and efficient cloud computing resources for processing; however, the remote cloud server incurs high computation latency. Edge computing, which brings the computation close to the data source, is introduced to overcome this problem. In an IoT-enabled smart city environment, a main concern is to consume the least energy while executing tasks that satisfy their delay constraints, and efficient resource allocation at the edge helps address this issue. In this paper, the energy and delay minimization problem in a smart city environment is formulated as a bi-objective edge resource allocation problem. First, we present a three-layer network architecture for IoT-enabled smart cities. Then, we design a learning automata-based edge resource allocation approach over this architecture to solve the bi-objective minimization problem. A Learning Automaton (LA) is a reinforcement-based adaptive decision-maker that helps find the best task-to-edge-resource mapping. An extensive set of simulations demonstrates the applicability and effectiveness of the LA-based approach in the IoT-enabled smart city environment.
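A classic learning automaton update matching this description is linear reward-inaction (L_R-I): the selection probability of the chosen action is reinforced whenever the environment signals a reward, and left unchanged otherwise. The toy "environment" below (a task meets its delay deadline with some probability per edge server) is a stand-in for the paper's smart-city simulator; the delays, deadline, and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def run_la(delays, deadline, steps=2000, a=0.05):
    """L_R-I automaton over edge servers; returns final action probabilities."""
    n = len(delays)
    p = np.full(n, 1.0 / n)                  # start with a uniform strategy
    for _ in range(steps):
        i = rng.choice(n, p=p)               # pick a server per current strategy
        # Reward iff the noisy task delay meets the deadline
        reward = delays[i] + rng.normal(0, 1.0) <= deadline
        if reward:                           # reward-inaction: update only on reward
            p = p - a * p                    # p_j <- (1 - a) p_j for all j
            p[i] += a                        # p_i <- p_i + a (1 - p_i)
    return p

# Server 0 (mean delay 5 ms) almost always meets the 8 ms deadline,
# so its probability should dominate after training.
p = run_la(delays=np.array([5.0, 12.0, 9.0]), deadline=8.0)
```

The bi-objective (energy plus delay) version would simply fold both costs into the reward signal.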
{"title":"A learning automata based edge resource allocation approach for IoT-enabled smart cities","authors":"Sampa Sahoo , Kshira Sagar Sahoo , Bibhudatta Sahoo , Amir H. Gandomi","doi":"10.1016/j.dcan.2023.11.009","DOIUrl":"10.1016/j.dcan.2023.11.009","url":null,"abstract":"<div><div>The development of the Internet of Things (IoT) technology is leading to a new era of smart applications such as smart transportation, smart buildings, and smart homes. Moreover, these applications act as the building blocks of IoT-enabled smart cities. The high-volume, high-velocity data generated by various smart city applications are sent to flexible and efficient cloud computing resources for processing. However, there is high computation latency due to the presence of a remote cloud server. Edge computing, which brings the computation close to the data source, is introduced to overcome this problem. In an IoT-enabled smart city environment, one of the main concerns is to consume the least amount of energy while executing tasks that satisfy the delay constraint. An efficient resource allocation at the edge helps to address this issue. In this paper, an energy and delay minimization problem in a smart city environment is formulated as a bi-objective edge resource allocation problem. First, we present a three-layer network architecture for IoT-enabled smart cities. Then, we design a learning automata-based edge resource allocation approach built on this three-layer network architecture to solve the bi-objective minimization problem. Learning Automata (LA) is a reinforcement-based adaptive decision-maker that helps to find the best task-to-edge-resource mapping. An extensive set of simulations is performed to demonstrate the applicability and effectiveness of the LA-based approach in the IoT-enabled smart city environment.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 5","pages":"Pages 1258-1266"},"PeriodicalIF":7.5,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139021138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
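The abstract above describes the learning-automata allocator only at a high level; the paper's actual scheme is not reproduced here. As a minimal sketch of the underlying idea, the code below implements a Linear Reward-Inaction (L_RI) automaton choosing among three hypothetical edge nodes, where per-node success probabilities stand in for whether a task meets its energy/delay constraint. All class names, node counts, and probabilities are illustrative assumptions, not the paper's model.

```python
import random

class LearningAutomaton:
    """Linear Reward-Inaction (L_RI) automaton over a fixed action set."""
    def __init__(self, n_actions, reward_step=0.1):
        self.n = n_actions
        self.a = reward_step
        self.p = [1.0 / n_actions] * n_actions  # action probability vector

    def choose(self):
        # Sample an action index according to the current probabilities.
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return self.n - 1

    def update(self, action, rewarded):
        # L_RI rule: shift probability mass toward a rewarded action;
        # penalties leave the probabilities unchanged.
        if rewarded:
            for i in range(self.n):
                if i == action:
                    self.p[i] += self.a * (1.0 - self.p[i])
                else:
                    self.p[i] *= (1.0 - self.a)

# Hypothetical environment: node 1 usually satisfies the energy/delay constraint.
random.seed(0)
success_prob = [0.2, 0.9, 0.4]  # assumed per-node success probabilities
la = LearningAutomaton(n_actions=3)
for _ in range(2000):
    node = la.choose()
    la.update(node, random.random() < success_prob[node])
best = max(range(3), key=lambda i: la.p[i])
```

After enough interactions the probability vector concentrates on the node that most often satisfies the constraint, which is the task-to-edge-resource mapping the automaton has learned.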
Pub Date : 2024-10-01 DOI: 10.1016/j.dcan.2023.06.008
Sai Zou , Junrui Wu , Haisheng Yu , Wenyong Wang , Lisheng Huang , Wei Ni , Yan Liu
The future Sixth-Generation (6G) wireless systems are expected to encounter emerging services with diverse requirements. In this paper, 6G network resource orchestration is optimized to support customized network slicing of services and to place network functions generated by heterogeneous devices into available resources. This is a combinatorial optimization problem that is solved by developing a Particle Swarm Optimization (PSO)-based scheduling strategy with enhanced inertia weight, particle variation, and a nonlinear learning factor, thereby balancing local and global search and improving the convergence speed toward globally near-optimal solutions. Simulations show that the method improves the convergence speed and the utilization of network resources compared with other variants of PSO.
{"title":"Efficiency-optimized 6G: A virtual network resource orchestration strategy by enhanced particle swarm optimization","authors":"Sai Zou , Junrui Wu , Haisheng Yu , Wenyong Wang , Lisheng Huang , Wei Ni , Yan Liu","doi":"10.1016/j.dcan.2023.06.008","DOIUrl":"10.1016/j.dcan.2023.06.008","url":null,"abstract":"<div><div>The future Sixth-Generation (6G) wireless systems are expected to encounter emerging services with diverse requirements. In this paper, 6G network resource orchestration is optimized to support customized network slicing of services and to place network functions generated by heterogeneous devices into available resources. This is a combinatorial optimization problem that is solved by developing a Particle Swarm Optimization (PSO)-based scheduling strategy with enhanced inertia weight, particle variation, and a nonlinear learning factor, thereby balancing local and global search and improving the convergence speed toward globally near-optimal solutions. Simulations show that the method improves the convergence speed and the utilization of network resources compared with other variants of PSO.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 5","pages":"Pages 1221-1233"},"PeriodicalIF":7.5,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46688400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
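The PSO enhancements named in the abstract (enhanced inertia weight, particle variation, nonlinear learning factors) can be illustrated with a generic sketch. The schedules below (a nonlinearly decaying inertia weight, time-varying acceleration coefficients, and a small random mutation of particle positions) are assumed for illustration and are not the paper's exact strategy; the sphere function stands in for the real resource-orchestration objective.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=1):
    random.seed(seed)
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                        # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                 # global best

    for t in range(iters):
        w = 0.9 - 0.5 * (t / iters) ** 2         # nonlinearly decaying inertia weight
        c1 = 2.5 - 2.0 * t / iters               # cognitive factor shrinks over time
        c2 = 0.5 + 2.0 * t / iters               # social factor grows over time
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if random.random() < 0.05:           # particle variation: random mutation
                d = random.randrange(dim)
                X[i][d] = random.uniform(lo, hi)
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

# Toy objective standing in for the network-function placement cost.
best_x, best_f = pso_minimize(lambda x: sum(v * v for v in x), dim=4)
```

Early iterations favor exploration (large inertia, strong pull toward personal bests); later iterations favor exploitation (small inertia, strong pull toward the global best), which is the local/global balance the abstract refers to.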
Pub Date : 2024-10-01 DOI: 10.1016/j.dcan.2023.05.009
Alicja Olejniczak, Olga Blaszkiewicz, Krzysztof K. Cwalina, Piotr Rajchowski, Jaroslaw Sadowski
Visibility conditions between antennas, i.e., Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS), can be crucial in the context of indoor localization: detecting the NLOS condition and then correcting constant position estimation errors or allocating resources accordingly can reduce the negative influence of multipath propagation on wireless communication and positioning. In this paper, a Deep Learning (DL) model is proposed to classify the LOS/NLOS condition by analyzing two Channel Impulse Response (CIR) parameters: Total Power (TP) [dBm] and First Path Power (FP) [dBm]. The experiments were conducted using a DecaWave DWM1000 radio module, based on measurements collected in a real indoor environment, and the proposed architecture provides LOS/NLOS identification with an accuracy of up to 100% and 95% in static and dynamic scenarios, respectively. The proposed model improves the classification rate by 2-5% compared to other Machine Learning (ML) methods proposed in the literature.
{"title":"LOS and NLOS identification in real indoor environment using deep learning approach","authors":"Alicja Olejniczak, Olga Blaszkiewicz, Krzysztof K. Cwalina, Piotr Rajchowski, Jaroslaw Sadowski","doi":"10.1016/j.dcan.2023.05.009","DOIUrl":"10.1016/j.dcan.2023.05.009","url":null,"abstract":"<div><div>Visibility conditions between antennas, i.e., Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS), can be crucial in the context of indoor localization: detecting the NLOS condition and then correcting constant position estimation errors or allocating resources accordingly can reduce the negative influence of multipath propagation on wireless communication and positioning. In this paper, a Deep Learning (DL) model is proposed to classify the LOS/NLOS condition by analyzing two Channel Impulse Response (CIR) parameters: Total Power (TP) [dBm] and First Path Power (FP) [dBm]. The experiments were conducted using a DecaWave DWM1000 radio module, based on measurements collected in a real indoor environment, and the proposed architecture provides LOS/NLOS identification with an accuracy of up to 100% and 95% in static and dynamic scenarios, respectively. The proposed model improves the classification rate by 2-5% compared to other Machine Learning (ML) methods proposed in the literature.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 5","pages":"Pages 1305-1312"},"PeriodicalIF":7.5,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49035926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
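The paper's DL architecture is not reproduced here. As a much simpler stand-in that shows why the TP and FP parameters are informative, the sketch below trains a one-feature logistic-regression classifier on the TP-FP gap: under NLOS, multipath components carry most of the received energy, so the gap between total power and first-path power tends to be larger. The Gaussian training data, means, and labels are invented for illustration and do not come from the DWM1000 measurements.

```python
import math
import random

def sigmoid(z):
    # Clip the logit to avoid overflow in math.exp for extreme values.
    z = max(-30.0, min(30.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train_gap_classifier(xs, ys, epochs=200, lr=0.05):
    """Logistic regression on a single feature: the (TP - FP) gap in dB."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient step on the cross-entropy loss
            b -= lr * (p - y)
    return w, b

random.seed(7)
# Hypothetical (TP - FP) gaps in dB: small under LOS, large under NLOS.
los_gaps = [random.gauss(2.0, 1.0) for _ in range(100)]    # label 0 = LOS
nlos_gaps = [random.gauss(10.0, 2.0) for _ in range(100)]  # label 1 = NLOS
xs = los_gaps + nlos_gaps
ys = [0] * 100 + [1] * 100

w, b = train_gap_classifier(xs, ys)
preds = [1 if sigmoid(w * x + b) > 0.5 else 0 for x in xs]
accuracy = sum(p == t for p, t in zip(preds, ys)) / len(ys)
```

A DL model as in the paper can learn a richer, nonlinear decision boundary over both raw parameters, but even this linear threshold on the power gap separates the two conditions well when the classes are as distinct as assumed here.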