Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2023.12.001
Manh Cuong Ho , Anh Tien Tran , Donghyun Lee , Jeongyeup Paek , Wonjong Noh , Sungrae Cho
Federated learning (FL) has emerged as a promising distributed machine learning technique. It has the potential to play a key role in future Internet of Things (IoT) networks by ensuring the security and privacy of user data combined with efficient utilization of communication resources. This paper addresses the challenge of maximizing energy efficiency in FL systems. We employed simultaneous wireless information and power transfer (SWIPT) and multi-carrier non-orthogonal multiple access (MC-NOMA) techniques. Also, we jointly optimized power allocation and central processing unit (CPU) resource allocation to minimize latency-constrained energy consumption. We formulated an optimization problem using a Markov decision process (MDP) and utilized a deep deterministic policy gradient (DDPG) reinforcement learning algorithm to solve our MDP problem. We tested the proposed algorithm through extensive simulations and confirmed it converges in a stable manner and provides enhanced energy efficiency compared to conventional schemes.
{"title":"A DDPG-based energy efficient federated learning algorithm with SWIPT and MC-NOMA","authors":"Manh Cuong Ho , Anh Tien Tran , Donghyun Lee , Jeongyeup Paek , Wonjong Noh , Sungrae Cho","doi":"10.1016/j.icte.2023.12.001","DOIUrl":"10.1016/j.icte.2023.12.001","url":null,"abstract":"<div><p>Federated learning (FL) has emerged as a promising distributed machine learning technique. It has the potential to play a key role in future Internet of Things (IoT) networks by ensuring the security and privacy of user data combined with efficient utilization of communication resources. This paper addresses the challenge of maximizing energy efficiency in FL systems. We employed simultaneous wireless information and power transfer (SWIPT) and multi-carrier non-orthogonal multiple access (MC-NOMA) techniques. Also, we jointly optimized power allocation and central processing unit (CPU) resource allocation to minimize latency-constrained energy consumption. We formulated an optimization problem using a Markov decision process (MDP) and utilized a deep deterministic policy gradient (DDPG) reinforcement learning algorithm to solve our MDP problem. We tested the proposed algorithm through extensive simulations and confirmed it converges in a stable manner and provides enhanced energy efficiency compared to conventional schemes.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 600-607"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959523001534/pdfft?md5=b839734416c8d6f8c91205a34647aba5&pid=1-s2.0-S2405959523001534-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138620639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2024.01.003
Jongtaek Oh , Sunghoon Kim
Although estimating the azimuth using a geomagnetic sensor is very useful, the estimation error can be very large due to surrounding geomagnetic disturbances. We propose a novel method that appropriately preprocesses geomagnetic and inertial sensor data so that they are suitable for the proposed artificial neural network model, together with a training method for the model. As a result, with regression estimation the probability that the azimuth estimation error is within 1 degree is 96.4%. For classification estimation, when the azimuth estimation probability is 90% or more, the probability that the azimuth estimation error is within 1 degree is 100%.
{"title":"Azimuth estimation based on CNN and LSTM for geomagnetic and inertial sensors data","authors":"Jongtaek Oh , Sunghoon Kim","doi":"10.1016/j.icte.2024.01.003","DOIUrl":"10.1016/j.icte.2024.01.003","url":null,"abstract":"<div><p>Although estimating the azimuth using a geomagnetic sensor is very useful, the estimation error may be very large due to the surrounding geomagnetic disturbance. We proposed a novel method for preprocessing appropriately for geomagnetic and inertial sensor data to be suitable for the proposed Artificial Neural Network model and training method for the model. As a result, the probability of azimuth estimation error within 1 degree is 96.4% with regression estimation. For classification estimation, when the azimuth estimation probability is 90% or more, the probability that the azimuth estimation error is within 1 degree is 100%.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 626-631"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959524000031/pdfft?md5=e1253fa6fcefd9e12cab4c7859badc1b&pid=1-s2.0-S2405959524000031-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139539471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2024.02.011
Najam Us Saqib , Shilun Song , Huiyang Xie , Zhenyu Cao , Gyeong-June Hahm , Kyung-Yul Cheon , Hyenyeon Kwon , Seungkeun Park , Sang-Woon Jeon , Hu Jin
Digital twin (DT) technologies have become increasingly important and useful for wireless communications. In particular, to support soaring wireless traffic with the limited frequency spectrum assigned to legacy cellular systems, efficient operation and management of wireless resources, as well as preemptive prediction of future spectrum usage, are crucially important. For this purpose, DT networks for current fourth-generation (4G) and fifth-generation (5G) networks are constructed in this paper by simultaneously utilizing measurement data from user equipment (UEs) and the geographical information and characteristics of 4G and 5G base stations (BSs) within a specific observation area. Representative case studies are provided to demonstrate the usefulness of DT-enabled cellular network management and prediction. As a real-time or near-real-time DT application, the impact and benefit of dual connectivity and dynamic spectrum sharing between 4G and 5G networks are analyzed. As a non-real-time DT application, long-term improvements of 5G networks, such as densification of BSs, implementation of advanced multiple-input multiple-output technologies, and assignment of additional spectrum, are analyzed and compared.
{"title":"Digital twin enabled cellular network management and prediction","authors":"Najam Us Saqib , Shilun Song , Huiyang Xie , Zhenyu Cao , Gyeong-June Hahm , Kyung-Yul Cheon , Hyenyeon Kwon , Seungkeun Park , Sang-Woon Jeon , Hu Jin","doi":"10.1016/j.icte.2024.02.011","DOIUrl":"https://doi.org/10.1016/j.icte.2024.02.011","url":null,"abstract":"<div><p>Digital twin (DT) technologies have been increasingly important and useful for wireless communications. In particular, to support soaring wireless traffic with limited frequency spectrum assigned to legacy cellular systems, efficient operation and management of wireless resources as well as preemptively prediction of future spectrum usage are crucially important. For such purpose, DT networks for current fourth-generation (4G) and fifth-generation (5G) networks are constructed in this paper, by simultaneously utilizing measurement data from user equipments (UEs) and geographical information and characteristics of 4G and 5G base stations (BSs) within a specific observation area. Representative case studies are provided to demonstrate the usefulness of DT enabled cellular network management and prediction. As a real or near real time DT application, the impact and benefit of dual connectivity and dynamic spectrum sharing between 4G and 5G networks are analyzed. As a non-real time DT application, long-term improvement of 5G networks such as densification of BSs, implementation of advanced multiple input and multiple output technologies, and assignment of additional spectrum are analyzed and compared.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 479-484"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959524000249/pdfft?md5=7dcbb6b72beaeccf0c7b92156f6e9327&pid=1-s2.0-S2405959524000249-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141438751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2023.12.008
Joonseo Ha, Heejun Roh
Recently, QUIC has been standardized for secure and faster connections, but it is unclear whether QUIC can cope with website fingerprinting (WF), a technique to infer visited websites from network traffic, since most existing efforts have targeted TCP-induced traffic. To this end, we propose a novel QUIC WF technique based on Automated Machine Learning (AutoML). In our approach, we revisit traffic features that have appeared in the literature but rely on an AutoML framework to achieve best practice without manual intervention. Through experiments, we show that our technique outperforms state-of-the-art WF techniques, with an F1-score of 99.79% and a 20-precision of 92.60%.
{"title":"QUIC website fingerprinting based on automated machine learning","authors":"Joonseo Ha, Heejun Roh","doi":"10.1016/j.icte.2023.12.008","DOIUrl":"https://doi.org/10.1016/j.icte.2023.12.008","url":null,"abstract":"<div><p>Recently, QUIC for the secure and faster connections has standardized but it is unclear that QUIC can cope with website fingerprinting (WF), a technique to infer visited websites from network traffic, since most existing efforts targeted TCP-induced traffic. To this end, we propose a novel QUIC WF technique based on Automated Machine Learning (AutoML). In our approach, we revisit traffic features appeared in literature, but relies on an AutoML framework to achieve best practice without manual intervention. Through experiments, we show that our technique outperforms state-of-the-art WF techniques with an F1-score of 99.79% and a 20-precision of 92.60%.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 594-599"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959523001662/pdfft?md5=167bdfd44dc869b16bc3198356f20e4e&pid=1-s2.0-S2405959523001662-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141439133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2024.04.012
Rashmi Sahay , Anand Nayyar , Rajesh Kumar Shrivastava , Muhammad Bilal , Simar Preet Singh , Sangheon Pack
The network of resource-constrained devices, also known as Low-power and Lossy Networks (LLNs), constitutes the edge tier of Internet of Things (IoT) applications such as smart homes, smart cities, and connected vehicles. The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) ensures efficient routing in the edge tier of the IoT environment. However, RPL has inherent vulnerabilities that allow malicious insider entities to instigate several security attacks in the IoT network. As a result, IoT networks suffer from resource depletion, performance degradation, and traffic disruption. Recent literature discusses several machine learning algorithms to detect one or more routing attacks. However, IoT infrastructures are expanding, and so are the attack surfaces. Therefore, it is essential to have a solution that can adapt to this change. This paper introduces a comprehensive framework to detect routing attacks within Low-power and Lossy Networks (LLNs). The proposed solution leverages deep learning by combining a Restricted Boltzmann Machine (RBM) and Long Short-Term Memory (LSTM). The framework is trained on 11 network parameters to understand and predict normal network behavior. Anomalies, identified as deviations from the forecast trends, serve as indicators of potential routing attacks and thus address vulnerabilities in RPL.
{"title":"Routing attack induced anomaly detection in IoT network using RBM-LSTM","authors":"Rashmi Sahay , Anand Nayyar , Rajesh Kumar Shrivastava , Muhammad Bilal , Simar Preet Singh , Sangheon Pack","doi":"10.1016/j.icte.2024.04.012","DOIUrl":"10.1016/j.icte.2024.04.012","url":null,"abstract":"<div><p>The network of resource constraint devices, also known as the Low power and Lossy Networks (LLNs), constitutes the edge tire of the Internet of Things applications like smart homes, smart cities, and connected vehicles. The IPv6 Routing Protocol over Low power and lossy networks (RPL) ensures efficient routing in the edge tire of the IoT environment. However, RPL has inherent vulnerabilities that allow malicious insider entities to instigate several security attacks in the IoT network. As a result, the IoT networks suffer from resource depletion, performance degradation, and traffic disruption. Recent literature discusses several machine learning algorithms to detect one or more routing attacks. However, IoT infrastructures are expanding, and so are the attack surfaces. Therefore, it is essential to have a solution that can adapt to this change. This paper introduces a comprehensive framework to detect routing attacks within Low Power and Lossy Networks (LLNs). The proposed solution leverages deep learning by combining Restricted Boltzmann Machine (RBM) and Long Short-Term Memory (LSTM). The framework is trained on 11 network parameters to understand and predict normal network behavior. Anomalies, identified as deviations from the forecast trends, serve as indicators of potential routing attacks and thus address vulnerabilities in the RPL.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 459-464"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959524000493/pdfft?md5=49d65ad955ce303fd98e4af529009f98&pid=1-s2.0-S2405959524000493-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141029786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2024.03.002
Hyunwoo Cho , Jae Min Ahn , Jae Hee Noh , Hong-Yeop Song
In this paper, we design some new LDPC-coded orthogonal modulation (OM) schemes for high data rate transmissions (HDRT) in navigation satellite systems. We analyze their error performance utilizing soft-decision bit metrics and compare them with those of the L61 and L62 signals in the quasi-zenith satellite system (QZSS) for centimeter-level augmentation services (CLAS). Compared to the L62 signals of QZSS, both schemes have higher data rates (a 14.6% increase) and essentially better error performance in the high-SNR region. In the region where the frame error rate (FER) is 10⁻³, one of the proposed schemes has 1.4 dB better error performance in terms of carrier-to-noise ratio (C/N₀).
{"title":"Some new LDPC-coded orthogonal modulation schemes for high data rate transmissions in navigation satellite systems","authors":"Hyunwoo Cho , Jae Min Ahn , Jae Hee Noh , Hong-Yeop Song","doi":"10.1016/j.icte.2024.03.002","DOIUrl":"https://doi.org/10.1016/j.icte.2024.03.002","url":null,"abstract":"<div><p>In this paper, we design some new LDPC-coded orthogonal modulation (OM) schemes for high data rate transmissions (HDRT) in the navigation satellite systems. We analyze their error-performance utilizing soft-decision bit metrics and compare them with those of L61 and L62 signals in the quasi-zenith satellite system (QZSS) for centimeter-level augmentation services (CLAS). Compare to the L62 signals of QZSS, both schemes have higher data rates (14.6% increase) and essentially the better error performance at high SNR region. At the region where frame error rate (FER) <span><math><mrow><mo>=</mo><mn>1</mn><msup><mrow><mn>0</mn></mrow><mrow><mo>−</mo><mn>3</mn></mrow></msup></mrow></math></span>, one of the proposed schemes has better error performance of 1.4 dB in terms of carrier-to-noise ratio <span><math><mrow><mo>(</mo><mi>C</mi><mo>/</mo><msub><mrow><mi>N</mi></mrow><mrow><mn>0</mn></mrow></msub><mo>)</mo></mrow></math></span>.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 588-593"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959524000262/pdfft?md5=11c32e7a05dd74090a11d21c01013434&pid=1-s2.0-S2405959524000262-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141439132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2023.11.005
Shaik Rajak , Inbarasan Muniraj , Poongundran Selvaprabhu , Vinoth Babu Kumaravelu , Md. Abdul Latif Sarker , Sunil Chinnadurai , Dong Seog Han
In this paper, we investigate the energy efficiency (EE) performance of Intelligent Transportation Systems (ITS), which have recently emerged and advanced to support fast as well as safe transportation expansion, via a cooperative IRS-relay network. To improve the EE, the relay model is integrated with an IRS block consisting of a number of passive reflective elements. We analyze the ITS in terms of EE and achievable rate for different signal-to-noise ratio (SNR) values under Nakagami-m fading channel conditions, which helps the system to be implemented in a practical scenario. From the numerical results, it is noticed that the EE of the relay-only, IRS-only, and proposed cooperative relay-IRS-aided networks at an SNR value of 100 dBm is 30, 17, and 48 bits/joule, respectively. In addition, we compare the impact of multi-IRS with the proposed cooperative IRS-relay and conventional relay-supported ITS. Simulation results show that both the proposed cooperative IRS-relay-aided ITS network and the multi-IRS-aided network outperform the relay-assisted ITS as the SNR increases.
{"title":"A novel energy efficient IRS-relay network for ITS with Nakagami-m fading channels","authors":"Shaik Rajak , Inbarasan Muniraj , Poongundran Selvaprabhu , Vinoth Babu Kumaravelu , Md. Abdul Latif Sarker , Sunil Chinnadurai , Dong Seog Han","doi":"10.1016/j.icte.2023.11.005","DOIUrl":"10.1016/j.icte.2023.11.005","url":null,"abstract":"<div><p>In this paper, we have investigated the performance of energy efficiency (EE) for Intelligent Transportation Systems (ITS), which recently emerged and advanced to preserve speed as well as safe transportation expansion via a cooperative IRS-relay network. To improve the EE, the relay model has been integrated with an IRS block consisting of a number of passive reflective elements. We analyze the ITS in terms of EE, and achievable rate, with different signal-to-noise ratio (SNR) values under Nakagami-m fading channel conditions that help the system to implement in a practical scenario. From the numerical results it is noticed that the EE for the only relay, IRS, and proposed cooperative relay-IRS-aided network at SNR value of 100 dBm is 30, 17, and 48 bits/joule respectively. In addition, we compare the impact of multi-IRS with the proposed cooperative IRS-relay and conventional relay-supported ITS. Simulation results show that both the proposed cooperative IRS-relay-aided ITS network and multi-IRS-aided network outperform the relay-assisted ITS with the increase in SNR.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 507-512"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959523001480/pdfft?md5=0dab6f9a82eaa1ac09e78e02eb3a68ed&pid=1-s2.0-S2405959523001480-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139304121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2024.04.002
Sooyoung Jang, Hyung-Il Kim
Despite the growing interest in using deep reinforcement learning (DRL) for drone control, several challenges remain to be addressed, including issues with generalization across task variations and with agent training (which requires significant computational power and time). When the agent's input changes owing to variations in the drone's sensors or mission, significant retraining overhead is required to handle the changes in the input data pattern and to adapt the neural network architecture to the new input. These difficulties severely limit the applicability of DRL in dynamic real-world environments. In this paper, we propose an efficient DRL method that leverages the knowledge of the source agent to accelerate the training of the target agent under task variations. The proposed method consists of three phases: collecting training data for the target agent using the source agent, supervised pre-training of the target agent, and DRL-based fine-tuning. Experimental validation demonstrated a remarkable reduction in training time (up to 94.29%), suggesting a potential avenue for the successful and efficient application of DRL in drone control.
{"title":"Efficient deep reinforcement learning under task variations via knowledge transfer for drone control","authors":"Sooyoung Jang, Hyung-Il Kim","doi":"10.1016/j.icte.2024.04.002","DOIUrl":"10.1016/j.icte.2024.04.002","url":null,"abstract":"<div><p>Despite the growing interest in using deep reinforcement learning (DRL) for drone control, several challenges remain to be addressed, including issues with generalization across task variations and agent training (which requires significant computational power and time). When the agent’s input changes owing to the drone’s sensors or mission variations, significant retraining overhead is required to handle the changes in the input data pattern and the neural network architecture to accommodate the input data. These difficulties severely limit their applicability in dynamic real-world environments. In this paper, we propose an efficient DRL method that leverages the knowledge of the source agent to accelerate the training of the target agent under task variations. The proposed method consists of three phases: collecting training data for the target agent using the source agent, supervised pre-training of the target agent, and DRL-based fine-tuning. Experimental validation demonstrated a remarkable reduction in the training time (up to 94.29%), suggesting a potential avenue for the successful and efficient application of DRL in drone control.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 576-582"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S240595952400033X/pdfft?md5=7d370e1bd566b1fe70dbc9a76bf4c077&pid=1-s2.0-S240595952400033X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140765468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2024.01.004
Hongqing Ding , Fujun He , Pengfei Zhang , Liang Zhang , Xiaoxiao Zhang , Meiyu Qi
Computing-aware networking (CAN) has been introduced to unite computing resources distributed across different platforms. This paper proposes a joint routing and resource allocation model to minimize the total operational cost in CAN. We formulate the problem as an integer linear programming problem and introduce a polynomial-time algorithm for larger problem sizes. The numerical results reveal that the introduced algorithm reduces the computation time by a factor of 9.18 while increasing the objective by no more than 4% compared to the optimal solution; in our examined cases, the proposed model can reduce the total cost by 70% compared to a baseline adopting a two-stage strategy.
{"title":"Optimal routing and heterogeneous resource allocation for computing-aware networks","authors":"Hongqing Ding , Fujun He , Pengfei Zhang , Liang Zhang , Xiaoxiao Zhang , Meiyu Qi","doi":"10.1016/j.icte.2024.01.004","DOIUrl":"10.1016/j.icte.2024.01.004","url":null,"abstract":"<div><p>Computing-aware networking (CAN) is introduced to unite the computing resources distributed in different platforms. This paper proposes a joint routing and resource allocation model to minimize the total operational cost in CAN. We formulate the problem as an integer linear programming problem. We introduce a polynomial-time algorithm for larger-size problems. The numerical results reveal that the introduced algorithm reduces the computation time 9.18 times, with increasing the objective no more than 4% compared to the optimal solution; the proposed model can reduce 70% of the total cost compared to a baseline adopting a two-stage strategy in our examined cases.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 614-619"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959524000043/pdfft?md5=7f67010b74f61693f4d4107c7f0f7b8b&pid=1-s2.0-S2405959524000043-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139537059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01 DOI: 10.1016/j.icte.2024.02.001
Mao V. Ngo , Nguyen-Bao-Long Tran , Hyun-Min Yoo , Yong-Hao Pua , Thanh-Long Le , Xian-Loong Liang , Binbin Chen , Een-Kee Hong , Tony Q.S. Quek
Open Radio Access Network (RAN) is an important architectural design shift for 5G and next-generation telecommunications networks. With open RAN, mobile network operators would be able to mix and match multi-vendor RAN solutions as long as the solutions comply with open standards. The O-RAN Alliance is the global leader in standardizing the open RAN architecture, where the RAN Intelligent Controller (RIC) is positioned centrally as the brain of a RAN to manage and optimize RAN operations. Open-source projects play a key role in accelerating the adoption of the open RAN architecture, especially the use of the RIC. However, the fast pace of development and the lack of documentation in open-source projects create a steep learning curve for beginners. In this paper, we first provide an overview of widely used open-source RIC projects and discuss their pros and cons. We then share our first-hand experience of using a RIC in our campus 5G network, which consists of commercial-grade RAN solutions. In particular, we developed a suite of three RAN control applications (i.e., energy efficiency, interference management, and predictive maintenance) on an open-source RIC, and we deployed and evaluated them on a commercial-grade 5G network on a university campus. For these RIC applications, we designed and evaluated different ML models based on real-world data collected from our 5G network, which we publish together with this paper. Our experimental results show that AI-based RIC applications can achieve more than 90% accuracy in inferring the situation of the RAN for each given task. Our energy-saving RIC application can reduce the energy consumption of the RAN by 65% over a simulated period of one year. Our project also validates the feasibility of interfacing an open-source RIC with existing commercial-grade 5G solutions.
{"title":"RAN Intelligent Controller (RIC): From open-source implementation to real-world validation","authors":"Mao V. Ngo , Nguyen-Bao-Long Tran , Hyun-Min Yoo , Yong-Hao Pua , Thanh-Long Le , Xian-Loong Liang , Binbin Chen , Een-Kee Hong , Tony Q.S. Quek","doi":"10.1016/j.icte.2024.02.001","DOIUrl":"10.1016/j.icte.2024.02.001","url":null,"abstract":"<div><p>Open Radio Access Network (RAN) is an important architecture design shift for 5G and next generation telecommunications networks. With open RAN, mobile network operators would be able to mix and match multi-vendor RAN solutions as long as the solutions comply with open standards. O-RAN Alliance is the global leader in standardizing the open RAN architecture, where RAN Intelligent Controller (RIC) is positioned centrally as the brain of a RAN to manage and optimize the RAN operations. Open-source projects play a key role in accelerating the adoption of open RAN architecture, especially the use of RIC. However, the fast pace of development and the lack of documentation in open-source projects create steep learning curve for beginners. In this paper, we first provide an overview of widely used open-source RIC projects and discuss their pros and cons. We then share our first-hand experience to use RIC in our campus 5G network that consists of commercial-grade RAN solutions. In particular, we developed a suite of three RAN control applications (i.e., energy efficiency, interference management, and predictive maintenance) on an open-source RIC, and we deploy and evaluate them on a commercial-grade 5G network in a university campus. For these RIC applications, we design and evaluate different ML models based on real-world data collected from our 5G network, which we publish together with this paper. Our experimental results show that AI-based RIC applications can achieve more than 90% of accuracy in inferring the situation of the RAN for each given task. Our energy-saving RIC application can reduce 65% of energy consumption of the RAN over a simulated period of one year. Our project also validates the feasibility to interfacing an open-source RIC with existing commercial-grade 5G solutions.</p></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"10 3","pages":"Pages 680-691"},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2405959524000067/pdfft?md5=031459c9690c75736f57dacdd858b45f&pid=1-s2.0-S2405959524000067-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139890545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}