Wenshuai Liu, Yaru Fu, Yongna Guo, Fu Lee Wang, Wen Sun, Yan Zhang
Digital twins (DTs) have emerged as a promising enabler for representing the real-time states of physical worlds and realizing self-sustaining systems. In practice, DTs of physical devices, such as mobile users (MUs), are commonly deployed in multi-access edge computing (MEC) networks to reduce latency. To ensure the accuracy and fidelity of DTs, it is essential for MUs to regularly synchronize their status with their DTs. However, MU mobility introduces significant challenges to DT synchronization. First, MU mobility triggers DT migration, which can cause synchronization failures. Second, MUs require frequent synchronization with their DTs to ensure DT fidelity, whereas DT migration among MEC servers, caused by MU mobility, may occur only infrequently. Accordingly, we propose a two-timescale DT synchronization and migration framework with reliability consideration, formulated as a non-convex stochastic problem that minimizes the long-term average energy consumption of MUs. We use Lyapunov theory to convert the reliability constraints and reformulate the problem as a partially observable Markov decision process (POMDP). Furthermore, we develop a heterogeneous agent proximal policy optimization with Beta distribution (Beta-HAPPO) method to solve it. Numerical results show that the proposed Beta-HAPPO method achieves significant energy savings when compared with other benchmarks.
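The abstract does not detail how the Beta distribution enters Beta-HAPPO; the usual motivation, sketched below under our own assumptions (the `alpha`, `beta`, and power bounds are illustrative), is that Beta samples are naturally bounded in (0, 1), so rescaling them onto a feasible transmit-power interval avoids the boundary-clipping bias of Gaussian policies:

```python
import random

def sample_power(alpha: float, beta: float, p_min: float, p_max: float) -> float:
    """Sample a transmit-power action from a Beta(alpha, beta) policy.

    Beta samples lie in (0, 1), so an affine rescaling maps them exactly
    onto the feasible interval [p_min, p_max] with no clipping at the
    bounds, which is the usual argument for Beta policies over Gaussians
    in bounded continuous action spaces.
    """
    u = random.betavariate(alpha, beta)  # u in (0, 1)
    return p_min + u * (p_max - p_min)

random.seed(0)
powers = [sample_power(2.0, 5.0, 0.0, 0.2) for _ in range(10_000)]
mean_p = sum(powers) / len(powers)
# Beta(2, 5) has mean 2/7, so the rescaled mean sits near 0.2 * 2/7.
```

In an actor-critic method such as HAPPO, `alpha` and `beta` would be produced by each agent's policy network rather than fixed as they are here.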
Title: "Two-Timescale Synchronization and Migration for Digital Twin Networks: A Multi-Agent Deep Reinforcement Learning Approach" (arXiv:2409.01092, arXiv - CS - Networking and Internet Architecture, 2024-09-02).
Sotiris Michaelides, David Rupprecht, Katharina Kohls
Open Radio Access Networks (ORAN) is a new architectural approach, proposed only a few years ago as an expansion of the current Next Generation Radio Access Network (NG-RAN) of 5G. ORAN aims to break the closed RAN market controlled by a handful of vendors by implementing open interfaces between the different Radio Access Network (RAN) components and by introducing modern technologies to the RAN, such as machine learning, virtualization, and disaggregation. However, the architectural design of ORAN has recently raised concerns and debates about its security, which is considered one of its major drawbacks. Several theoretical risk analyses related to ORAN have been conducted, but to the best of our knowledge, no practical one has been performed yet. In this poster, we discuss and propose a minimal, future-proof deployment of an ORAN 5G network able to accommodate various hands-on security analyses of its different elements.
Title: "Poster: Developing an O-RAN Security Test Lab" (arXiv:2409.01107, arXiv - CS - Networking and Internet Architecture, 2024-09-02).
Olivier Bélanger, Jean-Luc Lupien, Olfa Ben Yahia, Stéphane Martel, Antoine Lesage-Landry, Gunes Karabulut Kurt
The rise in low Earth orbit (LEO) satellite Internet services has led to increasing demand, often exceeding available data rates and compromising the quality of service. While deploying more satellites offers a short-term fix, designing higher-performance satellites with enhanced transmission capabilities provides a more sustainable solution. Achieving the necessary high capacity requires interconnecting multiple modem banks within a satellite payload. However, there is a notable gap in research on internal packet routing within extremely high-throughput satellites. To address this, we propose a real-time optimal flow allocation and priority queue scheduling method using online convex optimization-based model predictive control. We model the problem as a multi-commodity flow instance and employ an online interior-point method to solve the routing and scheduling optimization iteratively. This approach minimizes packet loss and supports real-time rerouting with low computational overhead. Our method is tested in simulation on a next-generation extremely high-throughput satellite model, demonstrating its effectiveness compared to a reference batch optimization and to traditional methods.
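The paper's controller is model predictive control solved with an online interior-point method; as a much simpler stand-in that still illustrates online convex optimization over flow allocations (the linear cost model and step size are our own choices), projected online gradient descent keeps per-link flow fractions on the probability simplex while adapting to revealed congestion costs:

```python
def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    u = sorted(v, reverse=True)
    css = 0.0
    rho, cum = 0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        if ui - (css - 1.0) / i > 0:  # always true at i = 1
            rho, cum = i, css
    theta = (cum - 1.0) / rho
    return [max(x - theta, 0.0) for x in v]

def online_flow_allocation(costs, eta=0.1):
    """Projected online gradient descent over per-link flow fractions.

    costs[t][j] is the marginal congestion cost of link j revealed at
    step t; the allocation x stays feasible (nonnegative, sums to 1)
    after every update, so it can be applied in real time.
    """
    n = len(costs[0])
    x = [1.0 / n] * n
    for c in costs:
        x = project_to_simplex([xi - eta * ci for xi, ci in zip(x, c)])
    return x

# Link 0 is consistently cheapest, so the allocation drifts toward it.
costs = [[0.1, 0.9, 0.8]] * 50
x = online_flow_allocation(costs)
```

The projection step is what makes rerouting cheap: each update is a sort plus a pass over the links, rather than a full batch re-optimization per packet burst.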
Title: "Online Convex Optimization for On-Board Routing in High-Throughput Satellites" (arXiv:2409.01488, arXiv - CS - Networking and Internet Architecture, 2024-09-02).
Viktor Trón, Viktor Tóth, Callum Toner, Dan Nickless, Dániel A. Nagy, Áron Fischer, György Barabás
This paper describes in detail how erasure codes are implemented in the Swarm system. First, in Section 1, we introduce erasure codes and show how to apply them to files in Swarm (Section 2). In Section 3, we introduce security levels of data availability and derive their respective parameterisations. In Section 4, we describe a construct that enables cross-neighbourhood redundancy for singleton chunks and completes erasure coding. Finally, in Section 5, we propose a number of retrieval strategies applicable to erasure-coded files.
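Swarm's actual security levels and parameterisations are derived in the paper itself; purely to illustrate the kind of calculation involved (the chunk counts and per-chunk availability q = 0.9 below are our own), an erasure code with k data chunks and p parities is retrievable whenever any k of the n = k + p chunks survive, giving a binomial-tail retrieval probability:

```python
from math import comb

def retrieval_probability(k: int, p: int, q: float) -> float:
    """P(file retrievable) for a (k data + p parity) erasure code.

    Any k of the n = k + p chunks reconstruct the file, so retrieval
    succeeds iff at least k chunks are available, each independently
    available with probability q.
    """
    n = k + p
    return sum(comb(n, i) * q**i * (1 - q) ** (n - i) for i in range(k, n + 1))

# With q = 0.9, a bare 16-chunk file vs. 16 data + 8 parity chunks.
bare = retrieval_probability(16, 0, 0.9)
coded = retrieval_probability(16, 8, 0.9)
```

The contrast is stark: the bare file needs every chunk (probability 0.9^16, roughly 0.19), while the coded file tolerates any 8 losses and is retrievable with probability above 0.99.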
Title: "Non-local redundancy: Erasure coding and dispersed replicas for robust retrieval in the Swarm peer-to-peer network" (arXiv:2409.01259, arXiv - CS - Networking and Internet Architecture, 2024-09-02).
Low Earth Orbit (LEO) Earth Observation (EO) satellites have changed the way we monitor Earth. Acting like moving cameras, EO satellites are organized into constellations with different missions and priorities, and capture vast amounts of data that need to be transmitted to the ground for processing. However, EO satellites have very limited downlink communication capability, constrained by transmission bandwidth, the number and location of ground stations, and small transmission windows due to high-velocity satellite movement. To optimize resource utilization, EO constellations are expected to share communication spectrum and ground stations for maximum communication efficiency. In this paper, we investigate a new attack surface exposed by resource competition in EO constellations: delaying or dropping Earth monitoring data using legitimate EO services. Specifically, an attacker can inject high-priority requests to temporarily preempt low-priority data transmission windows. Furthermore, we show that by exploiting predictable satellite dynamics, an attacker can intelligently target critical data from low-priority satellites, either delaying its delivery or irreversibly dropping it. We formulate two attacks, the data delay attack and the data overflow attack, design algorithms to assist attackers in devising attack strategies, and analyze their feasibility or optimality in typical scenarios. We then conduct trace-driven simulations using real-world satellite images and orbit data to evaluate the success probability of launching these attacks under realistic satellite communication settings. We also discuss possible defenses against these attacks.
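The attack algorithms themselves are formulated in the paper; the preemption mechanism they exploit can be sketched with a toy strict-priority pass scheduler (the window capacity, request sizes, and names below are hypothetical):

```python
import heapq

def schedule_pass(requests, window_capacity):
    """Serve download requests within one ground-station pass.

    requests: list of (priority, name, size), lower number = higher
    priority. Requests are served in strict priority order; whatever is
    left when the window's capacity runs out must wait for a later pass,
    which is exactly the delay the attack exploits.
    """
    heap = list(requests)
    heapq.heapify(heap)
    served, remaining = [], window_capacity
    while heap and remaining > 0:
        prio, name, size = heapq.heappop(heap)
        if size <= remaining:
            served.append(name)
            remaining -= size
    return served

# Without the attack, a 60-unit science download fits in a 100-unit pass.
benign = schedule_pass([(2, "science", 60)], 100)
# The attacker injects legitimate-looking high-priority requests that
# consume the window first, preempting the low-priority science data.
attacked = schedule_pass(
    [(1, "inject-a", 50), (1, "inject-b", 50), (2, "science", 60)], 100
)
```

If the victim satellite's on-board buffer fills before the next usable window, the delayed data is dropped outright, which is the overflow variant of the attack.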
Title: "Infiltrating the Sky: Data Delay and Overflow Attacks in Earth Observation Constellations" by Xiaojian Wang, Ruozhou Yu, Dejun Yang, Guoliang Xue (arXiv:2409.00897, arXiv - CS - Networking and Internet Architecture, 2024-09-02).
This study presents a novel method combining Graph Neural Networks (GNNs) and Generative Adversarial Networks (GANs) for generating packet-level header traces. By incorporating word2vec embeddings, this work significantly mitigates the curse of dimensionality often associated with traditional one-hot encoding, thereby enhancing the training effectiveness of the model. Experimental results demonstrate that word2vec encoding captures semantic relationships between field values more effectively than one-hot encoding, improving the accuracy and naturalness of the generated data. Additionally, the introduction of GNNs further boosts the discriminator's ability to distinguish between real and synthetic data, leading to more realistic and diverse generated samples. The findings not only provide a new theoretical approach for network traffic data generation but also offer practical insights into improving data synthesis quality through enhanced feature representation and model architecture. Future research could focus on optimizing the integration of GNNs and GANs, reducing computational costs, and validating the model's generalizability on larger datasets. Exploring other encoding methods and model structure improvements may also yield new possibilities for network data generation. This research advances the field of data synthesis, with potential applications in network security and traffic analysis.
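As a minimal illustration of the dimensionality argument only (the vocabulary size, embedding width, and the untrained all-zero table are our own), a one-hot vector must be as wide as a header field's entire value vocabulary, while an embedding lookup returns a fixed small dense vector regardless of vocabulary size:

```python
def one_hot(index: int, vocab_size: int) -> list:
    """One-hot encoding: a vector as wide as the whole vocabulary."""
    v = [0.0] * vocab_size
    v[index] = 1.0
    return v

def embed(index: int, table: list) -> list:
    """Embedding lookup: a small dense vector per field value."""
    return table[index]

# A port field has 65536 possible values: one-hot needs a 65536-dim
# vector per occurrence, while a 32-dim embedding table (untrained and
# all-zero here; word2vec would learn the entries) stays at width 32.
VOCAB, DIM = 65536, 32
table = [[0.0] * DIM for _ in range(VOCAB)]
oh = one_hot(443, VOCAB)
emb = embed(443, table)
```

The learned entries are also what carries the semantic similarity between field values that the experiments credit for the improved generation quality.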
Title: "Generating Packet-Level Header Traces Using GNN-powered GAN" by Zhen Xu (arXiv:2409.01265, arXiv - CS - Networking and Internet Architecture, 2024-09-02).
Md. Monzurul Amin Ifath, Miguel Neves, Israat Haque
Stream processing applications have been widely adopted due to real-time data analytics demands, e.g., fraud detection, video analytics, IoT applications. Unfortunately, prototyping and testing these applications is still a cumbersome process for developers, one that usually requires an expensive testbed and deep multi-disciplinary expertise, including in areas such as networking, distributed systems, and data engineering. As a result, it takes a long time to deploy stream processing applications into production, and yet users face several correctness and performance issues. In this paper, we present stream2gym, a tool for the fast prototyping of large-scale distributed stream processing applications. stream2gym builds on Mininet, a widely adopted network emulation platform, and provides a high-level interface to enable developers to easily test their applications under various operating conditions. We demonstrate the benefits of stream2gym by prototyping and testing several applications as well as reproducing key findings from prior research work in video analytics and network traffic monitoring. Moreover, we show stream2gym presents accurate results compared to a hardware testbed while consuming a small amount of resources (enough to be supported on a single commodity laptop even when emulating a dozen processing nodes).
Title: "Fast Prototyping of Distributed Stream Processing Applications with stream2gym" (arXiv:2409.00577, arXiv - CS - Networking and Internet Architecture, 2024-09-01).
Air components, including UAVs, planes, balloons, and satellites, have been widely utilized because the fixed capacity of ground infrastructure cannot meet the dynamic load of users. However, since these air components must be coordinated to achieve the desired quality of service, several next-generation paradigms have been defined, including air computing. Nevertheless, even though many studies and open research issues exist for air computing, the available test environments are limited and cannot satisfy the performance evaluation requirements of such a dynamic environment. Therefore, in this study, we introduce our discrete event simulator, AirCompSim, which models an air computing environment with dynamically changing requirements, loads, and capacities through its modular structure. To show its capabilities, a dynamic capacity enhancement scenario is used to investigate the effect of the number of users, the number of UAVs, and the requirements of different application types on the average task success rate, service time, and server utilization. The results demonstrate that AirCompSim can be used for experiments in air computing.
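AirCompSim's internals are not described here beyond its modular discrete-event design; the core of any discrete event simulator is a time-ordered event loop, sketched below with a hypothetical single-server task model and the task-success-rate metric the abstract mentions:

```python
import heapq
from collections import deque

def run_des(arrivals, service_time, deadline):
    """Minimal discrete event simulation: one server, FIFO queue.

    Events are processed in timestamp order from a heap. A task succeeds
    if it completes within `deadline` of its arrival; the function
    returns the task success rate, one of the metrics a simulator like
    AirCompSim reports.
    """
    events = [(t, "arrival", t) for t in arrivals]  # (time, kind, arrived)
    heapq.heapify(events)
    queue = deque()
    busy = False
    successes, total = 0, len(arrivals)
    while events:
        now, kind, arrived = heapq.heappop(events)
        if kind == "arrival":
            if busy:
                queue.append(arrived)
            else:
                busy = True
                heapq.heappush(events, (now + service_time, "done", arrived))
        else:  # task completion
            if now - arrived <= deadline:
                successes += 1
            if queue:
                heapq.heappush(
                    events, (now + service_time, "done", queue.popleft())
                )
            else:
                busy = False
    return successes / total

# Three tasks arrive at t = 0, 1, 2; each needs 2 time units; deadline 3.
# The third task queues behind the others and misses its deadline.
rate = run_des([0.0, 1.0, 2.0], service_time=2.0, deadline=3.0)
```

A full simulator layers mobility, capacity, and application models on top, but they all reduce to scheduling future events onto this same queue.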
Title: "AirCompSim: A Discrete Event Simulator for Air Computing" by Baris Yamansavascilar, Atay Ozgovde, Cem Ersoy (arXiv:2409.00689, arXiv - CS - Networking and Internet Architecture, 2024-09-01).
In the context of 5G platoon communications, the Platoon Leader Vehicle (PLV) employs groupcasting to transmit control messages to Platoon Member Vehicles (PMVs). Due to the restricted transmission power for groupcasting, the PLV may need to pick one PMV as the Platoon Relay Vehicle (PRV), responsible for re-groupcasting the PLV's messages. To optimize the usage of limited spectrum resources, resource sharing can be adopted to enhance spectrum efficiency within the platoon. This study proposes a resource allocation method for platoon groupcasting based on transmission reliability, called Resource Sharing for Platoon Groupcasting (RSPG). RSPG utilizes tripartite matching to assign a subchannel to either a PLV or a PRV, which shares the assigned subchannel with a corresponding individual entity (IE) that does not belong to any platoon. Simulation results show that the proposed method performs better in terms of the QoS satisfaction rate of IEs, the number of allocated subchannels for platoons, and spectral efficiency.
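The abstract does not specify how RSPG's tripartite matching is computed; as a toy stand-in (the utility values below are invented), exhaustive search over (subchannel, platoon, IE) triples shows what a tripartite matching optimizes: a disjoint assignment maximizing total utility, where utility would encode transmission reliability in RSPG's setting:

```python
from itertools import permutations

def best_tripartite_matching(utility):
    """Exhaustive tripartite matching for tiny instances.

    utility[s][p][e]: value of letting platoon p and individual entity e
    share subchannel s. Each subchannel, platoon, and IE is used at most
    once; returns (best total utility, list of (s, p, e) triples).
    """
    n = len(utility)  # equal numbers of subchannels, platoons, IEs
    best_val, best_match = float("-inf"), None
    for perm_p in permutations(range(n)):
        for perm_e in permutations(range(n)):
            val = sum(utility[s][perm_p[s]][perm_e[s]] for s in range(n))
            if val > best_val:
                best_val = val
                best_match = [(s, perm_p[s], perm_e[s]) for s in range(n)]
    return best_val, best_match

# Two subchannels, two platoons, two IEs.
u = [
    [[5.0, 1.0], [2.0, 2.0]],  # subchannel 0
    [[1.0, 1.0], [1.0, 4.0]],  # subchannel 1
]
val, match = best_tripartite_matching(u)
```

Exhaustive search is only viable for tiny instances; three-dimensional matching is NP-hard in general, which is why practical schemes such as RSPG rely on structured or heuristic matching rather than enumeration.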
Title: "Reliability-considered Multi-platoon's Groupcasting using the Resource Sharing Method" by Chung-Ming Huang, Yen-Hung Wu, Duy-Tuan Dao (arXiv:2409.00719, arXiv - CS - Networking and Internet Architecture, 2024-09-01).
Future wireless networks are envisioned to support both sensing and artificial intelligence (AI) services. However, conventional integrated sensing and communication (ISAC) networks may not be suitable because they ignore the diverse task-specific data utilities of different AI applications. In this letter, a full-duplex unmanned aerial vehicle (UAV)-enabled wireless network providing sensing and edge learning services is investigated. To maximize learning performance while ensuring sensing quality, a convergence-guaranteed iterative algorithm is developed to jointly determine the uplink time allocation, the UAV trajectory, and the transmit power. Simulation results show that the proposed algorithm significantly outperforms the baselines and demonstrate the critical tradeoff between sensing and learning performance.
Title: "UAV-Enabled Wireless Networks for Integrated Sensing and Learning-Oriented Communication" by Wenhao Zhuang, Xinyu He, Yuyi Mao, Juan Liu (arXiv:2409.00405, arXiv - CS - Networking and Internet Architecture, 2024-08-31).