A Survey on Scheduling Techniques in Computing and Network Convergence
Shujiong Tang;Yue Yu;Hui Wang;Guiliang Wang;Wuhui Chen;Zenglin Xu;Song Guo;Wen Gao
Pub Date: 2023-11-01 | DOI: 10.1109/COMST.2023.3329027 | IEEE Communications Surveys & Tutorials, vol. 26, no. 1, pp. 160-195
The computing demand of massive applications has led to the ubiquitous deployment of computing power, which in turn creates an urgent need for higher-level computing resource scheduling services. Computing and Network Convergence (CNC), a new type of infrastructure, has therefore become a hot topic. To realize the visions of CNC, such as computing-network integration, ubiquitous collaboration, latency-free operation, and ready-to-use services, an intelligent scheduling strategy for CNC must be designed jointly with the network. However, CNC is built on the cloud, the edge, and a vast number of terminals, which makes scheduling harder: requests arrive over wide areas, and resources must be flexibly arranged, interconnected, and adapted. In view of this, this survey comprehensively reviews the literature on scheduling in these scenarios. We examine the scheduling problem of CNC from the perspectives of heterogeneous resources, multi-objective optimization, and diverse tasks. Possible explanations and implications are discussed. Finally, we point out important challenges for future work.
Seamless Connectivity: The Power of Integrating Power Line and Wireless Communications
Moisés V. Ribeiro;Mateus de L. Filomeno;Ândrei Camponogara;Thiago Rodrigues Oliveira;Túlio F. Moreira;Stefano Galli;H. Vincent Poor
Pub Date: 2023-10-27 | DOI: 10.1109/COMST.2023.3327321 | IEEE Communications Surveys & Tutorials, vol. 26, no. 1, pp. 1-40
Aimed at inspiring researchers and practitioners, this paper presents comprehensive and multidimensional discussions of the potential and benefits of integrating power line and wireless communication systems to enhance data communication in both indoor and outdoor environments. By systematically examining the strengths and weaknesses of power line and wireless communication systems, we first elucidate the rationale behind their integration—hybrid communication systems. Then we categorize these hybrid communication systems and provide an in-depth analysis of each category, considering technical, chronological, industrial, and standardization aspects. Moreover, we discuss strategies for designing hybrid communication devices. Next, a qualitative assessment of hybrid communication systems follows, offering guidance for advancing their use, while a concise summary of key lessons imparts valuable insights for future endeavors. Lastly, the paper identifies a range of pertinent topics for further exploration and, consequently, advancing hybrid communication systems for smart grid, Internet of Things, and Industry 4.0 or 5.0 applications.
{"title":"Seamless Connectivity: The Power of Integrating Power Line and Wireless Communications","authors":"Moisés V. Ribeiro;Mateus de L. Filomeno;Ândrei Camponogara;Thiago Rodrigues Oliveira;Túlio F. Moreira;Stefano Galli;H. Vincent Poor","doi":"10.1109/COMST.2023.3327321","DOIUrl":"10.1109/COMST.2023.3327321","url":null,"abstract":"Aimed at inspiring researchers and practitioners, this paper presents comprehensive and multidimensional discussions of the potential and benefits of integrating power line and wireless communication systems to enhance data communication in both indoor and outdoor environments. By systematically examining the strengths and weaknesses of power line and wireless communication systems, we first elucidate the rationale behind their integration—hybrid communication systems. Then we categorize these hybrid communication systems and provide an in-depth analysis of each category, considering technical, chronological, industrial, and standardization aspects. Moreover, we discuss strategies for designing hybrid communication devices. Next, a qualitative assessment of hybrid communication systems follows, offering guidance for advancing their use, while a concise summary of key lessons imparts valuable insights for future endeavors. Lastly, the paper identifies a range of pertinent topics for further exploration and, consequently, advancing hybrid communication systems for smart grid, Internet of Things, and Industry 4.0 or 5.0 applications.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 1","pages":"1-40"},"PeriodicalIF":35.6,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135214243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physical Layer Security for Authentication, Confidentiality, and Malicious Node Detection: A Paradigm Shift in Securing IoT Networks
Elmehdi Illi;Marwa Qaraqe;Saud Althunibat;Abdullah Alhasanat;Moath Alsafasfeh;Marcus de Ree;Georgios Mantas;Jonathan Rodriguez;Waqas Aman;Saif Al-Kuwari
Pub Date: 2023-10-25 | DOI: 10.1109/COMST.2023.3327327 | IEEE Communications Surveys & Tutorials, vol. 26, no. 1, pp. 347-388
The pervasiveness of commercial Internet of Things (IoT) deployments around the globe is expected to reach significant levels with the upcoming sixth generation of mobile networks (6G). Over the past years, wireless standardization bodies worldwide have been prominently active in deploying and optimizing such IoT networks and fusing them with current and future cellular networks. Nonetheless, the openness of wireless transmissions and the forecasted explosion in the number of connected devices will provoke unprecedented security leakages and vulnerabilities. Alongside the key targets of 6G and the IoT, it is therefore of paramount importance to provide sound yet lightweight security mechanisms for ultra-massively connected heterogeneous networks. Recently, significant efforts have been made to pave the way for the integration of physical layer security (PLS) in contemporary and future networks. The primary motivation behind its deployment lies in its low complexity and its ability to provide information-theoretically secure transmissions, which alleviates the burden of implementing complex cryptographic schemes. This survey overviews recent advancements in PLS techniques, with a particular focus on their application to the IoT. We review recent PLS techniques aimed at ensuring message confidentiality as well as node/message authentication and malicious node detection, and discuss their application scenarios and underlying pros and cons. On top of that, we explore recent findings on incorporating cutting-edge physical-layer technologies, such as non-orthogonal multiple access, reconfigurable intelligent surfaces, joint communication and sensing, and optical wireless/Terahertz communications, to boost confidentiality and authentication at the physical layer. Lastly, promising extensions and future directions are discussed based on the quantified pros and cons of each PLS category, opening up timely research directions and highlighting current and future challenges faced by PLS.
Domain Generalization in Machine Learning Models for Wireless Communications: Concepts, State-of-the-Art, and Open Issues
Mohamed Akrout;Amal Feriani;Faouzi Bellili;Amine Mezghani;Ekram Hossain
Pub Date: 2023-10-20 | DOI: 10.1109/COMST.2023.3326399 | IEEE Communications Surveys & Tutorials, vol. 25, no. 4, pp. 3014-3037
Data-driven machine learning (ML) is promoted as one potential technology for next-generation wireless systems. This has led to a large body of research that applies ML techniques to problems in different layers of the wireless transmission link. However, most of these applications rely on supervised learning, which assumes that the source (training) and target (test) data are independent and identically distributed (i.i.d.). This assumption is often violated in the real world due to domain or distribution shifts between the source and the target data. Thus, it is important to ensure that these algorithms generalize to out-of-distribution (OOD) data. In this context, domain generalization (DG) tackles OOD-related issues by learning, from multiple distinct source domains/datasets, models that generalize to unseen domains without additional fine-tuning. Motivated by the importance of DG requirements for wireless applications, we present a comprehensive overview of recent developments in DG and the different sources of domain shift. We also summarize existing DG methods, review their applications to selected wireless communication problems, and conclude with insights and open questions.
A Survey and Comparison of Post-Quantum and Quantum Blockchains
Zebo Yang;Haneen Alfauri;Behrooz Farkiani;Raj Jain;Roberto Di Pietro;Aiman Erbad
Pub Date: 2023-10-19 | DOI: 10.1109/COMST.2023.3325761 | IEEE Communications Surveys & Tutorials, vol. 26, no. 2, pp. 967-1002 | Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10288193
Blockchains have gained substantial attention from academia and industry for their ability to facilitate decentralized trust and communications. However, the rapid progress of quantum computing poses a significant threat to the security of existing blockchain technologies. Notably, the emergence of Shor’s and Grover’s algorithms raises concerns regarding the compromise of the cryptographic systems underlying blockchains. Consequently, it is essential to develop methods that reinforce blockchain technology against quantum attacks. In response to this challenge, two distinct approaches have been proposed. The first approach involves post-quantum blockchains, which aim to utilize classical cryptographic algorithms resilient to quantum attacks. The second approach explores quantum blockchains, which leverage the power of quantum computers and networks to rebuild the foundations of blockchains. This paper aims to provide a comprehensive overview and comparison of post-quantum and quantum blockchains while exploring open questions and remaining challenges in these domains. It offers an in-depth introduction, examines differences in blockchain structure, security, privacy, and other key factors, and concludes by discussing current research trends.
{"title":"A Survey and Comparison of Post-Quantum and Quantum Blockchains","authors":"Zebo Yang;Haneen Alfauri;Behrooz Farkiani;Raj Jain;Roberto Di Pietro;Aiman Erbad","doi":"10.1109/COMST.2023.3325761","DOIUrl":"10.1109/COMST.2023.3325761","url":null,"abstract":"Blockchains have gained substantial attention from academia and industry for their ability to facilitate decentralized trust and communications. However, the rapid progress of quantum computing poses a significant threat to the security of existing blockchain technologies. Notably, the emergence of Shor’s and Grover’s algorithms raises concerns regarding the compromise of the cryptographic systems underlying blockchains. Consequently, it is essential to develop methods that reinforce blockchain technology against quantum attacks. In response to this challenge, two distinct approaches have been proposed. The first approach involves post-quantum blockchains, which aim to utilize classical cryptographic algorithms resilient to quantum attacks. The second approach explores quantum blockchains, which leverage the power of quantum computers and networks to rebuild the foundations of blockchains. This paper aims to provide a comprehensive overview and comparison of post-quantum and quantum blockchains while exploring open questions and remaining challenges in these domains. It offers an in-depth introduction, examines differences in blockchain structure, security, privacy, and other key factors, and concludes by discussing current research trends.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"26 2","pages":"967-1002"},"PeriodicalIF":35.6,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10288193","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135057264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey on X.509 Public-Key Infrastructure, Certificate Revocation, and Their Modern Implementation on Blockchain and Ledger Technologies
Salabat Khan;Fei Luo;Zijian Zhang;Farhan Ullah;Farhan Amin;Syed Furqan Qadri;Md Belal Bin Heyat;Rukhsana Ruby;Lu Wang;Shamsher Ullah;Meng Li;Victor C. M. Leung;Kaishun Wu
Pub Date: 2023-10-13 | DOI: 10.1109/COMST.2023.3323640 | IEEE Communications Surveys & Tutorials, vol. 25, no. 4, pp. 2529-2568
Cyber-attacks are becoming more common against Internet users due to the increasing dependency on online communication in their daily lives. X.509 Public-Key Infrastructure (PKIX) is the most widely adopted and used system to secure online communications and digital identities. However, different attack vectors exist against the PKIX system, which attackers exploit to breach the security of the reliant protocols. Recently, various projects (e.g., Let’s Encrypt and Google Certificate Transparency) have been started to encrypt online communications, fix PKIX vulnerabilities, and guard Internet users against cyber-attacks. This survey focuses on classical PKIX proposals, certificate revocation proposals, and their implementation on blockchain as well as ledger technologies. First, we discuss the PKIX architecture, the history of the World Wide Web, the certificate issuance process, and possible attacks on the certificate issuance process. Second, a taxonomy of PKIX proposals, revocation proposals, and their modern implementation is provided. Then, a set of evaluation metrics is defined for comparison. Finally, the leading proposals are compared using 15 evaluation metrics and 13 cyber-attacks before presenting the lessons learned and suggesting future PKIX and revocation research.
{"title":"A Survey on X.509 Public-Key Infrastructure, Certificate Revocation, and Their Modern Implementation on Blockchain and Ledger Technologies","authors":"Salabat Khan;Fei Luo;Zijian Zhang;Farhan Ullah;Farhan Amin;Syed Furqan Qadri;Md Belal Bin Heyat;Rukhsana Ruby;Lu Wang;Shamsher Ullah;Meng Li;Victor C. M. Leung;Kaishun Wu","doi":"10.1109/COMST.2023.3323640","DOIUrl":"10.1109/COMST.2023.3323640","url":null,"abstract":"Cyber-attacks are becoming more common against Internet users due to the increasing dependency on online communication in their daily lives. X.509 Public-Key Infrastructure (PKIX) is the most widely adopted and used system to secure online communications and digital identities. However, different attack vectors exist against the PKIX system, which attackers exploit to breach the security of the reliant protocols. Recently, various projects (e.g., Let’s Encrypt and Google Certificate Transparency) have been started to encrypt online communications, fix PKIX vulnerabilities, and guard Internet users against cyber-attacks. This survey focuses on classical PKIX proposals, certificate revocation proposals, and their implementation on blockchain as well as ledger technologies. First, we discuss the PKIX architecture, the history of the World Wide Web, the certificate issuance process, and possible attacks on the certificate issuance process. Second, a taxonomy of PKIX proposals, revocation proposals, and their modern implementation is provided. Then, a set of evaluation metrics is defined for comparison. Finally, the leading proposals are compared using 15 evaluation metrics and 13 cyber-attacks before presenting the lessons learned and suggesting future PKIX and revocation research.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"25 4","pages":"2529-2568"},"PeriodicalIF":35.6,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135783336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Autonomous Multi-UAV Wireless Network: A Survey of Reinforcement Learning-Based Approaches
Yu Bai;Hui Zhao;Xin Zhang;Zheng Chang;Riku Jäntti;Kun Yang
Pub Date: 2023-10-12 | DOI: 10.1109/COMST.2023.3323344 | IEEE Communications Surveys & Tutorials, vol. 25, no. 4, pp. 3038-3067 | Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10283826
Unmanned aerial vehicle (UAV)-based wireless networks have received increasing research interest in recent years and are gradually being utilized in various aspects of our society. The growing complexity of UAV applications, such as disaster management, plant protection, and environmental monitoring, has resulted in escalating and stringent requirements for UAV networks that a single UAV cannot fulfill. To address this, multi-UAV wireless networks (MUWNs) have emerged, offering enhanced resource-carrying capacity and enabling collaborative mission completion by multiple UAVs. However, the effective operation of MUWNs necessitates a higher level of autonomy and intelligence, particularly in decision-making and multi-objective optimization under diverse environmental conditions. Reinforcement learning (RL), an intelligent and goal-oriented decision-making approach, has emerged as a promising solution for the intricate tasks associated with MUWNs. The literature, however, still lacks a comprehensive survey of recent advancements in RL-based MUWNs. This paper aims to bridge this gap by providing a comprehensive review of RL-based approaches in the context of autonomous MUWNs. We present an informative overview of RL and demonstrate its application within the framework of MUWNs. Specifically, we summarize various applications of RL in MUWNs, including data access, sensing and collection, resource allocation for wireless connectivity, UAV-assisted mobile edge computing, localization, trajectory planning, and network security. Furthermore, we identify and discuss several open challenges based on the insights gained from our review.
{"title":"Toward Autonomous Multi-UAV Wireless Network: A Survey of Reinforcement Learning-Based Approaches","authors":"Yu Bai;Hui Zhao;Xin Zhang;Zheng Chang;Riku Jäntti;Kun Yang","doi":"10.1109/COMST.2023.3323344","DOIUrl":"10.1109/COMST.2023.3323344","url":null,"abstract":"Unmanned aerial vehicle (UAV)-based wireless networks have received increasing research interest in recent years and are gradually being utilized in various aspects of our society. The growing complexity of UAV applications such as disaster management, plant protection, and environment monitoring, has resulted in escalating and stringent requirements for UAV networks that a single UAV cannot fulfill. To address this, multi-UAV wireless networks (MUWNs) have emerged, offering enhanced resource-carrying capacity and enabling collaborative mission completion by multiple UAVs. However, the effective operation of MUWNs necessitates a higher level of autonomy and intelligence, particularly in decision-making and multi-objective optimization under diverse environmental conditions. Reinforcement Learning (RL), an intelligent and goal-oriented decision-making approach, has emerged as a promising solution for addressing the intricate tasks associated with MUWNs. As one may notice, the literature still lacks a comprehensive survey of recent advancements in RL-based MUWNs. Thus, this paper aims to bridge this gap by providing a comprehensive review of RL-based approaches in the context of autonomous MUWNs. We present an informative overview of RL and demonstrate its application within the framework of MUWNs. Specifically, we summarize various applications of RL in MUWNs, including data access, sensing and collection, resource allocation for wireless connectivity, UAV-assisted mobile edge computing, localization, trajectory planning, and network security. Furthermore, we identify and discuss several open challenges based on the insights gained from our review.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"25 4","pages":"3038-3067"},"PeriodicalIF":35.6,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10283826","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135783332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge Video Analytics: A Survey on Applications, Systems and Enabling Techniques
Renjie Xu;Saiedeh Razavi;Rong Zheng
Pub Date: 2023-10-10 | DOI: 10.1109/COMST.2023.3323091 | IEEE Communications Surveys & Tutorials, vol. 25, no. 4, pp. 2951-2982
Video, as a key driver of the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting intersection, edge video analytics (EVA), has begun to attract widespread attention. Nevertheless, only a few loosely related surveys exist on this topic, and owing to the rapid development of the domain, the basic concepts of EVA (e.g., its definition and architectures) have not been fully elucidated. To fill these gaps, we provide a comprehensive survey of recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. EVA systems and their enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
{"title":"Edge Video Analytics: A Survey on Applications, Systems and Enabling Techniques","authors":"Renjie Xu;Saiedeh Razavi;Rong Zheng","doi":"10.1109/COMST.2023.3323091","DOIUrl":"10.1109/COMST.2023.3323091","url":null,"abstract":"Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. The basic concepts of EVA (e.g., definition, architectures) were not fully elucidated due to the rapid development of this domain. To fill these gaps, we provide a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. EVA systems and their enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"25 4","pages":"2951-2982"},"PeriodicalIF":35.6,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136206858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Resource-Efficient AIoT System With Cross-Level Optimization: A Survey
Sicong Liu;Bin Guo;Cheng Fang;Ziqi Wang;Shiyan Luo;Zimu Zhou;Zhiwen Yu
Pub Date: 2023-09-27 | DOI: 10.1109/COMST.2023.3319952 | IEEE Communications Surveys & Tutorials, vol. 26, no. 1, pp. 389-427
The emerging field of the artificial intelligence of things (AIoT, AI+IoT) is driven by the widespread adoption of intelligent infrastructures and the impressive success of deep learning (DL). With the deployment of DL on intelligent infrastructures that feature rich sensors but weak DL computing capabilities, a diverse range of AIoT applications has become possible. However, DL models are notoriously resource-intensive. Existing research strives to realize near-real-time or real-time inference on AIoT live data and low-cost training on AIoT datasets over resource-scarce infrastructures. Accordingly, the accuracy and responsiveness of DL models are bounded by resource availability. To this end, algorithm-system co-design, which jointly optimizes resource-friendly DL models and model-adaptive system scheduling, improves runtime resource availability and thus pushes the performance boundary beyond what either level achieves alone. Unlike previous surveys on resource-friendly DL models or on hand-crafted DL compilers/frameworks with partially fine-tuned components, this survey aims to open up a broader optimization space for freer resource-performance tradeoffs. The cross-level optimization landscape spans multiple granularities, including the DL model, computation graph, operator, memory schedule, and hardware instruction, in both on-device and distributed paradigms. Furthermore, because the AIoT context is dynamic, with heterogeneous hardware, agnostic sensing data, varying user-specified performance demands, and resource constraints, this survey explores context-aware inter-/intra-device controllers for automatic cross-level adaptation. Additionally, we identify potential directions for resource-efficient AIoT systems. By consolidating problems and techniques scattered across these levels, we aim to help readers understand their connections and stimulate further discussion.
Adversarial Attacks and Defenses in Machine Learning-Empowered Communication Systems and Networks: A Contemporary Survey
Yulong Wang;Tong Sun;Shenghong Li;Xin Yuan;Wei Ni;Ekram Hossain;H. Vincent Poor
Pub Date: 2023-09-26 | DOI: 10.1109/COMST.2023.3319492 | IEEE Communications Surveys & Tutorials, vol. 25, no. 4, pp. 2245-2298
Adversarial attacks and defenses in machine learning and deep neural networks (DNNs) have been gaining significant attention due to the rapidly growing applications of deep learning in communication networks. This survey provides a comprehensive overview of recent advancements in adversarial attack and defense techniques, with a focus on DNN-based classification models for communication applications. Specifically, we comprehensively classify recent adversarial attack methods and state-of-the-art adversarial defense techniques based on attack principles, and present them in visually appealing tables and tree diagrams. This classification is based on a rigorous evaluation of the existing works, including an analysis of their strengths and limitations. We also categorize the defense methods into counter-attack detection and robustness enhancement, with a specific focus on regularization-based methods for enhancing robustness. New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks, and a hierarchical classification of the latest defense methods is provided, highlighting the challenges of balancing training costs with performance, maintaining clean accuracy, overcoming the effect of gradient masking, and ensuring method transferability. Finally, the lessons learned and open challenges are summarized, and future research opportunities are recommended.
{"title":"Adversarial Attacks and Defenses in Machine Learning-Empowered Communication Systems and Networks: A Contemporary Survey","authors":"Yulong Wang;Tong Sun;Shenghong Li;Xin Yuan;Wei Ni;Ekram Hossain;H. Vincent Poor","doi":"10.1109/COMST.2023.3319492","DOIUrl":"10.1109/COMST.2023.3319492","url":null,"abstract":"Adversarial attacks and defenses in machine learning and deep neural network (DNN) have been gaining significant attention due to the rapidly growing applications of deep learning in communication networks. This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques, with a focus on DNN-based classification models for communication applications. Specifically, we conduct a comprehensive classification of recent adversarial attack methods and state-of-the-art adversarial defense techniques based on attack principles, and present them in visually appealing tables and tree diagrams. This is based on a rigorous evaluation of the existing works, including an analysis of their strengths and limitations. We also categorize the methods into counter-attack detection and robustness enhancement, with a specific focus on regularization-based methods for enhancing robustness. New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks, and a hierarchical classification of the latest defense methods is provided, highlighting the challenges of balancing training costs with performance, maintaining clean accuracy, overcoming the effect of gradient masking, and ensuring method transferability. At last, the lessons learned and open challenges are summarized with future research opportunities recommended.","PeriodicalId":34,"journal":{"name":"Crystal Growth & Design","volume":"25 4","pages":"2245-2298"},"PeriodicalIF":35.6,"publicationDate":"2023-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135784392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}