Pub Date: 2024-08-20  DOI: 10.1109/JPROC.2024.3434202
"Proceedings of the IEEE: Stay Informed. Become Inspired." Proceedings of the IEEE, vol. 112, no. 5, pp. C4-C4.
Multiple-antenna technologies are advancing toward the development of extremely large aperture arrays and the utilization of extremely high frequencies, driving the progress of next-generation multiple access (NGMA). This evolution is accompanied by the emergence of near-field communications (NFCs), characterized by spherical-wave propagation, which introduces additional range dimensions to the channel and enhances system throughput. In this context, a tutorial-based primer on NFC is presented, emphasizing its applications in multiuser communications and multiple access (MA). The following areas are investigated: 1) the commonly used near-field channel models are reviewed along with their simplifications under various near-field conditions; 2) building upon these models, the information-theoretic capacity limits of NFC-MA are analyzed, including the derivation of the sum-rate capacity and capacity region, and their upper limits for both downlink and uplink scenarios; and 3) a detailed investigation of near-field multiuser beamforming design is presented, offering low-complexity and effective NFC-MA design methodologies in both the spatial and wavenumber (angular) domains. Throughout these investigations, near-field MA is compared with its far-field counterpart to highlight its superiority and flexibility in terms of interference management, thereby laying the groundwork for achieving NGMA.
"A Primer on Near-Field Communications for Next-Generation Multiple Access," by Chongjun Ouyang; Zhaolin Wang; Yan Chen; Xidong Mu; Peiying Zhu. Proceedings of the IEEE, vol. 112, no. 9, pp. 1527-1565. Pub Date: 2024-08-20  DOI: 10.1109/JPROC.2024.3436513
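As background for the spherical-wave channel models the abstract above reviews, here is a minimal sketch of the commonly used Rayleigh (Fraunhofer) distance criterion that delimits the near-field region, together with near-field (spherical-wave) and far-field (planar-wave) array steering vectors. All parameter values are illustrative and not taken from the article.

```python
import numpy as np

c = 3e8                      # speed of light [m/s]
f = 28e9                     # carrier frequency [Hz] (mmWave example)
lam = c / f                  # wavelength [m]
N = 512                      # number of antennas in a uniform linear array
d = lam / 2                  # element spacing [m]
D = (N - 1) * d              # aperture size [m]

# Rayleigh distance 2 D^2 / lambda: the conventional boundary between the
# near-field (Fresnel) and far-field (Fraunhofer) regions of the aperture.
rayleigh_distance = 2 * D**2 / lam

def near_field_steering(r, theta):
    """Spherical-wave steering vector: the phase at each element follows
    the exact distance from a source at range r and angle theta."""
    n = np.arange(N) - (N - 1) / 2           # symmetric element indices
    pos = n * d                              # element positions on the array axis
    # exact element-to-source distances (law of cosines)
    dist = np.sqrt(r**2 + pos**2 - 2 * r * pos * np.sin(theta))
    return np.exp(-1j * 2 * np.pi * dist / lam) / np.sqrt(N)

def far_field_steering(theta):
    """Planar-wave approximation: the phase is linear in element position,
    so the vector depends on angle only and carries no range information."""
    n = np.arange(N) - (N - 1) / 2
    pos = n * d
    return np.exp(1j * 2 * np.pi * pos * np.sin(theta) / lam) / np.sqrt(N)
```

Within the Rayleigh distance the two vectors decorrelate, which is the extra range dimension the abstract refers to: a near-field beam can be focused at a specific range, not just steered toward an angle.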
Pub Date: 2024-08-20  DOI: 10.1109/JPROC.2024.3434198
"Future Special Issues/Special Sections of the Proceedings." Proceedings of the IEEE, vol. 112, no. 5, pp. 511-511.
Pub Date: 2024-08-20  DOI: 10.1109/JPROC.2024.3439969
"IEEE Connects You to a Universe of Information." Proceedings of the IEEE, vol. 112, no. 5, pp. 512-512.
Pub Date: 2024-08-20  DOI: 10.1109/JPROC.2024.3434200
"IEEE Membership." Proceedings of the IEEE, vol. 112, no. 5, pp. C3-C3.
Pub Date: 2024-08-20  DOI: 10.1109/JPROC.2024.3437730
Zhijin Qin;Le Liang;Zijing Wang;Shi Jin;Xiaoming Tao;Wen Tong;Geoffrey Ye Li
Artificial intelligence (AI) and machine learning (ML) have shown tremendous potential in reshaping the landscape of wireless communications and are, therefore, widely expected to be an indispensable part of the next-generation wireless network. This article presents an overview of how AI/ML and wireless communications interact synergistically to improve system performance and provides useful tips and tricks on realizing such performance gains when training AI/ML models. In particular, we discuss in detail the use of AI/ML to revolutionize key physical layer and lower medium access control (MAC) layer functionalities in traditional wireless communication systems. In addition, we provide a comprehensive overview of the AI/ML-enabled semantic communication systems, including key techniques from data generation to transmission. We also investigate the role of AI/ML as an optimization tool to facilitate the design of efficient resource allocation algorithms in wireless communication networks at both bit and semantic levels. Finally, we analyze major challenges and roadblocks in applying AI/ML in practical wireless system design and share our thoughts and insights on potential solutions.
"AI Empowered Wireless Communications: From Bits to Semantics." Proceedings of the IEEE, vol. 112, no. 7, pp. 621-652.
Pub Date: 2024-08-20  DOI: 10.1109/JPROC.2024.3434194
"Proceedings of the IEEE Publication Information." Proceedings of the IEEE, vol. 112, no. 5, pp. C2-C2.
Pub Date: 2024-08-19  DOI: 10.1109/JPROC.2024.3440211
Nathan J. Kong;J. Joe Payne;James Zhu;Aaron M. Johnson
Hybrid dynamical systems, i.e., systems that have both continuous and discrete states, are ubiquitous in engineering but are difficult to work with due to their discontinuous transitions. For example, a robot leg can exert very little control effort while it is in the air, compared to when it is on the ground. When the leg hits the ground, the penetrating velocity instantaneously collapses to zero. These instantaneous changes in dynamics and discontinuities (or jumps) in state make standard smooth tools for planning, estimation, control, and learning difficult to apply to hybrid systems. One of the key tools for accounting for these jumps is the saltation matrix. The saltation matrix is the sensitivity update applied when a hybrid jump occurs and has been used in a variety of fields, including robotics, power circuits, and computational neuroscience. This article presents an intuitive derivation of the saltation matrix and discusses what it captures, where it has been used in the past, how it is used for linear and quadratic forms, how it is computed for rigid body systems with unilateral constraints, and some of the structural properties of the saltation matrix in these cases.
"Saltation Matrices: The Essential Tool for Linearizing Hybrid Dynamical Systems." Proceedings of the IEEE, vol. 112, no. 6, pp. 585-608.
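To make the saltation matrix concrete, here is a minimal sketch applying the standard formula to a bouncing ball, the canonical hybrid-jump example echoed in the abstract above. The symbols and numerical values are illustrative conventions, not code or notation taken from the article.

```python
import numpy as np

# For a jump triggered when a time-invariant guard g(x) = 0 is crossed,
# with reset x+ = r(x) and pre-/post-impact vector fields f- and f+,
# the saltation matrix is
#   Xi = Dr + (f_plus - Dr @ f_minus) @ Dg / (Dg @ f_minus),
# where Dr = dr/dx and Dg = dg/dx (a row vector). It maps state
# perturbations across the jump, correcting the reset Jacobian Dr for
# the perturbation-dependent shift in impact time.

def saltation(Dr, Dg, f_minus, f_plus):
    """Saltation matrix for a time-invariant guard and reset."""
    Dg = np.atleast_2d(Dg)                       # row vector (1 x n)
    gap = (f_plus - Dr @ f_minus).reshape(-1, 1) # column vector (n x 1)
    return Dr + gap @ Dg / (Dg @ f_minus).item()

# Bouncing ball: state x = [height y, velocity v], dynamics f = [v, -grav].
grav = 9.81          # gravitational acceleration
e = 0.8              # coefficient of restitution
v_minus = -3.0       # downward velocity at impact (guard y = 0)

f_minus = np.array([v_minus, -grav])         # flow just before impact
Dr = np.array([[1.0, 0.0], [0.0, -e]])       # reset r(x) = [y, -e v]
Dg = np.array([1.0, 0.0])                    # guard g(x) = y
f_plus = np.array([-e * v_minus, -grav])     # flow just after the reset

Xi = saltation(Dr, Dg, f_minus, f_plus)
# For this example the formula reduces to
#   Xi = [[-e, 0], [-(1+e)*grav/v_minus, -e]]:
# both perturbations are scaled by -e, and a height perturbation also
# shifts the post-impact velocity because it changes the impact time.
```

Note that Xi differs from the reset Jacobian Dr in its first column: linearizing through the jump with Dr alone would miss exactly the impact-time sensitivity that the saltation matrix captures.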
Artificial intelligence (AI) technologies have emerged as pivotal enablers across a multitude of industries, including consumer electronics, healthcare, and manufacturing, largely due to their significant resurgence over the past decade. The transformative power of AI is primarily derived from the utilization of deep neural networks (DNNs), which require extensive data for training and substantial computational resources for processing. Consequently, DNN models are typically trained and deployed on resource-rich cloud servers. However, due to potential latency issues associated with cloud communications, deep learning (DL) workflows (e.g., DNN training and inference) are increasingly being transitioned to wireless edge networks in proximity to end-user devices (EUDs). This shift is designed to support latency-sensitive applications and has given rise to a new paradigm of edge AI, which will play a critical role in upcoming sixth-generation (6G) networks to support ubiquitous AI applications. Despite its considerable potential, edge AI faces substantial challenges, mostly due to the dichotomy between the resource limitations of wireless edge networks and the resource-intensive nature of DL. Specifically, the acquisition of large-scale data, as well as the training and inference processes of DNNs, can rapidly deplete the battery energy of EUDs. This necessitates an energy-conscious approach to edge AI to ensure both optimal and sustainable performance. In this article, we present a contemporary survey on green edge AI. We commence by analyzing the principal energy consumption components of edge AI systems to identify the fundamental design principles of green edge AI. Guided by these principles, we then explore energy-efficient design methodologies for the three critical tasks in edge AI systems, including training data acquisition, edge training, and edge inference. 
Finally, we underscore potential future research directions to further enhance the energy efficiency (EE) of edge AI.
"Green Edge AI: A Contemporary Survey," by Yuyi Mao; Xianghao Yu; Kaibin Huang; Ying-Jun Angela Zhang; Jun Zhang. Proceedings of the IEEE, vol. 112, no. 7, pp. 880-911. Pub Date: 2024-08-15  DOI: 10.1109/JPROC.2024.3437365
Brain-inspired computing (BIC) is an emerging research field that aims to build fundamental theories, models, hardware architectures, and application systems toward more general artificial intelligence (AI) by learning from the information processing mechanisms or structures/functions of biological nervous systems. It is regarded as one of the most promising research directions for future intelligent computing in the post-Moore era. In the past few years, various new schemes in this field have sprung up to explore more general AI. These works are quite divergent in the aspects of modeling/algorithm, software tool, hardware platform, and benchmark data since BIC is an interdisciplinary field that consists of many different domains, including computational neuroscience, AI, computer science, statistical physics, material science, and microelectronics. This situation greatly impedes researchers from obtaining a clear picture and getting started in the right way. Hence, there is an urgent requirement to do a comprehensive survey in this field to help correctly recognize and analyze such bewildering methodologies. What are the key issues to enhance the development of BIC? What roles do the current mainstream technologies play in the general framework of BIC? Which techniques are truly useful in real-world applications? These questions largely remain open. To address the above issues, in this survey, we first clarify the biggest challenge of BIC: how can AI models benefit from the recent advancements in computational neuroscience? With this challenge in mind, we will focus on discussing the concept of BIC and summarize four components of BIC infrastructure development: 1) modeling/algorithm; 2) hardware platform; 3) software tool; and 4) benchmark data. For each component, we will summarize its recent progress, main challenges to resolve, and future trends. 
Based on these studies, we present a general framework for the real-world applications of BIC systems, which is promising to benefit both AI and brain science. Finally, we claim that it is extremely important to build a research ecology to promote prosperity continuously in this field.
"Brain-Inspired Computing: A Systematic Survey and Future Trends," by Guoqi Li; Lei Deng; Huajin Tang; Gang Pan; Yonghong Tian; Kaushik Roy; Wolfgang Maass. Proceedings of the IEEE, vol. 112, no. 6, pp. 544-584. Pub Date: 2024-08-14  DOI: 10.1109/JPROC.2024.3429360