Pub Date : 2024-08-20 DOI: 10.1109/JPROC.2024.3434194
"Proceedings of the IEEE Publication Information," Proceedings of the IEEE, vol. 112, no. 5, pp. C2-C2.
Pub Date : 2024-08-19 DOI: 10.1109/JPROC.2024.3440211
Nathan J. Kong;J. Joe Payne;James Zhu;Aaron M. Johnson
Hybrid dynamical systems, i.e., systems that have both continuous and discrete states, are ubiquitous in engineering but are difficult to work with because of their discontinuous transitions. For example, a robot leg can exert very little control effort while it is in the air compared to when it is on the ground, and when the leg hits the ground, the penetrating velocity instantaneously collapses to zero. These instantaneous changes in dynamics and discontinuities (or jumps) in state make standard smooth tools for planning, estimation, control, and learning difficult to apply to hybrid systems. One of the key tools for accounting for these jumps is the saltation matrix. The saltation matrix is the sensitivity update applied when a hybrid jump occurs, i.e., it maps state perturbations across the jump to first order, and it has been used in a variety of fields, including robotics, power circuits, and computational neuroscience. This article presents an intuitive derivation of the saltation matrix and discusses what it captures, where it has been used in the past, how it is used for linear and quadratic forms, how it is computed for rigid-body systems with unilateral constraints, and some of the structural properties of the saltation matrix in these cases.
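For readers new to the term, the first-order role of the saltation matrix can be summarized in one formula. The notation below (guard g, reset map R, pre- and post-event vector fields F^- and F^+, all evaluated at the event) follows the common convention in the saltation-matrix literature and is the editor's summary, not an excerpt from the article.

```latex
% First-order effect of a hybrid jump on a state perturbation (standard form,
% notation assumed): the guard g(x,t) = 0 triggers the event, the reset is
% x^+ = R(x^-, t), and F^- and F^+ are the pre- and post-event vector fields,
% all evaluated at the event point.
\[
  \delta x^{+} \;\approx\; \Xi \, \delta x^{-},
  \qquad
  \Xi \;=\; D_x R
  \;+\; \frac{\bigl(F^{+} - D_x R\, F^{-} - D_t R\bigr)\, D_x g}
             {D_t g + D_x g\, F^{-}} .
\]
```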
{"title":"Saltation Matrices: The Essential Tool for Linearizing Hybrid Dynamical Systems","authors":"Nathan J. Kong;J. Joe Payne;James Zhu;Aaron M. Johnson","doi":"10.1109/JPROC.2024.3440211","DOIUrl":"10.1109/JPROC.2024.3440211","url":null,"abstract":"Hybrid dynamical systems, i.e., systems that have both continuous and discrete states, are ubiquitous in engineering but are difficult to work with due to their discontinuous transitions. For example, a robot leg is able to exert very little control effort, while it is in the air compared to when it is on the ground. When the leg hits the ground, the penetrating velocity instantaneously collapses to zero. These instantaneous changes in dynamics and discontinuities (or jumps) in state make standard smooth tools for planning, estimation, control, and learning difficult for hybrid systems. One of the key tools for accounting for these jumps is called the saltation matrix. The saltation matrix is the sensitivity update when a hybrid jump occurs and has been used in a variety of fields, including robotics, power circuits, and computational neuroscience. This article presents an intuitive derivation of the saltation matrix and discusses what it captures, where it has been used in the past, how it is used for linear and quadratic forms, how it is computed for rigid body systems with unilateral constraints, and some of the structural properties of the saltation matrix in these cases.","PeriodicalId":20556,"journal":{"name":"Proceedings of the IEEE","volume":"112 6","pages":"585-608"},"PeriodicalIF":23.2,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142007272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence (AI) technologies have emerged as pivotal enablers across a multitude of industries, including consumer electronics, healthcare, and manufacturing, largely due to their significant resurgence over the past decade. The transformative power of AI is primarily derived from the utilization of deep neural networks (DNNs), which require extensive data for training and substantial computational resources for processing. Consequently, DNN models are typically trained and deployed on resource-rich cloud servers. However, due to potential latency issues associated with cloud communications, deep learning (DL) workflows (e.g., DNN training and inference) are increasingly being transitioned to wireless edge networks in proximity to end-user devices (EUDs). This shift is designed to support latency-sensitive applications and has given rise to a new paradigm of edge AI, which will play a critical role in upcoming sixth-generation (6G) networks to support ubiquitous AI applications. Despite its considerable potential, edge AI faces substantial challenges, mostly due to the dichotomy between the resource limitations of wireless edge networks and the resource-intensive nature of DL. Specifically, the acquisition of large-scale data, as well as the training and inference processes of DNNs, can rapidly deplete the battery energy of EUDs. This necessitates an energy-conscious approach to edge AI to ensure both optimal and sustainable performance. In this article, we present a contemporary survey on green edge AI. We commence by analyzing the principal energy consumption components of edge AI systems to identify the fundamental design principles of green edge AI. Guided by these principles, we then explore energy-efficient design methodologies for the three critical tasks in edge AI systems, including training data acquisition, edge training, and edge inference. Finally, we underscore potential future research directions to further enhance the energy efficiency (EE) of edge AI.
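The survey scope above is descriptive rather than algorithmic, but the energy trade-off it alludes to can be illustrated with a back-of-envelope comparison between running a DNN on the device and offloading the input to an edge server. Everything in the sketch below (function names, energy-per-MAC, transmit power, link rate, payload size) is a hypothetical placeholder chosen by the editor, not a figure from the survey.

```python
# Hypothetical back-of-envelope comparison of on-device inference energy vs.
# the energy needed to offload the input over a wireless link. All constants
# are illustrative placeholders, not measurements from the survey.

def local_inference_energy_j(macs: float, joules_per_mac: float = 2e-11) -> float:
    """Energy to run a DNN on-device, modeled as energy-per-MAC times MAC count."""
    return macs * joules_per_mac

def offload_energy_j(payload_bits: float, tx_power_w: float = 0.2,
                     link_rate_bps: float = 20e6) -> float:
    """Energy to transmit the input to an edge server: transmit power times airtime."""
    return tx_power_w * (payload_bits / link_rate_bps)

if __name__ == "__main__":
    macs = 600e6              # a small mobile CNN (assumed)
    payload = 100e3 * 8       # a 100-kB compressed input (assumed)
    e_local = local_inference_energy_j(macs)
    e_offload = offload_energy_j(payload)
    print(f"local inference ~{e_local * 1e3:.1f} mJ, offload ~{e_offload * 1e3:.1f} mJ")
    print("prefer:", "local" if e_local < e_offload else "offload")
```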
{"title":"Green Edge AI: A Contemporary Survey","authors":"Yuyi Mao;Xianghao Yu;Kaibin Huang;Ying-Jun Angela Zhang;Jun Zhang","doi":"10.1109/JPROC.2024.3437365","DOIUrl":"10.1109/JPROC.2024.3437365","url":null,"abstract":"Artificial intelligence (AI) technologies have emerged as pivotal enablers across a multitude of industries, including consumer electronics, healthcare, and manufacturing, largely due to their significant resurgence over the past decade. The transformative power of AI is primarily derived from the utilization of deep neural networks (DNNs), which require extensive data for training and substantial computational resources for processing. Consequently, DNN models are typically trained and deployed on resource-rich cloud servers. However, due to potential latency issues associated with cloud communications, deep learning (DL) workflows (e.g., DNN training and inference) are increasingly being transitioned to wireless edge networks in proximity to end-user devices (EUDs). This shift is designed to support latency-sensitive applications and has given rise to a new paradigm of edge AI, which will play a critical role in upcoming sixth-generation (6G) networks to support ubiquitous AI applications. Despite its considerable potential, edge AI faces substantial challenges, mostly due to the dichotomy between the resource limitations of wireless edge networks and the resource-intensive nature of DL. Specifically, the acquisition of large-scale data, as well as the training and inference processes of DNNs, can rapidly deplete the battery energy of EUDs. This necessitates an energy-conscious approach to edge AI to ensure both optimal and sustainable performance. In this article, we present a contemporary survey on green edge AI. We commence by analyzing the principal energy consumption components of edge AI systems to identify the fundamental design principles of green edge AI. Guided by these principles, we then explore energy-efficient design methodologies for the three critical tasks in edge AI systems, including training data acquisition, edge training, and edge inference. Finally, we underscore potential future research directions to further enhance the energy efficiency (EE) of edge AI.","PeriodicalId":20556,"journal":{"name":"Proceedings of the IEEE","volume":"112 7","pages":"880-911"},"PeriodicalIF":23.2,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141991776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain-inspired computing (BIC) is an emerging research field that aims to build fundamental theories, models, hardware architectures, and application systems toward more general artificial intelligence (AI) by learning from the information processing mechanisms or structures/functions of biological nervous systems. It is regarded as one of the most promising research directions for future intelligent computing in the post-Moore era. In the past few years, many new schemes in this field have sprung up to explore more general AI. These works diverge widely in their modeling/algorithms, software tools, hardware platforms, and benchmark data, since BIC is an interdisciplinary field spanning many domains, including computational neuroscience, AI, computer science, statistical physics, materials science, and microelectronics. This situation makes it hard for researchers to obtain a clear picture of the field and to get started in the right way. Hence, there is an urgent need for a comprehensive survey that helps researchers recognize and analyze this bewildering array of methodologies. What are the key issues that must be addressed to advance BIC? What roles do current mainstream technologies play in the general framework of BIC? Which techniques are truly useful in real-world applications? These questions largely remain open. To address them, this survey first clarifies the biggest challenge of BIC: how can AI models benefit from recent advances in computational neuroscience? With this challenge in mind, we discuss the concept of BIC and summarize four components of BIC infrastructure development: 1) modeling/algorithm; 2) hardware platform; 3) software tool; and 4) benchmark data. For each component, we summarize its recent progress, the main challenges to resolve, and future trends. Based on these studies, we present a general framework for real-world applications of BIC systems, which promises to benefit both AI and brain science. Finally, we argue that building a sustained research ecosystem is essential for the continued prosperity of this field.
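The abstract surveys the field at a high level; as one concrete instance of the "modeling/algorithm" component it names, the sketch below simulates a leaky integrate-and-fire neuron, a canonical building block of brain-inspired (spiking) models. The model choice and all parameter values are the editor's illustration, not taken from the article.

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron, a canonical building block in
# brain-inspired (spiking) models. Parameters are illustrative, not from the article.
def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate membrane voltage and spikes for a 1-D input-current trace."""
    v = v_rest
    voltages, spikes = [], []
    for i_t in input_current:
        # Leaky integration: decay toward the resting voltage, driven by the input.
        v += dt / tau * (v_rest - v) + dt * i_t
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset            # hard reset after the spike
        else:
            spikes.append(0)
        voltages.append(v)
    return np.array(voltages), np.array(spikes)

if __name__ == "__main__":
    current = np.full(200, 60.0)   # constant drive for 200 ms (arbitrary units)
    _, s = lif_simulate(current)
    print("spikes emitted:", int(s.sum()))
```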
{"title":"Brain-Inspired Computing: A Systematic Survey and Future Trends","authors":"Guoqi Li;Lei Deng;Huajin Tang;Gang Pan;Yonghong Tian;Kaushik Roy;Wolfgang Maass","doi":"10.1109/JPROC.2024.3429360","DOIUrl":"10.1109/JPROC.2024.3429360","url":null,"abstract":"Brain-inspired computing (BIC) is an emerging research field that aims to build fundamental theories, models, hardware architectures, and application systems toward more general artificial intelligence (AI) by learning from the information processing mechanisms or structures/functions of biological nervous systems. It is regarded as one of the most promising research directions for future intelligent computing in the post-Moore era. In the past few years, various new schemes in this field have sprung up to explore more general AI. These works are quite divergent in the aspects of modeling/algorithm, software tool, hardware platform, and benchmark data since BIC is an interdisciplinary field that consists of many different domains, including computational neuroscience, AI, computer science, statistical physics, material science, and microelectronics. This situation greatly impedes researchers from obtaining a clear picture and getting started in the right way. Hence, there is an urgent requirement to do a comprehensive survey in this field to help correctly recognize and analyze such bewildering methodologies. What are the key issues to enhance the development of BIC? What roles do the current mainstream technologies play in the general framework of BIC? Which techniques are truly useful in real-world applications? These questions largely remain open. To address the above issues, in this survey, we first clarify the biggest challenge of BIC: how can AI models benefit from the recent advancements in computational neuroscience? With this challenge in mind, we will focus on discussing the concept of BIC and summarize four components of BIC infrastructure development: 1) modeling/algorithm; 2) hardware platform; 3) software tool; and 4) benchmark data. For each component, we will summarize its recent progress, main challenges to resolve, and future trends. Based on these studies, we present a general framework for the real-world applications of BIC systems, which is promising to benefit both AI and brain science. Finally, we claim that it is extremely important to build a research ecology to promote prosperity continuously in this field.","PeriodicalId":20556,"journal":{"name":"Proceedings of the IEEE","volume":"112 6","pages":"544-584"},"PeriodicalIF":23.2,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141986412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The relentless demand for data in our society has driven the continuous evolution of wireless technologies to enhance network capacity. While current deployments of 5G have made strides in this direction using massive multiple-input-multiple-output (MIMO) and millimeter-wave (mmWave) bands, all existing wireless systems operate in a half-duplex (HD) mode. Full-duplex (FD) wireless communication, on the other hand, enables simultaneous transmission and reception (STAR) of signals at the same frequency, offering advantages such as enhanced spectrum efficiency, improved data rates, and reduced latency. This article presents a comprehensive review of FD wireless systems, with a focus on hardware design, implementation, cross-layered considerations, and applications. The major bottleneck in achieving FD communication is the presence of self-interference (SI) signals from the transmitter (TX) to the receiver, and achieving SI cancellation (SIC) with real-time adaptation is critical for FD deployment. The review starts by establishing a system-level understanding of FD wireless systems, followed by a review of the architectures of antenna interfaces and integrated RF and baseband (BB) SI cancellers, which show promise in enabling low-cost, small-form-factor, portable FD systems. We then discuss digital cancellation techniques, including digital signal processing (DSP)- and learning-based algorithms. The challenges presented by FD phased-array and MIMO systems are discussed, followed by system-level aspects, including optimization algorithms, opportunities in the higher layers of the networking protocol stack, and testbed integration. Finally, the relevance of FD systems in applications such as next-generation (xG) wireless, mmWave repeaters, radars, and noncommunication domains is highlighted. Overall, this comprehensive review provides valuable insights into the design, implementation, and applications of FD wireless systems while opening up new directions for future research.
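As a toy illustration of the digital cancellation step mentioned above (an editor's sketch, not the article's method), the code below estimates a linear self-interference channel by least squares from the known transmitted samples and subtracts the reconstructed SI from the received signal; practical cancellers also model nonlinear terms and adapt their coefficients in real time.

```python
import numpy as np

# Toy digital self-interference (SI) cancellation: estimate a linear SI channel
# by least squares from the known TX samples, reconstruct the SI, and subtract
# it from the received signal. All signal parameters are illustrative.
rng = np.random.default_rng(0)
n, taps = 4000, 8

tx = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)   # known transmit samples
h_si = 0.1 * (rng.standard_normal(taps) + 1j * rng.standard_normal(taps))  # unknown SI channel
desired = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))    # weak signal of interest
noise = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
rx = np.convolve(tx, h_si)[:n] + desired + noise                           # SI swamps the desired signal

# Matrix of delayed TX samples (skip the first `taps` rows to avoid edge effects).
X = np.column_stack([np.roll(tx, k) for k in range(taps)])[taps:]
h_hat, *_ = np.linalg.lstsq(X, rx[taps:], rcond=None)   # least-squares SI channel estimate
residual = rx[taps:] - X @ h_hat                        # digital cancellation: subtract reconstructed SI

print("power before cancellation:", np.mean(np.abs(rx[taps:]) ** 2))
print("power after cancellation: ", np.mean(np.abs(residual) ** 2))
```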
{"title":"Doubling Down on Wireless Capacity: A Review of Integrated Circuits, Systems, and Networks for Full Duplex","authors":"Aravind Nagulu;Negar Reiskarimian;Tingjun Chen;Sasank Garikapati;Igor Kadota;Tolga Dinc;Sastry Lakshmi Garimella;Manav Kohli;Alon Simon Levin;Gil Zussman;Harish Krishnaswamy","doi":"10.1109/JPROC.2024.3438755","DOIUrl":"10.1109/JPROC.2024.3438755","url":null,"abstract":"The relentless demand for data in our society has driven the continuous evolution of wireless technologies to enhance network capacity. While current deployments of 5G have made strides in this direction using massive multiple-input-multiple-output (MIMO) and millimeter-wave (mmWave) bands, all existing wireless systems operate in a half-duplex (HD) mode. Full-duplex (FD) wireless communication, on the other hand, enables simultaneous transmission and reception (STAR) of signals at the same frequency, offering advantages such as enhanced spectrum efficiency, improved data rates, and reduced latency. This article presents a comprehensive review of FD wireless systems, with a focus on hardware design, implementation, cross-layered considerations, and applications. The major bottleneck in achieving FD communication is the presence of self-interference (SI) signals from the transmitter (TX) to the receiver, and achieving SI cancellation (SIC) with real-time adaption is critical for FD deployment. The review starts by establishing a system-level understanding of FD wireless systems, followed by a review of the architectures of antenna interfaces and integrated RF and baseband (BB) SI cancellers, which show promise in enabling low-cost, small-form-factor, portable FD systems. We then discuss digital cancellation techniques, including digital signal processing (DSP)- and learning-based algorithms. The challenges presented by FD phased-array and MIMO systems are discussed, followed by system-level aspects, including optimization algorithms, opportunities in the higher layers of the networking protocol stack, and testbed integration. Finally, the relevance of FD systems in applications such as next-generation (xG) wireless, mmWave repeaters, radars, and noncommunication domains is highlighted. Overall, this comprehensive review provides valuable insights into the design, implementation, and applications of FD wireless systems while opening up new directions for future research.","PeriodicalId":20556,"journal":{"name":"Proceedings of the IEEE","volume":"112 5","pages":"405-432"},"PeriodicalIF":23.2,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141986411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-07 DOI: 10.1109/JPROC.2024.3435012
Maxime Fontana;Michael Spratling;Miaojing Shi
Multitask learning (MTL) aims to learn multiple tasks simultaneously while exploiting their mutual relationships. By using shared resources to compute multiple outputs at once, this learning paradigm can lower memory requirements and inference times compared with the traditional approach of using separate methods for each task. Previous work in MTL has mainly focused on fully supervised methods, as task relationships (TRs) can be leveraged not only to lower the data dependency of those methods but also to improve performance. However, MTL introduces a set of challenges due to its complex optimization scheme and higher labeling requirements. This article focuses on how MTL can be utilized under different partial supervision settings to address these challenges. First, it analyzes how MTL traditionally uses different parameter-sharing techniques to transfer knowledge between tasks. Second, it presents the challenges arising from such a multiobjective optimization (MOO) scheme. Third, it describes how task groupings (TGs) can be obtained by analyzing TRs. Fourth, it examines how partially supervised methods applied to MTL can tackle the aforementioned challenges. Lastly, it presents the available datasets, tools, and benchmarking results for such methods. The reviewed articles, categorized following this work, are available at https://github.com/Klodivio355/MTL-CV-Review.
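The review's opening point about parameter sharing can be made concrete with a minimal example. The sketch below (an editor's illustration in PyTorch, with arbitrary layer sizes and two hypothetical tasks) shows hard parameter sharing: a shared encoder whose weights receive gradients from every task's loss, plus small task-specific heads.

```python
import torch
import torch.nn as nn

# Minimal hard parameter sharing for multitask learning: one shared encoder and
# one lightweight head per task. Task types and layer sizes are hypothetical.
class HardSharingMTL(nn.Module):
    def __init__(self, in_dim: int = 64, hidden: int = 128,
                 n_classes: int = 10, dense_out: int = 64):
        super().__init__()
        # Shared representation: its parameters receive gradients from every task.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads: only their own task's loss updates them.
        self.cls_head = nn.Linear(hidden, n_classes)      # e.g., an image-level label task
        self.dense_head = nn.Linear(hidden, dense_out)    # e.g., a dense/regression-style task

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return self.cls_head(z), self.dense_head(z)

if __name__ == "__main__":
    model = HardSharingMTL()
    x = torch.randn(8, 64)
    logits, dense = model(x)
    # A naive, uniformly weighted multiobjective loss; the review discusses why such
    # weighting can be problematic and how partial supervision changes the picture.
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,))) \
           + nn.functional.mse_loss(dense, torch.randn(8, 64))
    loss.backward()
    print(logits.shape, dense.shape, float(loss))
```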