When applied to autonomous vehicle (AV) settings, action recognition can enhance an environment model’s situational awareness. This is especially valuable in scenarios where the traditional geometric descriptions and heuristics used in AVs are insufficient. However, action recognition has traditionally been studied for humans, and its limited adaptability to noisy, unclipped, raw RGB data has hindered its application in other fields. To push for the advancement and adoption of action recognition in AVs, this work proposes a novel two-stage action recognition system, termed RALACs. RALACs formulates the problem of action recognition for road scenes and bridges the gap between it and the established field of human action recognition. This work shows how attention layers can be useful for encoding the relations across agents, and stresses how such a scheme can be class-agnostic. Furthermore, to address the dynamic nature of agents on the road, RALACs constructs a novel approach to adapting Region of Interest (ROI) alignment to agent tracks for downstream action classification. Finally, our scheme also considers the problem of active agent detection, and utilizes a novel application of fused optical flow maps to discern relevant agents in a road scene. We show that our proposed scheme can outperform the baseline algorithm on the ICCV2021 Road Challenge dataset (Singh et al., 2023), and by deploying it on a real vehicle platform we provide preliminary insight into the usefulness of action recognition in decision making. The code is publicly available at https://github.com/WATonomous/action-classification.
{"title":"RALACs: Action Recognition in Autonomous Vehicles Using Interaction Encoding and Optical Flow","authors":"Eddy Zhou;Owen Leather;Alex Zhuang;Alikasim Budhwani;Rowan Dempster;Quanquan Li;Mohammad Al-Sharman;Derek Rayside;William Melek","doi":"10.1109/TCYB.2024.3515104","DOIUrl":"10.1109/TCYB.2024.3515104","url":null,"abstract":"When applied to autonomous vehicle (AV) settings, action recognition can enhance an environment model’s situational awareness. This is especially prevalent in scenarios where traditional geometric descriptions and heuristics in AVs are insufficient. However, action recognition has traditionally been studied for humans, and its limited adaptability to noisy, un-clipped, un-pampered, raw RGB data has limited its application in other fields. To push for the advancement and adoption of action recognition into AVs, this work proposes a novel two-stage action recognition system, termed RALACs. RALACs formulates the problem of action recognition for road scenes, and bridges the gap between it and the established field of human action recognition. This work shows how attention layers can be useful for encoding the relations across agents, and stresses how such a scheme can be class-agnostic. Furthermore, to address the dynamic nature of agents on the road, RALACs constructs a novel approach to adapting Region of Interest (ROI) alignment to agent tracks for downstream action classification. Finally, our scheme also considers the problem of active agent detection, and utilizes a novel application of fusing optical flow maps to discern relevant agents in a road scene. We show that our proposed scheme can outperform the baseline on the ICCV2021 Road Challenge dataset (Singh et al., 2023) algorithm and by deploying it on a real vehicle platform, we provide preliminary insight to the usefulness of action recognition in decision making. 
The code is publicly available at <uri>https://github.com/WATonomous/action-classification</uri>.","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"55 2","pages":"512-525"},"PeriodicalIF":9.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142884338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
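The active agent detection idea in the RALACs abstract, using optical flow to discern which detected agents are relevant, can be sketched roughly as follows. This is a minimal illustration under assumed inputs, not the paper's implementation; the function names, box format, and threshold are all hypothetical:

```python
import numpy as np

def mean_flow_magnitude(flow, box):
    """Mean optical-flow magnitude inside an axis-aligned box.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements.
    box:  (x1, y1, x2, y2) in pixel coordinates.
    """
    x1, y1, x2, y2 = [int(v) for v in box]
    patch = flow[y1:y2, x1:x2]  # crop the flow map to the detection box
    if patch.size == 0:
        return 0.0
    return float(np.linalg.norm(patch, axis=-1).mean())

def active_agents(flow, boxes, threshold=1.0):
    """Keep only detections whose region shows enough motion."""
    return [b for b in boxes if mean_flow_magnitude(flow, b) >= threshold]

# Toy example: a 10x10 flow field, static except one moving region.
flow = np.zeros((10, 10, 2))
flow[2:6, 2:6] = (3.0, 4.0)             # magnitude 5 inside this region
boxes = [(2, 2, 6, 6), (7, 7, 10, 10)]  # one moving agent, one static
print(active_agents(flow, boxes, threshold=1.0))  # -> [(2, 2, 6, 6)]
```

A real pipeline would fuse dense flow from consecutive frames with detector outputs per frame; the per-box magnitude gate above only conveys the shape of the idea.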
Pub Date: 2024-12-24 | DOI: 10.1109/tcyb.2024.3515456
Chengqian Zhou, Jun Yang, Shihua Li, Wen-Hua Chen
{"title":"Temporal Logic Disturbance Rejection Control of Nonlinear Systems Using Control Barrier Functions","authors":"Chengqian Zhou, Jun Yang, Shihua Li, Wen-Hua Chen","doi":"10.1109/tcyb.2024.3515456","DOIUrl":"https://doi.org/10.1109/tcyb.2024.3515456","url":null,"abstract":"","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"149 1","pages":""},"PeriodicalIF":11.8,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142884339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-24 | DOI: 10.1109/tcyb.2024.3515276
Yan Lei, Yan-Wu Wang, Ju H. Park
{"title":"Robust Output Regulation of Uncertain Singular Linear Systems Subject to Input Saturation and DoS Attacks","authors":"Yan Lei, Yan-Wu Wang, Ju H. Park","doi":"10.1109/tcyb.2024.3515276","DOIUrl":"https://doi.org/10.1109/tcyb.2024.3515276","url":null,"abstract":"","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"2 1","pages":""},"PeriodicalIF":11.8,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142884335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-19 | DOI: 10.1109/TCYB.2024.3515359
{"title":"IEEE Transactions on Cybernetics","authors":"","doi":"10.1109/TCYB.2024.3515359","DOIUrl":"10.1109/TCYB.2024.3515359","url":null,"abstract":"","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"55 1","pages":"C3-C3"},"PeriodicalIF":9.4,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10807685","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-19 | DOI: 10.1109/TCYB.2024.3515357
{"title":"IEEE Transactions on Cybernetics","authors":"","doi":"10.1109/TCYB.2024.3515357","DOIUrl":"10.1109/TCYB.2024.3515357","url":null,"abstract":"","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"55 1","pages":"C4-C4"},"PeriodicalIF":9.4,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10807690","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-17 | DOI: 10.1109/tcyb.2024.3507275
Rongjiang Li, Die Gan, Siyu Xie, Haibo Gu, Jinhu Lü
{"title":"Analysis of the Compressed Distributed Kalman Filter Over Markovian Switching Topology","authors":"Rongjiang Li, Die Gan, Siyu Xie, Haibo Gu, Jinhu Lü","doi":"10.1109/tcyb.2024.3507275","DOIUrl":"https://doi.org/10.1109/tcyb.2024.3507275","url":null,"abstract":"","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"1 1","pages":""},"PeriodicalIF":11.8,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142840883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-11 | DOI: 10.1109/TCYB.2024.3504840
Haisheng Xia;Fei Liao;Binglei Bao;Jintao Chen;Binglu Wang;Qinghua Huang;Zhijun Li
Underwater environments are harsh, with poor light, limited visibility, and high levels of noise. Humans have weak perception of position, surroundings, and exterior information when underwater, which makes it difficult to carry out complex underwater tasks such as rescue, observation, and construction. Wearable devices have shown good results in enhancing human sensory function on land, and thus could potentially play a role in enhancing human underwater perception. This perspective analyzes the state of the art in underwater wearable systems for human perception enhancement. It discusses the core technologies and challenges of human underwater perceptual enhancement, including wearable underwater navigation, underwater environment reconstruction, and underwater sensory information delivery. Future research could focus on designing waterproof flexible human-machine interfaces for sensing and feedback, exploiting advanced sensors and fusion algorithms for wearable underwater positioning, and studying multimodal information interaction strategies for wearable systems.
{"title":"Perspective on Wearable Systems for Human Underwater Perceptual Enhancement","authors":"Haisheng Xia;Fei Liao;Binglei Bao;Jintao Chen;Binglu Wang;Qinghua Huang;Zhijun Li","doi":"10.1109/TCYB.2024.3504840","DOIUrl":"10.1109/TCYB.2024.3504840","url":null,"abstract":"Underwater areas have harsh environments with poor light, limited visibility, and high levels of noise. Humans have a weak perception of position, surroundings, and exterior information when staying underwater, which makes it difficult for humans to carry out complex underwater tasks, such as rescue, observation, and construction. Wearable devices have shown good results in enhancing human sensory function on land, thus they could potentially play a role in enhancing human underwater perception ability. This perspective aims to analyze the state-of-the-art of underwater wearable systems for human perception enhancement. This work discusses the core technology and challenges of human underwater perceptual enhancement, including wearable underwater navigation, underwater environment reconstruction, and underwater sensorial information delivery. 
Future research could focus on designing waterproof flexible human-machine interfaces for sensing and feedback, exploiting advanced sensors and fusion algorithms for wearable underwater positioning, and studying multimodal information interaction strategies of wearable systems.","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"55 2","pages":"698-711"},"PeriodicalIF":9.4,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142804544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
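One of the research directions the perspective names, fusing sensors for wearable underwater positioning, can be illustrated with a one-dimensional complementary filter that blends dead-reckoned depth from IMU vertical velocity with a noisy pressure-sensor depth reading. This is a hedged sketch, not any system from the surveyed literature; the function name, rates, noise levels, and gains are all assumed:

```python
import random

def fuse_depth(pressure_depth, imu_vz, dt=0.1, alpha=0.9):
    """1-D complementary filter: dead-reckon depth from IMU vertical
    velocity for short-term smoothness, then pull the estimate toward
    the absolute (but noisy) pressure-derived depth to bound drift."""
    depth = pressure_depth[0]
    estimates = []
    for z_meas, vz in zip(pressure_depth, imu_vz):
        predicted = depth + vz * dt                       # integrate velocity
        depth = alpha * predicted + (1 - alpha) * z_meas  # correct with pressure
        estimates.append(depth)
    return estimates

# Toy scenario: diver descending at 0.5 m/s; pressure depth noisy by +/-0.2 m.
rng = random.Random(0)
truth = [0.5 * 0.1 * k for k in range(50)]
noisy = [z + rng.uniform(-0.2, 0.2) for z in truth]
est = fuse_depth(noisy, [0.5] * 50)
```

The filter's output tracks the true descent more smoothly than the raw pressure readings; real wearable systems would add more states (horizontal position, attitude) and a proper estimator such as an EKF.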