Pub Date: 2024-04-12  DOI: 10.1109/TCDS.2024.3386664
Hao Zhou;Lu Qi;Hai Huang;Xu Yang;Jing Yang
Underwater object detection is challenged by the presence of image blur induced by light absorption and scattering, resulting in substantial performance degradation. It is hypothesized that the attenuation of light is directly correlated with the camera-to-object distance, manifesting as variable degrees of image blur across different regions within underwater images. Specifically, regions in close proximity to the camera exhibit less pronounced blur compared to distant regions. Within the same object category, objects situated in clear regions share similar feature embeddings with their counterparts in blurred regions. This observation underscores the potential for leveraging objects in clear regions to aid in the detection of objects within blurred areas, a critical requirement for autonomous agents, such as autonomous underwater vehicles, engaged in continuous underwater object detection. Motivated by this insight, we introduce the spatiotemporal feature enhancement network (STFEN), a novel framework engineered to autonomously extract discriminative features from objects in clear regions. These features are then harnessed to enhance the representations of objects in blurred regions, operating across both spatial and temporal dimensions. Notably, the proposed STFEN seamlessly integrates into two-stage detectors, such as the faster region-based convolutional neural network (Faster R-CNN) and the feature pyramid network (FPN). Extensive experimentation conducted on two benchmark underwater datasets, URPC 2018 and URPC 2019, conclusively demonstrates the efficacy of the STFEN framework: it delivers substantial gains over baseline methods, improving the mAP metric by 3.7% to 5.0%.
{"title":"Spatiotemporal Feature Enhancement Network for Blur Robust Underwater Object Detection","authors":"Hao Zhou;Lu Qi;Hai Huang;Xu Yang;Jing Yang","doi":"10.1109/TCDS.2024.3386664","DOIUrl":"10.1109/TCDS.2024.3386664","url":null,"abstract":"Underwater object detection is challenged by the presence of image blur induced by light absorption and scattering, resulting in substantial performance degradation. It is hypothesized that the attenuation of light is directly correlated with the camera-to-object distance, manifesting as variable degrees of image blur across different regions within underwater images. Specifically, regions in close proximity to the camera exhibit less pronounced blur compared to distant regions. Within the same object category, objects situated in clear regions share similar feature embeddings with their counterparts in blurred regions. This observation underscores the potential for leveraging objects in clear regions to aid in the detection of objects within blurred areas, a critical requirement for autonomous agents, such as autonomous underwater vehicles, engaged in continuous underwater object detection. Motivated by this insight, we introduce the spatiotemporal feature enhancement network (STFEN), a novel framework engineered to autonomously extract discriminative features from objects in clear regions. These features are then harnessed to enhance the representations of objects in blurred regions, operating across both spatial and temporal dimensions. Notably, the proposed STFEN seamlessly integrates into two-stage detectors, such as the faster region-based convolutional neural networks (Faster R-CNN) and feature pyramid networks (FPN). Extensive experimentation conducted on two benchmark underwater datasets, URPC 2018 and URPC 2019, conclusively demonstrates the efficacy of the STFEN framework. It delivers substantial enhancements in performance relative to baseline methods, yielding improvements in the mAP evaluation metric ranging from 3.7% to 5.0%.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 5","pages":"1814-1828"},"PeriodicalIF":5.0,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140593967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-12  DOI: 10.1109/tcds.2024.3388152
Qinghui Hong, Qing Li, Jia Li, Jingru Sun, Sichun Du
{"title":"Programmable Bionic Control Circuit Based on Central Pattern Generator","authors":"Qinghui Hong, Qing Li, Jia Li, Jingru Sun, Sichun Du","doi":"10.1109/tcds.2024.3388152","DOIUrl":"https://doi.org/10.1109/tcds.2024.3388152","url":null,"abstract":"","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"64 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140594171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-11  DOI: 10.1109/TCDS.2024.3387575
Peng Yu;Ning Tan;Zhaohui Zhong;Cong Hu;Binbin Qiu;Changsheng Li
In modern manufacturing, redundant manipulators have been widely deployed. Performing a task often requires the manipulator to follow specific trajectories while avoiding surrounding obstacles. Unlike most existing obstacle-avoidance (OA) schemes, which rely on the kinematic model of redundant manipulators, this article proposes a new data-driven obstacle-avoidance (DDOA) scheme for the collision-free tracking control of redundant manipulators. The OA task is formulated as a quadratic programming problem with inequality constraints. The objectives of obstacle avoidance and tracking control are then jointly transformed into the problem of solving a system comprising three recurrent neural networks. With Jacobian estimators designed based on zeroing neural networks, the manipulator Jacobian and critical-point Jacobian can be estimated in a data-driven way without knowledge of the kinematic model. Finally, the effectiveness of the proposed scheme is validated through extensive simulations and experiments.
{"title":"Unifying Obstacle Avoidance and Tracking Control of Redundant Manipulators Subject to Joint Constraints: A New Data-Driven Scheme","authors":"Peng Yu;Ning Tan;Zhaohui Zhong;Cong Hu;Binbin Qiu;Changsheng Li","doi":"10.1109/TCDS.2024.3387575","DOIUrl":"10.1109/TCDS.2024.3387575","url":null,"abstract":"In modern manufacturing, redundant manipulators have been widely deployed. Performing a task often requires the manipulator to follow specific trajectories while avoiding surrounding obstacles. Different from most existing obstacle-avoidance (OA) schemes that rely on the kinematic model of redundant manipulators, in this article, we propose a new data-driven obstacle-avoidance (DDOA) scheme for the collision-free tracking control of redundant manipulators. The OA task is formulated as a quadratic programming problem with inequality constraints. Then, the objectives of obstacle avoidance and tracking control are unitedly transformed into a computation problem of solving a system including three recurrent neural networks. With the Jacobian estimators designed based on zeroing neural networks, the manipulator Jacobian and critical-point Jacobian can be estimated in a data-driven way without knowing the kinematic model. Finally, the effectiveness of the proposed scheme is validated through extensive simulations and experiments.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 5","pages":"1861-1871"},"PeriodicalIF":5.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140594184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-09  DOI: 10.1109/TCDS.2024.3386656
Shi Chen;Ming Jiang;Qi Zhao
There is growing interest in understanding the visual behavioral patterns of individuals with autism spectrum disorder (ASD) based on their attentional preferences. Attention reveals the cognitive or perceptual variation in ASD and can serve as a biomarker to assist diagnosis and intervention. The development of machine learning methods for attention-based ASD screening shows promise, yet it has been limited by the need for high-precision eye trackers, the scope of stimuli, and black-box neural networks, making such methods impractical for real-life clinical scenarios. This study proposes an interpretable and generalizable framework for quantifying atypical attention in people with ASD. Our framework utilizes photos taken by participants with standard cameras, enabling practical and flexible deployment in resource-constrained regions. With an emphasis on interpretability and trustworthiness, our method automates human-like diagnostic reasoning, associates photos with semantically plausible attention patterns, and provides clinical evidence to support ASD experts. We further evaluate models on both in-domain and out-of-domain data and demonstrate that our approach accurately classifies individuals with ASD and generalizes across different domains. The proposed method offers an innovative, reliable, and cost-effective tool to assist the diagnostic procedure, an important step toward transforming clinical research in ASD screening with artificial intelligence systems. Our code is publicly available at https://github.com/szzexpoi/proto_asd.
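The repository name (proto_asd) hints at a prototype-style model, but the abstract does not describe the classifier. Purely as a hypothetical sketch of how prototype similarities can double as interpretable evidence, assuming learned prototype embeddings and a two-class setting; none of these names or shapes come from the paper or repository.

```python
# Purely hypothetical sketch of prototype-similarity classification used as
# interpretable evidence; NOT taken from the proto_asd repository.
import numpy as np

def classify_with_prototypes(features, prototypes, labels):
    """features:   (D,) embedding of one photo / attention representation.
    prototypes: (K, D) learned prototype embeddings.
    labels:     (K,) class of each prototype (0 = typical, 1 = ASD).
    Returns predicted class and per-prototype similarities (the "evidence")."""
    # Cosine similarity to every prototype.
    sims = prototypes @ features / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(features) + 1e-8)
    # Class score = total similarity to that class's prototypes.
    scores = np.array([sims[labels == c].sum() for c in (0, 1)])
    return int(scores.argmax()), sims

# Toy usage
rng = np.random.default_rng(1)
protos = rng.standard_normal((6, 32))
proto_labels = np.array([0, 0, 0, 1, 1, 1])
pred, evidence = classify_with_prototypes(rng.standard_normal(32), protos, proto_labels)
print(pred, evidence.round(2))
```

The per-prototype similarities are what an expert could inspect as supporting evidence, which matches the abstract's emphasis on interpretability, though the actual mechanism in the paper may differ.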