Pub Date: 2024-04-16 | DOI: 10.1109/TCDS.2024.3390005
Sayantani Ghosh;Amit Konar;Atulya K. Nagar
Scientific creativity refers to the natural or automated genesis of innovations in science, propelling scientific, technological, industrial, and/or societal progress. Mental paper folding (MPF) requires spatial reasoning, an important attribute in determining the creative potential of people. The article proposes a novel approach to determining the creative potential of people from their brain-connectivity network (BCN), recorded with functional near-infrared spectroscopy (fNIRS) while they participate in MPF tasks. The work involves three phases. The first phase constructs the BCN using Pearson's correlation method. The centrality features of the nodes in the network are assessed in the second phase and passed to a proposed graph convolutional-interval type-2 fuzzy network (GC-IT2FN) in the third phase to classify the creative potential of individuals into four grades. The novelty of the work includes: 1) a novel self-attention mechanism that guides the graph convolution layers to focus on the most relevant nodes; 2) selection of a new activation function, Logish, after graph convolution to enhance classifier accuracy; and 3) utilization of the promising region in the footprint of uncertainty (FOU) of the fuzzy sets used in the IT2FN-based classifier to reduce the effect of uncertainty in brain data on classifier performance. Experiments demonstrate the efficacy of the proposed framework in contrast to traditional approaches.
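As an illustration of the first two phases, the sketch below builds a BCN from fNIRS channel signals via Pearson's correlation, extracts a simple per-node centrality feature, and includes the Logish activation. The correlation threshold (0.5) and the choice of degree centrality are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def logish(x):
    # Logish activation: x * ln(1 + sigmoid(x))
    return x * np.log1p(1.0 / (1.0 + np.exp(-x)))

def build_bcn(signals, threshold=0.5):
    """Build a brain-connectivity network from fNIRS channel signals.

    signals: (channels, timepoints) array. An edge connects two channels
    when the absolute Pearson correlation of their time series exceeds
    `threshold` (threshold value is a hypothetical choice).
    """
    corr = np.corrcoef(signals)               # pairwise Pearson correlations
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                # no self-loops
    return adj

def degree_centrality(adj):
    # One simple centrality feature per node: normalized degree.
    n = adj.shape[0]
    return adj.sum(axis=1) / (n - 1)
```

In a fuller pipeline, the centrality features would form the node-feature matrix fed to the graph convolution layers.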
Title: Cognitive Assessment of Scientific Creative Skill by Brain-Connectivity Analysis Using Graph Convolutional Interval Type-2 Fuzzy Network
IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 5, pp. 1872-1886.
Pub Date: 2024-04-12 | DOI: 10.1109/TCDS.2024.3386664
Hao Zhou;Lu Qi;Hai Huang;Xu Yang;Jing Yang
Underwater object detection is challenged by the presence of image blur induced by light absorption and scattering, resulting in substantial performance degradation. It is hypothesized that the attenuation of light is directly correlated with the camera-to-object distance, manifesting as variable degrees of image blur across different regions within underwater images. Specifically, regions in close proximity to the camera exhibit less pronounced blur compared to distant regions. Within the same object category, objects situated in clear regions share similar feature embeddings with their counterparts in blurred regions. This observation underscores the potential for leveraging objects in clear regions to aid in the detection of objects within blurred areas, a critical requirement for autonomous agents, such as autonomous underwater vehicles, engaged in continuous underwater object detection. Motivated by this insight, we introduce the spatiotemporal feature enhancement network (STFEN), a novel framework engineered to autonomously extract discriminative features from objects in clear regions. These features are then harnessed to enhance the representations of objects in blurred regions, operating across both spatial and temporal dimensions. Notably, the proposed STFEN seamlessly integrates into two-stage detectors, such as the faster region-based convolutional neural networks (Faster R-CNN) and feature pyramid networks (FPN). Extensive experimentation conducted on two benchmark underwater datasets, URPC 2018 and URPC 2019, conclusively demonstrates the efficacy of the STFEN framework. It delivers substantial enhancements in performance relative to baseline methods, yielding improvements in the mAP evaluation metric ranging from 3.7% to 5.0%.
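The core idea, augmenting blurred-region proposal features with a similarity-weighted sum of clear-region features of the same scene, can be sketched as follows. The scaled dot-product weighting and the mixing factor `alpha` are illustrative assumptions rather than STFEN's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def enhance_blurred_features(blurred, clear, alpha=0.5):
    """Enhance blurred-region ROI features with clear-region ones.

    blurred: (Nb, d) features of proposals in blurred regions
    clear:   (Nc, d) features of proposals in clear regions
    Each blurred feature is mixed with a similarity-weighted sum of
    clear features (a simplified, hypothetical stand-in for the
    spatial branch of STFEN; `alpha` controls the mixing).
    """
    sim = blurred @ clear.T / np.sqrt(blurred.shape[1])  # scaled dot product
    weights = softmax(sim, axis=1)                       # rows sum to 1
    return (1 - alpha) * blurred + alpha * (weights @ clear)
```

In a two-stage detector such as Faster R-CNN, such enhancement would sit between ROI feature extraction and the classification head; the temporal dimension (features from earlier frames) is omitted here for brevity.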
Title: Spatiotemporal Feature Enhancement Network for Blur Robust Underwater Object Detection
IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 5, pp. 1814-1828.
Pub Date: 2024-04-12 | DOI: 10.1109/tcds.2024.3388152
Qinghui Hong, Qing Li, Jia Li, Jingru Sun, Sichun Du
Title: Programmable Bionic Control Circuit Based on Central Pattern Generator
IEEE Transactions on Cognitive and Developmental Systems.
Pub Date: 2024-04-11 | DOI: 10.1109/TCDS.2024.3387575
Peng Yu;Ning Tan;Zhaohui Zhong;Cong Hu;Binbin Qiu;Changsheng Li
In modern manufacturing, redundant manipulators have been widely deployed. Performing a task often requires the manipulator to follow specific trajectories while avoiding surrounding obstacles. Unlike most existing obstacle-avoidance (OA) schemes, which rely on the kinematic model of redundant manipulators, in this article we propose a new data-driven obstacle-avoidance (DDOA) scheme for the collision-free tracking control of redundant manipulators. The OA task is formulated as a quadratic programming problem with inequality constraints. The objectives of obstacle avoidance and tracking control are then jointly transformed into the problem of solving a system comprising three recurrent neural networks. With Jacobian estimators designed based on zeroing neural networks, the manipulator Jacobian and critical-point Jacobian can be estimated in a data-driven way without knowledge of the kinematic model. Finally, the effectiveness of the proposed scheme is validated through extensive simulations and experiments.
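The constrained tracking objective can be illustrated with a minimal sketch. The paper solves the QP with recurrent neural networks; the stand-in below uses plain projected gradient descent, with box bounds on the joint velocities standing in for the inequality constraints (all specific values are illustrative):

```python
import numpy as np

def tracking_qp_step(J, xdot, qdot_min, qdot_max, iters=2000, lr=0.01):
    """Solve min 0.5 * ||J @ qdot - xdot||^2  s.t.  qdot_min <= qdot <= qdot_max.

    J:    (m, n) manipulator Jacobian (m task dims, n joints)
    xdot: (m,) desired end-effector velocity
    A projected-gradient stand-in for the paper's recurrent-neural-network
    QP solver: take a gradient step, then clip back into the feasible box.
    """
    qdot = np.zeros(J.shape[1])
    for _ in range(iters):
        grad = J.T @ (J @ qdot - xdot)                  # gradient of the cost
        qdot = np.clip(qdot - lr * grad, qdot_min, qdot_max)
    return qdot
```

In the full scheme, J itself would come from a data-driven Jacobian estimator rather than an analytical kinematic model, and additional inequality constraints would encode the obstacle-avoidance conditions.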
Title: Unifying Obstacle Avoidance and Tracking Control of Redundant Manipulators Subject to Joint Constraints: A New Data-Driven Scheme
IEEE Transactions on Cognitive and Developmental Systems, vol. 16, no. 5, pp. 1861-1871.
Pub Date: 2024-04-09 | DOI: 10.1109/TCDS.2024.3386656
Shi Chen;Ming Jiang;Qi Zhao
There is growing interest in understanding the visual behavioral patterns of individuals with autism spectrum disorder (ASD) based on their attentional preferences. Attention reveals the cognitive or perceptual variation in ASD and can serve as a biomarker to assist diagnosis and intervention. The development of machine learning methods for attention-based ASD screening shows promise, yet it has been limited by the need for high-precision eye trackers, the scope of stimuli, and black-box neural networks, making it impractical for real-life clinical scenarios. This study proposes an interpretable and generalizable framework for quantifying atypical attention in people with ASD. Our framework utilizes photos taken by participants with standard cameras to enable practical and flexible deployment in resource-constrained regions. With an emphasis on interpretability and trustworthiness, our method automates human-like diagnostic reasoning, associates photos with semantically plausible attention patterns, and provides clinical evidence to support ASD experts. We further evaluate models on both in-domain and out-of-domain data and demonstrate that our approach accurately classifies individuals with ASD and generalizes across different domains. The proposed method offers an innovative, reliable, and cost-effective tool to assist the diagnostic procedure, which can be an important effort toward transforming clinical research in ASD screening with artificial intelligence systems. Our code is publicly available at https://github.com/szzexpoi/proto_asd.