
Latest Publications: IEEE Transactions on Cognitive and Developmental Systems

Spatiotemporal Feature Enhancement Network for Blur Robust Underwater Object Detection
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-12 | DOI: 10.1109/TCDS.2024.3386664
Hao Zhou;Lu Qi;Hai Huang;Xu Yang;Jing Yang
Underwater object detection is challenged by the presence of image blur induced by light absorption and scattering, resulting in substantial performance degradation. It is hypothesized that the attenuation of light is directly correlated with the camera-to-object distance, manifesting as variable degrees of image blur across different regions within underwater images. Specifically, regions in close proximity to the camera exhibit less pronounced blur compared to distant regions. Within the same object category, objects situated in clear regions share similar feature embeddings with their counterparts in blurred regions. This observation underscores the potential for leveraging objects in clear regions to aid in the detection of objects within blurred areas, a critical requirement for autonomous agents, such as autonomous underwater vehicles, engaged in continuous underwater object detection. Motivated by this insight, we introduce the spatiotemporal feature enhancement network (STFEN), a novel framework engineered to autonomously extract discriminative features from objects in clear regions. These features are then harnessed to enhance the representations of objects in blurred regions, operating across both spatial and temporal dimensions. Notably, the proposed STFEN seamlessly integrates into two-stage detectors, such as the faster region-based convolutional neural networks (Faster R-CNN) and feature pyramid networks (FPN). Extensive experimentation conducted on two benchmark underwater datasets, URPC 2018 and URPC 2019, conclusively demonstrates the efficacy of the STFEN framework. It delivers substantial enhancements in performance relative to baseline methods, yielding improvements in the mAP evaluation metric ranging from 3.7% to 5.0%.
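The core enhancement idea — borrowing similar feature embeddings from clear-region objects to strengthen blurred-region representations — can be sketched as a similarity-weighted feature mixing. This is a minimal, hypothetical illustration of the mechanism, not the paper's actual STFEN modules; the function names and the simple scaled dot-product attention are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def enhance_blurred_features(blurred, clear, alpha=0.5):
    """Augment each blurred-region ROI feature with a similarity-weighted
    mixture of clear-region features (a cross-attention-style update).

    blurred: (M, D) proposal features from blurred regions
    clear:   (N, D) proposal features from clear regions
    """
    sim = blurred @ clear.T / np.sqrt(blurred.shape[1])  # (M, N) similarities
    attn = softmax(sim, axis=1)                          # each row sums to 1
    return blurred + alpha * (attn @ clear)              # enhanced features

rng = np.random.default_rng(0)
blurred = rng.normal(size=(4, 8))   # 4 blurred-region proposals, 8-d features
clear = rng.normal(size=(6, 8))     # 6 clear-region proposals
enhanced = enhance_blurred_features(blurred, clear)
```

In the paper this enhancement operates across both spatial (within-frame) and temporal (across-frame) dimensions; the sketch above shows only the single-frame spatial case.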
Vol. 16, No. 5, pp. 1814–1828
Citations: 0
Programmable Bionic Control Circuit Based on Central Pattern Generator
IF 5.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-04-12 | DOI: 10.1109/tcds.2024.3388152
Qinghui Hong, Qing Li, Jia Li, Jingru Sun, Sichun Du
Citations: 0
Unifying Obstacle Avoidance and Tracking Control of Redundant Manipulators Subject to Joint Constraints: A New Data-Driven Scheme
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-11 | DOI: 10.1109/TCDS.2024.3387575
Peng Yu;Ning Tan;Zhaohui Zhong;Cong Hu;Binbin Qiu;Changsheng Li
In modern manufacturing, redundant manipulators have been widely deployed. Performing a task often requires the manipulator to follow specific trajectories while avoiding surrounding obstacles. Unlike most existing obstacle-avoidance (OA) schemes, which rely on the kinematic model of redundant manipulators, in this article, we propose a new data-driven obstacle-avoidance (DDOA) scheme for the collision-free tracking control of redundant manipulators. The OA task is formulated as a quadratic programming problem with inequality constraints. Then, the objectives of obstacle avoidance and tracking control are jointly transformed into a computation problem of solving a system including three recurrent neural networks. With the Jacobian estimators designed based on zeroing neural networks, the manipulator Jacobian and critical-point Jacobian can be estimated in a data-driven way without knowing the kinematic model. Finally, the effectiveness of the proposed scheme is validated through extensive simulations and experiments.
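The constrained-tracking part of such a formulation can be illustrated on a toy scale: minimize the end-effector velocity error ||J q̇ − ẋ||² subject to box limits on the joint velocities. The sketch below solves this tiny QP by projected gradient descent as a hypothetical stand-in for the paper's recurrent-neural-network solver; the Jacobian, limits, and all names are illustrative.

```python
import numpy as np

def tracking_qp(J, x_dot, qdot_min, qdot_max, iters=2000, lr=0.01):
    """Minimize ||J @ q_dot - x_dot||^2 subject to qdot_min <= q_dot <= qdot_max,
    via projected gradient descent (toy substitute for the recurrent-network
    QP solver described in the abstract)."""
    q_dot = np.zeros(J.shape[1])
    for _ in range(iters):
        grad = 2.0 * J.T @ (J @ q_dot - x_dot)                  # residual gradient
        q_dot = np.clip(q_dot - lr * grad, qdot_min, qdot_max)  # project onto box
    return q_dot

J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])        # toy 2x3 "manipulator Jacobian"
x_dot = np.array([0.4, -0.2])          # desired end-effector velocity
q_dot = tracking_qp(J, x_dot, qdot_min=-1.0, qdot_max=1.0)
residual = np.linalg.norm(J @ q_dot - x_dot)
```

The paper additionally folds obstacle-avoidance inequality constraints into the same QP and, crucially, estimates J itself from data with zeroing neural networks rather than assuming it known, as this sketch does.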
Vol. 16, No. 5, pp. 1861–1871
Citations: 0
Deep Learning to Interpret Autism Spectrum Disorder Behind the Camera
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-09 | DOI: 10.1109/TCDS.2024.3386656
Shi Chen;Ming Jiang;Qi Zhao
There is growing interest in understanding the visual behavioral patterns of individuals with autism spectrum disorder (ASD) based on their attentional preferences. Attention reveals the cognitive or perceptual variation in ASD and can serve as a biomarker to assist diagnosis and intervention. The development of machine learning methods for attention-based ASD screening shows promise, yet it has been limited by the need for high-precision eye trackers, the scope of stimuli, and black-box neural networks, making it impractical for real-life clinical scenarios. This study proposes an interpretable and generalizable framework for quantifying atypical attention in people with ASD. Our framework utilizes photos taken by participants with standard cameras to enable practical and flexible deployment in resource-constrained regions. With an emphasis on interpretability and trustworthiness, our method automates human-like diagnostic reasoning, associates photos with semantically plausible attention patterns, and provides clinical evidence to support ASD experts. We further evaluate models on both in-domain and out-of-domain data and demonstrate that our approach accurately classifies individuals with ASD and generalizes across different domains. The proposed method offers an innovative, reliable, and cost-effective tool to assist the diagnostic procedure, which can be an important effort toward transforming clinical research in ASD screening with artificial intelligence systems. Our code is publicly available at https://github.com/szzexpoi/proto_asd.
Vol. 16, No. 5, pp. 1803–1813
Citations: 0
Efficient Semisupervised Object Segmentation for Long-Term Videos Using Adaptive Memory Network
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-08 | DOI: 10.1109/TCDS.2024.3385849
Shan Zhong;Guoqiang Li;Wenhao Ying;Fuzhou Zhao;Gengsheng Xie;Shengrong Gong
Video object segmentation (VOS) uses the first annotated video mask to achieve consistent and precise segmentation in subsequent frames. Recently, memory-based methods have received significant attention owing to their substantial performance enhancements. However, these approaches rely on a fixed global memory strategy, which poses a challenge to segmentation accuracy and speed in the context of longer videos. To alleviate this limitation, we propose a novel semisupervised VOS model, founded on the principles of the adaptive memory network. Our proposed model adaptively extracts object features by focusing on the object area while effectively filtering out extraneous background noise. An identification mechanism is also thoughtfully applied to discern each object in multiobject scenarios. To further reduce storage consumption without compromising the saliency of object information, the outdated features residing in the memory pool are compressed into salient features through the employment of a self-attention mechanism. Furthermore, we introduce a local matching module, specifically devised to refine object features by fusing the contextual information from historical frames. We demonstrate the efficiency of our approach through experiments, substantially augmenting both the speed and precision of segmentation for long-term videos, while maintaining comparable performance for short videos.
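The key storage-saving step — compressing outdated memory-pool features into a few salient features with attention — can be sketched as follows. This is an illustrative toy, not the paper's network: the query slots here are random where the paper's would be learned, and all names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress_memory(memory, num_slots=4, seed=0):
    """Compress a (T, D) pool of stale frame features into num_slots
    salient features via attention over the pool. Storage per object
    drops from T feature vectors to num_slots, independent of video length."""
    T, D = memory.shape
    queries = np.random.default_rng(seed).normal(size=(num_slots, D))  # stand-in for learned slots
    attn = softmax(queries @ memory.T / np.sqrt(D), axis=1)            # (num_slots, T)
    return attn @ memory                                               # (num_slots, D)

memory = np.random.default_rng(1).normal(size=(64, 16))  # 64 outdated frame features
salient = compress_memory(memory)
```

Because the compressed pool has a fixed size, lookup cost during segmentation no longer grows with the number of processed frames — the property that makes long-video inference tractable.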
Vol. 16, No. 5, pp. 1789–1802
Citations: 0
GENet: A Generic Neural Network for Detecting Various Neurological Disorders From EEG
IF 5.0 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-08 | DOI: 10.1109/TCDS.2024.3386364
Md. Nurul Ahad Tawhid;Siuly Siuly;Kate Wang;Hua Wang
The global health burden of neurological disorders (NDs) is vast, and they are recognized as major causes of mortality and disability worldwide. Most existing NDs detection methods are disease-specific, which limits an algorithm's cross-disease applicability. A single diagnostic platform can save time and money over multiple diagnostic systems. There is currently no unified standard platform for diagnosing different types of NDs utilizing electroencephalogram (EEG) signal data. To address this issue, this study aims to develop a generic EEG neural network (GENet) framework based on a convolutional neural network that can identify various NDs from EEG. The proposed framework consists of several parts: 1) preparing data using channel reduction, resampling, and segmentation for the GENet model; 2) designing and training the GENet model to extract the features important for the classification task; and 3) assessing the proposed model's performance using different signal segment lengths and several training batch sizes, and also cross-validating using seven different EEG datasets of six distinct NDs, namely schizophrenia, autism spectrum disorder, epilepsy, Parkinson's disease, mild cognitive impairment, and attention-deficit/hyperactivity disorder. In addition, this study also investigates whether the proposed GENet model can identify multiple NDs from EEG. The proposed model achieved much better performance for both binary and multiclass classification compared to state-of-the-art methods. In addition, the proposed model is validated using several ablation studies and layerwise feature visualization, which confirm the consistency and efficiency of the proposed model. The proposed GENet model will help technologists create standard software for detecting any of these NDs from EEG.
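The data-preparation step (part 1 above) typically means slicing a multichannel recording into fixed-length, possibly overlapping windows that a CNN can consume. A minimal sketch, assuming an illustrative sampling rate and window/step sizes rather than the paper's actual settings:

```python
import numpy as np

def segment_eeg(recording, fs, win_sec=4.0, step_sec=2.0):
    """Slice a (channels, samples) EEG recording into overlapping
    fixed-length windows, shaped (num_segments, channels, win_samples).
    Window and step durations here are illustrative assumptions."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    starts = range(0, recording.shape[1] - win + 1, step)
    return np.stack([recording[:, s:s + win] for s in starts])

fs = 128                                                    # assumed sampling rate (Hz)
eeg = np.random.default_rng(2).normal(size=(19, 30 * fs))   # 19 channels, 30 s of data
segments = segment_eeg(eeg, fs)
print(segments.shape)  # (14, 19, 512)
```

Segmenting with overlap both standardizes the CNN input size across heterogeneous datasets and multiplies the number of training examples per recording, which matters when cross-validating over seven datasets of varying lengths.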
Vol. 16, No. 5, pp. 1829–1842
Citations: 0
IEEE Computational Intelligence Society
IF 5.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-04-04 | DOI: 10.1109/TCDS.2024.3373153
Vol. 16, No. 2, p. C3
Citations: 0
IEEE Transactions on Cognitive and Developmental Systems Publication Information
IF 5.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-04-04 | DOI: 10.1109/TCDS.2024.3373151
Vol. 16, No. 2, p. C2
Citations: 0
Guest Editorial Special Issue on Movement Sciences in Cognitive Systems
IF 5.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-04-04 | DOI: 10.1109/TCDS.2024.3372274
Junpei Zhong;Ran Dong;Soichiro Ikuno;Yanan Li;Chenguang Yang
Movements play a critical role in robotic systems, and different systems weigh factors such as accuracy, speed, energy consumption, and naturalness of movement differently across the various parts of the robot's mechanics. Over the past decades, the robotics community has developed computationally efficient mathematical tools for studying, simulating, and optimizing movements of articulated bodies to address these challenges.
Vol. 16, No. 2, pp. 403–406
Citations: 0
IEEE Transactions on Cognitive and Developmental Systems Information for Authors
IF 5.0 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-04-04 | DOI: 10.1109/TCDS.2024.3373155
Vol. 16, No. 2, p. C4
Citations: 0