Pub Date : 2025-06-04 DOI: 10.1109/TCDS.2025.3553655
IEEE Transactions on Cognitive and Developmental Systems Information for Authors
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 3, pp. C4-C4.
Pub Date : 2025-04-24 DOI: 10.1109/TCDS.2025.3564285
Delving Deeper Into Astromorphic Transformers
Md Zesun Ahmed Mia;Malyaban Bal;Abhronil Sengupta
Preliminary attempts at incorporating the critical role of astrocytes—cells that constitute more than 50% of human brain cells—into brain-inspired neuromorphic computing remain in their infancy. This article delves deeper into key aspects of neuron-synapse-astrocyte interactions to mimic self-attention mechanisms in Transformers. The cross-layer perspective explored in this work involves bioplausible modeling of Hebbian and presynaptic plasticities in neuron-astrocyte networks, incorporating the effects of nonlinearities and feedback, along with algorithmic formulations that map neuron-astrocyte computations to the self-attention mechanism and evaluate the impact of incorporating bio-realistic effects from the machine learning application side. Our analysis on sentiment and image classification tasks (the IMDB and CIFAR10 datasets) highlights the advantages of Astromorphic Transformers, offering improved accuracy and learning speed. Furthermore, the model demonstrates strong natural language generation capabilities on the WikiText-2 dataset, achieving lower perplexity than conventional models, thus showcasing enhanced generalization and stability across diverse machine learning tasks.
{"title":"Delving Deeper Into Astromorphic Transformers","authors":"Md Zesun Ahmed Mia;Malyaban Bal;Abhronil Sengupta","doi":"10.1109/TCDS.2025.3564285","DOIUrl":"https://doi.org/10.1109/TCDS.2025.3564285","url":null,"abstract":"Preliminary attempts at incorporating the critical role of astrocytes—cells that constitute more than 50% of human brain cells—in brain-inspired neuromorphic computing remain in infancy. This article seeks to delve deeper into various key aspects of neuron-synapse-astrocyte interactions to mimic self-attention mechanisms in Transformers. The cross-layer perspective explored in this work involves bioplausible modeling of Hebbian and presynaptic plasticities in neuron-astrocyte networks, incorporating effects of nonlinearities and feedback along with algorithmic formulations to map the neuron-astrocyte computations to self-attention mechanism and evaluating the impact of incorporating bio-realistic effects from the machine learning application side. Our analysis on sentiment and image classification tasks (IMDB and CIFAR10 datasets) highlights the advantages of Astromorphic Transformers, offering improved accuracy and learning speed. Furthermore, the model demonstrates strong natural language generation capabilities on the WikiText-2 dataset, achieving better perplexity compared with conventional models, thus showcasing enhanced generalization and stability across diverse machine learning tasks.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 6","pages":"1436-1446"},"PeriodicalIF":4.9,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-21 DOI: 10.1109/TCDS.2025.3563357
Optimizing Representation for Abstractive Multidocument Summarization Based on Adversarial Learning Strategy
Bin Cao;Xinxin Guan;Songlin Bao;Jiawei Wu;Jing Fan
Abstractive multidocument summarization (MDS) is a crucial technique in cognitive computing, enabling the efficient synthesis of a document cluster into a concise and complete summary. Despite recent advances, existing approaches still face challenges in representation learning when processing large-scale document clusters: 1) incomplete semantic learning caused by document truncation or exclusion; 2) the incorporation of noise, such as irrelevant or redundant information from documents; and 3) the potential omission of critical content due to partial coverage of documents. These limitations collectively undermine the semantic integrity and conciseness of the generated summaries. To address these issues, we propose TALER, a two-stage representation architecture enhanced by adversarial learning for abstractive MDS, which reformulates the MDS task as a single-document optimization problem. In Stage I, TALER enhances single-document representations by maximizing semantic learning from each document in the cluster and employing adversarial learning to suppress the introduction of document noise. In Stage II, TALER performs multidocument semantic fusion and summary generation: the document embeddings learned in Stage I are aggregated into a cluster-level representation through a pooling mechanism, followed by a self-attention module that captures salient content and produces the final summary. Experimental results on the Multi-News, DUC04, and Multi-XScience datasets demonstrate that TALER consistently outperforms existing baseline models across multiple evaluation metrics.
{"title":"Optimizing Representation for Abstractive Multidocument Summarization Based on Adversarial Learning Strategy","authors":"Bin Cao;Xinxin Guan;Songlin Bao;Jiawei Wu;Jing Fan","doi":"10.1109/TCDS.2025.3563357","DOIUrl":"https://doi.org/10.1109/TCDS.2025.3563357","url":null,"abstract":"Abstractive multi-document summarization (MDS) is a crucial technique in cognitive computing, enabling the efficient synthesis of a documents cluster into a concise and complete summary. Despite recent advances, existing approaches still face challenges in representation learning when processing large-scale documents clusters: 1) incomplete semantic learning caused by documents truncation or exclusion; 2) the incorporation of noise, such as irrelevant or redundant information from documents; and 3) the potential omission of critical content due to partial coverage of documents. These limitations collectively undermine the semantic integrity and conciseness of the generated summaries. To address these issues, we propose <italic>TALER</i>, a two-stage representation architecture enhanced by adversarial learning for abstractive MDS, which reformulates the MDS task as a single-document optimization problem. In Stage I, <italic>TALER</i> focuses on enhancing single-document representations by maximizing semantic learning from each document in the cluster and employing the adversarial learning to suppress the introduction of documents noise. In Stage II, <italic>TALER</i> conducts multidocument semantic fusion and summary generation by aggregating the learned document embeddings based on Stage I into a cluster-level representation through a pooling mechanism, followed by a self-attention module to capture salient content and produce the final summary. Experimental results on the Multi-News, DUC04, and Multi-XScience datasets demonstrate that <italic>TALER</i> consistently outperforms existing baseline models across multiple evaluation metrics.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 6","pages":"1426-1435"},"PeriodicalIF":4.9,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-16 DOI: 10.1109/TCDS.2025.3561422
Self-Supervised Hyperbolic Spectro-Temporal Graph Convolution Network for Early 3-D Behavior Prediction
Peng Liu;Qin Lai;Haibo Li;Chong Zhao;Qicong Wang;Hongying Meng
3-D human behavior is a highly nonlinear spatiotemporal interaction process. Early behavior prediction is therefore a challenging task, especially prediction at low observation rates in an unsupervised mode. To this end, we propose a novel self-supervised early 3-D behavior prediction framework that learns graph structures on a hyperbolic manifold. First, we employ the sequence construction of multidynamic key information to enlarge the key details of spatiotemporal behavior sequences, addressing the high redundancy between frames of spatiotemporal interaction. Second, to capture dependencies among long-distance joints, we explore a unique graph Laplacian on a hyperbolic manifold to perceive subtle local differences within frames. Finally, we leverage the learned spatiotemporal features under different observation rates for progressive contrast, forming self-supervised signals. This facilitates the extraction of more discriminative global and local spatiotemporal information from early behavior sequences in an unsupervised mode. Extensive experiments on three behavior datasets demonstrate the superiority of our approach at low to medium observation rates.
{"title":"Self-Supervised Hyperbolic Spectro-Temporal Graph Convolution Network for Early 3-D Behavior Prediction","authors":"Peng Liu;Qin Lai;Haibo Li;Chong Zhao;Qicong Wang;Hongying Meng","doi":"10.1109/TCDS.2025.3561422","DOIUrl":"https://doi.org/10.1109/TCDS.2025.3561422","url":null,"abstract":"3-D human behavior is a highly nonlinear spatiotemporal interaction process. Therefore, early behavior prediction is a challenging task, especially prediction with low observation rates in unsupervised mode. To this end, we propose a novel self-supervised early 3-D behavior prediction framework that learns graph structures on hyperbolic manifold. First, we employ the sequence construction of multidynamic key information to enlarge the key details of spatiotemporal behavior sequences, addressing the high redundancy between frames of spatiotemporal interaction. Second, for capturing dependencies among long-distance joints, we explore a unique graph Laplacian on hyperbolic manifold to perceive the subtle local difference within frames. Finally, we leverage the learned spatiotemporal features under different observation rates for progressive contrast, forming self-supervised signals. This facilitates the extraction of more discriminative global and local spatiotemporal information from early behavior sequences in unsupervised mode. Extensive experiments on three behavior datasets have demonstrated the superiority of our approach at low to medium observation rates.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 6","pages":"1411-1425"},"PeriodicalIF":4.9,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-11 DOI: 10.1109/TCDS.2025.3559881
Improve Knowledge Distillation via Label Revision and Data Selection
Weichao Lan;Yiu-ming Cheung;Qing Xu;Buhua Liu;Zhikai Hu;Mengke Li;Zhenghua Chen
Knowledge distillation (KD), which transfers knowledge from a large teacher model to a lightweight student model, has received great attention in deep model compression. In addition to the supervision of the ground truth, the vanilla KD method regards the predictions of the teacher as soft labels to supervise the training of the student model. Based on vanilla KD, various approaches have been developed to further improve the performance of the student model. However, few of these methods consider the reliability of the supervision from teacher models: supervision from erroneous predictions may mislead the training of the student model. This article therefore tackles the problem from two aspects: label revision, to rectify incorrect supervision, and data selection, to select appropriate samples for distillation and thereby reduce the impact of erroneous supervision. In the former, we rectify the teacher's inaccurate predictions using the ground truth. In the latter, we introduce a data selection technique to choose suitable training samples to be supervised by the teacher, thereby reducing the impact of incorrect predictions to some extent. Experimental results demonstrate the effectiveness of the proposed method, which can be further combined with other distillation approaches to enhance their performance.
{"title":"Improve Knowledge Distillation via Label Revision and Data Selection","authors":"Weichao Lan;Yiu-ming Cheung;Qing Xu;Buhua Liu;Zhikai Hu;Mengke Li;Zhenghua Chen","doi":"10.1109/TCDS.2025.3559881","DOIUrl":"https://doi.org/10.1109/TCDS.2025.3559881","url":null,"abstract":"Knowledge distillation (KD) transferring knowledge from a large teacher model to a lightweight student one has received great attention in deep model compression. In addition to the supervision of ground truth, the vanilla KD method regards the predictions of the teacher as soft labels to supervise the training of the student model. Based on vanilla KD, various approaches have been developed to improve the performance of the student model further. However, few of these previous methods have considered the reliability of the supervision from teacher models. Supervision from erroneous predictions may mislead the training of the student model. This article therefore proposes to tackle this problem from two aspects: label revision to rectify the incorrect supervision and data selection to select appropriate samples for distillation to reduce the impact of erroneous supervision. In the former, we propose to rectify the teacher’s inaccurate predictions using the ground truth. In the latter, we introduce a data selection technique to choose suitable training samples to be supervised by the teacher, thereby reducing the impact of incorrect predictions to some extent. Experiment results demonstrate the effectiveness of the proposed method, which can be further combined with other distillation approaches to enhance their performance.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 6","pages":"1377-1388"},"PeriodicalIF":4.9,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10962557","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-11 DOI: 10.1109/TCDS.2025.3559771
Evaluating the Tradeoff Between Analogical Reasoning Ability and Efficiency in Large Language Models
Kara L. Combs;Isaiah Goble;Spencer V. Howlett;Yuki B. Adams;Trevor J. Bihl
Recent advances in large language models (LLMs) have led the general public to assume human-equivalent logic and cognition. However, the research community remains inconclusive, especially concerning LLMs' analogical reasoning abilities. Twenty-one proprietary and open-source LLMs were evaluated on two long-text/story analogy datasets. The LLMs produced mixed results across four qualitative and seven quantitative metrics. Based on the qualitative assessment of their outputs, the LLMs performed well when tasked with determining the presence or absence of similar elements between stories. Despite this success, however, they still struggled to correctly identify the story most analogous to the base story. Further inspection indicates that the models struggled to recognize higher-order (cause-and-effect-like) relationships associated with higher cognitive functions. Regardless of overall performance, proprietary models hold a clear advantage over open-source models in analogical reasoning. Last, this study suggests that LLM accuracy and the number of parameters explain over half of the variation in energy consumed, based on a statistically significant multivariate regression model. Future work may consider evaluating other types of reasoning and LLMs' learning abilities by providing "correct" responses to guide future results.
{"title":"Evaluating the Tradeoff Between Analogical Reasoning Ability and Efficiency in Large Language Models","authors":"Kara L. Combs;Isaiah Goble;Spencer V. Howlett;Yuki B. Adams;Trevor J. Bihl","doi":"10.1109/TCDS.2025.3559771","DOIUrl":"https://doi.org/10.1109/TCDS.2025.3559771","url":null,"abstract":"Recent advances in large language models (LLMs) have led to the general public’s assumption of human-equivalent logic and cognition. However, the research community is inconclusive, especially concerning LLM’s analogical reasoning abilities. Twenty-one proprietary and open-source LLMs were evaluated on two long-text/story analogy datasets. The LLMs produced mixed results on the four qualitative and seven quantitative metrics. LLMs performed well when tasked with determining the presence or absence of similar elements between stories based on the qualitative assessment of their outputs. However, despite this success, LLMs still struggled with the correct identification of the most analogous story to the base story. Further inspection indicates that the models struggled with recognizing high-order (similar to cause and effect) relationships associated with higher cognitive function(s). Regardless of the overall performance, there is a clear advantage that propriety has over open-source models concerning analogical reasoning. Last, this study suggests that LLM accuracy and their number of parameters explain over half of the variation in the energy consumed based on a statistically significant multivariate regression model. Future work may consider evaluating other types of reasoning and LLMs’ learning abilities by providing “correct” responses to guide future results.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 6","pages":"1401-1410"},"PeriodicalIF":4.9,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10962558","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-08 DOI: 10.1109/TCDS.2025.3559187
Multiscale Convolutional Transformer With Diverse-Aware Feature Learning for Motor Imagery EEG Decoding
Wenlong Hang;Junliang Wang;Shuang Liang;Baiying Lei;Qiong Wang;Guanglin Li;Badong Chen;Jing Qin
Electroencephalogram (EEG)-based motor imagery (MI) brain–computer interfaces (BCIs) have significant potential for improving motor function in neurorehabilitation. Despite recent advancements, learning diversified EEG features across different frequency ranges remains a significant challenge, as the homogenization of feature representations often limits the generalization capabilities of EEG decoding models for BCIs. In this article, we propose a novel multiscale convolutional transformer framework for EEG decoding that integrates multiscale convolution, a transformer, and a diverse-aware feature learning scheme (MCTD) to tackle this challenge. Specifically, we first capture multiple frequency features using dynamic one-dimensional temporal convolutions with different kernel lengths. Subsequently, we incorporate convolutional layers and transformers with a contrastive learning scheme to extract discriminative local and global EEG features within a single frequency range. To mitigate the homogenization of features extracted from different frequency ranges, we propose a novel decorrelation regularization, which drives the multiscale convolutional transformers to produce features that are less correlated with each other, thereby enhancing the overall expressiveness of the EEG decoding model. The performance of MCTD is evaluated on four public MI-based EEG datasets: the BCI Competition III 3a and IV 2a, the BNCI 2015-001, and the OpenBMI. In average Kappa/Accuracy scores, MCTD obtains improvements of 3.58%/2.68%, 3.09%/2.20%, 2.33%/1.54%, and 4.44%/2.22% over the state-of-the-art method on the four EEG datasets, respectively. Experimental results demonstrate that our method exhibits superior performance. Code is available at: https://github.com/kfhss/MCTD.
{"title":"Multiscale Convolutional Transformer With Diverse-Aware Feature Learning for Motor Imagery EEG Decoding","authors":"Wenlong Hang;Junliang Wang;Shuang Liang;Baiying Lei;Qiong Wang;Guanglin Li;Badong Chen;Jing Qin","doi":"10.1109/TCDS.2025.3559187","DOIUrl":"https://doi.org/10.1109/TCDS.2025.3559187","url":null,"abstract":"Electroencephalogram (EEG)-based motor imagery (MI) brain–computer interfaces (BCIs) have significant potential in improving motor function for neurorehabilitation. Despite recent advancements, learning diversified EEG features across different frequency ranges remains a significant challenge, as the homogenization of feature representations often limits the generalization capabilities of EEG decoding models for BCIs. In this article, we propose a novel multiscale convolutional transformer framework for EEG decoding that integrates multiscale convolution, transformer, and diverse-aware feature learning scheme (MCTD) to tackle the above challenge. Specifically, we first capture multiple frequency features using dynamic one-dimensional temporal convolution with different kernel lengths. Subsequently, we incorporate convolutional layers and transformers with a contrastive learning scheme to extract discriminative local and global EEG features within a single frequency range. To mitigate the homogenization of features extracted from different frequency ranges, we propose a novel decorrelation regularization. It enables multiscale convolutional transformers to produce less correlated features with each other, thereby enhancing the overall expressiveness of EEG decoding model. The performance of MCTD is evaluated on four public MI-based EEG datasets, including the BCI competition III 3a and IV 2a, the BNCI 2015-001, and the OpenBMI. For the average Kappa/Accuracy scores, MCTD obtains improvements of 3.58%/2.68%, 3.09%/2.20%, 2.33%/1.54%, and 4.44%/2.22%, over the state-of-the-art method on four EEG datasets, respectively. Experimental results demonstrate that our method exhibits superior performance. Code is available at: <uri>https://github.com/kfhss/MCTD</uri>.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"17 6","pages":"1389-1400"},"PeriodicalIF":4.9,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-02 DOI: 10.1109/TCDS.2025.3537202
IEEE Transactions on Cognitive and Developmental Systems Publication Information
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 2, pp. C2-C2.
Pub Date : 2025-04-02 DOI: 10.1109/TCDS.2025.3537206
IEEE Transactions on Cognitive and Developmental Systems Information for Authors
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 2, pp. C4-C4.
Pub Date : 2025-04-02 DOI: 10.1109/TCDS.2025.3537204
IEEE Computational Intelligence Society Information
IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 2, pp. C3-C3.