Editorial IEEE Transactions on Cognitive and Developmental Systems
Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2024.3353515 | vol. 16, no. 1, pp. 3-3
Huajin Tang
As we usher in the new year of 2024, in my capacity as Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems (TCDS), I am happy to extend to you a tapestry of New Year greetings. May this year be filled with prosperity, success, and groundbreaking achievements in our shared fields.
{"title":"Editorial IEEE Transactions on Cognitive and Developmental Systems","authors":"Huajin Tang","doi":"10.1109/TCDS.2024.3353515","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3353515","url":null,"abstract":"As we usher into the new year of 2024, in my capacity as the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems (TCDS), I am happy to extend to you a tapestry of New Year greetings, may this year be filled with prosperity, success, and groundbreaking achievements in our shared fields.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 1","pages":"3-3"},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guest Editorial Special Issue on Cognitive Learning of Multiagent Systems
Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2023.3325505 | vol. 16, no. 1, pp. 4-7
Yang Tang;Wei Lin;Chenguang Yang;Nicola Gatti;Gary G. Yen
The development and cognition of biological and intelligent individuals shed light on the development of cognitive, autonomous, and evolutionary robotics. Take the collective behavior of birds as an example: each individual effectively communicates information and learns from multiple neighbors, facilitating cooperative decision making among them. These interactions among individuals illuminate the growth and cognition of natural groups throughout the evolutionary process, and they can be effectively modeled as multiagent systems. Multiagent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system, and collaborative learning also improves their robustness and efficiency. Multiagent learning is playing an increasingly important role in fields such as aerospace systems, intelligent transportation, and smart grids. As environments grow increasingly intricate, characterized by high dynamism and incomplete or imperfect observational data, the challenges associated with various tasks are escalating. These challenges encompass information sharing, the definition of learning objectives, and the curse of dimensionality. Unfortunately, many existing methods struggle to effectively address these multifaceted issues in the realm of cognitive intelligence. Furthermore, the field of cognitive learning in multiagent systems underscores the efficiency of distributed learning, demonstrating the capacity to acquire the skill of learning itself collectively. In light of this, multiagent learning, while holding substantial research significance, confronts a spectrum of learning problems that span from single to multiple agents, from simplicity to complexity, from low to high dimensionality, and from one domain to others. Through cognitive learning, agents can autonomously and rapidly make swarm-intelligent decisions that overcome the above challenges, which is of significant importance for the advancement of many practical fields.
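To make the neighbor-learning idea concrete, the sketch below implements a minimal consensus update in Python: each agent repeatedly nudges its state toward the average of its neighbors' states, a standard toy model of cooperative decision making. It is an illustrative example only, not a method from this special issue; the graph, states, and step size are all invented for the demo.

```python
import numpy as np

def consensus_step(states, adjacency, step=0.5):
    """One round of neighbor averaging on a fixed communication graph.
    states:    (n_agents, dim) array, one state vector per agent.
    adjacency: (n_agents, n_agents) 0/1 matrix; adjacency[i, j] = 1 means
               agent i can observe agent j."""
    new_states = states.copy()
    for i in range(len(states)):
        neighbors = np.flatnonzero(adjacency[i])
        if neighbors.size == 0:
            continue  # an isolated agent keeps its own state
        mean_neighbor = states[neighbors].mean(axis=0)
        new_states[i] += step * (mean_neighbor - states[i])  # move toward neighbors
    return new_states

# Toy run: four agents on a ring graph converge toward a shared state.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
x = np.random.default_rng(0).normal(size=(4, 2))
for _ in range(50):
    x = consensus_step(x, A)
print(x.round(3))  # rows are nearly identical after repeated mixing
```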
{"title":"Guest Editorial Special Issue on Cognitive Learning of Multiagent Systems","authors":"Yang Tang;Wei Lin;Chenguang Yang;Nicola Gatti;Gary G. Yen","doi":"10.1109/TCDS.2023.3325505","DOIUrl":"https://doi.org/10.1109/TCDS.2023.3325505","url":null,"abstract":"The development and cognition of biological and intelligent individuals shed light on the development of cognitive, autonomous, and evolutionary robotics. Take the collective behavior of birds as an example, each individual effectively communicates information and learns from multiple neighbors, facilitating cooperative decision making among them. These interactions among individuals illuminate the growth and cognition of natural groups throughout the evolutionary process, and they can be effectively modeled as multiagent systems. Multiagent systems have the ability to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve, which also improves the robustness and efficiency through collaborative learning. Multiagent learning is playing an increasingly important role in various fields, such as aerospace systems, intelligent transportation, smart grids, etc. With the environment growing increasingly intricate, characterized by factors, such as high dynamism and incomplete/imperfect observational data, the challenges associated with various tasks are escalating. These challenges encompass issues like information sharing, the definition of learning objectives, and grappling with the curse of dimensionality. Unfortunately, many of the existing methods are struggling to effectively address these multifaceted issues in the realm of cognitive intelligence. Furthermore, the field of cognitive learning in multiagent systems underscores the efficiency of distributed learning, demonstrating the capacity to acquire the skill of learning itself collectively. In light of this, multiagent learning, while holding substantial research significance, confronts a spectrum of learning problems that span from single to multiple agents, simplicity to complexity, low dimensionality to high dimensionality, and one domain to various other domains. Agents autonomously and rapidly make swarm intelligent decisions through cognitive learning overcoming the above challenges, which holds significant importance for the advancement of various practical fields.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 1","pages":"4-7"},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419126","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE Transactions on Cognitive and Developmental Systems Publication Information
Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2024.3352771 | vol. 16, no. 1, pp. C2-C2
{"title":"IEEE Transactions on Cognitive and Developmental Systems Publication Information","authors":"","doi":"10.1109/TCDS.2024.3352771","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3352771","url":null,"abstract":"","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 1","pages":"C2-C2"},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE Transactions on Cognitive and Developmental Systems Information for Authors
Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2024.3352775 | vol. 16, no. 1, pp. C4-C4
{"title":"IEEE Transactions on Cognitive and Developmental Systems Information for Authors","authors":"","doi":"10.1109/TCDS.2024.3352775","DOIUrl":"https://doi.org/10.1109/TCDS.2024.3352775","url":null,"abstract":"","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 1","pages":"C4-C4"},"PeriodicalIF":5.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419135","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139676399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Electroencephalography-Based Brain–Computer Interface for Emotion Regulation With Virtual Reality Neurofeedback
Pub Date: 2024-01-29 | DOI: 10.1109/TCDS.2024.3357547 | vol. 16, no. 4, pp. 1405-1417
Kendi Li;Weichen Huang;Wei Gao;Zijing Guan;Qiyun Huang;Jin-Gang Yu;Zhu Liang Yu;Yuanqing Li
An increasing number of people fail to properly regulate their emotions for various reasons. Although brain–computer interfaces (BCIs) have shown potential in neural regulation, few effective BCI systems have been developed to assist users in emotion regulation. In this article, we propose an electroencephalography (EEG)-based BCI for emotion regulation with virtual reality (VR) neurofeedback. Specifically, music clips with positive, neutral, and negative emotions were first presented, and the participants were asked to regulate their emotions accordingly. The BCI system simultaneously collected the participants’ EEG signals and assessed their emotions. Based on the emotion recognition results, neurofeedback was then provided to the participants in the form of the facial expression of a virtual pop star on a three-dimensional (3-D) virtual stage. Eighteen healthy participants achieved satisfactory performance, with an average accuracy of 81.1% with neurofeedback. Additionally, the average accuracy increased significantly from 65.4% at the start to 87.6% at the end of a regulation trial (each trial corresponded to one music clip). In comparison, participants could not significantly improve their accuracy within a regulation trial without neurofeedback. These results demonstrate the effectiveness of our system and show that VR neurofeedback played a key role during emotion regulation.
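The closed loop described above can be summarized schematically in Python. This is a hypothetical reading of the pipeline, not the authors' implementation: `acquire_eeg_window`, `classify_emotion`, `render_avatar_expression`, and `target_emotion` are placeholder names invented here.

```python
import time

def run_regulation_trial(music_clip, classifier, eeg, avatar,
                         window_s=2.0, trial_s=60.0):
    """Schematic closed loop: play a music clip, decode emotion from EEG
    windows, and feed the decoded emotion back as the avatar's expression.
    Every component interface here is a hypothetical placeholder."""
    music_clip.play()
    decoded = []
    t_end = time.time() + trial_s
    while time.time() < t_end:
        window = eeg.acquire_eeg_window(window_s)      # raw EEG segment
        emotion = classifier.classify_emotion(window)  # positive/neutral/negative
        avatar.render_avatar_expression(emotion)       # VR facial-expression feedback
        decoded.append(emotion)
    music_clip.stop()
    # Trial accuracy: fraction of windows matching the clip's target emotion.
    return sum(e == music_clip.target_emotion for e in decoded) / len(decoded)
```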
{"title":"An Electroencephalography-Based Brain–Computer Interface for Emotion Regulation With Virtual Reality Neurofeedback","authors":"Kendi Li;Weichen Huang;Wei Gao;Zijing Guan;Qiyun Huang;Jin-Gang Yu;Zhu Liang Yu;Yuanqing Li","doi":"10.1109/TCDS.2024.3357547","DOIUrl":"10.1109/TCDS.2024.3357547","url":null,"abstract":"An increasing number of people fail to properly regulate their emotions for various reasons. Although brain–computer interfaces (BCIs) have shown potential in neural regulation, few effective BCI systems have been developed to assist users in emotion regulation. In this article, we propose an electroencephalography (EEG)-based BCI for emotion regulation with virtual reality (VR) neurofeedback. Specifically, music clips with positive, neutral, and negative emotions were first presented, based on which the participants were asked to regulate their emotions. The BCI system simultaneously collected the participants’ EEG signals and then assessed their emotions. Furthermore, based on the emotion recognition results, the neurofeedback was provided to participants in the form of a facial expression of a virtual pop star on a three-dimensional (3-D) virtual stage. Eighteen healthy participants achieved satisfactory performance with an average accuracy of 81.1% with neurofeedback. Additionally, the average accuracy increased significantly from 65.4% at the start to 87.6% at the end of a regulation trial (a trial corresponded to a music clip). In comparison, these participants could not significantly improve the accuracy within a regulation trial without neurofeedback. The results demonstrated the effectiveness of our system and showed that VR neurofeedback played a key role during emotion regulation.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 4","pages":"1405-1417"},"PeriodicalIF":5.0,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depression Detection Using an Automatic Sleep Staging Method With an Interpretable Channel-Temporal Attention Mechanism
Pub Date: 2024-01-26 | DOI: 10.1109/TCDS.2024.3358022 | vol. 16, no. 4, pp. 1418-1432
Jiahui Pan;Jie Liu;Jianhao Zhang;Xueli Li;Dongming Quan;Yuanqing Li
Despite previous efforts in depression detection studies, research on automatic depression detection using sleep structure is scarce, and several challenges remain: 1) how to apply sleep staging to detect depression and distinguish easily misjudged classes; and 2) how to adaptively capture attentive channel-dimensional information to enhance the interpretability of sleep staging methods. To address these challenges, an automatic sleep staging method based on a channel-temporal attention mechanism and a depression detection method based on sleep structure features are proposed. In sleep staging, a temporal attention mechanism is adopted to update the feature matrix, confidence scores are estimated for each sleep stage, the weight of each channel is adjusted based on these scores, and the final results are obtained through a temporal convolutional network. In depression detection, seven sleep structure features derived from the sleep staging results are extracted to distinguish among unipolar depressive disorder (UDD) patients, bipolar disorder (BD) patients, and healthy subjects. Experiments demonstrate the effectiveness of the proposed approaches, and the visualization of the channel attention mechanism illustrates the interpretability of our method. Additionally, this is the first attempt to employ sleep structure features to automatically detect UDD and BD in patients.
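As one way to picture the channel-weighting step, the PyTorch sketch below scores each channel's features and uses the normalized scores to reweight channels before a downstream stage. The shapes and the linear scoring head are assumptions for illustration; the paper's actual confidence-score computation may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Toy channel attention: score each channel from its features, then
    reweight channels before a downstream stage (e.g., a temporal conv net).
    The linear scoring head is an illustrative assumption."""

    def __init__(self, feat_dim):
        super().__init__()
        self.score_head = nn.Linear(feat_dim, 1)  # one score per channel

    def forward(self, feats):
        # feats: (batch, n_channels, feat_dim)
        scores = self.score_head(feats).squeeze(-1)  # (batch, n_channels)
        weights = torch.softmax(scores, dim=-1)      # normalized channel weights
        weighted = feats * weights.unsqueeze(-1)     # reweight each channel
        return weighted, weights                     # weights can be visualized

# Toy usage: batch of 4, 8 EEG channels, 16-dim features per channel.
attn = ChannelAttention(feat_dim=16)
out, w = attn(torch.randn(4, 8, 16))
print(out.shape, w.shape)  # torch.Size([4, 8, 16]) torch.Size([4, 8])
```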
{"title":"Depression Detection Using an Automatic Sleep Staging Method With an Interpretable Channel-Temporal Attention Mechanism","authors":"Jiahui Pan;Jie Liu;Jianhao Zhang;Xueli Li;Dongming Quan;Yuanqing Li","doi":"10.1109/TCDS.2024.3358022","DOIUrl":"10.1109/TCDS.2024.3358022","url":null,"abstract":"Despite previous efforts in depression detection studies, there is a scarcity of research on automatic depression detection using sleep structure, and several challenges remain: 1) how to apply sleep staging to detect depression and distinguish easily misjudged classes; and 2) how to adaptively capture attentive channel-dimensional information to enhance the interpretability of sleep staging methods. To address these challenges, an automatic sleep staging method based on a channel-temporal attention mechanism and a depression detection method based on sleep structure features are proposed. In sleep staging, a temporal attention mechanism is adopted to update the feature matrix, confidence scores are estimated for each sleep stage, the weight of each channel is adjusted based on these scores, and the final results are obtained through a temporal convolutional network. In depression detection, seven sleep structure features based on the results of sleep staging are extracted for depression detection between unipolar depressive disorder (UDD) patients, bipolar disorder (BD) patients, and healthy subjects. Experiments demonstrate the effectiveness of the proposed approaches, and the visualization of the channel attention mechanism illustrates the interpretability of our method. Additionally, this is the first attempt to employ sleep structure features to automatically detect UDD and BD in patients.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":"16 4","pages":"1418-1432"},"PeriodicalIF":5.0,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-23 | DOI: 10.1109/TCDS.2024.3357618
Ruiqi Wang;Wonse Jo;Dezhong Zhao;Weizheng Wang;Arjun Gupte;Baijian Yang;Guohua Chen;Byung-Cheol Min
Human state recognition is a critical topic with pervasive and important applications in human–machine systems. Multimodal fusion, which entails integrating metrics from various data sources, has proven to be a potent method for boosting recognition performance. Although recent multimodal-based models have shown promising results, they often fall short of fully leveraging the sophisticated fusion strategies essential for modeling adequate cross-modal dependencies in the fusion representation; instead, they rely on costly and inconsistent feature crafting and alignment. To address this limitation, we propose an end-to-end multimodal transformer framework for multimodal human state recognition called Husformer.
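The cross-modal dependencies that such transformer fusion captures can be illustrated with a single cross-attention step, in which one modality's features query another's. The sketch below is a generic example under assumed modalities and dimensions, not the Husformer architecture itself.

```python
import torch
import torch.nn as nn

# Minimal cross-modal attention: modality A (e.g., EEG) attends to modality B
# (e.g., ECG). All names and dimensions are assumptions for illustration.
dim = 32
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

feats_a = torch.randn(2, 50, dim)  # modality A: (batch, seq_a, dim)
feats_b = torch.randn(2, 80, dim)  # modality B: (batch, seq_b, dim)

# Queries come from A, keys/values from B, so every step of A can pull in the
# parts of B it finds relevant -- one concrete form of cross-modal dependency.
fused, attn_w = cross_attn(query=feats_a, key=feats_b, value=feats_b)
print(fused.shape)   # torch.Size([2, 50, 32])
print(attn_w.shape)  # torch.Size([2, 50, 80])
```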