IEEE Transactions on Cognitive and Developmental Systems Information for Authors
Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2024.3352775
An Electroencephalography-Based Brain–Computer Interface for Emotion Regulation With Virtual Reality Neurofeedback
Pub Date: 2024-01-29 | DOI: 10.1109/TCDS.2024.3357547
Kendi Li;Weichen Huang;Wei Gao;Zijing Guan;Qiyun Huang;Jin-Gang Yu;Zhu Liang Yu;Yuanqing Li
An increasing number of people fail to properly regulate their emotions for various reasons. Although brain–computer interfaces (BCIs) have shown potential for neural regulation, few effective BCI systems have been developed to assist users in emotion regulation. In this article, we propose an electroencephalography (EEG)-based BCI for emotion regulation with virtual reality (VR) neurofeedback. Specifically, music clips conveying positive, neutral, and negative emotions were first presented, and the participants were asked to regulate their emotions accordingly. The BCI system simultaneously collected the participants’ EEG signals and assessed their emotions. Based on the emotion recognition results, neurofeedback was provided to the participants in the form of the facial expression of a virtual pop star on a three-dimensional (3-D) virtual stage. Eighteen healthy participants achieved satisfactory performance, with an average accuracy of 81.1% with neurofeedback. Moreover, the average accuracy increased significantly from 65.4% at the start to 87.6% at the end of a regulation trial (each trial corresponded to one music clip). In comparison, participants could not significantly improve their accuracy within a regulation trial without neurofeedback. These results demonstrate the effectiveness of our system and show that VR neurofeedback played a key role during emotion regulation.
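The abstract does not describe the decoding pipeline itself, but the recognition step of such a system can be sketched for context. In this minimal illustration, the band-power (differential entropy) features and the linear discriminant classifier are assumptions chosen for clarity, not the authors' actual model; the channel count, sampling rate, and trial data are likewise hypothetical.

```python
# Illustrative sketch of an EEG emotion decoder for online neurofeedback.
# The feature choice (differential entropy per band) and classifier (LDA)
# are assumptions for illustration; the paper does not specify its model here.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def differential_entropy(eeg, fs=250):
    """eeg: (n_channels, n_samples) -> (n_channels * n_bands,) feature vector."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=-1)
        # DE of a Gaussian signal: 0.5 * log(2 * pi * e * variance)
        feats.append(0.5 * np.log(2 * np.pi * np.e * filtered.var(axis=-1)))
    return np.concatenate(feats)

# Hypothetical calibration data: 30 trials, 32 channels, 4 s at 250 Hz,
# with labels 0 = negative, 1 = neutral, 2 = positive.
rng = np.random.default_rng(0)
trials = [rng.standard_normal((32, 1000)) for _ in range(30)]
labels = np.repeat([0, 1, 2], 10)

clf = LinearDiscriminantAnalysis()
clf.fit(np.stack([differential_entropy(t) for t in trials]), labels)

# Online use: decode a sliding window and map the predicted emotion to the
# virtual pop star's facial expression as feedback.
window = rng.standard_normal((32, 1000))
print(clf.predict(differential_entropy(window)[None, :]))
```

In an online setting, the decoder would run on short sliding windows so that the avatar's expression can track the participant's regulated state with low latency.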
{"title":"An Electroencephalography-Based Brain–Computer Interface for Emotion Regulation With Virtual Reality Neurofeedback","authors":"Kendi Li;Weichen Huang;Wei Gao;Zijing Guan;Qiyun Huang;Jin-Gang Yu;Zhu Liang Yu;Yuanqing Li","doi":"10.1109/TCDS.2024.3357547","DOIUrl":"10.1109/TCDS.2024.3357547","url":null,"abstract":"An increasing number of people fail to properly regulate their emotions for various reasons. Although brain–computer interfaces (BCIs) have shown potential in neural regulation, few effective BCI systems have been developed to assist users in emotion regulation. In this article, we propose an electroencephalography (EEG)-based BCI for emotion regulation with virtual reality (VR) neurofeedback. Specifically, music clips with positive, neutral, and negative emotions were first presented, based on which the participants were asked to regulate their emotions. The BCI system simultaneously collected the participants’ EEG signals and then assessed their emotions. Furthermore, based on the emotion recognition results, the neurofeedback was provided to participants in the form of a facial expression of a virtual pop star on a three-dimensional (3-D) virtual stage. Eighteen healthy participants achieved satisfactory performance with an average accuracy of 81.1% with neurofeedback. Additionally, the average accuracy increased significantly from 65.4% at the start to 87.6% at the end of a regulation trial (a trial corresponded to a music clip). In comparison, these participants could not significantly improve the accuracy within a regulation trial without neurofeedback. The results demonstrated the effectiveness of our system and showed that VR neurofeedback played a key role during emotion regulation.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depression Detection Using an Automatic Sleep Staging Method With an Interpretable Channel-Temporal Attention Mechanism
Pub Date: 2024-01-26 | DOI: 10.1109/TCDS.2024.3358022
Jiahui Pan;Jie Liu;Jianhao Zhang;Xueli Li;Dongming Quan;Yuanqing Li
Despite previous efforts in depression detection research, studies on automatic depression detection using sleep structure remain scarce, and several challenges persist: 1) how to apply sleep staging to detect depression and distinguish easily misjudged classes; and 2) how to adaptively capture attentive channel-dimensional information to enhance the interpretability of sleep staging methods. To address these challenges, we propose an automatic sleep staging method based on a channel-temporal attention mechanism and a depression detection method based on sleep structure features. In sleep staging, a temporal attention mechanism updates the feature matrix, confidence scores are estimated for each sleep stage, the weight of each channel is adjusted according to these scores, and the final results are obtained through a temporal convolutional network. In depression detection, seven sleep structure features derived from the staging results are extracted to distinguish among unipolar depressive disorder (UDD) patients, bipolar disorder (BD) patients, and healthy subjects. Experiments demonstrate the effectiveness of the proposed approaches, and visualization of the channel attention mechanism illustrates the interpretability of our method. To our knowledge, this is the first attempt to employ sleep structure features to automatically detect UDD and BD.
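For readers who want a concrete picture, the staging mechanism described above can be sketched as follows. This is a hypothetical reconstruction of the pipeline the abstract outlines (temporal attention updates the features, per-stage confidence scores reweight the channels, and a temporal convolutional network produces the staging sequence); all module choices and dimensions are assumptions, not the authors' implementation.

```python
# Minimal sketch of a channel-temporal attention sleep stager (hypothetical,
# reconstructed from the abstract; not the authors' implementation).
import torch
import torch.nn as nn

class ChannelTemporalStager(nn.Module):
    def __init__(self, n_channels=8, feat_dim=64, n_stages=5):
        super().__init__()
        # Temporal attention updates the per-channel feature matrix.
        self.temporal_attn = nn.MultiheadAttention(feat_dim, num_heads=4,
                                                   batch_first=True)
        # Per-channel confidence scores over the sleep stages.
        self.stage_scorer = nn.Linear(feat_dim, n_stages)
        # Temporal convolutional network produces the final staging sequence.
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(feat_dim, n_stages, kernel_size=1),
        )

    def forward(self, x):
        # x: (batch, n_channels, time, feat_dim) per-channel feature matrices
        b, c, t, d = x.shape
        x = x.reshape(b * c, t, d)
        x, _ = self.temporal_attn(x, x, x)       # update features over time
        conf = self.stage_scorer(x).softmax(-1)  # (b*c, t, n_stages) confidences
        # A channel's weight reflects how peaked (confident) its predictions are.
        weight = conf.max(dim=-1).values.mean(dim=-1)   # (b*c,)
        weight = weight.reshape(b, c).softmax(dim=-1)   # normalize over channels
        x = x.reshape(b, c, t, d)
        fused = (weight[:, :, None, None] * x).sum(dim=1)        # (b, t, d)
        return self.tcn(fused.transpose(1, 2)).transpose(1, 2)   # (b, t, n_stages)

stager = ChannelTemporalStager()
logits = stager(torch.randn(2, 8, 30, 64))  # 2 recordings, 8 channels, 30 epochs
```

Weighting channels by the peakedness of their stage predictions is one simple way to make the attention interpretable: the learned weights can be visualized directly to show which channels drive each staging decision.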
{"title":"Depression Detection Using an Automatic Sleep Staging Method With an Interpretable Channel-Temporal Attention Mechanism","authors":"Jiahui Pan;Jie Liu;Jianhao Zhang;Xueli Li;Dongming Quan;Yuanqing Li","doi":"10.1109/TCDS.2024.3358022","DOIUrl":"10.1109/TCDS.2024.3358022","url":null,"abstract":"Despite previous efforts in depression detection studies, there is a scarcity of research on automatic depression detection using sleep structure, and several challenges remain: 1) how to apply sleep staging to detect depression and distinguish easily misjudged classes; and 2) how to adaptively capture attentive channel-dimensional information to enhance the interpretability of sleep staging methods. To address these challenges, an automatic sleep staging method based on a channel-temporal attention mechanism and a depression detection method based on sleep structure features are proposed. In sleep staging, a temporal attention mechanism is adopted to update the feature matrix, confidence scores are estimated for each sleep stage, the weight of each channel is adjusted based on these scores, and the final results are obtained through a temporal convolutional network. In depression detection, seven sleep structure features based on the results of sleep staging are extracted for depression detection between unipolar depressive disorder (UDD) patients, bipolar disorder (BD) patients, and healthy subjects. Experiments demonstrate the effectiveness of the proposed approaches, and the visualization of the channel attention mechanism illustrates the interpretability of our method. Additionally, this is the first attempt to employ sleep structure features to automatically detect UDD and BD in patients.","PeriodicalId":54300,"journal":{"name":"IEEE Transactions on Cognitive and Developmental Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-23 | DOI: 10.1109/TCDS.2024.3357618
Ruiqi Wang;Wonse Jo;Dezhong Zhao;Weizheng Wang;Arjun Gupte;Baijian Yang;Guohua Chen;Byung-Cheol Min
Human state recognition is a critical topic with pervasive and important applications in human–machine systems. Multimodal fusion, which integrates metrics from various data sources, has proven to be a potent method for boosting recognition performance. Although recent multimodal-based models have shown promising results, they often fall short of fully leveraging the sophisticated fusion strategies essential for modeling adequate cross-modal dependencies in the fusion representation, relying instead on costly and inconsistent feature crafting and alignment. To address this limitation, we propose Husformer, an end-to-end multimodal transformer framework for multimodal human state recognition.
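Although the listing is truncated here, the general technique the abstract points to, cross-modal attention that learns dependencies directly from unaligned feature sequences, can be sketched briefly. The snippet below illustrates that idea only; it is not the Husformer model, and the modality names, dimensions, and pooling step are assumptions.

```python
# Minimal sketch of cross-modal attention fusion for human state recognition.
# Illustrates the general technique, not the Husformer architecture; all
# names and dimensions are assumptions.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=64, n_heads=4, n_states=3):
        super().__init__()
        # Each modality queries the other, so cross-modal dependencies are
        # learned directly from feature sequences without manual alignment.
        self.a_to_b = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.b_to_a = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(2 * dim, n_states)

    def forward(self, mod_a, mod_b):
        # mod_a: (batch, len_a, dim)  e.g., EEG feature sequence
        # mod_b: (batch, len_b, dim)  e.g., peripheral physiological features
        a, _ = self.a_to_b(mod_a, mod_b, mod_b)  # A attends to B
        b, _ = self.b_to_a(mod_b, mod_a, mod_a)  # B attends to A
        fused = torch.cat([a.mean(dim=1), b.mean(dim=1)], dim=-1)
        return self.head(fused)                  # (batch, n_states) logits

model = CrossModalFusion()
logits = model(torch.randn(2, 100, 64), torch.randn(2, 50, 64))
```

Because the attention operates on raw feature sequences of different lengths, this style of fusion avoids the hand-crafted feature alignment the abstract criticizes.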