Pub Date: 2015-11-09 | DOI: 10.1109/TAMD.2015.2478377
Xiaoying Song, Wenqiang Zhang, J. Weng
We model the autonomous development of brain-inspired circuits through two modalities: a video stream and an action stream, synchronized in time. We assume that such multimodal streams are available to a baby through inborn reflexes, self-supervision, and a caretaker's supervision as the baby interacts with the real world. By autonomous development, we mean not only that the internal (inside the “skull”) self-organization is fully autonomous, but also that the developmental program (DP) that regulates the computation of the network is task nonspecific. In this work, the task nonspecificity is reflected by the fact that the actions associated with an attended object in a cluttered, natural, and dynamic scene are taught after the DP is finished and the “life” has begun. The actions correspond to neuronal firing patterns representing object type, object location, and object scale, but learning is directly from unsegmented cluttered scenes. In the line of where-what networks (WWN), this is the first that explicitly models multiple “brain” areas, each for a different range of object scales. Among the experiments, large natural-video experiments were conducted. To show the power of automatic attention in unknown cluttered backgrounds, the last experimental group demonstrated disjoint tests in the presence of large within-class variations (object 3-D rotations in very different unknown backgrounds) but small between-class variations (small object patches in large, similar, and different unknown backgrounds), in contrast with global classification tests such as ImageNet and Atari Games.
{"title":"Types, Locations, and Scales from Cluttered Natural Video and Actions","authors":"Xiaoying Song, Wenqiang Zhang, J. Weng","doi":"10.1109/TAMD.2015.2478377","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2478377","url":null,"abstract":"We model the autonomous development of brain-inspired circuits through two modalities-video stream and action stream that are synchronized in time. We assume that such multimodal streams are available to a baby through inborn reflexes, self-supervision, and caretaker's supervision, when the baby interacts with the real world. By autonomous development, we mean that not only that the internal (inside the “skull”) self-organization is fully autonomous, but the developmental program (DP) that regulates the computation of the network is also task nonspecific. In this work, the task-nonspecificity is reflected by the fact that the actions associated with an attended object in a cluttered, natural, and dynamic scene is taught after the DP is finished and the “life” has begun. The actions correspond to neuronal firing patterns representing object type, object location and object scale, but learning is directly from unsegmented cluttered scenes. Along the line of where-what networks (WWN), this is the first one that explicitly models multiple “brain” areas-each for a different range of object scales. Among experiments, large natural video experiments were conducted. To show the power of automatic attention in unknown cluttered backgrounds, the last experimental group demonstrated disjoint tests in the presence of large within-class variations (object 3-D-rotations in very different unknown backgrounds), but small between-class variations (small object patches in large similar and different unknown backgrounds), in contrast with global classification tests such as ImageNet and Atari Games.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"273-286"},"PeriodicalIF":0.0,"publicationDate":"2015-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2478377","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-11-04 | DOI: 10.1109/TAMD.2015.2498458
Junwei, Tianming, Christine, Juyang
Human brains are the ultimate recipients and assessors of multimedia content and semantics. Recent developments in neuroimaging techniques have enabled us to probe human brain activity during free viewing of multimedia content. This special issue focuses on synergistic combinations of cognitive neuroscience, brain imaging, and multimedia analysis. It aims to capture the latest advances from the research community working on brain-imaging-informed multimedia analysis, as well as computational models of the brain processes driven by multimedia content.
{"title":"Guest Editorial Multimodal Modeling and Analysis Informed by Brain Imaging—Part 1","authors":"Junwei, Tianming, Christine, Juyang","doi":"10.1109/TAMD.2015.2498458","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2498458","url":null,"abstract":"Human brains are the ultimate recipients and assessors of multimedia contents and semantics. Recent developments of neuroimaging techniques have enabled us to probe human brain activities during free viewing of multimedia contents. This special issue mainly focuses on the synergistic combinations of cognitive neuroscience, brain imaging, and multimedia analysis. It aims to capture the latest advances in the research community working on brain imaging informed multimedia analysis, as well as computational model of the brain processes driven by multimedia contents.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"269-272"},"PeriodicalIF":0.0,"publicationDate":"2015-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2498458","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62764077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-10-26 | DOI: 10.1109/TAMD.2015.2440298
Nan-Feng Jie, Mao-Hu Zhu, Xiao-Ying Ma, E. Osuch, M. Wammes, J. Théberge, Huan-Dong Li, Yu Zhang, Tianzi Jiang, J. Sui, V. Calhoun
Discriminating between bipolar disorder (BD) and major depressive disorder (MDD) is a major clinical challenge due to the absence of known biomarkers; hence, a better understanding of their pathophysiology and brain alterations is urgently needed. Given this complexity, feature selection is especially important in neuroimaging applications; however, high feature dimensionality and model interpretability present serious challenges. In this study, a novel feature selection approach based on a linear support vector machine with a forward-backward search strategy (SVM-FoBa) was developed and applied to structural and resting-state functional magnetic resonance imaging data collected from 21 BD patients, 25 MDD patients, and 23 healthy controls. Discriminative features were drawn from both data modalities, with which the classification of BD and MDD achieved an accuracy of 92.1% (1000 bootstrap resamples). Weight analysis of the selected features further revealed that the inferior frontal gyrus may play a central role in BD-MDD differentiation, in addition to the default mode network and the cerebellum. A modality-wise comparison also suggested that functional information outweighs anatomical information by a large margin when classifying the two clinical disorders. This work validated the advantages of multimodal joint analysis and the effectiveness of SVM-FoBa, which has potential for use in identifying possible biomarkers for several mental disorders.
{"title":"Discriminating Bipolar Disorder From Major Depression Based on SVM-FoBa: Efficient Feature Selection With Multimodal Brain Imaging Data","authors":"Nan-Feng Jie, Mao-Hu Zhu, Xiao-Ying Ma, E. Osuch, M. Wammes, J. Théberge, Huan-Dong Li, Yu Zhang, Tianzi Jiang, J. Sui, V. Calhoun","doi":"10.1109/TAMD.2015.2440298","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2440298","url":null,"abstract":"Discriminating between bipolar disorder (BD) and major depressive disorder (MDD) is a major clinical challenge due to the absence of known biomarkers; hence a better understanding of their pathophysiology and brain alterations is urgently needed. Given the complexity, feature selection is especially important in neuroimaging applications, however, feature dimension and model understanding present serious challenges. In this study, a novel feature selection approach based on linear support vector machine with a forward-backward search strategy (SVM-FoBa) was developed and applied to structural and resting-state functional magnetic resonance imaging data collected from 21 BD, 25 MDD and 23 healthy controls. Discriminative features were drawn from both data modalities, with which the classification of BD and MDD achieved an accuracy of 92.1% (1000 bootstrap resamples). Weight analysis of the selected features further revealed that the inferior frontal gyrus may characterize a central role in BD-MDD differentiation, in addition to the default mode network and the cerebellum. A modality-wise comparison also suggested that functional information outweighs anatomical by a large margin when classifying the two clinical disorders. This work validated the advantages of multimodal joint analysis and the effectiveness of SVM-FoBa, which has potential for use in identifying possible biomarkers for several mental disorders.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"320-331"},"PeriodicalIF":0.0,"publicationDate":"2015-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2440298","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-01 | DOI: 10.1109/TAMD.2015.2495801
Angelo Salah
Announces that the IEEE Transactions on Autonomous Mental Development will change its name to the IEEE Transactions on Cognitive and Developmental Systems in 2016.
{"title":"Editorial Announcing the Title Change of the IEEE Transactions on Autonomous Mental Development in 2016","authors":"Angelo Salah","doi":"10.1109/TAMD.2015.2495801","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2495801","url":null,"abstract":"Presents information regarding the title change of the IEEE Transactions on Autonomous Mental Development to will change its name to the IEEE Transactions on Cognitive and Developmental Systems in 2016.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"107 5","pages":"157"},"PeriodicalIF":0.0,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2495801","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72371061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-01 | DOI: 10.1109/TAMD.2015.2416976
Q. Ling, Zhaohui Li, Qinghua Huang, Xuelong Li
We developed a novel gradient-based algorithm to estimate bias fields from brain magnetic resonance (MR) images. The bias field is modeled as a multiplicative, slowly varying surface and fitted with a low-order polynomial. The polynomial's parameters are obtained directly by minimizing the sum of squared errors between the gradients of the MR image (in both the x-direction and the y-direction) and the partial derivatives of the desired polynomial in the log domain. Compared with existing retrospective algorithms, our algorithm combines the estimation of the bias-field gradient and the reintegration of the obtained gradient polynomial, so it is more robust against noise and achieves better performance, as demonstrated through experiments with both real and simulated brain MR images.
{"title":"A Robust Gradient-Based Algorithm to Correct Bias Fields of Brain MR Images","authors":"Q. Ling, Zhaohui Li, Qinghua Huang, Xuelong Li","doi":"10.1109/TAMD.2015.2416976","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2416976","url":null,"abstract":"We developed a novel algorithm to estimate bias fields from brain magnetic resonance (MR) images using a gradient-based method. The bias field is modeled as a multiplicative and slowly varying surface. We fit the bias field by a low-order polynomial. The polynomial's parameters are directly obtained by minimizing the sum of square errors between the gradients of MR images (both in the x-direction and y-direction) and the partial derivatives of the desired polynomial in the log domain. Compared to the existing retrospective algorithms, our algorithm combines the estimation of the gradient of the bias field and the reintegration of the obtained gradient polynomial together so that it is more robust against noise and can achieve better performance, which are demonstrated through experiments with both real and simulated brain MR images.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"256-264"},"PeriodicalIF":0.0,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2416976","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-07-30 | DOI: 10.1109/TAMD.2015.2463113
Shangfei Wang, Yachen Zhu, Lihua Yue, Q. Ji
In this article, we propose a novel approach to recognizing emotions with the help of privileged information, which is available only during training, not during testing. Such additional information can be exploited during training to construct a better classifier. Specifically, we recognize the audience's emotions from EEG signals with the help of the stimulus videos, and tag videos' emotions with the aid of electroencephalogram (EEG) signals. First, frequency features are extracted from the EEG signals and audio/visual features are extracted from the video stimuli. Second, features are selected by statistical tests. Third, a new EEG feature space and a new video feature space are constructed simultaneously using canonical correlation analysis (CCA). Finally, two support vector machines (SVMs) are trained on the new EEG and video feature spaces, respectively. During emotion recognition from EEG, only EEG signals are available, and the SVM classifier built on the EEG feature space is used; for video emotion tagging, only video clips are available, and the SVM classifier built on the video feature space is adopted. Experiments on EEG-based emotion recognition and emotion video tagging were conducted on three benchmark databases, demonstrating that video content, as context, can improve emotion recognition from EEG signals, and that EEG signals available during training can enhance emotion video tagging.
{"title":"Emotion Recognition with the Help of Privileged Information","authors":"Shangfei Wang, Yachen Zhu, Lihua Yue, Q. Ji","doi":"10.1109/TAMD.2015.2463113","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2463113","url":null,"abstract":"In this article, we propose a novel approach to recognize emotions with the help of privileged information, which is only available during training, but not available during testing. Such additional information can be exploited during training to construct a better classifier. Specifically, we recognize audience's emotion from EEG signals with the help of the stimulus videos, and tag videos' emotions with the aid of electroencephalogram (EEG) signals. First, frequency features are extracted from EEG signals and audio/visual features are extracted from video stimulus. Second, features are selected by statistical tests. Third, a new EEG feature space and a new video feature space are constructed simultaneously using canonical correlation analysis (CCA). Finally, two support vector machines (SVM) are trained on the new EEG and video feature spaces respectively. During emotion recognition from EEG, only EEG signals are available, and the SVM classifier obtained on EEG feature space is used; while for video emotion tagging, only video clips are available, and the SVM classifier constructed on video feature space is adopted. Experiments of EEG-based emotion recognition and emotion video tagging are conducted on three benchmark databases, demonstrating that video content, as the context, can improve the emotion recognition from EEG signals and EEG signals available during training can enhance emotion video tagging.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"189-200"},"PeriodicalIF":0.0,"publicationDate":"2015-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2463113","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62764048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-24 | DOI: 10.1109/TAMD.2015.2449553
Seong-eun Moon, Jong-Seok Lee
High dynamic range (HDR) imaging has been attracting much attention as a technology that can provide an immersive experience. Its ultimate goal is to provide better quality of experience (QoE) via enhanced contrast. In this paper, we analyze the perceptual experience of tone-mapped HDR videos both explicitly, by conducting a subjective questionnaire assessment, and implicitly, by using EEG and peripheral physiological signals. The results of the subjective assessment reveal that tone-mapped HDR videos are more interesting and more natural, and give better quality, than low dynamic range (LDR) videos. Physiological signals were recorded while subjects watched tone-mapped HDR and LDR videos, and classification systems were constructed to explore the perceptual differences captured by the physiological signals. Significant differences in the physiological signals are observed between tone-mapped HDR and LDR videos in classification under both subject-dependent and subject-independent scenarios. Significant differences in the signals between high and low perceived contrast and overall quality are also detected via classification under the subject-dependent scenario. Moreover, features extracted from the gamma frequency band are shown to be effective for classification.
{"title":"Perceptual Experience Analysis for Tone-mapped HDR Videos Based on EEG and Peripheral Physiological Signals","authors":"Seong-eun Moon, Jong-Seok Lee","doi":"10.1109/TAMD.2015.2449553","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2449553","url":null,"abstract":"High dynamic range (HDR) imaging has been attracting much attention as a technology that can provide immersive experience. Its ultimate goal is to provide better quality of experience (QoE) via enhanced contrast. In this paper, we analyze perceptual experience of tone-mapped HDR videos both explicitly by conducting a subjective questionnaire assessment and implicitly by using EEG and peripheral physiological signals. From the results of the subjective assessment, it is revealed that tone-mapped HDR videos are more interesting and more natural, and give better quality than low dynamic range (LDR) videos. Physiological signals were recorded during watching tone-mapped HDR and LDR videos, and classification systems are constructed to explore perceptual difference captured by the physiological signals. Significant difference in the physiological signals is observed between tone-mapped HDR and LDR videos in the classification under both a subject-dependent and a subject-independent scenarios. Also, significant difference in the signals between high versus low perceived contrast and overall quality is detected via classification under the subject-dependent scenario. Moreover, it is shown that features extracted from the gamma frequency band are effective for classification.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"175 1","pages":"236-247"},"PeriodicalIF":0.0,"publicationDate":"2015-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2449553","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-17 | DOI: 10.1109/TAMD.2015.2446499
Yiwen Wang, Lei Jiang, Yun Wang, Bangyu Cai, Yueming Wang, Weidong Chen, S. Zhang, Xiaoxiang Zheng
Recent face recognition techniques have achieved remarkable success in fast face retrieval on huge image datasets. However, performance is still limited when large illumination, pose, and facial expression variations are present. In contrast, the human brain has a powerful cognitive capability to recognize faces and demonstrates robustness across viewpoints and lighting conditions, even in the presence of partial occlusion. This paper proposes a closed-loop face retrieval system that combines a state-of-the-art face recognition method with the powerful cognitive function of the human brain reflected in electroencephalography (EEG) signals. The system starts with a random face image and outputs a ranking of all images in the database according to their similarity to the target individual. At each iteration, a single-trial event-related potential (ERP) detector scores the user's interest in a rapid serial visual presentation (RSVP) paradigm, where the presented images are selected by the computer face recognition module. When the system converges, the ERP detector further refines the lower ranking to achieve better performance. In total, 10 subjects participated in the experiment, exploring a database containing 1,854 images of 46 celebrities. Our approach outperforms existing methods with better average precision, indicating that human cognitive ability complements computer face recognition and contributes to better face retrieval.
{"title":"An Iterative Approach for EEG-Based Rapid Face Search: A Refined Retrieval by Brain Computer Interfaces","authors":"Yiwen Wang, Lei Jiang, Yun Wang, Bangyu Cai, Yueming Wang, Weidong Chen, S. Zhang, Xiaoxiang Zheng","doi":"10.1109/TAMD.2015.2446499","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2446499","url":null,"abstract":"Recent face recognition techniques have achieved remarkable successes in fast face retrieval on huge image datasets. But the performance is still limited when large illumination, pose, and facial expression variations are presented. In contrast, the human brain has powerful cognitive capability to recognize faces and demonstrates robustness across viewpoints, lighting conditions, even in the presence of partial occlusion. This paper proposes a closed-loop face retrieval system that combines the state-of-the-art face recognition method with the powerful cognitive function of the human brain illustrated in electroencephalography signals. The system starts with a random face image and outputs the ranking of all of the images in the database according to their similarity to the target individual. At each iteration, the single trial event related potentials (ERP) detector scores the user's interest in rapid serial visual presentation paradigm, where the presented images are selected from the computer face recognition module. When the system converges, the ERP detector further refines the lower ranking to achieve better performance. In total, 10 subjects participated in the experiment, exploring a database containing 1,854 images of 46 celebrities. Our approach outperforms existing methods with better average precision, indicating human cognitive ability complements computer face recognition and contributes to better face retrieval.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"211-222"},"PeriodicalIF":0.0,"publicationDate":"2015-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2446499","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-04 | DOI: 10.1109/TAMD.2015.2441960
P. Zarjam, J. Epps, N. Lovell
Cognitive workload is an important indicator of mental activity with implications for human-computer interaction, biomedical, and task-analysis applications. Subjective rating (self-assessment) has often been the preferred measure, due to its ease of use and relative sensitivity to cognitive-load variations. However, it can only feasibly be collected post hoc with the user's cooperation, and is not available as an online, continuous measurement while the cognitive task is in progress. In this paper, we used a cognitive task inducing seven different levels of workload to investigate workload discrimination using electroencephalography (EEG) signals. The entropy, energy, and standard deviation of the wavelet coefficients extracted from the segmented EEG were found to change very consistently with the induced load, yielding strong significance in statistical tests of ranking accuracy. High accuracy for subject-independent multichannel classification among the seven load levels was achieved across the twelve subjects studied. We compare these results with alternative measures such as task performance, subjective ratings, and reaction (response) time, and we compare their reliability with that of the introduced EEG-based method. We also investigate test/retest reliability of the recorded EEG signals to evaluate their stability over time. These findings bring the use of passive brain-computer interfaces (BCIs) for continuous memory-load measurement closer to reality, and suggest EEG as the preferred measure of working-memory load.
{"title":"Beyond Subjective Self-Rating: EEG Signal Classification of Cognitive Workload","authors":"P. Zarjam, J. Epps, N. Lovell","doi":"10.1109/TAMD.2015.2441960","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2441960","url":null,"abstract":"Cognitive workload is an important indicator of mental activity that has implications for human-computer interaction, biomedical and task analysis applications. Previously, subjective rating (self-assessment) has often been a preferred measure, due to its ease of use and relative sensitivity to the cognitive load variations. However, it can only be feasibly measured in a post-hoc manner with the user's cooperation, and is not available as an online, continuous measurement during the progress of the cognitive task. In this paper, we used a cognitive task inducing seven different levels of workload to investigate workload discrimination using electroencephalography (EEG) signals. The entropy, energy, and standard deviation of the wavelet coefficients extracted from the segmented EEGs were found to change very consistently in accordance with the induced load, yielding strong significance in statistical tests of ranking accuracy. High accuracy for subject-independent multichannel classification among seven load levels was achieved, across the twelve subjects studied. We compare these results with alternative measures such as performance, subjective ratings, and reaction time (response time) of the subjects and compare their reliability with the EEG-based method introduced. We also investigate test/re-test reliability of the recorded EEG signals to evaluate their stability over time. These findings bring the use of passive brain-computer interfaces (BCI) for continuous memory load measurement closer to reality, and suggest EEG as the preferred measure of working memory load.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"301-310"},"PeriodicalIF":0.0,"publicationDate":"2015-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2441960","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-05-19 | DOI: 10.1109/TAMD.2015.2434951
F. Duan, Dongxue Lin, Wenyu Li, Zhao Zhang
Current EEG-based brain-computer interface (BCI) technologies mainly focus on how to independently use steady-state visual evoked potential (SSVEP), motor imagery, P300, or other signals to recognize human intention and generate a handful of control commands. SSVEP and P300 require an external stimulus, while motor imagery does not. However, the control commands these methods generate are limited and cannot control a robot to provide satisfactory service to the user. Taking advantage of both SSVEP and motor imagery, this paper designs a hybrid BCI system that can provide multimodal BCI control commands to a robot. In this hybrid BCI system, three SSVEP signals are used to command the robot to move forward, turn left, and turn right, and one motor imagery signal is used to command the robot to execute a grasp motion. To enhance the performance of the hybrid BCI system, a visual servo module is also developed to control the robot while it executes the grasp task. The entire system is verified both on a simulation platform and on a real humanoid robot. The experimental results show that all subjects were able to use this hybrid BCI system successfully and with relative ease.
{"title":"Design of a Multimodal EEG-based Hybrid BCI System with Visual Servo Module","authors":"F. Duan, Dongxue Lin, Wenyu Li, Zhao Zhang","doi":"10.1109/TAMD.2015.2434951","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2434951","url":null,"abstract":"Current EEG-based brain-computer interface technologies mainly focus on how to independently use SSVEP, motor imagery, P300, or other signals to recognize human intention and generate several control commands. SSVEP and P300 require external stimulus, while motor imagery does not require it. However, the generated control commands of these methods are limited and cannot control a robot to provide satisfactory service to the user. Taking advantage of both SSVEP and motor imagery, this paper aims to design a hybrid BCI system that can provide multimodal BCI control commands to the robot. In this hybrid BCI system, three SSVEP signals are used to control the robot to move forward, turn left, and turn right; one motor imagery signal is used to control the robot to execute the grasp motion. In order to enhance the performance of the hybrid BCI system, a visual servo module is also developed to control the robot to execute the grasp task. The effect of the entire system is verified in a simulation platform and a real humanoid robot, respectively. The experimental results show that all of the subjects were able to successfully use this hybrid BCI system with relative ease.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"332-341"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2434951","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}