
IEEE Transactions on Autonomous Mental Development: Latest Publications

Age Effect in Human Brain Responses to Emotion Arousing Images: The EEG 3D-Vector Field Tomography Modeling Approach
Pub Date : 2015-03-30 DOI: 10.1109/TAMD.2015.2416977
Chrysa D. Papadaniil, V. Kosmidou, A. Tsolaki, L. Hadjileontiadis, M. Tsolaki, Y. Kompatsiaris
Understanding the brain's responses to emotional stimulation remains a great challenge, and studies on the aging effect in neural activation report conflicting results. In this paper, pictures of two classes of facial affect, i.e., anger and fear, were presented to young and elderly participants. High-density 256-channel EEG data were recorded, and an innovative methodology was used to map the activated brain state at the N170 event-related potential component. The methodology, namely 3D Vector Field Tomography, reconstructs the electrostatic field within the head volume and requires no prior modeling of the individual's brain. Results showed that the elderly exhibited greater N170 amplitudes, and age-based differences were also observed in the topographic distribution of the EEG recordings at the N170 component. The brain activation analysis was performed over a set of regions of interest. The maximum activation area appeared to be emotion-specific: the anger conditions induced maximal activation in the inferior frontal gyrus, whereas fear more strongly activated the superior temporal gyrus. The approach shows the potential of the proposed computational model to reveal the age effect on brain activation by emotion-arousing images, which could in turn inform the design of assistive clinical applications.
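Since the key measurement here is the N170 component, a minimal sketch of how its amplitude could be extracted from epoched high-density EEG may be useful; the array shapes, sampling rate, and the 130-200 ms search window are assumptions for illustration, not the authors' pipeline:

```python
# Toy N170 extraction, assuming epochs shaped (trials, channels, samples),
# sampled at 250 Hz, time-locked to stimulus onset with a 200 ms baseline.
import numpy as np

def n170_amplitude(epochs, sfreq=250.0, tmin=-0.2, window=(0.13, 0.20)):
    """Per-trial N170 amplitude: the most negative deflection in the
    130-200 ms post-stimulus window, averaged across channels.
    (A real analysis would restrict to occipito-temporal channels.)"""
    times = tmin + np.arange(epochs.shape[-1]) / sfreq
    mask = (times >= window[0]) & (times <= window[1])
    erp = epochs.mean(axis=1)            # (trials, samples), channel average
    return erp[:, mask].min(axis=1)      # N170 is a negative peak

# Hypothetical young vs. elderly comparison on simulated data:
rng = np.random.default_rng(0)
young = rng.normal(size=(40, 256, 128))          # 40 trials, 256 channels, ~0.5 s
elderly = rng.normal(size=(40, 256, 128)) - 0.2  # shifted to mimic a larger N170
print(n170_amplitude(young).mean(), n170_amplitude(elderly).mean())
```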
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 223-235.
Citations: 6
Decoding Semantics Categorization during Natural Viewing of Video Streams
Pub Date : 2015-03-27 DOI: 10.1109/TAMD.2015.2415413
Xintao Hu, Lei Guo, Junwei Han, Tianming Liu
Exploring the functional mechanisms of the human brain during semantic categorization, and subsequently leveraging functional brain imaging for semantics-oriented multimedia analysis, have received great attention in recent years. Most existing studies in the field used strictly controlled laboratory paradigms as experimental settings for brain imaging data acquisition. They also face the critical problem of modeling the functional brain response from the acquired brain imaging data. In this paper, we present a brain decoding study based on the sparse multinomial logistic regression (SMLR) algorithm to explore the brain regions and functional interactions engaged during semantic categorization. The setup of our study is twofold. First, we use naturalistic video streams as stimuli in functional magnetic resonance imaging (fMRI) to simulate the complex environment for semantic perception that the human brain has to process in real life. Second, we model brain responses to semantic categorization as functional interactions among large-scale brain networks. Our experimental results show that semantic categorization can be accurately predicted by both intrasubject and intersubject brain decoding models. The brain responses identified by the decoding model reveal that a wide range of brain regions and functional interactions are recruited during semantic categorization. In particular, the working memory system contributes significantly; other substantially involved brain systems include the emotion, attention, vision, and language systems.
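To make the decoding step concrete, here is a minimal sketch in which scikit-learn's L1-penalized multinomial logistic regression stands in for the SMLR algorithm; the feature matrix, label set, and regularization strength are all hypothetical:

```python
# Sparse multinomial decoding of semantic category from fMRI features,
# on simulated data; L1 penalty drives most coefficients to zero.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_volumes, n_features, n_classes = 300, 400, 4   # hypothetical sizes
X = rng.normal(size=(n_volumes, n_features))     # e.g., network-interaction features
y = rng.integers(0, n_classes, size=n_volumes)   # semantic category labels

clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("nonzero weights:", np.count_nonzero(clf.coef_))  # sparsity of the decoder
```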
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 201-210.
Citations: 6
Motor-Primed Visual Attention for Humanoid Robots
Pub Date : 2015-03-26 DOI: 10.1109/TAMD.2015.2417353
L. Lukic, A. Billard, J. Santos-Victor
We present a novel, biologically inspired approach to the efficient allocation of visual resources for humanoid robots, in the form of a motor-primed visual attentional landscape. The attentional landscape is a more general, dynamic, and complex arrangement of spatial attention than the popular "attentional spotlight" or "zoom-lens" models of attention. Motor-priming of attention is a mechanism for prioritizing visual processing of motor-relevant parts of the visual field over motor-irrelevant parts. In particular, we present two techniques for constructing a visual "attentional landscape". The first, more general, technique is to devote visual attention to the reachable space of the robot (peripersonal space-primed attention). The second, more specialized, technique is to allocate visual attention with respect to the motor plans of the robot (motor plan-primed attention). Hence, in our model, visual attention is not defined exclusively in terms of visual saliency in color, texture, or intensity cues; rather, it is modulated by motor information. This computational model is inspired by recent findings in visual neuroscience and psychology. In addition to the two approaches to constructing the attentional landscape, we present two methods for using it to drive visual processing. We show that motor-priming of visual attention can very efficiently distribute the limited computational resources devoted to visual processing. The proposed model is validated in a series of experiments conducted with the iCub robot, both in simulation and on the real robot.
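The core idea of peripersonal space-primed attention can be sketched as a bottom-up saliency map multiplied by a motor-relevance map; the grid size, reach footprint, and weights below are illustrative assumptions, not the authors' implementation:

```python
# Motor-primed attentional landscape: bottom-up saliency modulated by a
# motor-relevance map, here the robot's (assumed) reachable workspace.
import numpy as np

H, W = 64, 64                                   # hypothetical retinal grid
rng = np.random.default_rng(1)
bottom_up = rng.random((H, W))                  # stand-in for a color/texture saliency map

# Peripersonal-space prior: pixels whose back-projected 3D location lies
# within arm's reach keep full weight, others a small residual weight.
yy, xx = np.mgrid[0:H, 0:W]
reach_center, reach_radius = (40, 32), 20       # assumed image-plane footprint of reach
reachable = (yy - reach_center[0])**2 + (xx - reach_center[1])**2 <= reach_radius**2
landscape = bottom_up * np.where(reachable, 1.0, 0.1)

fix_y, fix_x = np.unravel_index(np.argmax(landscape), landscape.shape)
print("next fixation:", fix_y, fix_x)           # attention peak inside reachable space
```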
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 76-91.
Citations: 6
Local Multimodal Serial Analysis for Fusing EEG-fMRI: A New Method to Study Familial Cortical Myoclonic Tremor and Epilepsy
Pub Date : 2015-03-10 DOI: 10.1109/TAMD.2015.2411740
Li Dong, Pu Wang, Yi Bin, Jiayan Deng, Y. Li, Leiting Chen, C. Luo, D. Yao
Integrating information from neuroimaging multimodalities, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), has become popular for investigating various types of epilepsy. However, the analysis of simultaneous EEG-fMRI data in epilepsy also faces problems: one is the variability of hemodynamic response functions (HRFs), and another is the low signal-to-noise ratio (SNR) of the data. Here, we propose a new multimodal unsupervised method, termed local multimodal serial analysis (LMSA), which may compensate for these deficiencies in multimodal integration. A simulation study comparing LMSA with traditional EEG-informed fMRI analysis, which directly implements the general linear model (GLM), was conducted to confirm the superior performance of LMSA. Then, applied to simultaneous EEG-fMRI data from familial cortical myoclonic tremor and epilepsy (FCMTE), LMSA revealed meaningful BOLD changes related to the EEG discharges, notably in the cerebellum and the frontal lobe (especially the inferior frontal gyrus). These results demonstrate that LMSA is a promising technique for exploring diverse data to provide integrated information that will further our understanding of brain dysfunction.
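The baseline LMSA is compared against is standard EEG-informed fMRI analysis; the sketch below illustrates that baseline on synthetic data (the double-gamma HRF parameters, TR, and scan count are assumptions):

```python
# EEG-informed fMRI GLM baseline: convolve EEG discharge onsets with a
# canonical HRF, then regress a voxel's BOLD series on that regressor.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 200
t = np.arange(0, 30, TR)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # SPM-style double gamma (assumed)

onsets = np.zeros(n_scans)
onsets[[20, 60, 110, 150]] = 1.0                 # EEG discharge times (in scans)
regressor = np.convolve(onsets, hrf)[:n_scans]

X = np.column_stack([regressor, np.ones(n_scans)])             # design matrix + intercept
rng = np.random.default_rng(2)
bold = 0.8 * regressor + rng.normal(scale=0.5, size=n_scans)   # one synthetic voxel

beta, *_ = np.linalg.lstsq(X, bold, rcond=None)                # GLM fit
print("estimated discharge-related effect:", round(beta[0], 3))
```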
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 311-319.
Citations: 12
Sparsity-Constrained fMRI Decoding of Visual Saliency in Naturalistic Video Streams
Pub Date : 2015-03-09 DOI: 10.1109/TAMD.2015.2409835
Xintao Hu, Cheng Lv, Gong Cheng, Jinglei Lv, Lei Guo, Junwei Han, Tianming Liu
Naturalistic stimuli such as video watching have been used increasingly in functional magnetic resonance imaging (fMRI)-based brain encoding and decoding studies, since they provide the kind of real, dynamic information that the human brain has to process in everyday life. In this paper, we propose a sparsity-constrained decoding model to explore whether bottom-up visual saliency in continuous video streams can be effectively decoded from brain activity recorded by fMRI, and to examine whether sparsity constraints can improve visual saliency decoding. Specifically, we use a biologically plausible computational model to quantify the visual saliency in video streams, and adopt a sparse representation algorithm to learn atomic fMRI signal dictionaries that are representative of the patterns of whole-brain fMRI signals. Sparse representation also links the learned atomic dictionary with the quantified video saliency. Experimental results show that the temporal visual saliency in video streams can be decoded well and that the sparsity constraints improve the performance of fMRI decoding models.
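A minimal sketch of the two pieces of such a pipeline follows, under stated assumptions: learn a sparse dictionary of fMRI signal atoms, then regress the quantified saliency onto the sparse codes. Sizes, the ridge regressor, and all data are hypothetical:

```python
# (1) sparse dictionary of whole-brain fMRI signal atoms,
# (2) regression from sparse codes to the per-frame saliency time course.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_time, n_voxels = 240, 500                 # hypothetical sizes
fmri = rng.normal(size=(n_time, n_voxels))  # whole-brain signals, one row per volume
saliency = rng.random(n_time)               # saliency from a bottom-up model

dl = DictionaryLearning(n_components=20, transform_algorithm="lasso_lars",
                        transform_alpha=0.1, random_state=0)
codes = dl.fit_transform(fmri)              # (n_time, 20) sparse codes

model = Ridge(alpha=1.0).fit(codes[:200], saliency[:200])
print("held-out R^2:", model.score(codes[200:], saliency[200:]))
```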
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 65-75.
Citations: 21
Editorial IEEE Transactions on Autonomous Mental Development
Pub Date : 2015-03-01 DOI: 10.1109/TAMD.2015.2410094
A. Cangelosi
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 1-2.
Citations: 1
Ecological Active Vision: Four Bioinspired Principles to Integrate Bottom-Up and Adaptive Top-Down Attention Tested With a Simple Camera-Arm Robot
Pub Date : 2015-03-01 DOI: 10.1109/TAMD.2014.2341351
D. Ognibene, G. Baldassarre
Vision gives primates a wealth of information useful for manipulating the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to this problem: a limited fovea is actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only the latter generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture ("BITPIC") to overcome the four problems, integrating four bioinspired key ingredients: 1) reinforcement-learning, fovea-based, top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up, periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob "objects." The results show that the architecture solves the problems, and hence the tasks, very efficiently, and they highlight how the architecture's principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
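A minimal sketch of the first ingredient, reinforcement-learning top-down attention, is given below; the grid size, reward scheme, and bandit-style one-step update are simplifying assumptions rather than the BITPIC architecture itself:

```python
# Learning where to look: tabular value learning over discretized fixation
# locations, rewarding a fixation that lands on the (hypothetical) target blob.
import numpy as np

n_locations, n_episodes = 25, 2000          # 5x5 grid of candidate fixations
rng = np.random.default_rng(4)
target = 12                                 # assumed target location for this task
Q = np.zeros(n_locations)                   # value of fixating each location
alpha, epsilon = 0.1, 0.2

for _ in range(n_episodes):
    # epsilon-greedy fixation choice: mostly exploit, sometimes explore
    a = rng.integers(n_locations) if rng.random() < epsilon else int(np.argmax(Q))
    r = 1.0 if a == target else 0.0         # reward: fixation found the target
    Q[a] += alpha * (r - Q[a])              # one-step (bandit-style) update

print("learned best fixation:", int(np.argmax(Q)))  # converges to `target`
```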
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 3-25.
Citations: 67
Mental States, EEG Manifestations, and Mentally Emulated Digital Circuits for Brain-Robot Interaction
Pub Date : 2015-02-10 DOI: 10.1109/TAMD.2014.2387271
S. Bozinovski, Adrijan Božinovski
This paper focuses on electroencephalogram (EEG) manifestations of mental states and actions, the emulation of control and communication structures using EEG manifestations, and their application in brain-robot interactions. The paper introduces a mentally emulated demultiplexer, a device that uses mental actions to demultiplex a single EEG channel into multiple digital commands. The presented device is applicable to controlling several objects through a single EEG channel. Experimental proof of the concept is given by an obstacle-containing trajectory that must be negotiated by a robotic arm with two degrees of freedom, controlled by the mental states of a human brain through a single EEG channel. The work is presented in the framework of human-robot interaction (HRI), specifically brain-robot interaction (BRI), and continues previous work on mentally emulated digital devices such as a mental action switch and a mental-states flip-flop.
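Functionally, the demultiplexer maps one EEG stream to one of several command lines; the sketch below shows that routing logic with a toy band-power classifier standing in for the real mental-action detector (command names, features, and thresholds are all hypothetical):

```python
# A mentally emulated demultiplexer, sketched: classify a single-channel EEG
# window into one of N mental actions, which selects the output command line.
import numpy as np

COMMANDS = ["arm_left", "arm_right", "gripper_open", "gripper_close"]

def classify_mental_action(window, sfreq=128.0):
    """Toy stand-in for the EEG classifier: crude alpha/beta band-power
    features pick one of len(COMMANDS) mental actions."""
    spectrum = np.abs(np.fft.rfft(window))**2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / sfreq)
    alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
    beta = spectrum[(freqs >= 13) & (freqs <= 30)].sum()
    return int(alpha > beta) * 2 + int(spectrum.argmax() % 2)  # index in 0..3

def demux(window):
    """Route the single EEG channel to one of several digital command lines."""
    return COMMANDS[classify_mental_action(window)]

rng = np.random.default_rng(5)
print(demux(rng.normal(size=256)))   # one 2-s window at 128 Hz -> one command
```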
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 39-51.
Citations: 19
Can Real-Time, Adaptive Human–Robot Motor Coordination Improve Humans' Overall Perception of a Robot?
Pub Date : 2015-01-30 DOI: 10.1109/TAMD.2015.2398451
Qiming Shen, K. Dautenhahn, J. Saunders, H. Kose-Bagci
Previous research on social interaction among humans suggests that interpersonal motor coordination can help establish social rapport. Our research addresses the question of whether, in a human-humanoid interaction experiment, the human's overall perception of a robot can be improved by realizing motor coordination behavior that allows the robot to adapt in real time to the person's behavior. A synchrony detection method based on information distance was adopted to realize real-time human-robot motor coordination: it guided the humanoid robot to coordinate its movements with the human by measuring the behavioral synchrony between the two. The feedback of the participants indicated that most preferred to interact with the humanoid robot when it had the adaptive motor coordination capability. The results of this proof-of-concept study suggest that the motor coordination mechanism improved humans' overall perception of the humanoid robot. Together with our previous finding that humans actively coordinate their behaviors with a humanoid robot's behaviors, this study further supports the hypothesis that bidirectional motor coordination could be a valid approach to facilitating adaptive human-humanoid interaction.
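The synchrony measure is the interesting computational piece. Below is a minimal sketch of one common information-distance formulation, d(X, Y) = H(X|Y) + H(Y|X), estimated from histogrammed behavior streams; the binning and the simulated signals are assumptions, not the authors' exact estimator:

```python
# Information distance between two discretized behavior streams: low when
# the streams are tightly coordinated, larger when they are independent.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_distance(x, y, bins=8):
    """H(X|Y) + H(Y|X), estimated from a joint histogram of two sequences."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    hx, hy = entropy(pxy.sum(axis=1)), entropy(pxy.sum(axis=0))
    hxy = entropy(pxy.ravel())
    return 2 * hxy - hx - hy        # H(X|Y)+H(Y|X) = 2H(X,Y)-H(X)-H(Y)

rng = np.random.default_rng(6)
human = rng.normal(size=500)                     # e.g., human arm velocity
robot_sync = human + 0.1 * rng.normal(size=500)  # robot tracking the human
robot_rand = rng.normal(size=500)                # uncoordinated robot
print(information_distance(human, robot_sync))   # small -> high synchrony
print(information_distance(human, robot_rand))   # larger -> low synchrony
```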
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 52-64.
Citations: 23
Guest Editorial Multimodal Modeling and Analysis Informed by Brain Imaging - Part 1
Pub Date : 2015-01-01 DOI: 10.1109/TAMD.2015.2495698
Junwei Han, Tianming Liu, C. Guo, Deniz Erdoğmuş, J. Weng
IEEE Transactions on Autonomous Mental Development, vol. 38, no. 1, pp. 158-161.
Citations: 0