Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344548
Chris G. Christou, Kyriakos Herakleous, A. Tzanavari, Charalambos (Charis) Poullis
Human responses to crowds were investigated with a virtual reality simulation of a busy street scene. Both psychophysiological measures and a memory test were used to assess the influence of large crowds, or of individual agents who stood close to the participant, while the participant performed a memory task. Results from most individuals revealed strong orienting responses to changes in the crowd, indicated by sharp increases in skin conductance and reductions in peripheral blood volume amplitude. Furthermore, cognitive function appeared to be affected: results of the memory test appeared to be influenced by how closely virtual agents approached the participants. These findings are discussed with respect to wearable affective computing, which seeks robust, identifiable correlates of autonomic activity that can be used in everyday contexts.
{"title":"Psychophysiological responses to virtual crowds: Implications for wearable computing","authors":"Chris G. Christou, Kyriakos Herakleous, A. Tzanavari, Charalambos (Charis) Poullis","doi":"10.1109/ACII.2015.7344548","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344548","url":null,"abstract":"Human responses to crowds were investigated with a simulation of a busy street scene using virtual reality. Both psychophysiological measures and a memory test were used to assess the influence of large crowds or individual agents who stood close to the participant while they performed a memory task. Results from most individuals revealed strong orienting responses to changes in the crowd. This was indicated by sharp increases in skin conductance and reduction in peripheral blood volume amplitude. Furthermore, cognitive function appeared to be affected. Results of the memory test appeared to be influenced by how closely virtual agents approached the participants. These findings are discussed with respect to wearable affective computing which seeks robust identifiable correlates of autonomic activity that can be used in everyday contexts.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"19 1","pages":"35-41"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85180846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344685
I. Daly, Asad Malik, James Weaver, F. Hwang, S. Nasuto, Duncan A. H. Williams, Alexis Kirke, E. Miranda
Brain-computer music interfaces (BCMI) provide a method to modulate an individual's affective state via the selection or generation of music according to their current affective state. Potential applications of such systems include entertainment and therapeutic applications. We outline a proposed design for such a BCMI and seek a method for automatically differentiating music-induced affective states. Band-power features are explored for use in automatically identifying music-induced affective states. Additionally, a linear discriminant analysis classifier and a support vector machine are evaluated with respect to their ability to classify music-induced affective states from the electroencephalogram recorded during a BCMI calibration task. Accuracies of up to 79.5% (p < 0.001) are achieved with the support vector machine.
{"title":"Identifying music-induced emotions from EEG for use in brain-computer music interfacing","authors":"I. Daly, Asad Malik, James Weaver, F. Hwang, S. Nasuto, Duncan A. H. Williams, Alexis Kirke, E. Miranda","doi":"10.1109/ACII.2015.7344685","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344685","url":null,"abstract":"Brain-computer music interfaces (BCMI) provide a method to modulate an individuals affective state via the selection or generation of music according to their current affective state. Potential applications of such systems may include entertainment of therapeutic applications. We outline a proposed design for such a BCMI and seek a method for automatically differentiating different music induced affective states. Band-power features are explored for use in automatically identifying music-induced affective states. Additionally, a linear discriminant analysis classifier and a support vector machine are evaluated with respect to their ability to classify music induced affective states from the electroencephalogram recorded during a BCMI calibration task. Accuracies of up to 79.5% (p <; 0.001) are achieved with the support vector machine.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"22 1","pages":"923-929"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90694375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344554
Yoann Baveye, E. Dellandréa, Christel Chamaret, Liming Luke Chen
Recently, mainly due to advances in deep learning, performance in scene and object recognition has been improving rapidly. In contrast, more subjective recognition tasks, such as emotion prediction, stagnate at moderate levels. In this context, can affective computational models benefit from the breakthroughs in deep learning? This paper introduces the strengths of deep learning to emotion prediction in videos. The two main contributions are as follows: (i) a new, publicly available dataset, composed of 30 movies under Creative Commons licenses and continuously annotated along the induced valence and arousal axes, is introduced, for which (ii) the performance of Convolutional Neural Networks (CNN) with supervised fine-tuning, of Support Vector Machines for Regression (SVR), and of the combination of both (transfer learning) is computed and discussed. To the best of our knowledge, this is the first approach in the literature using CNNs to predict dimensional affective scores from videos. The experimental results show that the limited size of the dataset prevents the learning or fine-tuning of CNN-based frameworks, but that transfer learning is a promising solution for improving the performance of affective movie content analysis frameworks as long as very large datasets annotated along affective dimensions are not available.
{"title":"Deep learning vs. kernel methods: Performance for emotion prediction in videos","authors":"Yoann Baveye, E. Dellandréa, Christel Chamaret, Liming Luke Chen","doi":"10.1109/ACII.2015.7344554","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344554","url":null,"abstract":"Recently, mainly due to the advances of deep learning, the performances in scene and object recognition have been progressing intensively. On the other hand, more subjective recognition tasks, such as emotion prediction, stagnate at moderate levels. In such context, is it possible to make affective computational models benefit from the breakthroughs in deep learning? This paper proposes to introduce the strength of deep learning in the context of emotion prediction in videos. The two main contributions are as follow: (i) a new dataset, composed of 30 movies under Creative Commons licenses, continuously annotated along the induced valence and arousal axes (publicly available) is introduced, for which (ii) the performance of the Convolutional Neural Networks (CNN) through supervised fine-tuning, the Support Vector Machines for Regression (SVR) and the combination of both (Transfer Learning) are computed and discussed. To the best of our knowledge, it is the first approach in the literature using CNNs to predict dimensional affective scores from videos. The experimental results show that the limited size of the dataset prevents the learning or finetuning of CNN-based frameworks but that transfer learning is a promising solution to improve the performance of affective movie content analysis frameworks as long as very large datasets annotated along affective dimensions are not available.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"61 1","pages":"77-83"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91218702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344644
S. Cosentino, S. Sessa, W. Kong, Di Zhang, A. Takanishi, N. Bianchi-Berthouze
Laughter is a very interesting non-verbal human vocalization. It is classified as a semi-voluntary behavior despite being a direct form of social interaction, and can be elicited by a variety of very different stimuli, both cognitive and physical. Automatic laughter detection, analysis and classification will boost progress in affective computing, leading to the development of more natural human-machine communication interfaces. Surface electromyography (sEMG) on the abdominal muscles or invasive EMG on the larynx shows potential in this direction, but such EMG-based sensing systems cannot be used in ecological settings due to their size, lack of reusability and uncomfortable setup, and therefore cannot easily be used for the natural detection and measurement of a volatile social behavior like laughter across a variety of situations. We propose the use of miniaturized, wireless, dry-electrode sEMG sensors on the neck for the detection and analysis of laughter. Although this solution cannot precisely measure the activation of specific larynx muscles, it can detect different EMG patterns related to larynx function. In addition, integrating sEMG analysis into a compact multisensory system positioned on the neck would improve the overall robustness of the sensing system, enabling the synchronized measurement of different characteristics of laughter, such as vocal production, head movement or facial expression, while being less intrusive, as the neck is normally more accessible than the abdominal muscles. In this paper, we report the laughter discrimination rates obtained with our system under different conditions.
{"title":"Automatic discrimination of laughter using distributed sEMG","authors":"S. Cosentino, S. Sessa, W. Kong, Di Zhang, A. Takanishi, N. Bianchi-Berthouze","doi":"10.1109/ACII.2015.7344644","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344644","url":null,"abstract":"Laughter is a very interesting non-verbal human vocalization. It is classified as a semi voluntary behavior despite being a direct form of social interaction, and can be elicited by a variety of very different stimuli, both cognitive and physical. Automatic laughter detection, analysis and classification will boost progress in affective computing, leading to the development of more natural human-machine communication interfaces. Surface Electromyography (sEMG) on abdominal muscles or invasive EMG on the larynx show potential in this direction, but these kinds of EMG-based sensing systems cannot be used in ecological settings due to their size, lack of reusability and uncomfortable setup. For this reason, they cannot be easily used for natural detection and measurement of a volatile social behavior like laughter in a variety of different situations. We propose the use of miniaturized, wireless, dry-electrode sEMG sensors on the neck for the detection and analysis of laughter. Even if with this solution the activation of specific larynx muscles cannot be precisely measured, it is possible to detect different EMG patterns related to larynx function. In addition, integrating sEMG analysis on a multisensory compact system positioned on the neck would improve the overall robustness of the whole sensing system, enabling the synchronized measure of different characteristics of laughter, like vocal production, head movement or facial expression; being at the same time less intrusive, as the neck is normally more accessible than abdominal muscles. In this paper, we report laughter discrimination rate obtained with our system depending on different conditions.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"1 1","pages":"691-697"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90460660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344647
K. Karpouzis, Georgios N. Yannakakis, Noor Shaker, S. Asteriadis
Player modeling and the estimation of player experience have become very active research fields within affective computing, human-computer interaction, and game artificial intelligence in recent years. To advance our knowledge and understanding of player experience, this paper introduces the Platformer Experience Dataset (PED) - the first open-access game experience corpus - which contains multiple modalities of user data from Super Mario Bros players. The open-access database is intended for capturing player experience through context-based (i.e., game content), behavioral and visual recordings of platform game players. In addition, the database contains demographic data of the players and self-reported annotations of experience in two forms: ratings and ranks. PED opens the way for desktop and console games that use video from web cameras and visual sensors, and offers possibilities for holistic player experience modeling approaches that can, in turn, yield richer game personalization.
{"title":"The platformer experience dataset","authors":"K. Karpouzis, Georgios N. Yannakakis, Noor Shaker, S. Asteriadis","doi":"10.1109/ACII.2015.7344647","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344647","url":null,"abstract":"Player modeling and estimation of player experience have become very active research fields within affective computing, human computer interaction, and game artificial intelligence in recent years. For advancing our knowledge and understanding on player experience this paper introduces the Platformer Experience Dataset (PED) - the first open-access game experience corpus - that contains multiple modalities of user data of Super Mario Bros players. The open-access database aims to be used for player experience capture through context-based (i.e. game content), behavioral and visual recordings of platform game players. In addition, the database contains demographical data of the players and self-reported annotations of experience in two forms: ratings and ranks. PED opens up the way to desktop and console games that use video from webcameras and visual sensors and offer possibilities for holistic player experience modeling approaches that can, in turn, yield richer game personalization.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"46 1","pages":"712-718"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89411853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344660
Andra Adams, P. Robinson
Imitation is an important aspect of emotion recognition. We present an expression training interface which evaluates the imitation of facial expressions and head movements. The system provides feedback on complex emotion expression, via an integrated emotion classifier which can recognize 18 complex emotions. Feedback is also provided for exact-expression imitation via dynamic time warping. Discrepancies in intensity and frequency of action units are communicated via simple graphs. This work has applications as a training tool for customer-facing professionals and people with Autism Spectrum Conditions.
{"title":"Expression training for complex emotions using facial expressions and head movements","authors":"Andra Adams, P. Robinson","doi":"10.1109/ACII.2015.7344660","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344660","url":null,"abstract":"Imitation is an important aspect of emotion recognition. We present an expression training interface which evaluates the imitation of facial expressions and head movements. The system provides feedback on complex emotion expression, via an integrated emotion classifier which can recognize 18 complex emotions. Feedback is also provided for exact-expression imitation via dynamic time warping. Discrepancies in intensity and frequency of action units are communicated via simple graphs. This work has applications as a training tool for customer-facing professionals and people with Autism Spectrum Conditions.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"1 1","pages":"784-786"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73487344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344667
Xixin Wu, Zhiyong Wu, Yishuang Ning, Jia Jia, Lianhong Cai, H. Meng
Speech is widely used to express one's emotions, intentions, desires, etc. in social network communication, producing an abundance of internet speech data with different speaking styles. Such data provides a good resource for social multimedia research. However, because different styles are mixed together in internet speech data, how to classify such data remains a challenging problem. In previous work, utterance-level statistics of acoustic features were used as features for classifying speaking styles, ignoring local context information. The long short-term memory (LSTM) recurrent neural network (RNN) has achieved exciting success in many research areas, such as speech recognition. It is able to retrieve context information over long time durations, which is important for characterizing speaking styles. Training an LSTM requires a large amount of labeled data, which is quite difficult to obtain for the scenario of internet speech data classification. On the other hand, publicly available data exists for other tasks (such as speech emotion recognition), which offers a new possibility to exploit LSTMs in this low-resource task. We adopt a retraining strategy to train the LSTM to recognize speaking styles: the network is trained on the emotion and speaking-style datasets sequentially, without resetting its weights. Experimental results demonstrate that retraining improves both the training speed and the accuracy of the network in speaking style classification.
{"title":"Understanding speaking styles of internet speech data with LSTM and low-resource training","authors":"Xixin Wu, Zhiyong Wu, Yishuang Ning, Jia Jia, Lianhong Cai, H. Meng","doi":"10.1109/ACII.2015.7344667","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344667","url":null,"abstract":"Speech are widely used to express one's emotion, intention, desire, etc. in social network communication, deriving abundant of internet speech data with different speaking styles. Such data provides a good resource for social multimedia research. However, regarding different styles are mixed together in the internet speech data, how to classify such data remains a challenging problem. In previous work, utterance-level statistics of acoustic features are utilized as features in classifying speaking styles, ignoring the local context information. Long short-term memory (LSTM) recurrent neural network (RNN) has achieved exciting success in lots of research areas, such as speech recognition. It is able to retrieve context information for long time duration, which is important in characterizing speaking styles. To train LSTM, huge number of labeled training data is required. While for the scenario of internet speech data classification, it is quite difficult to get such large scale labeled data. On the other hand, we can get some publicly available data for other tasks (such as speech emotion recognition), which offers us a new possibility to exploit LSTM in the low-resource task. We adopt retraining strategy to train LSTM to recognize speaking styles in speech data by training the network on emotion and speaking style datasets sequentially without reset the weights of the network. Experimental results demonstrate that retraining improves the training speed and the accuracy of network in speaking style classification.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"24 1","pages":"815-820"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77004724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344616
D. Glowinski, M. Mortillaro, K. Scherer, N. Dael, G. Volpe, A. Camurri
How can affective information be decoded efficiently when computational resources and sensor systems are limited? This paper presents a framework for the analysis of affective behavior starting from a reduced amount of visual information related to human upper-body movements. The main goal is to identify a minimal representation of emotional displays based on non-verbal gesture features. The GEMEP (Geneva multimodal emotion portrayals) corpus was used to validate this framework. Twelve emotions expressed by ten actors form the selected data set of emotion portrayals. Visual tracking of the trajectories of the head and hands was performed from a frontal and a lateral view. Postural/shape and dynamic expressive gesture features were identified and analyzed. A feature reduction procedure was carried out, resulting in a four-dimensional model of emotion expression that effectively grouped emotions according to their valence (positive, negative) and arousal (high, low). These results show that emotionally relevant information can be detected from the dynamic qualities of gesture. The framework was implemented as software modules (plug-ins) extending the EyesWeb XMI Expressive Gesture Processing Library and was tested as a component of a multimodal search engine in collaboration with Google within the EU-ICT I-SEARCH project.
{"title":"Towards a minimal representation of affective gestures (Extended abstract)","authors":"D. Glowinski, M. Mortillaro, K. Scherer, N. Dael, G. Volpe, A. Camurri","doi":"10.1109/ACII.2015.7344616","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344616","url":null,"abstract":"How efficiently decoding affective information when computational resources and sensor systems are limited? This paper presents a framework for analysis of affective behavior starting with a reduced amount of visual information related to human upper-body movements. The main goal is to individuate a minimal representation of emotional displays based on non-verbal gesture features. The GEMEP (Geneva multimodal emotion portrayals) corpus was used to validate this framework. Twelve emotions expressed by ten actors form the selected data set of emotion portrayals. Visual tracking of trajectories of head and hands was performed from a frontal and a lateral view. Postural/shape and dynamic expressive gesture features were identified and analyzed. A feature reduction procedure was carried out, resulting in a four-dimensional model of emotion expression, that effectively classified/grouped emotions according to their valence (positive, negative) and arousal (high, low). These results show that emotionally relevant information can be detected/measured/obtained from the dynamic qualities of gesture. The framework was implemented as software modules (plug-ins) extending the EyesWeb XMI Expressive Gesture Processing Library and was tested as a component for a multimodal search engine in collaboration with Google within the EU-ICT I-SEARCH project.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"120 1","pages":"498-504"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77894822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344651
Leimin Tian, Johanna D. Moore, Catherine Lai
Automatic emotion recognition has long been a focus of affective computing. We aim to improve the performance of state-of-the-art emotion recognition in dialogues using novel knowledge-inspired features and modality fusion strategies. We propose features based on disfluencies and nonverbal vocalisations (DIS-NVs), and show that they are highly predictive for recognizing emotions in spontaneous dialogues. We also propose a hierarchical fusion strategy as an alternative to current feature-level and decision-level fusion. This strategy combines features from different modalities at different layers in a hierarchical structure. It is expected to overcome the limitations of feature-level and decision-level fusion by incorporating knowledge of modality differences, while preserving the information of each modality.
{"title":"Recognizing emotions in dialogues with acoustic and lexical features","authors":"Leimin Tian, Johanna D. Moore, Catherine Lai","doi":"10.1109/ACII.2015.7344651","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344651","url":null,"abstract":"Automatic emotion recognition has long been a focus of Affective Computing. We aim at improving the performance of state-of-the-art emotion recognition in dialogues using novel knowledge-inspired features and modality fusion strategies. We propose features based on disfluencies and nonverbal vocalisations (DIS-NVs), and show that they are highly predictive for recognizing emotions in spontaneous dialogues. We also propose the hierarchical fusion strategy as an alternative to current feature-level and decision-level fusion. This fusion strategy combines features from different modalities at different layers in a hierarchical structure. It is expected to overcome limitations of feature-level and decision-level fusion by including knowledge on modality differences, while preserving information of each modality.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"20 1","pages":"737-742"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74707832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-09-21. DOI: 10.1109/ACII.2015.7344687
Yu Hao, Donghai Wang, J. Budd
Positive environmental emotion feedback is important for influencing the brain and behavior. By measuring emotional signals and providing affective neurofeedback, people can become more aware of their emotional state in real time. However, such direct mapping does not necessarily motivate people's emotion regulation efforts. We introduce two levels of emotion feedback: an augmentation level, which provides direct feedback mapping, and an intervention level, in which the feedback output is dynamically adapted to the regulation process. For the purpose of emotion regulation, this research summarizes a framework for emotion feedback design by adding new components involving feature wrapping, mapping to the output representation, and the interactive interface representation. In this way, the concept of intelligent emotion feedback is illustrated: it not only enhances the motivation to regulate emotion, but also accounts for subject and trial variability through individual calibration and learning. An affective brain-computer interface technique is used to design the prototype among the alternatives considered. Experimental tests and model simulations are planned for further evaluation.
{"title":"Design of intelligent emotion feedback to assist users regulate emotions: Framework and principles","authors":"Yu Hao, Donghai Wang, J. Budd","doi":"10.1109/ACII.2015.7344687","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344687","url":null,"abstract":"Positive environmental emotion feedback is important to influence the brain and behaviors. By measuring emotional signals and providing affective neurofeedback, people can be better aware of their emotional state in real time. However, such direct mapping does not necessarily motivate people's emotion regulation effort. We introduce two levels of emotion feedback: an augmentation level that indicates direct feedback mapping and an intervention level which means feedback output is dynamically adapted with the regulation process. For the purpose of emotion regulation, this research summarizes the framework of emotion feedback design by adding new components that involve feature wrapping, mapping to output representation and interactive interface representation. By this means, the concept of intelligent emotion feedback is illustrated that not only enhances emotion regulation motivation but also considers subject and trial variability based on individual calibration and learning. An affective Brain-computer Interface technique is used to design the prototype among alternatives. Experimental tests and model simulation are planned for further evaluation.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"1 1","pages":"938-943"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72726174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}