
Latest publications from the 2011 IEEE Workshop on Affective Computational Intelligence (WACI)

An ontology-based affective tutoring system on digital arts
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5952985
H. Lin, I-Hen Tsai, Rui-Ting Sun
The aim of this paper is to introduce the design and evaluation of an ontology-based affective tutoring system for digital arts. The major cues for emotion recognition are the text passages entered by learners. The semantic inference of emotions is performed using an ontology called OMCSNet. The system also incorporates an agent that provides feedback based on the inferred emotions. System Usability Scale (SUS) evaluation results show that the system achieves positive usability and that learners enjoy interacting with it.
Citations: 4
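The SUS mentioned in the abstract above is a standard ten-item Likert questionnaire with a well-known scoring rule. A minimal stdlib sketch of that rule (illustrative only, not the authors' code):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5
```

A score around 68 is commonly treated as average usability; "positive usability" in the abstract suggests scores above that threshold.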
Emotional correlates of information retrieval behaviors
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953145
Irene Lopatovska
There is emerging interest in using emotion data to improve information retrieval processes. Our study examined whether knowledge of searchers' emotions can be used to predict their actions (and vice versa). We investigated associations between information retrieval behaviors (e.g., examination of search results) and the patterns of emotional expression around those behaviors, and found that individual search behaviors were associated with certain types of emotional expression. The findings can inform the classification of emotions and search behaviors, and in turn lead to the development of affect-sensitive retrieval systems.
Citations: 15
Facial electromyography (fEMG) activities in response to affective visual stimulation
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953144
Jun-Wen Tan, Steffen Walter, Andreas Scheck, David Hrabal, H. Hoffmann, H. Kessler, H. Traue
Recent affective computing findings demonstrate that emotion processing and recognition are important for improving the quality of human-computer interaction (HCI). In the present study, new data are presented for a robust discrimination of three emotional states (negative, neutral, and positive) using two-channel facial electromyography (EMG) over zygomaticus major and corrugator supercilii. Facial EMG activity evoked by viewing a standard set of pictures from the International Affective Picture System (IAPS), plus additional self-selected pictures, revealed that positive pictures increased facial EMG activity over zygomaticus major (F(2, 471) = 4.23, p < 0.05), whereas negative pictures elicited greater facial EMG activity over corrugator supercilii (F(2, 476) = 3.06, p < 0.05). In addition, the correlations between facial EMG activity at these two sites and participants' valence ratings of the stimulus pictures, measured with the Self-Assessment Manikin (SAM), were significant (r = −0.63, p < 0.001 for corrugator supercilii; r = 0.51, p < 0.05 for zygomaticus major). Our results suggest that emotion-inducing pictures elicit the intended emotions and that corrugator and zygomaticus EMG can effectively and reliably differentiate negative and positive emotions, respectively.
Citations: 11
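The statistics reported in the abstract above (one-way F tests across three picture categories, Pearson correlations with SAM valence ratings) can be reproduced on toy data with a few lines of stdlib Python. The data in the test are invented, not the study's EMG recordings:

```python
from statistics import mean

def f_oneway(*groups):
    """One-way ANOVA F statistic: ratio of between-group to within-group
    mean squares, for any number of sample groups."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5
```

In practice one would use `scipy.stats.f_oneway` and `scipy.stats.pearsonr`, which also return p-values; the point here is only the shape of the computation.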
Mechanism, modulation, and expression of empathy in a virtual human
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953146
Hana Boukricha, I. Wachsmuth
Empathy is believed to play a prominent role in contributing to an efficient and satisfying cooperative social interaction by adjusting one's own behavior to that of others. Thus, endowing virtual humans with the ability to empathize not only enhances their cooperative social skills, but also makes them more likeable, trustworthy, and caring. Supported by psychological models of empathy, we propose an approach to model empathy for EMMA — an Empathic MultiModal Agent — based on three processing steps: First, the Empathy Mechanism consists of an internal simulation of perceived emotional facial expressions and results in an internal emotional feedback that represents the empathic emotion. Second, the Empathy Modulation consists of modulating the empathic emotion through different predefined modulation factors. Third, the Expression of Empathy consists of triggering EMMA's multiple modalities like facial and verbal behaviors. In a conversational agent scenario involving the virtual humans MAX and EMMA, we illustrate our proposed model of empathy and we introduce a planned empirical evaluation of EMMA's empathic behavior.
Citations: 3
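The three processing steps named in the abstract above can be caricatured as a toy pipeline. All function names, the single-valence emotion representation, and the modulation factors below are illustrative assumptions, not EMMA's actual implementation:

```python
def empathy_mechanism(perceived_valence):
    """Step 1 (Empathy Mechanism): internal simulation of a perceived facial
    expression yields an empathic emotion, reduced here to one valence value."""
    return perceived_valence

def empathy_modulation(empathic_valence, liking=1.0, mood=0.0):
    """Step 2 (Empathy Modulation): weight the empathic emotion by predefined
    factors, e.g. liking of the interlocutor, blended with the agent's own mood."""
    return liking * empathic_valence + (1 - liking) * mood

def empathy_expression(valence):
    """Step 3 (Expression of Empathy): map the modulated emotion onto multiple
    output modalities, here a facial display and a verbal response."""
    face = "smile" if valence > 0 else "frown" if valence < 0 else "neutral"
    words = ("That sounds great!" if valence > 0
             else "I'm sorry to hear that." if valence < 0 else "I see.")
    return face, words
```

Chaining the three steps turns a perceived emotion into a modality-specific response; lowering `liking` attenuates the empathic reaction toward the agent's own mood.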
Affect Bartender — Affective cues and their application in a conversational agent
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953152
M. Skowron, G. Paltoglou
This paper presents methods for the detection of textual expressions of users' affective states and explores an application of these affective cues in a conversational system — Affect Bartender. We also describe the architecture of the system, core system components and a range of developed communication interfaces. The application of the described methods is illustrated with examples of dialogs conducted with experiment participants in a Virtual Reality setting.
Citations: 12
Automatic detection of “enthusiasm” in non-task-oriented dialogues using word co-occurrence
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953085
Michimasa Inaba, F. Toriumi, K. Ishii
A method is proposed for automatically detecting “enthusiastic” utterances in text-based dialogues. Using conditional random fields, our proposed method distinguishes between the enthusiastic and non-enthusiastic parts of a dialogue. Testing demonstrated that it performs as well as human detection. Being able to distinguish between the enthusiastic and non-enthusiastic parts makes it possible to quantitatively analyze the phenomenon of enthusiasm, which should lead to a practical approach to the creation of non-task-oriented agents that can help generate enthusiastic dialogues.
Citations: 2
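A minimal sketch of the word co-occurrence statistics the title above refers to. The within-utterance windowing and the additive scoring below are assumptions for illustration; the paper feeds such features into conditional random fields rather than scoring utterances directly:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(utterances):
    """Count unordered word-pair co-occurrences within each utterance."""
    counts = Counter()
    for utt in utterances:
        words = sorted(set(utt.lower().split()))
        for pair in combinations(words, 2):
            counts[pair] += 1
    return counts

def utterance_score(utt, counts):
    """Score an utterance by summing corpus co-occurrence counts of its
    word pairs; pairs typical of enthusiastic talk would score higher."""
    words = sorted(set(utt.lower().split()))
    return sum(counts[p] for p in combinations(words, 2))
```

In a CRF setup, counts like these would become per-utterance features and the model would label each utterance in the sequence as enthusiastic or not.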
Alida, a cognitive approach of text categorization
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953148
Yann Vigile Hoareau, A. E. Ghali
This paper proposes a text categorization model named Alida, which combines a categorization model inspired by Nosofsky's classical cognitive models of categorization with a semantic space model as the system of semantic knowledge representation. The model addresses large-scale text categorization applications in opinion mining across different domains and languages. Its performance in the DEFT'09 text-mining campaign shows that the model can compete with existing Natural Language Processing and Information Retrieval models.
Citations: 0
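Nosofsky's exemplar-based account of categorization, the Generalized Context Model, classifies an item by its summed similarity to stored exemplars of each category. This toy version with Euclidean distance and a single sensitivity parameter `c` is an illustration of that idea, not Alida's implementation (which operates on semantic-space vectors of documents):

```python
import math

def gcm_classify(item, exemplars, c=1.0):
    """Generalized Context Model sketch.

    `exemplars` maps category -> list of feature vectors. Similarity to an
    exemplar decays exponentially with Euclidean distance; category
    activations are summed similarities, normalized into choice
    probabilities. Returns (winning category, probabilities)."""
    activation = {
        cat: sum(math.exp(-c * math.dist(item, v)) for v in vecs)
        for cat, vecs in exemplars.items()
    }
    total = sum(activation.values())
    probs = {cat: a / total for cat, a in activation.items()}
    return max(probs, key=probs.get), probs
```

Larger `c` makes the decision more local (dominated by the nearest exemplars), smaller `c` more prototype-like.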
Evaluating facial displays of emotion for the android robot Geminoid F
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953147
C. Becker-Asano, H. Ishiguro
With android robots becoming increasingly sophisticated in their technical as well as artistic design, their non-verbal expressiveness is getting closer to that of real humans. Accordingly, this paper presents the results of two online surveys designed to evaluate a female android's facial display of five basic emotions. We prepared both surveys in English, German, and Japanese, allowing us to analyze inter-cultural differences. We found not only that our designs for the emotional expressions “fearful” and “surprised” were often confused, but also that many Japanese participants seemed to confuse “angry” with “sad”, in contrast to the German and English participants. Although similar facial displays portrayed by the model person for Geminoid F achieved higher recognition rates overall, portraying fear was similarly difficult for the model person. We conclude that improving the android's expressiveness, especially around the eyes, would be a useful next step in android design. These results could be complemented by an evaluation of Geminoid F's dynamic facial expressions in future research.
Citations: 116
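Recognition rates and confusions of the kind discussed above come from tabulating (intended, judged) pairs from forced-choice survey responses. A minimal sketch (the trial data in the test are invented, not the survey's responses):

```python
from collections import Counter, defaultdict

def recognition_rates(trials):
    """Build per-emotion recognition rates and a confusion table from
    (intended, judged) label pairs.

    Returns (rates, confusion): rates maps each intended emotion to the
    fraction of trials judged correctly; confusion[intended][judged]
    counts every response."""
    confusion = defaultdict(Counter)
    for intended, judged in trials:
        confusion[intended][judged] += 1
    rates = {
        emo: judged[emo] / sum(judged.values())
        for emo, judged in confusion.items()
    }
    return rates, confusion
```

Off-diagonal mass in `confusion` is exactly the "fearful judged as surprised" pattern the study reports.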
An ontology for predicting students' emotions during a quiz. Comparison with self-reported emotions
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953153
Victoria Eyharabide, A. Amandi, M. Courgeon, C. Clavel, Chahnez Zakaria, Jean-Claude Martin
Recent research suggests that predicting students' emotions during e-learning is quite relevant but should be situated in the learning context and consider the individual profile of users. More knowledge is required for assessing the possible contributions of multiple sources of information for predicting students' emotions. In this paper we describe an ontology that we have implemented for predicting students' emotions when interacting with a quiz about Java programming. An experimental study with 17 computer science students compares the automatic predictions made by the ontology with the emotions self-reported by students.
Citations: 18
Scalable multimodal fusion for continuous affect sensing
Pub Date : 2011-04-11 DOI: 10.1109/WACI.2011.5953150
I. Hupont, S. Ballano, S. Baldassarri, E. Cerezo
The success of affective interfaces lies in the fusion of emotional information coming from different modalities. This paper proposes a scalable methodology for fusing multiple affect sensing modules, allowing the subsequent addition of new modules without having to retrain the existing ones. It relies on a 2-dimensional affective model and is able to output a continuous emotional path characterizing the user's affective progress over time.
Citations: 17
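A confidence-weighted average over a 2-D valence-arousal space is one simple way to realize scalable fusion of the kind described above: each modality module emits an independent estimate, and adding a module means appending a tuple rather than retraining the others. This sketch is an assumption about the general approach, not the paper's method:

```python
def fuse_affect(estimates):
    """Fuse per-modality (valence, arousal, confidence) tuples into a single
    point in the 2-D affective space by confidence-weighted averaging.

    Tracking the fused point over successive time steps yields the
    continuous "emotional path" the abstract refers to."""
    total_w = sum(w for _, _, w in estimates)
    if total_w == 0:
        return (0.0, 0.0)  # no modality reported anything
    v = sum(val * w for val, _, w in estimates) / total_w
    a = sum(ar * w for _, ar, w in estimates) / total_w
    return (v, a)
```

For example, a facial-expression module reporting high valence and a prosody module reporting high arousal, at equal confidence, fuse to a point midway between them.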