Proceedings of the Fourth International Conference on Human Agent Interaction: Latest Publications
Designing MUSE: A Multimodal User Experience for a Shopping Mall Kiosk
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980521
Andreea Niculescu, Kheng Hui Yeo, R. Banchs
Multimodal interactions provide more engaging experiences, allowing users to perform complex tasks while searching for information. In this paper, we present a multimodal interactive kiosk for displaying information in shopping malls. The kiosk uses visual information and natural language to communicate with visitors. Users can connect to the kiosk using their own mobile phone as a speech or text input device. The connection is established by scanning a QR code displayed on the kiosk screen. Field work, observations, design, system architecture, and implementation are reported.
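The QR-code pairing step the abstract describes (a phone claims a kiosk session by scanning a code on the kiosk screen) can be sketched as a one-time session token. This is a hypothetical illustration, not the MUSE implementation; the endpoint URL and class names are invented.

```python
import secrets

class KioskSession:
    """Hypothetical kiosk session paired to one phone via a QR token."""

    def __init__(self):
        self.token = secrets.token_urlsafe(16)  # one-time pairing token
        self.paired_device = None

    def qr_payload(self) -> str:
        # URL the on-screen QR code would encode (invented endpoint)
        return f"https://kiosk.example/pair?token={self.token}"

    def pair(self, token: str, device_id: str) -> bool:
        # Bind the first device that presents the correct token
        if self.paired_device is None and token == self.token:
            self.paired_device = device_id
            return True
        return False

session = KioskSession()
assert session.pair(session.token, "phone-1")      # first scan succeeds
assert not session.pair(session.token, "phone-2")  # session already claimed
```

A real deployment would expire the token and rotate the QR code after each pairing; the sketch only shows the claim-once semantics.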
Citations: 6
Children's Facial Expressions in Truthful and Deceptive Interactions with a Virtual Agent
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2974815
M. Pereira, J. D. Lange, S. Shahid, M. Swerts
The present study focused on the facial expressions that children exhibit while they try to deceive a virtual agent. An interactive lie elicitation game was developed to record children's facial expressions during deceptive and truthful utterances, when doing the task alone or in the presence of peers. Based on manual annotations of their facial expressions, we found that children, while communicating with a virtual agent, produce different facial expressions in deceptive and truthful contexts. It seems that deceptive children try to cover their lie, as they smile significantly more than truthful children. Moreover, co-presence enhances children's facial expressive behaviour and the number of cues to deceit. Deceivers, especially when together with a friend, more often press their lips, smile, blink, and avert their gaze than truth-tellers.
Citations: 2
Session details: Main Track Session V: Extending Body Image
Hirotaka Osawa, Tetsushi Oka
Citations: 0
Session details: Main Track Session II: Power of Groups
T. Iio, Sin-Hwa Kang
Citations: 0
Humotion: A Human Inspired Gaze Control Framework for Anthropomorphic Robot Heads
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2974827
Simon Schulz, Florian Lier, A. Kipp, S. Wachsmuth
In recent years, attempts have been made to make robot control more intuitive and intelligible by exploiting and integrating anthropomorphic features to boost social human-robot interaction. The design and construction of anthropomorphic robots for this kind of interaction is not the only challenging issue -- smooth, expectation-matching motion control is still an unsolved topic. In this work we present a highly configurable, portable, and open control framework that facilitates anthropomorphic motion generation for humanoid robot heads by enhancing state-of-the-art neck-eye coordination with human-like eyelid saccades and animation. On top of that, the presented framework supports dynamic neck offset angles that allow animation overlays and changes in alignment to the robot's communication partner while retaining visual focus on a given target. To demonstrate the universal applicability of the proposed ideas, we used this framework to control the Flobi and iCub robot heads, both in simulation and on the physical robots. To foster further comparative studies of different robot heads, we will release all software based on this contribution under an open-source license.
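The neck-offset idea in the abstract (an animation overlay moves the neck while the eyes keep the gaze on target) can be illustrated with a toy one-axis split. This is not the Humotion API; the function name, angle limit, and sign conventions are invented for illustration.

```python
def distribute_gaze(target_deg, neck_offset_deg=0.0, eye_limit_deg=35.0):
    """Split a horizontal gaze target between neck and eyes.

    An animation overlay can push the neck away from the target via
    neck_offset_deg; the eyes compensate (within their mechanical
    limit) so the gaze stays on the target.
    """
    neck = target_deg + neck_offset_deg                      # neck pose incl. overlay
    eyes = max(-eye_limit_deg, min(eye_limit_deg, target_deg - neck))
    return neck, eyes

# Overlay turns the neck 10 degrees past the target; the eyes counter-rotate.
neck, eyes = distribute_gaze(20.0, neck_offset_deg=10.0)
assert neck + eyes == 20.0  # gaze still lands on the target
```

Overlays larger than the eye limit would break fixation, which is why a real controller must bound the admissible offset range.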
Citations: 5
Cross-cultural Study of Perception and Acceptance of Japanese Self-adaptors
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980491
T. Ishioh, Tomoko Koda
This paper reports our preliminary results from a cross-cultural study of the perception and acceptance of culture-specific self-adaptors performed by a virtual agent. There are culturally defined preferences in self-adaptors and other bodily expressions, and the acceptable level of expressing such non-verbal behavior is culture-dependent. We conducted a web experiment to evaluate the impression and acceptance of Japanese culture-specific self-adaptors, gathering participants from 8 countries. The results indicated non-Japanese participants' insensitivity to the different types of self-adaptors and Japanese participants' oversensitivity to stressful self-adaptors.
Citations: 2
Understanding Behaviours and Roles for Social and Adaptive Robots In Education: Teacher's Perspective
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2974829
M. Ahmad, Omar Mubin, Joanne Orlando
In order to establish a long-term relationship between a robot and a child, robots need to learn from the environment, adapt to specific user needs, and display behaviours and roles accordingly. The literature shows that certain robot behaviours can negatively impact a child's learning and performance. Therefore, the purpose of the present study is not only to understand teachers' opinions on existing effective social behaviours and roles but also to identify novel behaviours that can positively influence children's performance in a language-learning setting. In this paper, we present results based on interviews conducted with 8 language teachers on how a robot can efficiently adapt its behaviour to influence learning and achieve long-term engagement. We also present future directions extracted from the interviews with the teachers.
Citations: 45
Behavioral Expression Design onto Manufactured Figures
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980484
Yoshihisa Ishihara, Kazuki Kobayashi, S. Yamada
Natural language user interfaces such as Apple Siri and Google Voice Search have been embedded in consumer devices; however, speaking to objects can feel awkward. Use of these interfaces should feel natural, like speaking to a real listener. This paper proposes a method for manufactured objects such as anime figures to exhibit highly realistic behavioral expressions in order to improve speech interaction between a user and an object. Using a projection mapping technique, an anime figure provides back-channel feedback to a user by appearing to nod or shake its head.
Citations: 0
Human Posture Detection using H-ELM Body Part and Whole Person Detectors for Human-Robot Interaction
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980480
M. Ramanathan, W. Yau, E. Teoh
For reliable human-robot interaction, the robot must know the person's action in order to plan the appropriate way to interact with or assist the person. As part of the pre-processing stage of action recognition, the robot also needs to recognize the person's body parts and posture. However, estimating posture and body parts is challenging due to the articulated nature of the human body and the huge intra-class variations. To address this challenge, we propose two schemes using Hierarchical ELM (H-ELM) to classify posture as either upright or non-upright. In the first scheme, we follow a whole-person detector approach, where an H-ELM classifier is trained on several whole-body postures. In the second scheme, we follow a body-part detection approach, where a separate H-ELM classifier is trained for each body part. Using the detected body parts, a final decision is made on the person's posture. We conducted several experiments to compare the performance of both approaches under different scenarios such as view-angle changes and occlusion.
Our experimental results show that body-part H-ELM based posture detection works better than the other proposed framework, even in the presence of occlusion.
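The second scheme's final decision step (combining per-body-part classifier outputs into one posture label) could be realized as a simple majority vote over the parts that were detected. The abstract does not specify the fusion rule, so this sketch is a hypothetical illustration with invented names.

```python
from collections import Counter

def fuse_part_votes(part_predictions):
    """Majority-vote fusion of per-body-part posture labels.

    part_predictions maps body-part names to "upright"/"non-upright",
    or None when that part was not detected. Returns the winning
    label, or None if no part was detected.
    """
    votes = Counter(p for p in part_predictions.values() if p is not None)
    if not votes:
        return None  # no body part detected at all
    return votes.most_common(1)[0][0]

preds = {"head": "upright", "torso": "upright", "legs": "non-upright"}
assert fuse_part_votes(preds) == "upright"
```

A weighted vote (e.g. trusting the torso detector more than the limbs) would be a natural refinement when detectors have unequal reliability.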
Citations: 2
Exploring Social Interaction with Everyday Object based on Perceptual Crossing
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2974810
S. A. Anas, S. Qiu, G.W.M. Rauterberg, Jun Hu
Eye gaze plays an essential role in social interaction and influences our perception of others. We can most likely perceive the existence of another intentional subject through the act of catching one another's eyes. Based on the notion of perceptual crossing, we aim to establish a meaningful social interaction that emerges out of the perceptual crossing between a person and an everyday object, exploiting the person's gazing behavior as the input modality for the system. We surveyed experiments in the literature that adopt perceptual crossing as their foundation; lessons learned from that literature were used as input for a concept to create meaningful social interaction. We used an eye-tracker to measure gaze behavior, allowing the participant to interact with the object with their eyes through active exploration. This creates a situation where both mutually become aware of each other's existence. Further, we discuss the motivation for this research and present a preliminary experiment that informs our decisions and directions for future work.
Citations: 5