
Proceedings of the 3rd International Conference on Human-Agent Interaction: Latest Publications

Understanding Human Internal States: I Know Who You Are and What You Think
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2825025
Soo-Young Lee
For successful interaction between human and machine agents, the agents need to understand both explicitly presented human intentions and the unpresented human mind. Although current human-agent interaction (HAI) systems mainly rely on the former, through keystrokes, speech, and gestures, the latter will play an important role in new and upcoming HAIs. In this talk we present our continuing efforts to understand the unpresented human mind, which may reside in the internal states of neural networks in the human brain and may be estimated from brain-related signals such as fMRI (functional Magnetic Resonance Imaging), EEG (Electroencephalography), and eye movements. We hypothesized that the space of brain internal states has several independent axes whose temporal dynamics have different time scales. Special emphasis was given to human memory, trustworthiness, and sympathy toward others during interactions. Human memory changes slowly over time and differs from person to person; therefore, by analyzing brain-related signals evoked by many stimulus images, it may be possible to identify a person. Sympathy toward others, on the other hand, has much shorter time constants during human-agent interactions and may be identified for each user interaction. Trustworthiness toward others may have slightly longer time constants and may be accumulated by temporal integration during sequential interactions. Therefore, we measured brain-related signals during sequential Theory-of-Mind (ToM) games. We also evaluated the effects of the agents' human-like cues on trustworthiness. At this moment, the estimation of human internal states relies on brain-related signals such as fMRI, EEG, and eye movements. In the future, classification systems for human internal states will be trained with audio-visual signals only, and the current study will provide near-ground-truth labels.
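To make the time-scale idea concrete, the following sketch models two internal-state axes as leaky integrators with different time constants, so that trust accumulates slowly over sequential interactions while sympathy tracks each interaction quickly. This is only an illustration under assumed dynamics and evidence values, not the model or data used in the talk.

```python
def update_state(state, evidence, tau, dt=1.0):
    """One leaky-integrator step toward new evidence.

    A larger time constant `tau` makes the state change more slowly,
    mimicking slowly varying axes (e.g. trust accumulated over a game)
    versus fast ones (e.g. momentary sympathy). Purely illustrative.
    """
    alpha = dt / tau
    return (1.0 - alpha) * state + alpha * evidence

# Hypothetical per-interaction evidence, e.g. decoded from brain-related signals
evidence_trust = [0.2, 0.6, 0.7, 0.8, 0.9]     # outcomes of sequential ToM games
evidence_sympathy = [0.9, 0.1, 0.8, 0.2, 0.7]  # fluctuates with each interaction

trust, sympathy = 0.5, 0.5
for e_t, e_s in zip(evidence_trust, evidence_sympathy):
    trust = update_state(trust, e_t, tau=10.0)       # long time constant: integrates
    sympathy = update_state(sympathy, e_s, tau=1.5)  # short time constant: tracks quickly
    print(f"trust={trust:.2f}  sympathy={sympathy:.2f}")
```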
Citations: 0
Expert on Wheels: An Approach to Remote Collaboration
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814943
E. Vartiainen, Veronika Domova, M. Englund
Tools used for remote collaboration and assistance within the industrial sector have remained unchanged for a long time. We introduce a concept called "Expert on Wheels" as the next step in the development of remote collaboration tools. "Expert on Wheels" is a mobile tele-presence robot designed to support collaboration between a field worker and a remote expert. This paper presents an exploratory research study in which "Expert on Wheels" was evaluated with target users as well as in a lab environment. The results indicate that the system has potential but requires improvements in key areas such as the expert's situation awareness and the system's mobility as a whole. We conclude by discussing whether and how such systems could be accepted and useful in different industrial settings.
Citations: 7
Human-Robot Interaction using Intention Recognition
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2815002
Sangwook Kim, Zhibin Yu, Jonghong Kim, A. Ojha, Minho Lee
Recognition of human intention is an important issue in human-robot interaction research, as it allows a robot to respond adequately to a human's wishes. In this paper, we discuss how robots can infer human intention by learning affordances, a concept used to represent the relation between an agent and its environment. The robot's learning, aimed at understanding humans and their interaction with the environment, is achieved within the framework of the action-perception cycle. The action-perception cycle explains how an intelligent agent continuously learns and enhances its abilities by interacting with its surroundings. The proposed intention recognition and recommendation system includes several key functions, such as joint attention, object recognition, an affordance model, and a motion understanding module. The experimental results show high recognition performance and the plausibility of the proposed system.
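One way to picture how an affordance model can drive intention inference is to score candidate intentions from a recognized object and an observed motion. The sketch below is a minimal illustration with a hypothetical affordance table and scoring rule; it is not the authors' implementation.

```python
# Hypothetical affordance table: (object, motion) -> plausible intentions with weights
AFFORDANCES = {
    ("cup", "reach"):  {"drink": 0.7, "hand_over": 0.3},
    ("cup", "point"):  {"request": 0.8, "hand_over": 0.2},
    ("book", "reach"): {"read": 0.6, "hand_over": 0.4},
}

def infer_intention(obj, motion):
    """Return the most plausible intention for an (object, motion) pair.

    Captures the idea that the relation between an agent and its
    environment (the affordance) constrains which intentions are likely.
    """
    candidates = AFFORDANCES.get((obj, motion))
    if not candidates:
        return None, 0.0
    best = max(candidates, key=candidates.get)
    return best, candidates[best]

# Example: object recognition reports "cup", motion understanding reports "reach"
print(infer_intention("cup", "reach"))  # -> ('drink', 0.7)
```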
Citations: 3
EEG Analysis on 3D Navigation in Virtual Realty with Different Perspectives
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814982
Jooyeon Lee, Seong-eun Moon, Manri Cheon, Jong-Seok Lee
In this paper we explore the relations between navigation perspectives and electroencephalogram (EEG) signals in 3D virtual space. We analyze three types of navigation with EEG recordings and examine how the perspectives affect the electrical activity in users' brains. Via a small-scale experiment, we find that the influence of peripersonal space is altered by the perspective, and that this can be observed via EEG monitoring. These results have interesting implications for virtual reality applications in which a sense of agency or a peripersonal task plays an important role.
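As a rough illustration of this kind of analysis, the sketch below compares alpha-band power across three navigation perspectives using synthetic signals; the sampling rate, frequency band, and condition names are assumptions for the example, not the paper's actual protocol or data.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average power of `signal` in the [lo, hi] Hz band, estimated via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 250  # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(0)

# Synthetic stand-ins for EEG recorded under three navigation perspectives
conditions = {name: rng.standard_normal(fs * 10)
              for name in ("first_person", "third_person", "top_down")}

for name, eeg in conditions.items():
    print(f"{name}: alpha power (8-13 Hz) = {band_power(eeg, fs, 8, 13):.3f}")
```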
Citations: 0
Development of Intelligent Learning Tool for Improving Foreign Language Skills Based on EEG and Eye tracker
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814951
Jun-Su Kang, A. Ojha, Minho Lee
Recently, there has been tremendous development in educational content for foreign language learning. Following these trends, IT has supported educational content development through e-learning and broadcast media. However, conventional educational content is non-interactive, which impedes the provision of user-specific services. To develop a user-friendly language education tool, we propose an intelligent learning tool based on the user's eye movements and brain waves. By analyzing these features, the proposed system detects whether a given word is known or unknown to the user while learning a foreign language. It then searches for the word's meaning and provides a vocabulary list of unknown words to users in real time. The proposed model provides a tool that enables self-directed learning. We expect that the proposed system can improve users' learning achievements and satisfaction.
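The decision step described above can be pictured as a simple rule that combines gaze and EEG evidence per word and collects the words judged unknown; the feature names, thresholds, and dictionary lookup below are hypothetical placeholders, not the authors' classifier.

```python
# Hypothetical per-word features: fixation duration (ms) and a normalized EEG score
WORDS = [
    {"word": "serendipity", "fixation_ms": 620, "eeg_score": 0.81},
    {"word": "table",       "fixation_ms": 180, "eeg_score": 0.12},
    {"word": "ubiquitous",  "fixation_ms": 540, "eeg_score": 0.74},
]

def looks_unknown(sample, fix_thresh=400, eeg_thresh=0.5):
    """Flag a word as unknown when both gaze and EEG evidence exceed thresholds."""
    return sample["fixation_ms"] > fix_thresh and sample["eeg_score"] > eeg_thresh

def lookup_meaning(word):
    """Stand-in for a real dictionary lookup service."""
    return f"<definition of {word}>"

# Build the real-time vocabulary list of words judged unknown
vocabulary_list = [{"word": s["word"], "meaning": lookup_meaning(s["word"])}
                   for s in WORDS if looks_unknown(s)]
print(vocabulary_list)
```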
Citations: 6
Constructing the Corpus of Infant-Directed Speech and Infant-Like Robot-Directed Speech
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814965
Ryuji Nakamura, Kouki Miyazawa, H. Ishihara, Ken'ya Nishikawa, H. Kikuchi, M. Asada, R. Mazuka
The characteristics of the spoken language used to address infants have been studied intensively as part of language acquisition research. Because of the uncontrollability of infants, researchers have tried to reveal the features and roles of infant-directed speech (IDS) by comparing speech directed toward infants with speech directed toward other listeners. However, those other listeners share few characteristics with infants, while infants have many characteristics from which the features of IDS may derive. In this study, to address this problem, we introduce a new approach that replaces the infant with an infant-like robot designed so that its motions can be controlled and its appearance closely imitates a real infant. We have recorded both infant-directed and infant-like-robot-directed speech and are constructing corpora of both. Analysis of these corpora is expected to contribute to studies of infant-directed speech. In this paper, we discuss the contents of this approach and the outline of the corpora.
Citations: 1
Towards Industrial Robot Learning from Demonstration
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814984
W. Ko, Yan Wu, K. Tee, J. Buchli
Learning from demonstration (LfD) provides an easy and intuitive way to program robot behaviours, potentially reducing development time and costs tremendously. This is especially appealing for manufacturers interested in using industrial manipulators for high-mix production, since the technique enables fast and flexible modification of robot behaviours and is thus suitable for teaching the robot to perform a wide range of tasks regularly. We define a set of criteria to assess the applicability of state-of-the-art LfD frameworks in industry. A three-stage LfD method is then proposed, which incorporates human-in-the-loop adaptation to iteratively correct a batch-learned policy and improve accuracy and precision. The system then transitions to open-loop execution of the task to enhance production speed, by removing the human teacher from the feedback loop. The proposed LfD framework addresses all the criteria set out in this work.
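The three-stage flow can be sketched as follows: fit an initial policy to demonstrations, refine it with human corrections, then run it open loop. The mean-trajectory "policy" and correction rule below are deliberately simplistic stand-ins under stated assumptions, not the proposed method itself.

```python
def batch_learn(demonstrations):
    """Stage 1: fit an initial policy to recorded demonstrations.

    Here the 'policy' is just the element-wise mean trajectory; a real
    system would use a proper LfD model (e.g. a movement primitive).
    """
    n = len(demonstrations)
    return [sum(step) / n for step in zip(*demonstrations)]

def refine_with_human(policy, corrections, gain=0.5):
    """Stage 2: iteratively nudge the policy toward human-in-the-loop corrections."""
    for corr in corrections:
        policy = [p + gain * (c - p) for p, c in zip(policy, corr)]
    return policy

def execute_open_loop(policy):
    """Stage 3: run the refined policy without the human teacher in the loop."""
    for t, setpoint in enumerate(policy):
        print(f"t={t}: command setpoint {setpoint:.2f}")

# Illustrative 1-D trajectories (e.g. one joint angle over five time steps)
demos = [[0.0, 0.2, 0.5, 0.8, 1.0],
         [0.0, 0.3, 0.6, 0.9, 1.0]]
corrections = [[0.0, 0.25, 0.55, 0.85, 1.0]]

policy = refine_with_human(batch_learn(demos), corrections)
execute_open_loop(policy)
```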
Citations: 23
I-get: A Creativity Assistance Tool to Generate Perceptual Pictorial Metaphors
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2815006
A. Ojha, H. Lee, Minho Lee
We present our ongoing work on a creativity assistance tool called I-get. The tool is based on the hypothesis that perceptual similarity between a pair of images, at a subconscious level, plays a key role in generating creative conceptual associations and metaphorical interpretations. The tool "I-get" is designed to assist users to create novel ideas and metaphorical associations primed by algorithmic perceptual similarity between two images and alternative conceptual associations given by users.
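As a toy illustration of scoring perceptual similarity between a pair of images, the sketch below uses a color-histogram intersection; the similarity measure actually used by I-get is not described here, so this metric is purely an assumption.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel intensity histogram, concatenated and normalized to sum to 1."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(image.shape[-1])]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def perceptual_similarity(img_a, img_b):
    """Histogram intersection in [0, 1]; higher means more perceptually similar."""
    return float(np.minimum(color_histogram(img_a), color_histogram(img_b)).sum())

# Synthetic stand-ins for two candidate images (H x W x RGB)
rng = np.random.default_rng(1)
img_a = rng.integers(0, 256, size=(64, 64, 3))
img_b = rng.integers(0, 256, size=(64, 64, 3))
print(f"similarity: {perceptual_similarity(img_a, img_b):.2f}")
```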
Citations: 2
Smart Air Purification and Humidification by a Mobile Robot toward a Smart Home
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2815003
Jeong-Yean Yang, D. Kwon
Air purifiers and humidifiers, popular home appliances, serve a large working space by injecting and controlling air that spreads throughout a house. The air circulation inside a room limits the efficiency of air purification, and this efficiency varies with environmental factors such as room shape, temperature, furniture arrangement, and human movement. In this study, the mobility provided by mobile robot technology is combined with a conventional air purification function. This robotic appliance can improve air purification efficiency, which depends on air injection and filtration. In our experiments, the effectiveness of the mobility is verified, and the commercial growth potential of this new type of smart home service robot is discussed.
Citations: 1
Smart Cane: Face Recognition System for Blind
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814952
Yongsik Jin, Jonghong Kim, Bumhwi Kim, R. Mallipeddi, Minho Lee
We propose a smart cane with a face recognition system to help blind people recognize human faces. The system detects and recognizes faces around them, and the detection result is conveyed to the blind person through a vibration pattern. The proposed system is designed for real-time use and is equipped with a camera mounted on a pair of glasses, a vibration motor attached to the cane, and a mobile computer. The camera attached to the glasses sends images to the mobile computer, which extracts features from each image and detects faces using Adaboost. We use the modified census transform (MCT) descriptor for feature extraction. After face detection, information regarding the detected face image is gathered, and we use compressed sensing with the L2-norm as a classifier. The cane is equipped with a Bluetooth module and receives the person's identity from the mobile computer. The cane then generates a vibration pattern unique to each person to inform the blind person of the identity of the person detected by the camera. Hence, blind people can know who is standing in front of them.
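The identification and notification steps can be pictured as a nearest-match lookup over enrolled face features followed by a per-person vibration pattern. The gallery, threshold, and pattern encoding below are hypothetical, and the simple L2 nearest-neighbor rule only stands in for the paper's compressed-sensing classifier.

```python
import numpy as np

# Hypothetical enrolled face features (e.g. derived from MCT descriptors)
GALLERY = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

# Hypothetical per-person vibration patterns: (on_ms, off_ms, repetitions)
VIBRATION_PATTERNS = {
    "alice": (200, 100, 2),
    "bob":   (500, 200, 1),
    "unknown": (100, 100, 5),
}

def identify(face_feature, threshold=0.6):
    """Return the enrolled identity with the smallest L2 distance, or 'unknown'."""
    name, dist = min(((n, float(np.linalg.norm(face_feature - g)))
                      for n, g in GALLERY.items()), key=lambda item: item[1])
    return name if dist < threshold else "unknown"

def notify_cane(identity):
    """Stand-in for sending the vibration pattern to the cane over Bluetooth."""
    print(f"detected: {identity}, vibration pattern: {VIBRATION_PATTERNS[identity]}")

notify_cane(identify(np.array([0.85, 0.15, 0.35])))  # close to the 'alice' entry
```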
Citations: 27