
Proceedings of the 3rd International Conference on Human-Agent Interaction: Latest Publications

INTER: An App for Intercultural Communication
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2815008
Mengyue Li, Xiaoyi Hu, D. Tate, Jie Sun
To improve intercultural communication between foreigners and Chinese people in mainland China, an app called INTER was designed to help them find common communication topics. The app employs Questions & Answers (Q&A) and information push. A user interface was designed and preliminarily tested by 30 participants studying in mainland China who come from different cultural environments. During the test, participants were given an actual user interface and asked to fill in a questionnaire, followed by an interview to collect their user experience and their personal opinions on the function designs and the application flow chart. INTER is demonstrated as a digital intercultural communication tool that can be used as mobile phone software for interactive, accessible communication. The interface design will be further improved in future studies.
Citations: 0
A Fast Training Algorithm of Multiple-Timescale Recurrent Neural Network for Agent Motion Generation
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814986
Zhibin Yu, R. Mallipeddi, Minho Lee
Motion understanding and regeneration are two basic aspects of human-agent interaction. One important function of agents is to represent human activities. For better interaction with humans, robot agents should not only act on human orders but also be able to understand and even perform actions. The Multiple Timescale Recurrent Neural Network (MTRNN) is believed to be an efficient tool for robot action generation. In our previous work, we extended the concept of MTRNN and developed Supervised MTRNN for motion recognition. In this paper, we use a Conditional Restricted Boltzmann Machine (CRBM) to initialize Supervised MTRNN and accelerate its training. Experimental results show that our method greatly increases training speed without losing much performance.
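The CRBM-based initialization is not spelled out in the abstract; as a rough illustration of the general idea (pretraining a Boltzmann-machine layer with contrastive divergence and copying its weights into a recurrent network instead of starting from random values), here is a minimal CD-1 sketch. The toy data, layer sizes, and use of a plain RBM rather than a conditional one are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rbm(data, n_hidden, epochs=50, lr=0.05):
    """One-step contrastive divergence (CD-1) on binary units."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    for _ in range(epochs):
        # Positive phase: hidden activations driven by the data.
        h_prob = 1 / (1 + np.exp(-data @ W))
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one reconstruction step.
        v_prob = 1 / (1 + np.exp(-h_sample @ W.T))
        h_prob2 = 1 / (1 + np.exp(-v_prob @ W))
        # CD-1 weight update.
        W += lr * (data.T @ h_prob - v_prob.T @ h_prob2) / len(data)
    return W

# Toy motion-like binary data: 100 frames, 20 sensor channels.
frames = (rng.random((100, 20)) < 0.3).astype(float)

# Pretrain, then copy the weights into the input-to-hidden matrix
# of a recurrent network instead of using a random initialization.
W_pretrained = train_rbm(frames, n_hidden=16)
W_input_to_hidden = W_pretrained.copy()   # RNN initialization
print(W_input_to_hidden.shape)            # (20, 16)
```

The pretrained weights already encode coarse structure of the motion data, which is the mechanism by which such initialization can shorten subsequent supervised training.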
Citations: 1
Flying Frustum: A Spatial Interface for Enhancing Human-UAV Awareness
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814956
Nico Li, S. Cartwright, A. Nittala, E. Sharlin, M. Sousa
We present Flying Frustum, a 3D spatial interface that enables control of semi-autonomous UAVs (Unmanned Aerial Vehicles) using pen interaction on a physical model of the terrain, and that spatially situates the information streaming from the UAVs onto the physical model. Our interface is based on a 3D printout of the terrain, which allows the operator to enter goals and paths to the UAV by drawing them directly on the physical model. In turn, the UAV's streaming reconnaissance information is superimposed on the 3D printout as a view frustum, which is situated according to the UAV's position and orientation on the actual terrain. We argue that Flying Frustum's 3D spatially situated interaction can potentially help improve human-UAV awareness and enhance the overall situational awareness. We motivate our design approach for Flying Frustum, discuss previous related work in CSCW and HRI, present our preliminary prototype using both handheld and headset augmented reality interfaces, reflect on Flying Frustum's strengths and weaknesses, and discuss our plans for future evaluation and prototype improvements.
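As an illustration of how a view frustum can be situated from a UAV's pose, the sketch below computes the ground footprint of a straight-down camera from the UAV's position, altitude, and yaw. The field-of-view values and the nadir-camera assumption are hypothetical, not details from the paper.

```python
import math

def ground_footprint(x, y, alt, yaw_deg, hfov_deg=60.0, vfov_deg=45.0):
    """Corners of the ground rectangle seen by a straight-down camera
    at (x, y, alt), rotated by the UAV's yaw: the base of the view
    frustum that a Flying-Frustum-style display situates on the
    terrain model."""
    half_w = alt * math.tan(math.radians(hfov_deg) / 2)
    half_h = alt * math.tan(math.radians(vfov_deg) / 2)
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    corners = []
    for dx, dy in [(-half_w, -half_h), (half_w, -half_h),
                   (half_w, half_h), (-half_w, half_h)]:
        # Rotate the local corner offset by yaw, then translate.
        corners.append((x + dx * c - dy * s, y + dx * s + dy * c))
    return corners

print(ground_footprint(x=0.0, y=0.0, alt=10.0, yaw_deg=0.0))
```

As the UAV's pose stream updates, recomputing this footprint (and the apex at the UAV position) keeps the superimposed frustum registered to the 3D-printed terrain.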
Citations: 22
Face Recognition by Using SURF Features with Block-Based Bag of Feature Models
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2815005
Ahmed Salem, T. Ozeki
Face recognition identifies humans by their face images and has recently become one of the most common applications in information security. The bag-of-features approach has been successfully applied to face recognition. In our research we use SURF features and try to improve performance with a block-based bag-of-features model: we partition the image into multiple blocks and extract SURF features densely within each block. We compare the performance of the original bag-of-features model using the grid/detector method against the block-based bag-of-features model.
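A minimal sketch of the block-based bag-of-features idea: partition the image into a grid of blocks, extract dense local descriptors in each block, quantize them against a codebook, and concatenate the per-block histograms. Flattened raw patches stand in for SURF descriptors, and a random codebook replaces a learned one; both are simplifying assumptions made only to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

def dense_patch_descriptors(block, patch=4):
    """Flattened patches on a dense grid (stand-in for dense SURF)."""
    h, w = block.shape
    descs = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            descs.append(block[y:y+patch, x:x+patch].ravel())
    return np.array(descs)

def block_bof_histogram(image, codebook, grid=2):
    """Concatenate one bag-of-features histogram per image block."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    hists = []
    for by in range(grid):
        for bx in range(grid):
            block = image[by*bh:(by+1)*bh, bx*bw:(bx+1)*bw]
            descs = dense_patch_descriptors(block)
            # Assign each descriptor to its nearest codeword.
            d = ((descs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            words = d.argmin(axis=1)
            hist = np.bincount(words, minlength=len(codebook)).astype(float)
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)

# Toy 32x32 "face" image and a random 8-word codebook of 4x4 patches.
image = rng.random((32, 32))
codebook = rng.random((8, 16))
feature = block_bof_histogram(image, codebook)
print(feature.shape)  # (32,) = 2x2 blocks x 8 codewords
```

Because the histograms are kept per block rather than pooled over the whole image, the final feature preserves coarse spatial layout, which is what the block-based variant adds over a global bag of features.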
Citations: 2
Personification Aspect of Conversational Agents as Representations of a Physical Object
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814983
Akihito Yoshii, T. Nakajima
Through computer technologies, the physical world in which we live and the virtual worlds generated by computers or popular culture are coming closer together. Virtual agents are often designed with reference to humans in the physical world; at the same time, physical-world products or services convey stories and emotions through characters. Computers can adjust virtual representations in the physical world based on the information they possess, and this adjustment leads to perceived personification. In this paper, we discuss the perception of personification and its possible application to persuasion. We developed a prototype application and then conducted surveys and a task-based user study. The results suggest the possibility of persuasion by personified agents superimposed close to an object.
Citations: 2
Humor Utterance Generation for Non-task-oriented Dialogue Systems
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814966
Shohei Fujikura, Yoshito Ogawa, H. Kikuchi
We propose a humor utterance generation method that is compatible with dialogue systems, with the aim of increasing the user's "desire of continuing dialogue". The dialogue system retrieves leading-item:noun pairs from Twitter as knowledge and attempts to select the most humorous reply using word similarity, on the basis that incongruity can be explained by the incongruity-resolution model. We account for differences among individuals and confirm the validity of the proposed method. Experimental results indicate that, under a limited condition, high-incongruity replies are significantly more effective than low-incongruity replies.
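Assuming word similarity is used to score incongruity, a toy selection rule might pick the candidate reply least similar to the topic word. The word vectors and vocabulary below are made up for illustration; a real system would use embeddings learned from a corpus such as the Twitter data mentioned above, and would also need to check that the incongruity remains resolvable.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical word vectors for illustration only.
vec = {
    "coffee":   np.array([0.90, 0.10, 0.00]),
    "tea":      np.array([0.80, 0.20, 0.10]),
    "espresso": np.array([0.85, 0.15, 0.05]),
    "penguin":  np.array([0.05, 0.90, 0.30]),
}

def pick_incongruous_reply(topic, candidates):
    """Pick the candidate least similar to the topic word: the
    incongruity-resolution model suggests that low-similarity (but
    still resolvable) pairings read as more humorous."""
    return min(candidates, key=lambda w: cosine(vec[topic], vec[w]))

print(pick_incongruous_reply("coffee", ["tea", "espresso", "penguin"]))
# -> penguin
```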
Citations: 1
Inferring Affective States by Involving Simple Robot Movements
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814959
Genta Yoshioka, Yugo Takeuchi
Humans estimate the emotions of others from voices, expressions, and gestures, even though such expressions comprise many complicated elements. How we estimate the emotional state of a simple moving entity remains relatively unclear. This paper investigates how humans attribute emotional states to changes in the parameters of simple movements. We use a simple disk-shaped robot that only moves on the floor and expresses its emotional states through movements and movement parameters based on Russell's circumplex model. We observed the physical interaction between humans and our robot in an experiment where participants sought a treasure in a given field, and confirmed that humans infer emotional states from movements that can be changed by simple parameters. This result will contribute to the basic design of HRI.
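One plausible parameterization (hypothetical, not the authors' mapping) of a point on Russell's circumplex into movement parameters: arousal drives speed, and valence drives path smoothness.

```python
import math

def movement_params(valence, arousal):
    """Map a point on Russell's circumplex (valence, arousal in
    [-1, 1]) to movement parameters: speed scales with arousal,
    path smoothness with valence (jerky when negative)."""
    speed = 0.5 + 0.5 * arousal        # 0..1, faster when aroused
    smoothness = 0.5 + 0.5 * valence   # 0..1, jagged when negative
    # Heading jitter in radians: more jitter for less smooth motion.
    jitter = (1.0 - smoothness) * math.pi / 4
    return {"speed": speed, "smoothness": smoothness, "jitter_rad": jitter}

print(movement_params(valence=-0.8, arousal=0.9))   # "angry": fast, jerky
print(movement_params(valence=0.7, arousal=-0.5))   # "relaxed": slow, smooth
```

A disk robot driven this way needs only a speed setpoint and periodic heading perturbations, which matches the paper's premise that very simple parameter changes suffice to convey affect.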
Citations: 3
Pictogram Generator from Korean Sentences using Emoticon and Saliency Map
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814991
Jihun Kim, A. Ojha, Yongsik Jin, Minho Lee
A picture is worth a thousand words. With changing lifestyles and technological advancement, visual or pictorial communication is often preferred. We present a system that generates a pictogram for simple Korean sentences. The final pictogram integrates information about the object (about which something is said), the background (the environment), and the emotion of the user. The proposed system is divided into two parts. The first is the registration part, which saves the user's personal information and face image. The second part searches for images corresponding to the words, downloads them, and finally integrates all of them, along with the user's emotion, into a single pictogram.
Citations: 3
Enabling Disaster Early Warning via a Configurable Data Collection Framework and Real-time Analytics
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2815014
Young-Woo Kwon, Seungwon Yang, Haeyong Chung
The detection and prediction of natural catastrophes or man-made disasters before they occur has recently shone a light on several relatively new technologies. Owing to the significant development of mobile hardware and software technologies, the smartphone has become an important device for detecting and warning about such disasters. Specifically, disaster-related data can be collected from diverse sources, including smartphone sensors and social networks, and the collected data are then analyzed to detect disasters and alert people to them. These collective data give a user access to a variety of essential information related to disaster events. Taking a communicable disease outbreak as an example, such information helps to identify and detect the ground zero of a disaster, as well as to make sense of its means of transmission, progress, and patterns. In this paper, we discuss a novel approach for analyzing and interacting with collective sensor data in a visual, real-time, and scalable fashion, offering diverse perspectives and data management components.
Citations: 0
Do Infants Consider a Robot as a Social Partner in Collaborative Activity?
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814953
Yun-hee Park, S. Itakura, A. Henderson, T. Kanda, Naoki Furuhata, H. Ishiguro
Human infants can regard humans as collaborative agents, but little is known about whether they also recognize robots as collaborative agents. This study investigated how infants understand robot agents while watching collaborative interactions between a human and a robot. We presented a novel visual habituation paradigm in which a human and a robot performed a collaborative activity for 13-month-old infants. Our findings suggest that 13-month-olds can appreciate robots as collaborative partners. We interpret infants' expectancy-violation responses to the robot's actions as facilitating their understanding of nonhuman agents as social partners.
Citations: 0