Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673296
Samuel Spaulding, C. Breazeal
In this paper we present additional results from a prior study of speech-based games to promote early literacy skills through child-robot interaction [6]. The additional data and results support our original conclusion that pronunciation analysis software can be an effective enabler of speech-based child-robot interactions. We also include a comparison of other pronunciation services: an updated version of the SpeechAce API and a new technology from Soapbox Labs. We reflect on some lessons learned and introduce a redesigned version of the game interaction, called ‘RhymeRacer’, based on the results and observations from both data collections.
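As a concrete illustration of the kind of check a rhyming game such as RhymeRacer needs, a common approach is: two words rhyme when their phonemes match from the last stressed vowel onward. The tiny ARPAbet lexicon and function names below are illustrative assumptions, not part of the paper; a real system would draw pronunciations from a full dictionary such as CMUdict and compare them against the recognizer's output.

```python
# Minimal rhyme check: words rhyme if their phoneme sequences agree from
# the last stressed vowel (ARPAbet stress marker "1") to the end.
# This mini lexicon is illustrative only.
LEXICON = {
    "cat": ["K", "AE1", "T"],
    "hat": ["HH", "AE1", "T"],
    "dog": ["D", "AO1", "G"],
}

def rhyme_part(phones):
    """Return the phonemes from the last stressed vowel to the end."""
    for i in range(len(phones) - 1, -1, -1):
        if phones[i].endswith("1"):
            return phones[i:]
    return phones  # no stressed vowel found; compare whole word

def rhymes(word_a, word_b):
    """True if the two words share the same rhyme part."""
    return rhyme_part(LEXICON[word_a]) == rhyme_part(LEXICON[word_b])
```

With this sketch, `rhymes("cat", "hat")` holds while `rhymes("cat", "dog")` does not.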
"Pronunciation-Based Child-Robot Game Interactions to Promote Literacy Skills," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 554-555.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673311
Jaclyn A. Barnes, S. M. Fakhrhosseini, Eric Vasey, Joseph D. Ryan, C. Park, M. Jeon
In an eight-week STEAM education program for elementary school children, kids worked on musical theater projects with a variety of robots. The program included four modules: acting, dancing, music & sounds, and drawing. Twenty-five children in grades K-5 participated. The children were excited by the program, and they demonstrated collaboration and peer-to-peer interactive learning. In the future, we plan to add more robust interaction and more science and engineering experiences to the program. This program is expected to promote STEM education in informal learning environments by combining it with arts and design.
"Promoting STEAM Education with Child-Robot Musical Theater," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), p. 366.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673163
Meimei Zheng, Yingying She, Fang Liu, Jin Chen, Yang Shu, J. Xiahou
BabeBay is a companion robot for children with real-time multimodal affective computing capabilities. Accurate and effective affective fusion gives BabeBay the adaptability to tailor its interactions to different children in different emotional states. Furthermore, the corresponding cognitive computing and robot behaviors can be enhanced toward personalized companionship.
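One common way to realize such multimodal affective fusion is a confidence-weighted late fusion of per-modality emotion distributions. The sketch below is an illustrative assumption, not BabeBay's published model; the emotion set and the `fuse_modalities` name are hypothetical.

```python
import numpy as np

# Hypothetical emotion categories for the sketch.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse_modalities(scores):
    """Confidence-weighted late fusion.

    `scores` maps a modality name (e.g. "face", "voice") to a pair
    (distribution over EMOTIONS, confidence in [0, 1]). Returns a single
    normalized distribution as a dict.
    """
    fused = np.zeros(len(EMOTIONS))
    total = 0.0
    for dist, conf in scores.values():
        fused += conf * np.asarray(dist, dtype=float)
        total += conf
    fused /= total  # renormalize by total confidence
    return dict(zip(EMOTIONS, fused))
```

For example, a face channel confident the child is happy and a quieter voice channel voting neutral fuse into a distribution dominated by "happy".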
"BabeBay-A Companion Robot for Children Based on Multimodal Affective Computing," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 604-605.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673329
B. Gromov, Jérôme Guzzi, L. Gambardella, A. Giusti
We demonstrate a system to control robots in the user's proximity with pointing gestures, a natural device that people use all the time to communicate with each other. Our setup consists of a miniature quadrotor Crazyflie 2.0, a wearable inertial measurement unit MetaWearR+ mounted on the user's wrist, and a laptop as the ground control station. The video of this demo is available at https://youtu.be/yafy-HZMk_U [1].
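A pointing gesture from a wrist-mounted IMU can be grounded by rotating the sensor's forward axis into the world frame and intersecting the resulting ray with the floor plane. The sketch below illustrates that geometry under stated assumptions (ray along the sensor's +x axis, known wrist position); it is not the authors' implementation.

```python
import numpy as np

def pointing_target(origin, quat, ground_z=0.0):
    """Intersect a pointing ray with the ground plane z = ground_z.

    origin: 3-vector, wrist position in the world frame (meters).
    quat:   (w, x, y, z) unit quaternion from the wrist IMU; the ray is
            assumed to leave the wrist along the sensor's +x axis.
    Returns the 3D intersection point, or None if the ray does not hit
    the ground (points level or upward).
    """
    w, x, y, z = quat
    # First column of the rotation matrix R(q): the rotated +x axis.
    direction = np.array([
        1.0 - 2.0 * (y * y + z * z),
        2.0 * (x * y + z * w),
        2.0 * (x * z - y * w),
    ])
    if direction[2] >= -1e-6:  # parallel to or away from the ground
        return None
    t = (ground_z - origin[2]) / direction[2]
    return origin + t * direction
```

With the wrist 1.4 m above the floor and a 45-degree downward pitch, the ray lands 1.4 m ahead of the user, which matches the intuition that pointing steeper brings the target closer.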
"Demo: Pointing Gestures for Proximity Interaction," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), p. 665.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673241
Marlena R. Fraune, Steven Sherrin, S. Šabanović, Eliot R. Smith
As robots, both individually and in groups, become more prevalent in everyday contexts (e.g., schools, workplaces, educational and caregiving institutions), it is possible that they will be perceived as outgroups or come into competition with humans for resources. Research indicates that some of the psychological effects of intergroup interaction common among humans translate to human-robot interaction (HRI). In this paper, we examine how intergroup competition, like that among humans, translates to HRI. Specifically, we examined how Number of Humans (1, 3) and Number of Robots (1, 3) affect behavioral competition on dilemma tasks and survey ratings of perceived threat, emotion, and motivation (fear, greed, and outperformance). We also examined the effect of perceived group entitativity (i.e., cohesiveness) on competition motivation. Consistent with the social psychology literature, the results indicate that groups of humans (especially entitative groups) showed more greed-based motivation and competition toward robots than individual humans did. However, we found no evidence that the number of robots affected fear-based motivation or competition against them unless the robot groups were perceived as highly entitative. Our data also show the intriguing finding that participants displayed more fear of, and competed slightly more against, robots that matched their own number. Future research should examine this novel pattern of results more deeply against one-on-one HRI and typical group dynamics in social psychology.
"Is Human-Robot Interaction More Competitive Between Groups Than Between Individuals?," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 104-113.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673109
W. Johal, A. Sandygulova, J. D. Wit, M. Haas, B. Scassellati
The Robots for Learning workshop series aims to advance research topics related to the use of social robots in educational contexts. This year's half-day workshop follows on previous events at Human-Robot Interaction conferences focusing on efforts to design, develop, and test new robotic systems that help learners. This 5th edition of the workshop will focus in particular on the potential use of robots for adaptive learning. In recent years, inclusive education has been a key policy in a number of countries, aiming to provide equal chances and common ground to all. In this workshop, we aim to discuss strategies for designing robotic systems able to adapt to learners' abilities, provide assistance, and demonstrate long-term learning effects.
"Robots for Learning - R4L: Adaptive Learning," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 693-694.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673175
Yuki Okafuji, Jun Baba, Junya Nakanishi
Face-to-face contact with humans is an important functional behavior for humanoid robots. To achieve natural behavior similar to communication between people, we predict the future position of the user's face, and the robot gazes at the predicted point to compensate for mechanical delay. The proposed face-to-face contact system reduces this delay.
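The simplest form of such look-ahead is a constant-velocity extrapolation over the robot's known actuation delay. The class below is a hedged sketch of that idea, assuming face positions arrive at a fixed camera frame rate; the abstract does not specify the authors' prediction model, and all names here are hypothetical.

```python
import numpy as np

class FacePredictor:
    """Constant-velocity face-position predictor.

    Estimates velocity from consecutive detections and projects the face
    position forward by the robot's mechanical delay, so the gaze target
    leads the moving face instead of lagging behind it.
    """

    def __init__(self, dt, delay):
        self.dt = dt        # seconds between face detections
        self.delay = delay  # actuation delay to compensate (seconds)
        self.prev = None    # last observed position

    def predict(self, position):
        position = np.asarray(position, dtype=float)
        if self.prev is None:
            self.prev = position
            return position  # no velocity estimate yet
        velocity = (position - self.prev) / self.dt
        self.prev = position
        return position + velocity * self.delay
```

A face moving at 0.1 m/s with a 0.2 s delay is thus targeted 2 cm ahead of its last detected position; a real system would likely smooth the velocity estimate (e.g., with a Kalman filter) before extrapolating.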
"Face-to-Face Contact Method for Humanoid Robots Using Face Position Prediction," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), p. 666.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673122
Yasuto Nakanishi
This paper introduces DataDrawingDroid, a wheeled robot that visualizes data by drawing data-driven generative art onto a floor. In our user study, 24 participants watched videos of three types of data drawing. T-tests on five-point Likert-scale ratings indicated that the robot attracted participants and suggested the importance of balancing functionality and aesthetics.
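Turning a data series into a drawable floor path can be as simple as mapping sample index to one axis and normalized value to the other, producing waypoints for the robot's planner. The sketch below illustrates that mapping; it is not DataDrawingDroid's actual generative algorithm, and the function name is hypothetical.

```python
def data_to_path(values, width, height):
    """Map a 1-D data series to floor waypoints for a drawing robot.

    Sample index is spread evenly across `width` (meters) and each value
    is min-max normalized into [0, height]. Returns a list of (x, y)
    waypoints tracing the series as a polyline.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat series
    n = len(values)
    return [
        (i * width / (n - 1), (v - lo) / span * height)
        for i, v in enumerate(values)
    ]
```

A generative layer could then perturb or stylize these waypoints, which is where the functionality/aesthetics trade-off noted in the study comes in.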
"DataDrawingDroid: A Wheel Robot Drawing Planned Path as Data-Driven Generative Art," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 536-537.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673066
Jangwon Lee
Nearly ten years after “Keepon” appeared at HRI, a question arises: what about the music used in the video “Keepon goes Seoul-searching?” The song “superfantastic,” with the lyric “possibility, it's a mystery,” was written in 2005 by peppertones, a Korean duo celebrating their fifteenth anniversary in 2019. Superfantastic is a song of hope, inspired by the concerns and worries the band had while starting a career in popular music, and it conveys a message to “keep on dreaming,” as “your biggest dreams, they might come to reality.” This talk shares life stories of uncertainty: how the band started, what unexpected outcomes it has witnessed, which decisions led its members to this point, and what issues they currently face. As one member of the band is also involved in computer music research focusing on mobile music interaction, the talk also covers current research topics and what it is like to live as a multidisciplinary person spanning popular music, television, and computer music research.
"Possibility, It's a Mystery: How Keepon's Video Brought Me Here," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), p. 304.
Pub Date: 2019-03-11 · DOI: 10.1109/HRI.2019.8673228
I. Cardenas, Kelsey A. Vitullo, Michelle Park, Jong-Hoon Kim, Margarita Benitez, Chanjuan Chen, Linda Ohrn-McDaniels
Telepresence takes place when a user is afforded the experience of being in a remote environment or virtual world through the use of immersive technologies. Such technologies encompass a humanoid robot and a control apparatus that tracks the operator's movements while providing sufficient sensory feedback. This paper considers the control mechanisms that afford telepresence, the requirements for continuous or extended telepresence control, and the health implications of engaging in complex, time-constrained tasks. We present Telesuit, a full-body telepresence control system for operating a humanoid telepresence robot. The suit is part of a broader system that considers the constraints of controlling a dexterous bimanual robotic torso and the need for modular hardware and software that allow for high-fidelity immersiveness. It incorporates a health-monitoring system that collects information such as respiratory effort, galvanic skin response, and heart rate. The platform leverages this information to adjust the telepresence experience and apply control modalities for autonomy. Furthermore, the design of the Telesuit garment considers both functionality and aesthetics.
"Telesuit: An Immersive User-Centric Telepresence Control Suit," 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 654-655.