Augmented Reality interface to verify Robot Learning
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223502
Maximilian Diehl, Alexander Plopski, H. Kato, Karinne Ramirez-Amaro
Teaching robots new skills is considered an important aspect of Human-Robot Collaboration (HRC). One challenge is that robots cannot communicate feedback in the same ways humans do. This decreases trust in robots, since it is difficult to judge, before the actual execution, whether the robot has learned the task correctly. In this paper, we introduce an Augmented Reality (AR) based visualization tool that allows humans to verify the taught behavior before its execution. Our verification interface displays a virtual simulation embedded in the real environment, temporally coupled with a semantic description of the current action. We developed three designs based on different interface/visualization-technology combinations to explore the potential benefits of enhanced simulations using AR over traditional simulation environments such as RViz. We conducted a user study with 18 participants to assess the effectiveness of the proposed visualization tools regarding error-detection capabilities. One advantage of the AR interfaces is that they provide more realistic feedback than traditional simulations, at a lower cost, since the entire environment does not have to be modeled.
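To picture the coupling described above, here is a minimal, hypothetical sketch (not the authors' implementation; all names, actions, and durations are invented) of a playback loop that steps a virtual robot through a learned behavior while showing the matching semantic description in sync:

```python
# Minimal sketch of coupling simulated action playback with semantic
# descriptions: the virtual robot steps through each learned action while
# the matching text label is shown to the user.

import time

# A learned behavior as a sequence of (semantic description, duration in s);
# the actions and durations here are illustrative, not from the paper.
learned_behavior = [
    ("reach towards the cup", 2.0),
    ("grasp the cup", 1.0),
    ("move the cup above the shelf", 3.0),
    ("release the cup", 1.0),
]

def play_back(behavior, render_step, show_caption, dt=0.1):
    """Step through each action, updating the virtual robot and the caption."""
    for description, duration in behavior:
        show_caption(description)  # semantic description, shown in sync
        t = 0.0
        while t < duration:
            render_step(description, t / duration)  # advance the simulation
            time.sleep(dt)
            t += dt

# Stand-ins for the rendering back end, which the paper realizes in AR/RViz.
play_back(learned_behavior,
          render_step=lambda action, progress: None,
          show_caption=lambda action: print(f"[AR caption] {action}"))
```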
{"title":"Augmented Reality interface to verify Robot Learning","authors":"Maximilian Diehl, Alexander Plopski, H. Kato, Karinne Ramirez-Amaro","doi":"10.1109/RO-MAN47096.2020.9223502","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223502","url":null,"abstract":"Teaching robots new skills is considered as an important aspect of Human-Robot Collaboration (HRC). One challenge is that robots cannot communicate feedback in the same ways as humans do. This decreases the trust towards robots since it is difficult to judge, before the actual execution, if the robot has learned the task correctly. In this paper, we introduce an Augmented Reality (AR) based visualization tool that allows humans to verify the taught behavior before its execution. Our verification interface displays a virtual simulation embedded into the real environment, timely coupled with a semantic description of the current action. We developed three designs based on different interface/visualization-technology combinations to explore the potential benefits of enhanced simulations using AR over traditional simulation environments like RViz. We conducted a user study with 18 participants to assess the effectiveness of the proposed visualization tools regarding error detection capabilities. One of the advantages of the AR interfaces is that they provide more realistic feedback than traditional simulations with a lower cost of not having to model the entire environment.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"33 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123806411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Increasing Engagement with Chameleon Robots in Bartending Services
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223488
Silvia Rossi, Elena Dell’Aquila, Davide Russo, Gianpaolo Maggi
As the field of service robotics has been growing rapidly, such robots are expected to be endowed with the appropriate capabilities to interact with humans in a socially acceptable way. This is particularly relevant for customer relationships, where a positive and affective interaction has an impact on the user's experience. In this paper, we address the question of whether a specific behavioral style of a barman-robot, expressed through para-verbal and non-verbal behaviors, can affect users' engagement and the creation of positive emotions. To that end, we endowed a barman-robot that takes drink orders from human customers with an empathic behavioral style, which aims at triggering the alignment process by mimicking the conversation partner's behavior. This behavioral style is compared with an entertaining style, which aims at creating a positive relationship with the users, and with a neutral style serving as a control. Results suggest that when participants experienced more positive emotions, the robot was perceived as safer, suggesting that interactions that stimulate positive and open relations with the robot may have a positive impact on the affective dimension of engagement. Indeed, when the empathic robot modulates its behavior according to the user's, the interaction seems more effective in improving engagement and positive emotions in public-service contexts than interaction with a neutral robot.
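The alignment mechanism mentioned above can be pictured with a small, hypothetical sketch (the gesture set, delay range, and function names are assumptions, not the authors' system): the robot mirrors a detected partner behavior after a short, natural-looking delay rather than copying it instantly:

```python
# Illustrative sketch (not the authors' implementation) of the "chameleon"
# alignment idea: mirror the partner's detected non-verbal behavior after a
# short random delay so the mimicry does not look mechanical.

import random

# Hypothetical set of behaviors the robot is able to mirror.
MIRRORABLE = {"nod", "lean_forward", "smile"}

def empathic_response(detected_gesture, delay_range=(1.0, 3.0)):
    """Return (gesture to mimic, delay in seconds), or None if not mirrorable."""
    if detected_gesture not in MIRRORABLE:
        return None
    delay = random.uniform(*delay_range)  # avoid instant, robotic copying
    return detected_gesture, delay

print(empathic_response("nod"))    # e.g. ('nod', 2.3)
print(empathic_response("frown"))  # None: not in the mirrorable set
```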
{"title":"Increasing Engagement with Chameleon Robots in Bartending Services","authors":"Silvia Rossi, Elena Dell’Aquila, Davide Russo, Gianpaolo Maggi","doi":"10.1109/RO-MAN47096.2020.9223488","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223488","url":null,"abstract":"As the field of service robotics has been rapidly growing, it is expected for such robots to be endowed with the appropriate capabilities to interact with humans in a socially acceptable way. This is particularly relevant in the case of customer relationships where a positive and affective interaction has an impact on the users’ experience. In this paper, we address the question of whether a specific behavioral style of a barman-robot, acted through para-verbal and non-verbal behaviors, can affect users’ engagement and the creation of positive emotions. To that end, we endowed a barman-robot taking drink orders from human customers, with an empathic behavioral style. This aims at triggering to alignment process by mimicking the conversation partner’s behavior. This behavioral style is compared to an entertaining style, aiming at creating a positive relationship with the users, and a neutral style for control. Results suggest that when participants experienced more positive emotions, the robot was perceived as safer, so suggesting that interactions that stimulate positive and open relations with the robot may have a positive impact on the affective dimension of engagement. Indeed, when the empathic robot modulates its behavior according to the user’s one, this interaction seems to be more effective than when interacting with a neutral robot in improving engagement and positive emotions in public-service contexts.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124722714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion Trajectory Estimation of a Flying Object and Optimal Reduced Impact Catching by a Planar Manipulator*
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223443
Min Set Paing, Enock William Nshama, N. Uchiyama
Throwing and catching are fundamental motions for human beings and may be applied to advanced human-robot collaborative tasks. Since catching is more difficult for a robot than throwing, this study deals with reduced-impact catching of a flying object by a planar manipulator. The estimation of the object's trajectory is improved by a Kalman filter, and least-squares fitting is proposed to accurately predict the catching time, position, and velocity for the manipulator. To achieve reduced-impact catching, the minimization of the total impact force in the x- and y-directions is formulated as an optimization problem. A fifth-degree non-periodic B-spline curve is implemented to achieve smooth and continuous trajectories in the joint space. The effectiveness of the proposed approaches is demonstrated by simulation and experiment.
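To make the prediction step concrete, here is a minimal sketch, under assumed ballistic dynamics and with invented variable names, of least-squares fitting of a flying object's trajectory and the resulting catch-time/position prediction (the paper additionally refines estimates with a Kalman filter and plans fifth-degree B-spline joint trajectories, which are not reproduced here):

```python
# Fit a ballistic model to noisy position observations by least squares,
# then solve for the time at which the object crosses a chosen catch height.

import numpy as np

g = 9.81  # gravitational acceleration [m/s^2]

def fit_ballistic(t, x, y):
    """Least-squares fit of x(t) = x0 + vx*t and y(t) = y0 + vy*t - 0.5*g*t^2."""
    A = np.column_stack([np.ones_like(t), t])
    x0, vx = np.linalg.lstsq(A, x, rcond=None)[0]
    y0, vy = np.linalg.lstsq(A, y + 0.5 * g * t**2, rcond=None)[0]
    return x0, vx, y0, vy

def predict_catch(params, y_catch):
    """Predict catch time, position, and object velocity at height y_catch."""
    x0, vx, y0, vy = params
    # Solve y0 + vy*t - 0.5*g*t^2 = y_catch; take the later (descending) root.
    t_c = (vy + np.sqrt(vy**2 - 2 * g * (y_catch - y0))) / g
    return t_c, x0 + vx * t_c, (vx, vy - g * t_c)

# Noisy synthetic observations of a thrown object (illustrative values).
t = np.linspace(0.0, 0.3, 15)
x = 0.5 + 2.0 * t + np.random.normal(0, 0.005, t.size)
y = 1.0 + 3.0 * t - 0.5 * g * t**2 + np.random.normal(0, 0.005, t.size)

t_c, x_c, v_c = predict_catch(fit_ballistic(t, x, y), y_catch=0.8)
print(f"catch at t={t_c:.3f} s, x={x_c:.3f} m, object velocity={v_c}")
```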
{"title":"Motion Trajectory Estimation of a Flying Object and Optimal Reduced Impact Catching by a Planar Manipulator*","authors":"Min Set Paing, Enock William Nshama, N. Uchiyama","doi":"10.1109/RO-MAN47096.2020.9223443","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223443","url":null,"abstract":"Throwing and catching are fundamental motions for human beings, and may be applied for advanced human and robot collaborative tasks. Since catching motion is more difficult than throwing for a robot, this study deals with reduced impact catching of a flying object by a planar manipulator. The estimation of the object's trajectory is improved by the Kalman filter and the least squares fitting is proposed to accurately predict the catching time, position and velocity of the manipulator. To achieve reduced impact catching, the minimization of the total impact force in x and y-directions is proposed as an optimization problem. The fifth degree non-periodic B-spline curve is implemented to achieve smooth and continuous trajectories in the joint space. The effectiveness of the proposed approaches are demonstrated by simulation and experiment.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124934564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards An Affective Robot Companion for Audiology Rehabilitation: How Does Pepper Feel Today?
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223534
Pinar Uluer, Hatice Kose, B. Oz, Turgut Can Aydinalev, D. Erol
The motivation of this work is to develop an affective robot companion for audiology rehabilitation and to test the system with deaf or hard-of-hearing children. Two robot modules are developed: a multimodal stress/emotion/motivation recognition module that lets the robot "understand" how the children feel, and a behaviour-and-feedback module that shows the children how the robot "feels". In this study, we focus only on the behaviour-and-feedback module. The selected affective/affirmative behaviours are tested by means of tablet games and employed on the robot as a feedback mechanism during an audiology test. Facial data are used together with surveys to evaluate the children's perception of the robot and the behaviour set.
{"title":"Towards An Affective Robot Companion for Audiology Rehabilitation: How Does Pepper Feel Today?","authors":"Pinar Uluer, Hatice Kose, B. Oz, Turgut Can Aydinalev, D. Erol","doi":"10.1109/RO-MAN47096.2020.9223534","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223534","url":null,"abstract":"The motivation of this work is to develop an affective robot companion for audiology rehabilitation and to test the system with the deaf or hard of hearing children. Two robot modules are developed which are the multimodal \"stress/emotion/motivation\" recognition module for the robot to \"understand\" how the children feel, and behaviour and feedback module of the robot to show the children how the robot \"feels\". In this study we only focus on the behaviour and feedback module of the robot. The selected affective/affirmative behaviours are tested by means of tablet games and employed on the robot during an audiology test, as a feedback mechanism. Facial data are used together with the surveys to evaluate the children’s perception of the robot and the behaviour set.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125339827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating Taste-liking with a Humanoid Robot Facilitator
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223611
Zhuoni Jie, H. Gunes
Tasting is an essential activity in our daily lives. Deploying social robots in the food and drink service industry requires them to understand customers' nonverbal behaviours, including taste-liking. Little is known about whether people alter their behavioural responses related to taste-liking when interacting with a humanoid social robot. We conducted the first beverage tasting study comparing a human facilitator with a humanoid social robot facilitator, each using priming versus non-priming instruction styles. We found that the facilitator type and facilitation style had no significant influence on cognitive taste-liking. However, in the robot facilitator scenarios, people were more willing to follow the instructions and felt more comfortable when facilitated with priming. Our study provides new empirical findings and design implications for using humanoid social robots in the hospitality industry.
{"title":"Investigating Taste-liking with a Humanoid Robot Facilitator","authors":"Zhuoni Jie, H. Gunes","doi":"10.1109/RO-MAN47096.2020.9223611","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223611","url":null,"abstract":"Tasting is an essential activity in our daily lives. Implementing social robots in the food and drink service industry requires the social robots to be able to understand customers’ nonverbal behaviours, including taste-liking. Little is known about whether people alter their behavioural responses related to taste-liking when interacting with a humanoid social robot. We conducted the first beverage tasting study where the facilitator is a human versus a humanoid social robot with priming versus non-priming instruction styles. We found that the facilitator type and facilitation style had no significant influence on cognitive taste-liking. However, in robot facilitator scenarios, people were more willing to follow the instruction and felt more comfortable when facilitated with priming. Our study provides new empirical findings and design implications for using humanoid social robots in the hospitality industry.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129322947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Efficiency and Safety in Teleoperated Robotic Manipulators using Motion Scaling and Force Feedback
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223493
Yongmin Cho, Frank L. Hammond
Recent surges in global construction spending are driving the need for safer, more efficient construction methods. One potential way of improving construction methods is to provide user interfaces that allow human operators to control machinery in a more intuitive and strategic manner. This paper explores the use of motion scaling and haptic feedback to improve task completion speed and force control during construction-related teleoperated robotic manipulation tasks. In this study, we design a bench-top Teleoperated Motion Scaling Robotic Arm (TMSRA) platform that allows the human operator to control the motion-mapping rate between the master (haptic console) and slave (robotic excavator) devices, while also providing force feedback and virtual safety functions to help prevent excessive force application by the slave device. We experimentally evaluated the impact of motion scaling and force feedback on human users' ability to perform simulated construction tasks. Experimental results from simulated robotic excavation and demolition tasks show that the maximum force applied to fictive buried utilities was reduced by 77.67% and 76.36%, respectively, due to the force feedback and safety function. Experimental results from simulated payload pushing/sliding tasks demonstrate that the provision of user-controlled motion scaling increases task efficiency, reducing completion times by at least 31.41% and as much as 47.76%.
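The core mapping can be illustrated with a brief, hypothetical sketch (class and parameter names are invented and the gains are arbitrary, not the TMSRA's): master displacements are scaled by a user-controlled factor before being sent to the slave, and reflected forces are capped by a virtual safety limit:

```python
# Hedged sketch of motion-scaled teleoperation with force feedback: master
# (haptic console) increments map to slave (robotic excavator) increments
# through a user-adjustable scale, and measured slave contact forces are
# reflected back to the master, capped for safety.

import numpy as np

class MotionScaledTeleop:
    def __init__(self, motion_scale=1.0, force_gain=0.5, force_limit=50.0):
        self.motion_scale = motion_scale  # user-controlled motion-mapping rate
        self.force_gain = force_gain      # haptic reflection gain
        self.force_limit = force_limit    # virtual safety limit [N]

    def slave_command(self, master_delta):
        """Scale a master displacement increment into a slave increment."""
        return self.motion_scale * np.asarray(master_delta)

    def master_feedback(self, slave_force):
        """Reflect slave contact force to the master, capped for safety."""
        f = self.force_gain * np.asarray(slave_force)
        norm = np.linalg.norm(f)
        if norm > self.force_limit:       # virtual safety function
            f *= self.force_limit / norm
        return f

teleop = MotionScaledTeleop(motion_scale=0.25)    # fine motion for precise work
print(teleop.slave_command([0.04, 0.0, -0.02]))   # scaled slave increment [m]
print(teleop.master_feedback([120.0, 0.0, 0.0]))  # capped feedback force [N]
```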
{"title":"Improving Efficiency and Safety in Teleoperated Robotic Manipulators using Motion Scaling and Force Feedback","authors":"Yongmin Cho, Frank L. Hammond","doi":"10.1109/RO-MAN47096.2020.9223493","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223493","url":null,"abstract":"Recent surges in global construction spending are driving the need for safer, more efficient construction methods. One potential way of improving construction methods is to provide user interfaces that allow human operators to control machinery in a more intuitive and strategic manner. This paper explores the use of motion scaling and haptic feedback to improve task completion speed and force control during construction-related teleoperated robotic manipulation tasks.In this study, we design a bench-top Teleoperated Motion Scaling Robotic Arm (TMSRA) platform that allows the human operator to control the motion-mapping rate between the master (haptic console) and slave (robotic excavator) devices, while also providing force feedback and virtual safety functions to help prevent excessive force application by the slave device. We experimentally evaluated the impact of motion scaling and force feedback on human users' ability to perform simulated construction tasks. Experimental results from simulated robotic excavation and demolition tasks show that the maximum force applied to fictive buried utilities was reduced by 77.67% and 76.36% respectively due to the force feedback and safety function. Experimental results from simulated payload pushing/sliding tasks demonstrate that the provision of user- controlled motion scaling increases task efficiency, reducing completion times by at least 31.41%, and as much as 47.76%.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130021758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Context-Sensitive Norm Inverse Reinforcement Learning Framework for Norm-Compliant Autonomous Agents
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223344
Yue (Sophie) Guo, Boshi Wang, Dana Hughes, M. Lewis, K. Sycara
Human behaviors are often prohibited or permitted by social norms. Therefore, if autonomous agents are to interact with humans, they also need to reason about various legal rules and social and ethical norms, so that they will be trusted and accepted by humans. Inverse Reinforcement Learning (IRL) can be used by autonomous agents to learn social-norm-compliant behavior via expert demonstrations. However, norms are context-sensitive, i.e., different norms are activated in different contexts. For example, the privacy norm is activated for a domestic robot entering a bathroom where a person may be present, whereas it is not activated for the robot entering the kitchen. Representing the various contexts in the state space of the robot, as well as obtaining expert demonstrations under all possible tasks and contexts, is extremely challenging. Inspired by recent work on Modularized Normative MDPs (MNMDPs) and early work on context-sensitive RL, we propose a new IRL framework, Context-Sensitive Norm IRL (CNIRL). CNIRL treats states and contexts separately and assumes that the expert determines the priority of every possible norm in the environment, where each norm is associated with a distinct reward function. The agent chooses its actions to maximize its cumulative reward. We present the CNIRL model and show that its computational complexity is scalable in the number of norms. We also show, via two experimental scenarios, that CNIRL can handle problems with changing context spaces.
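The central idea, one reward function per norm plus context-dependent norm activation, can be sketched as follows (the norms, weights, and reward values are illustrative assumptions, not the learned CNIRL quantities):

```python
# Illustrative sketch: each norm has its own reward function, different
# contexts activate different norms with different priorities, and the agent
# chooses the action maximizing the combined reward of the active norms.

# Per-norm reward functions over (state, action); values are made up.
def privacy_reward(state, action):
    return -10.0 if state == "bathroom_door" and action == "enter" else 0.0

def task_reward(state, action):
    return 1.0 if action == "enter" else 0.0

# Context -> active norms with priorities (weights), standing in for what
# CNIRL would recover from expert demonstrations.
NORMS_BY_CONTEXT = {
    "person_present": [(privacy_reward, 1.0), (task_reward, 0.2)],
    "room_empty":     [(task_reward, 1.0)],
}

def best_action(state, context, actions=("enter", "wait")):
    """Pick the action maximizing the context-weighted sum of norm rewards."""
    active = NORMS_BY_CONTEXT[context]
    return max(actions,
               key=lambda a: sum(w * r(state, a) for r, w in active))

print(best_action("bathroom_door", "person_present"))  # -> 'wait'
print(best_action("bathroom_door", "room_empty"))      # -> 'enter'
```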
{"title":"Designing Context-Sensitive Norm Inverse Reinforcement Learning Framework for Norm-Compliant Autonomous Agents","authors":"Yue (Sophie) Guo, Boshi Wang, Dana Hughes, M. Lewis, K. Sycara","doi":"10.1109/RO-MAN47096.2020.9223344","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223344","url":null,"abstract":"Human behaviors are often prohibited, or permitted by social norms. Therefore, if autonomous agents interact with humans, they also need to reason about various legal rules, social and ethical social norms, so they would be trusted and accepted by humans. Inverse Reinforcement Learning (IRL) can be used for the autonomous agents to learn social norm-compliant behavior via expert demonstrations. However, norms are context-sensitive, i.e. different norms get activated in different contexts. For example, the privacy norm is activated for a domestic robot entering a bathroom where a person may be present, whereas it is not activated for the robot entering the kitchen. Representing various contexts in the state space of the robot, as well as getting expert demonstrations under all possible tasks and contexts is extremely challenging. Inspired by recent work on Modularized Normative MDP (MNMDP) and early work on context-sensitive RL, we propose a new IRL framework, Context-Sensitive Norm IRL (CNIRL). CNIRL treats states and contexts separately, and assumes that the expert determines the priority of every possible norm in the environment, where each norm is associated with a distinct reward function. The agent chooses the action to maximize its cumulative rewards. We present the CNIRL model and show that its computational complexity is scalable in the number of norms. We also show via two experimental scenarios that CNIRL can handle problems with changing context spaces.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128318210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Infant Kick Quality Detection to Support Physical Therapy and Early Detection of Cerebral Palsy: A Pilot Study
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223571
Victor Emeli, Katelyn E. Fry, A. Howard
The kicking patterns of infants can provide markers that may predict the trajectory of their future development, and atypical kicking patterns may indicate the possibility of developmental disorders such as Cerebral Palsy (CP). Early intervention and physical therapy that encourage the practice of proper kicking motions can help improve outcomes in these scenarios. The kicking motions of an infant are usually evaluated by a trained health professional, and subsequent physical therapy is also conducted by a licensed professional. Automating the evaluation of kicking motions and the administration of physical therapy is desirable for standardizing these processes. In this work, we attempt to develop a method to quantify metrics that provide insight into the quality of infant kicking actions. We use a computer vision system to analyze infant kicking stimulated by parent-infant play and a robotic infant mobile, and we apply statistical techniques to estimate kick type (synchronous versus non-synchronous), kick amplitude, kick frequency, and kick deviation. These parameters can prove helpful in determining an infant's kick quality and in measuring improvement from physical therapy over time. In this paper, we detail the design of the system and discuss the statistical results.
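As a rough illustration of how such metrics might be computed (the paper does not give its exact estimators; the signal model, thresholds, and names below are assumptions), kick frequency, amplitude, and deviation can be extracted from a tracked ankle trajectory by peak analysis:

```python
# Hedged sketch: derive kick count, frequency, amplitude, and deviation from
# a tracked ankle-extension signal using simple peak detection.

import numpy as np
from scipy.signal import find_peaks

fs = 30.0                     # camera frame rate [Hz]
t = np.arange(0, 10, 1 / fs)  # 10 s of synthetic tracking data
# Synthetic ankle extension signal: ~1.2 kicks/s plus tracking noise.
ankle = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.004, t.size)

# Peaks above a minimum extension, separated by at least 0.3 s.
peaks, props = find_peaks(ankle, height=0.02, distance=fs * 0.3)

kick_count = peaks.size
kick_frequency = kick_count / t[-1]            # kicks per second
kick_amplitude = props["peak_heights"].mean()  # mean extension [m]
kick_deviation = props["peak_heights"].std()   # variability across kicks

print(f"{kick_count} kicks, {kick_frequency:.2f} Hz, "
      f"amplitude {kick_amplitude:.3f} ± {kick_deviation:.3f} m")
```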
{"title":"Towards Infant Kick Quality Detection to Support Physical Therapy and Early Detection of Cerebral Palsy: A Pilot Study","authors":"Victor Emeli, Katelyn E. Fry, A. Howard","doi":"10.1109/RO-MAN47096.2020.9223571","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223571","url":null,"abstract":"The kicking patterns of infants can provide markers that may predict the trajectory of their future development. Atypical kicking patterns may predict the possibility of developmental disorders like Cerebral Palsy (CP). Early intervention and physical therapy that encourages the practice of proper kicking motions can help to improve the outcomes in these scenarios. The kicking motions of an infant are usually evaluated by a trained health professional and subsequent physical therapy is also conducted by a licensed professional. The automation of the evaluation of kicking motions and the administration of physical therapy is desirable for standardizing these processes. In this work, we attempt to develop a method to quantify metrics that can provide insight into the quality of baby kicking actions. We utilize a computer vision system to analyze infant kicking stimulated by parent-infant play and a robotic infant mobile. We utilize statistical techniques to estimate kick type (synchronous and non-synchronous), kick amplitude, kick frequency, and kick deviation. These parameters can prove helpful in determining an infant's kick quality and also measure improvements in physical therapy over time. In this paper, we detail the design of the system and discuss the statistical results.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"294 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117124975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development and Evaluation of Mixed Reality Co-eating System: Sharing the Behavior of Eating Food with a Robot Could Improve Our Dining Experience
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223518
Ayaka Fujii, Kanae Kochigami, Shingo Kitagawa, K. Okada, M. Inaba
Eating with others enhances our dining experience, improves socialization, and has some health benefits. Although many people do not want to eat alone, the number of people who eat alone in Japan is increasing due to the difficulty of matching mealtimes and places with others. In this paper, we develop a mixed reality (MR) system for co-eating with a robot. In this system, a robot and an MR headset are connected, enabling users to observe the robot putting a food image into its mouth, as if eating. We conducted an experiment to evaluate the developed system with users at least 13 years old. Experimental results show that the users enjoyed their meal more and found it more delicious when the robot ate with them than when the robot only talked without eating. Furthermore, they ate more when the robot ate, suggesting that a robot could influence people's eating behavior.
{"title":"Development and Evaluation of Mixed Reality Co-eating System: Sharing the Behavior of Eating Food with a Robot Could Improve Our Dining Experience","authors":"Ayaka Fujii, Kanae Kochigami, Shingo Kitagawa, K. Okada, M. Inaba","doi":"10.1109/RO-MAN47096.2020.9223518","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223518","url":null,"abstract":"Eating with others enhances our dining experience, improves socialization, and has some health benefits. Although many people do not want to eat alone, there is an increase in the number of people who eat alone in Japan due to difficulty in matching mealtimes and places with others.In this paper, we develop a mixed reality (MR) system for coeating with a robot. In this system, a robot and a MR headset are connected enabling users to observe a robot putting food image into its mouth, as if eating. We conducted an experiment to evaluate the developed system with users that are at least 13 years old. Experimental results show that the users enjoyed their meal and felt more delicious when the robot ate with them than when the robot only talked without eating. Furthermore, they eat more when a robot eats, suggesting that a robot could influence people’s eating behavior.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122853400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meet Your Personal Cobot, But Don’t Touch It Just Yet*
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223573
Tudor B. Ionescu
This paper reports on a research project aimed at introducing a collaborative industrial robot into a makerspace (a public machine shop equipped with digital manufacturing technologies). Using an ethnographic approach, we observed how collaborations between researchers and non-experts are facilitated by makerspaces, how robot safety is construed and negotiated by the actors involved in the project, and how knowledge about collaborative robot safety and applications is produced in a context unforeseen by the creators of the technology. The proposed analysis suggests that the sociotechnical configuration of the studied project resembles a trading zone, in which various types of knowledge and expertise are exchanged between researchers from the interdisciplinary project team and makerspace members. As we argue, the trading zone model can be useful in the analysis and organization of participatory HRI research.
{"title":"Meet Your Personal Cobot, But Don’t Touch It Just Yet*","authors":"Tudor B. Ionescu","doi":"10.1109/RO-MAN47096.2020.9223573","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223573","url":null,"abstract":"This paper reports on a research project aimed at introducing a collaborative industrial robot into a makerspace (a public machine shop equipped with digital manufacturing technologies). Using an ethnographic approach, we observed how collaborations between researchers and non-experts are facilitated by makerspaces, how robot safety is being construed and negotiated by the actors involved in the project; and how knowledge about collaborative robot safety and applications is produced in a context previously unforeseen by the creators of the technology. The proposed analysis suggests that the sociotechnical configuration of the studied project resembles that of a trading zone, in which various types of knowledge and expertise are exchanged between the researchers from the interdisciplinary project team and makerspace members. As we shall argue, the trading zone model can be useful in the analysis and organization of participatory HRI research.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126062306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}