Let me join you! Real-time F-formation recognition by a socially aware robot
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223469
Hrishav Bakul Barua, Pradip Pramanick, Chayan Sarkar, Theint Haythi Mg
This paper presents a novel architecture to detect social groups in real time from a continuous image stream of an ego-vision camera. An F-formation defines the social orientations in space of two or more persons who tend to communicate in a social setting. Thus, essentially, we detect F-formations in social gatherings such as meetings and discussions, and predict the robot’s approach angle if it wants to join the social group. Additionally, we detect outliers, i.e., persons who are not part of the group under consideration. Our proposed pipeline consists of: a) a skeletal key-point estimator (17 points in total) for each detected human in the scene, b) a learning model based on a Conditional Random Field (CRF), fed with a feature vector built from the skeletal points, to detect groups of people and outlier persons in a scene, and c) a separate learning model using a multi-class Support Vector Machine (SVM) to predict the exact F-formation of the group of people in the current scene and the angle of approach for the viewing robot. The system is evaluated on two datasets. The results show that group and outlier detection with our method achieves an accuracy of 91%. We have made rigorous comparisons of our system with a state-of-the-art F-formation detection system and found that it outperforms the state of the art by 29% for formation detection and by 55% for combined detection of the formation and approach angle.
Affective Touch Robots with Changing Textures and Movements
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223481
Daiki Sato, Mana Sasagawa, Arinobu Niijima
We explore how to design emotional expression using tabletop social robots with multiple texture modules. Previous studies in human-robot interaction have presented various designs for emotionally expressive robots that do not use anthropomorphic forms or cues. They revealed that haptic stimulation based on the textures and movements of the robots could evoke some emotions in users, although the range of evoked emotions was limited. In this work, we propose using a combination of textures and movements for richer emotional expression. We implemented tabletop robots equipped with detachable texture modules made of five different materials (plastic resin, aluminum, clay, Velcro, and cotton) and performed a user study with 13 participants to investigate how they would map the combinations of textures and movements to nine emotions chosen from Russell’s circumplex model. The results indicated that the robots could express various emotions such as excited, happy, calm, and sad. Deeper analysis of these results revealed some interesting relationships between emotional valence/arousal and texture/movement: for example, cold textures played an important role in expressing negative valence, and controlling the frequency of the movements could change the expression of arousal.
{"title":"Affective Touch Robots with Changing Textures and Movements","authors":"Daiki Sato, Mana Sasagawa, Arinobu Niijima","doi":"10.1109/RO-MAN47096.2020.9223481","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223481","url":null,"abstract":"We explore how to design emotional expression using tabletop social robots with multiple texture modules. Previous studies in human-robot interaction have presented various designs for emotionally expressive robots without using anthropomorphic forms or cues. They revealed that haptic stimulation based on the textures and movements of the robots could evoke some emotions in users, although these were limited. In this work, we propose using a combination of textures and movements for richer emotional expression. We implemented tabletop robots equipped with detachable texture modules made of five different materials (plastic resin, aluminum, clay, Velcro, and cotton) and performed a user study with 13 participants to investigate how they would map the combinations of textures and movements to nine emotions chosen from Russell’s circumplex model. The results indicated that the robots could express various emotions such as excited, happy, calm, and sad. Deeper analysis of these results revealed some interesting relationships between emotional valence/arousal and texture/movement: for example, cold texture played an important role in expressing negative valence, and controlling the frequency of the movements could change the expression of arousal.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"41 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114018368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Drone Sharing to Increase the UAV Patrolling Autonomy in Emergency Scenarios
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223567
Luca Morando, C. Recchiuto, A. Sgorbissa
The popularity of Unmanned Aerial Vehicles (UAVs) has increased in recent years, and the domain of application of this new technology is continuously expanding. However, although UAVs may be extremely useful in monitoring contexts, the operational aspects of drone patrolling services have not yet been extensively studied. Specifically, patrolling and inspecting different targets distributed over a large area with UAVs is still an open problem, due to battery constraints and other practical limitations. In this work, we propose a deterministic algorithm for patrolling large areas in a pre- or post-critical-event scenario. The autonomy range of UAVs is extended with the concept of Social Drone Sharing: citizens may offer their availability to take care of the UAV if it lands in their private area, thus becoming closely involved in the monitoring process. The proposed approach aims at finding optimal routes in this context, minimizing the patrolling time while respecting the battery constraints. Simulation experiments have been conducted, giving some insights into the performance of the proposed method.
{"title":"Social Drone Sharing to Increase the UAV Patrolling Autonomy in Emergency Scenarios","authors":"Luca Morando, C. Recchiuto, A. Sgorbissa","doi":"10.1109/RO-MAN47096.2020.9223567","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223567","url":null,"abstract":"Unmanned Aerial Vehicles (UAVs) popularity is increased in recent years, and the domain of application of this new technology is continuously expanding. However, although UAVs may be extremely useful in monitoring contexts, the operational aspects of drone patrolling services have not yet been extensively studied. Specifically, patrolling and inspecting with UAVs different targets distributed over a large area is still an open problem, due to battery constraints and other practical limitations. In this work, we propose a deterministic algorithm for patrolling large areas in a pre- or post-critical event scenario. The autonomy range of UAVs is extended with the concept of Social Drone Sharing: citizens may offer their availability to take care of the UAV if it lands in their private area, being thus strictly involved in the monitoring process. The proposed approach aims at finding optimal routes in this context, minimizing the patrolling time and respecting the battery constraints. Simulation experiments have been conducted, giving some insights about the performance of the proposed method.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122211653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of Haptic Gestures for Affective Social Signaling Through a Cushion Interface
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223434
Eleuda Nuñez, Masakazu Hirokawa, Kenji Suzuki
In computer-mediated communication, the amount of non-verbal cues or social signals that machines can support is still limited. By integrating haptic information into computational systems, it might be possible to give a new dimension to the way people convey social signals in mediated communication. This research aims to distinguish different haptic gestures using a physical interface with a cushion-like form, designed as a mediator for remote communication scenarios. The proposed interface can sense the user through the cushion’s deformation data combined with motion data. The contributions of this paper are the following: 1) regardless of each participant’s particular interpretation of the gestures, the proposed solution can detect eight haptic gestures with more than 80% accuracy across participants, and 2) the classification of gestures was done without the need for calibration and independently of the orientation of the cushion. These results represent one step toward the development of affective communication systems that can support haptic gesture classification.
{"title":"Design of Haptic Gestures for Affective Social Signaling Through a Cushion Interface","authors":"Eleuda Nuñez, Masakazu Hirokawa, Kenji Suzuki","doi":"10.1109/RO-MAN47096.2020.9223434","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223434","url":null,"abstract":"In computer-mediated communication, the amount of non-verbal cues or social signals that machines can support is still limited. By integrating haptic information into computational systems, it might be possible to give a new dimension to the way people convey social signals in mediated communication. This research aims to distinguish different haptic gestures using a physical interface with a cushion-like form designed as a mediator for remote communication scenarios. The proposed interface can sense the user through the cushion’s deformation data combined with motion data. The contribution of this paper is the following: 1) Regardless of each participant’s particular interpretation of the gesture, the proposed solution can detect eight haptic gestures with more than 80% of accuracy across participants, and 2) The classification of gestures was done without the need of calibration, and independent of the orientation of the cushion. These results represent one step toward the development of affect communication systems that can support haptic gesture classification.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132383201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Should robots have accents?
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223599
Ilaria Torre, Sébastien Le Maguer
Accents are vocal features that immediately tell a listener whether a speaker comes from the same place they do, i.e. whether they share a social group. This in-groupness is important, as people tend to prefer interacting with others who belong to the same groups as they do. Accents also evoke attitudinal responses based on their supposed prestige. These accent-based perceptions might affect interactions between humans and robots. Yet, very few studies so far have investigated the effect of accented robot speakers on users’ perceptions and behaviour, and none have collected users’ explicit preferences for robot accents. In this paper we present results from a survey of over 500 British speakers, who indicated what accent they would like a robot to have. The largest proportion of participants wanted a robot to have a Standard Southern British English (SSBE) accent, followed by an Irish accent. Crucially, very few people wanted a robot with their own accent, or with a machine-like voice. These explicit preferences might not turn out to predict more successful interactions, also because of the unrealistic expectations that such human-like vocal features might generate in a user. Nonetheless, it seems that people have an idea of how their artificial companions should sound, and this preference should be considered when designing them.
{"title":"Should robots have accents?","authors":"Ilaria Torre, Sébastien Le Maguer","doi":"10.1109/RO-MAN47096.2020.9223599","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223599","url":null,"abstract":"Accents are vocal features that immediately tell a listener whether a speaker comes from their same place, i.e. whether they share a social group. This in-groupness is important, as people tend to prefer interacting with others who belong to their same groups. Accents also evoke attitudinal responses based on their supposed prestigious status. These accent-based perceptions might affect interactions between humans and robots. Yet, very few studies so far have investigated the effect of accented robot speakers on users’ perceptions and behaviour, and none have collected users’ explicit preferences on robot accents. In this paper we present results from a survey of over 500 British speakers, who indicated what accent they would like a robot to have. The biggest proportion of participants wanted a robot to have a Standard Southern British English (SSBE) accent, followed by an Irish accent. Crucially, very few people wanted a robot with their same accent, or with a machine-like voice. These explicit preferences might not turn out to predict more successful interactions, also because of the unrealistic expectations that such human-like vocal features might generate in a user. Nonetheless, it seems that people have an idea of how their artificial companions should sound like, and this preference should be considered when designing them.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130121485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Role of Social Cues for Goal Disambiguation in Human-Robot Cooperation
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223546
Samuele Vinanzi, A. Cangelosi, C. Goerick
Social interaction is the new frontier in contemporary robotics: we want to build robots that blend with ease into our daily social environments, following their norms and rules. The cognitive skill that bootstraps social awareness in humans is known as "intention reading", and it allows us to interpret other agents’ actions and assign them meaning. Given its centrality for humans, it is likely that intention reading will foster the development of robotic social understanding. In this paper, we present an artificial cognitive architecture for intention reading in human-robot interaction (HRI) that makes use of social cues to disambiguate goals. This is accomplished by performing a low-level action encoding paired with a high-level probabilistic goal inference. We introduce a new clustering algorithm, developed to differentiate multi-sensory human social cues by performing several levels of clustering on different feature spaces, paired with a Bayesian network that infers the underlying intention. The model has been validated through an interactive HRI experiment involving a joint manipulation game performed by a human and a robotic arm in a toy-block scenario. The results show that the artificial agent was capable of reading the intention of its partner and of cooperating in mutual interaction, thus validating the novel methodology and the use of social cues to disambiguate goals, as well as demonstrating the advantages of intention reading in social HRI.
{"title":"The Role of Social Cues for Goal Disambiguation in Human-Robot Cooperation","authors":"Samuele Vinanzi, A. Cangelosi, C. Goerick","doi":"10.1109/RO-MAN47096.2020.9223546","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223546","url":null,"abstract":"Social interaction is the new frontier in contemporary robotics: we want to build robots that blend with ease into our daily social environments, following their norms and rules. The cognitive skill that bootstraps social awareness in humans is known as \"intention reading\" and it allows us to interpret other agents’ actions and assign them meaning. Given its centrality for humans, it is likely that intention reading will foster the development of robotic social understanding. In this paper, we present an artificial cognitive architecture for intention reading in human-robot interaction (HRI) that makes use of social cues to disambiguate goals. This is accomplished by performing a low-level action encoding paired with a high-level probabilistic goal inference. We introduce a new clustering algorithm that has been developed to differentiate multi-sensory human social cues by performing several levels of clustering on different feature-spaces, paired with a Bayesian network that infers the underlying intention. The model has been validated through an interactive HRI experiment involving a joint manipulation game performed by a human and a robotic arm in a toy block scenario. The results show that the artificial agent was capable of reading the intention of its partner and cooperate in mutual interaction, thus validating the novel methodology and the use of social cues to disambiguate goals, other than demonstrating the advantages of intention reading in social HRI.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132801398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Task Allocation Approach for Human-Robot Collaboration in Product Defects Inspection Scenarios
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223455
Hossein Karami, Kourosh Darvish, F. Mastrogiovanni
The presence and coexistence of human operators and collaborative robots in shop-floor environments raise the need to assign tasks to operators, to robots, or to both. Depending on the task characteristics, the operator capabilities and the involved robot functionalities, it is of the utmost importance to design strategies allowing for the concurrent and/or sequential allocation of tasks related to object manipulation and assembly. In this paper, we extend the FLEXHRC framework presented in [1] to allow a human operator to interact with multiple, heterogeneous robots at the same time in order to jointly carry out a given task. The extended FLEXHRC framework leverages a concurrent and sequential task representation to allocate tasks to either operators or robots as part of a dynamic collaboration process. In particular, we focus on a use case related to the inspection of product defects, which involves a human operator, a dual-arm Baxter manipulator from Rethink Robotics and a KUKA youBot mobile manipulator.
{"title":"A Task Allocation Approach for Human-Robot Collaboration in Product Defects Inspection Scenarios","authors":"Hossein Karami, Kourosh Darvish, F. Mastrogiovanni","doi":"10.1109/RO-MAN47096.2020.9223455","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223455","url":null,"abstract":"The presence and coexistence of human operators and collaborative robots in shop-floor environments raises the need for assigning tasks to either operators or robots, or both. Depending on task characteristics, operator capabilities and the involved robot functionalities, it is of the utmost importance to design strategies allowing for the concurrent and/or sequential allocation of tasks related to object manipulation and assembly. In this paper, we extend the FLEXHRC framework presented in [1] to allow a human operator to interact with multiple, heterogeneous robots at the same time in order to jointly carry out a given task. The extended FLEXHRC framework leverages a concurrent and sequential task representation framework to allocate tasks to either operators or robots as part of a dynamic collaboration process. In particular, we focus on a use case related to the inspection of product defects, which involves a human operator, a dual-arm Baxter manipulator from Rethink Robotics and a Kuka youBot mobile manipulator.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133093401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning prohibited and authorised grasping locations from a few demonstrations
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223486
François Hélénon, Laurent Bimont, E. Nyiri, Stéphane Thiery, O. Gibaru
Our motivation is to ease robots’ reconfiguration for pick-and-place tasks in an industrial context. This paper proposes a fast-learning neural network model, trained from one or a few demonstrations in less than 5 minutes, that efficiently predicts grasping locations on a specific object. The proposed methodology is easy to apply in an industrial context as it is based exclusively on the operator’s demonstrations and does not require a CAD model, an existing database or a simulator. As the predictions of a neural network can be erroneous, especially when it is trained with very few data, we propose to indicate both authorised and prohibited locations for safety reasons. This allows us to handle fragile objects and to perform task-oriented grasping. Our model learns the semantic representation of objects (prohibited/authorised) thanks to a simplified data representation, a simplified neural network architecture and an adequate training framework. We trained specific networks for different objects and conducted experiments on a real 7-DOF robot, which showed good performance (70 to 100% depending on the object) using only one demonstration. The proposed model is able to generalise well, as performance remains good even when grasping several similar objects with the same network trained on one of them.
{"title":"Learning prohibited and authorised grasping locations from a few demonstrations","authors":"François Hélénon, Laurent Bimont, E. Nyiri, Stéphane Thiery, O. Gibaru","doi":"10.1109/RO-MAN47096.2020.9223486","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223486","url":null,"abstract":"Our motivation is to ease robots’ reconfiguration for pick and place tasks in an industrial context. This paper proposes a fast learner neural network model trained from one or a few demonstrations in less than 5 minutes, able to efficiently predict grasping locations on a specific object. The proposed methodology is easy to apply in an industrial context as it is exclusively based on the operator’s demonstrations and does not require a CAD model, existing database or simulator. As predictions of a neural network can be erroneous especially when trained with very few data, we propose to indicate both authorised and prohibited locations for safety reasons. It allows us to handle fragile objects or to perform task-oriented grasping. Our model learns the semantic representation of objects (prohibited/authorised) thanks to a simplified data representation, a simplified neural network architecture and an adequate training framework. We trained specific networks for different objects and conducted experiments on a real 7-DOF robot which showed good performances (70 to 100% depending on the object), using only one demonstration. The proposed model is able to generalise well as performances remain good even when grasping several similar objects with the same network trained on one of them.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115541739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forces and torque measurements in the interaction of kitchen-utensils with food during typical cooking tasks: preliminary test and evaluation
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223457
Débora Pereira, Alessandro Morassut, E. Tiberi, P. Dario, G. Ciuti
The study of cooking tasks, such as grilling, is hindered by several conditions that are adverse for sensors, such as the proximity to humidity, fat, and heat. Still, robotics research could benefit from understanding how humans control forces and torques in important contact interactions of kitchen utensils with food. This work presents a preliminary study on the dynamics of grilling tasks (i.e., food-flipping movements). A spatula and kitchen tweezers were instrumented to measure forces and torque in multiple directions. Furthermore, we designed an experimental setup that keeps the sensors distant from heat and humidity while, at the same time, preserving the effects of grilling (stickiness/slipperiness) during task execution and recording. This allowed a successful data collection of 1426 movements with the spatula (flipping hamburgers, chicken, zucchini and eggplant slices) and 660 movements with the tweezers (flipping zucchini and eggplant slices), performed by chefs and ordinary home cooks. Finally, we analyzed three dynamical characteristics of the tasks for the different foods: the bending force and torsion torque on the impact to unstick the food, and the maximum pinching force with the tweezers. We verified that the bending force on impact and the maximum pinching force are adjusted to the food by both chefs and home cooks.
{"title":"Forces and torque measurements in the interaction of kitchen-utensils with food during typical cooking tasks: preliminary test and evaluation","authors":"Débora Pereira, Alessandro Morassut, E. Tiberi, P. Dario, G. Ciuti","doi":"10.1109/RO-MAN47096.2020.9223457","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223457","url":null,"abstract":"The study of cooking tasks, such as grilling, is hindered by several adverse conditions for sensors, such as the proximity to humidity, fat, and heat. Still, robotics research could benefit from understanding the human control of forces and torques in important contact interactions of kitchen-utensils with food. This work presents a preliminary study on the dynamics of grilling tasks (i.e. food flipping movements). A spatula and kitchen-tweezers were instrumented to measure forces and torque in multiple directions. Furthermore, we designed an experimental setup to keep sensors distant from heat/humidity and to, simultaneously, hold the effects of grilling (stickiness/slipperiness) during the tasks execution and recording. This allowed a successful data collection of 1426 movements with the spatula (flipping hamburgers, chicken, zucchini and eggplant slices) and 660 movements with the tweezers (flipping zucchini and eggplant slices), performed by chefs and ordinary home cooks. Finally, we analyzed three dynamical characteristics of the tasks for the different food: bending force and torsion torque on the impact to unstick food, and maximum pinching with tweezers. We verified that bending on impact and maximum pinching are adjusted to the food by both chefs and home cooks.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124109962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive Robotic Systems as Boundary-Crossing Robots – the User’s View*
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223575
Kentaro Watanabe, K. Jokinen
Social robots are receiving more attention through increased research and development, and they are gradually becoming a part of our daily lives. In this study, we investigated how social robots are accepted by robot users. We applied the theoretical lens of the boundary-crossing robot concept, which describes the role shift of robots from tools to agents. This concept highlights the impact of social robots on the everyday lives of humans, and can be used to structure the development of perceived interactions between robots and human users. In this paper, we report on the results of a web questionnaire study conducted among users of interactive devices (humanoid robots, animal robots, and smart speakers). Their acceptance and roles in daily life are compared from both functional and affective perspectives, with respect to their perceived roles as boundary-crossing robots.
{"title":"Interactive Robotic Systems as Boundary-Crossing Robots – the User’s View*","authors":"Kentaro Watanabe, K. Jokinen","doi":"10.1109/RO-MAN47096.2020.9223575","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223575","url":null,"abstract":"Social robots are receiving more attention through increased research and development, and they are gradually becoming a part of our daily lives. In this study, we investigated how social robots are accepted by robot users. We applied the theoretical lens of the boundary-crossing robot concept, which describes the role shift of robots from tools to agents. This concept highlights the impact of social robots on the everyday lives of humans, and can be used to structure the development of perceived interactions between robots and human users. In this paper, we report on the results of a web questionnaire study conducted among users of interactive devices (humanoid robots, animal robots, and smart speakers). Their acceptance and roles in daily life are compared from both functional and affective perspectives, with respect to their perceived roles as boundary-crossing robots.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114316826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}