Bright and Dark Timbre Expressions with Sound Pressure and Tempo Variations by Violin-playing Robot
K. Shibuya, Kento Kosuga, H. Fukuhara
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) | Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223503
This study aims to build a violin-playing robot that can automatically determine how to perform based on the information included in musical scores. In this paper, we discuss the design of the variation pattern for the tempo of every bar and the sound pressure of every musical note to produce sounds that convey bright and dark impressions. First, we present the analytical results of a trained violinist’s performance, in which we found that the tempo of the bright timbre is faster than that of the dark timbre, and that the bright performance includes several steep variations in the sound pressure pattern. We then propose a design method for performances that express bright and dark timbres based on these analytical results. In the experiments, sounds were produced by our anthropomorphic violin-playing robot, which can vary the sound pressure by varying a wrist joint angle. The produced sounds were analyzed, and we confirmed that the sound pressure patterns of the bright performance are similar to the designed ones. The sounds were also evaluated by ten subjects, who distinguished the bright performances from the dark ones when the sound pressure and tempo variations were included.
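The contrast the abstract describes — steep note-to-note sound-pressure variations for bright timbres versus smoother patterns for dark ones — can be illustrated with a toy per-note pattern generator. The shapes and decibel levels below are assumptions for illustration only; the paper derives its actual patterns from a violinist's measured performance.

```python
import numpy as np

def pressure_pattern(n_notes, mood):
    """Toy per-note sound-pressure pattern: a 'bright' performance gets steep
    alternating accents, a 'dark' one a gentle smooth arc (assumed shapes and
    levels, not the paper's designed patterns)."""
    base = np.full(n_notes, 60.0)  # nominal level in dB (assumed)
    if mood == "bright":
        base[::2] += 6.0           # steep note-to-note jumps
    else:
        base += 2.0 * np.sin(np.linspace(0.0, np.pi, n_notes))  # smooth arc
    return base
```

For example, `pressure_pattern(4, "bright")` alternates 66/60 dB between adjacent notes, while the dark pattern varies by at most 2 dB across the phrase.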
Influences of Media Literacy and Experiences of Robots into Negative Attitudes toward Robots in Japan
T. Nomura, Shun Horii
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223590
To investigate the influence of media literacy on experiences of, and negative attitudes toward, robots, an online survey was conducted in Japan (N = 500). The results suggest that the connections of robot experiences with media literacy and negative attitudes toward robots were weak, and that both media literacy and robot experiences had negative effects on negative attitudes toward interaction with robots.
An Adaptive Control Approach to Robotic Assembly with Uncertainties in Vision and Dynamics
Emir Mobedi, Nicola Villa, Wansoo Kim, A. Ajoudani
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223515
The objective of this paper is to propose an adaptive impedance control framework that copes with uncertainties in vision and dynamics in robotic assembly tasks. The framework is composed of an adaptive controller, a vision system, and an interaction planner, all supervised by a finite state machine. In this framework, the target assembly object’s pose is detected by the vision module and then used to plan the robot trajectories. The adaptive impedance control module copes with the uncertainties of the vision and interaction planner modules in aligning the assembly parts (a peg and a hole in this work). Unlike classical impedance controllers, the online adaptation rule regulates the level of robot compliance in constrained directions, acting on and responding to the external forces. This enables the implementation of a flexible and adaptive Remote Center of Compliance (RCC) system using active control. We first evaluate the performance of the proposed adaptive controller against classical impedance control. Next, the overall performance of the integrated system is evaluated in a peg-in-hole setup with different clearances and orientation mismatches.
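The core idea — an online rule that softens the robot in constrained directions when contact forces rise, instead of keeping the fixed gains of a classical impedance controller — can be sketched in a few lines. The thresholds, gains, and update rule below are illustrative assumptions, not the authors' adaptation law.

```python
def adapt_stiffness(k, f_ext, f_thresh=5.0, k_min=50.0, k_max=1000.0):
    """One step of a simple compliance-adaptation rule (illustrative only):
    halve the stiffness of a constrained direction while the measured external
    force exceeds a threshold, and recover it otherwise."""
    if abs(f_ext) > f_thresh:
        return max(k_min, 0.5 * k)   # yield to the contact force
    return min(k_max, 2.0 * k)       # recover nominal stiffness

def impedance_force(k, d, x, x_des, xdot):
    """Virtual spring-damper force of a 1-DoF impedance controller."""
    return k * (x_des - x) - d * xdot
```

With a fixed `k`, a misaligned peg would fight the hole wall; letting `k` drop under sustained contact force lets the part comply and self-align, which is what an adaptive RCC achieves through active control.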
Pedestrian Density Based Path Recognition and Risk Prediction for Autonomous Vehicles
Kasra Mokhtari, Ali Ayub, Vidullan Surendran, Alan R. Wagner
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223554
Human drivers continually use social information to inform their decision making. We believe that incorporating this information into autonomous vehicle decision making would improve performance and, importantly, safety. This paper investigates how information in the form of pedestrian density can be used to identify the path being travelled and to predict the number of pedestrians that the vehicle will encounter along that path in the future. We present experiments that use camera data captured while driving to evaluate our methods for path recognition and pedestrian density prediction. Our results show that we can identify the vehicle’s path using only pedestrian density with 92.4% accuracy, and we can predict the number of pedestrians the vehicle will encounter with an accuracy of 70.45%. These results demonstrate that pedestrian density can serve as a source of information both to augment localization and to predict path risk.
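The idea of recognising a route from pedestrian density alone can be illustrated with a minimal nearest-profile classifier. The route names, segment counts, and matching rule below are invented for illustration; the paper's actual recognizer and its 92.4% figure come from camera data, not this scheme.

```python
import numpy as np

# Hypothetical mean pedestrian counts per road segment for two known routes.
ROUTE_PROFILES = {
    "campus":   np.array([12.0, 18.0, 25.0, 9.0]),
    "suburban": np.array([2.0, 1.0, 3.0, 0.0]),
}

def identify_path(observed_counts):
    """Match an observed pedestrian-count profile to the closest known route
    (nearest profile in Euclidean distance) - a stand-in for the paper's
    recognizer."""
    obs = np.asarray(observed_counts, dtype=float)
    return min(ROUTE_PROFILES, key=lambda r: np.linalg.norm(obs - ROUTE_PROFILES[r]))

def predict_encounters(route, remaining_segments):
    """Predict pedestrians ahead as the sum of the identified route's profile
    over the segments not yet travelled."""
    return float(ROUTE_PROFILES[route][-remaining_segments:].sum())
```

Once the route is identified, the same profile doubles as a forward-looking risk estimate: segments with high expected counts can be flagged before the vehicle reaches them.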
Estimation of Mental Health Quality of Life using Visual Information during Interaction with a Communication Agent
S. Nakagawa, S. Yonekura, Hoshinori Kanazawa, Satoshi Nishikawa, Y. Kuniyoshi
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223606
It is essential for a monitoring system or a communication robot that interacts with an elderly person to accurately understand the user’s state and generate actions based on their condition. To ensure elderly welfare, quality of life (QOL) is a useful indicator for comprehensively assessing human physical suffering and mental and social activities. In this study, we hypothesize that visual information is useful for extracting high-dimensional information on QOL from data collected by an agent while interacting with a person. We propose a QOL estimation method that integrates facial expressions, head fluctuations, and eye movements, all of which can be extracted as visual information during interaction with the communication agent. Our goal is to implement a multiple-feature-vector learning estimator that incorporates a convolutional 3D network to learn spatiotemporal features. However, no existing database is suitable for QOL estimation. Therefore, we implemented a free-communication agent and constructed our database from information collected through interpersonal experiments using the agent. To verify the proposed method, we focus on estimating the "mental health" QOL scale, which a previous study found to be the most difficult to estimate among the eight scales that compose QOL. We compare four estimation accuracies: single-modal learning using each of the three features (facial expressions, head fluctuations, and eye movements) and multiple-feature-vector learning integrating all three. The experimental results show that multiple-feature-vector learning yields smaller estimation errors than any of the single-modal learners, which use each feature separately. Evaluating the difference between the QOL score estimated by the proposed method and the actual QOL score calculated by the conventional method also shows that the average error is less than 10 points; thus, the proposed system can estimate the QOL score. These results indicate that the proposed approach to estimating human conditions can improve the quality of human–robot interactions and personalized monitoring.
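The multimodal step — combining facial-expression, head-fluctuation, and eye-movement features into one vector for the estimator — can be sketched with a simple late-fusion function. Normalise-and-concatenate is an assumed scheme chosen for illustration; the paper's network learns its own spatiotemporal fusion.

```python
import numpy as np

def fuse_features(face_vec, head_vec, eye_vec):
    """Late fusion by L2-normalising each modality and concatenating - one
    simple way to combine the three visual cues so no single modality
    dominates by scale (an assumed scheme, not the paper's network)."""
    def l2norm(v):
        v = np.asarray(v, dtype=float)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2norm(face_vec), l2norm(head_vec), l2norm(eye_vec)])
```

The fused vector can then be fed to any regressor targeting the mental-health QOL score; the paper's comparison amounts to training the same estimator on each modality alone versus on the combined representation.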
Towards Infant Kick Quality Detection to Support Physical Therapy and Early Detection of Cerebral Palsy: A Pilot Study
Victor Emeli, Katelyn E. Fry, A. Howard
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223571
The kicking patterns of infants can provide markers that may predict the trajectory of their future development. Atypical kicking patterns may indicate the possibility of developmental disorders such as cerebral palsy (CP). Early intervention and physical therapy that encourage the practice of proper kicking motions can help improve outcomes in these scenarios. The kicking motions of an infant are usually evaluated by a trained health professional, and subsequent physical therapy is also conducted by a licensed professional. Automating the evaluation of kicking motions and the administration of physical therapy is desirable for standardizing these processes. In this work, we develop a method to quantify metrics that provide insight into the quality of infant kicking actions. We use a computer vision system to analyze infant kicking stimulated by parent–infant play and a robotic infant mobile, and statistical techniques to estimate kick type (synchronous and non-synchronous), kick amplitude, kick frequency, and kick deviation. These parameters can help determine an infant's kick quality and measure improvements over the course of physical therapy. In this paper, we detail the design of the system and discuss the statistical results.
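Two of the metrics named above, kick amplitude and kick frequency, can be estimated from a tracked joint-angle time series with standard signal processing. The sketch below is an illustrative stand-in, assuming a knee-angle signal from the vision tracker; the paper's actual statistical pipeline is not specified here.

```python
import numpy as np

def kick_metrics(angle, fs):
    """Estimate kick amplitude and dominant kick frequency from a joint-angle
    time series (a simple stand-in for the paper's statistics).
    angle: joint angle samples in degrees; fs: sampling rate in Hz."""
    amplitude = (np.max(angle) - np.min(angle)) / 2.0
    # dominant frequency via the FFT peak of the mean-removed signal
    spectrum = np.abs(np.fft.rfft(angle - np.mean(angle)))
    freqs = np.fft.rfftfreq(len(angle), d=1.0 / fs)
    return amplitude, freqs[np.argmax(spectrum)]
```

Tracking these two numbers session by session gives exactly the kind of longitudinal signal the abstract proposes for measuring improvement over the course of therapy.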
The Effects of Internet of Robotic Things on In-home Social Family Relationships
Byeong June Moon, Sonya S. Kwak, Dahyun Kang, Hanbyeol Lee, Jong-suk Choi
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223345
Robotic things and social robots have been introduced into homes, and they are expected to change the relationships between humans. Our study examines whether the introduction of robotic things or social robots, and the way they are organized, can change the social relationships between family members. To observe this phenomenon, we designed a living-lab experiment that simulated a home environment and recruited two families to participate. The families were asked to conduct home activities within two different types of Internet of Robotic Things (IoRT): 1) an internet of only robotic things (IoRT-without-mediator condition), and 2) an internet of robotic things mediated by a social robot (IoRT-with-mediator condition). We recorded the interactions between the family members and the robotic things during the experiments and coded them into a dataset for social network analysis. The results revealed relationship differences between the two conditions. Introducing IoRT without a mediator motivated younger-generation family members to share the burden of caring for other members, which was previously the duty of the mothers; however, this made the interaction network inefficient for indirect interaction. In contrast, introducing IoRT with a mediator did not significantly change family relationships at the actor level, and the mothers remained in charge of caring for other family members; however, it made indirect interactions within the network more efficient. Furthermore, the role of the social robot mediator overlapped with that of the mothers, showing that a social robot mediator can help the mothers care for other family members by operating and managing robotic things. We also discuss the implications for developing IoRT for the home.
Increasing Engagement with Chameleon Robots in Bartending Services
Silvia Rossi, Elena Dell’Aquila, Davide Russo, Gianpaolo Maggi
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223488
As the field of service robotics grows rapidly, such robots are expected to be endowed with the appropriate capabilities to interact with humans in a socially acceptable way. This is particularly relevant in customer relationships, where a positive and affective interaction has an impact on the user’s experience. In this paper, we address the question of whether a specific behavioral style of a barman-robot, acted through para-verbal and non-verbal behaviors, can affect users’ engagement and the creation of positive emotions. To that end, we endowed a barman-robot that takes drink orders from human customers with an empathic behavioral style, which aims at triggering the alignment process by mimicking the conversation partner’s behavior. This style is compared with an entertaining style, which aims at creating a positive relationship with the users, and with a neutral style as a control. Results suggest that when participants experienced more positive emotions, the robot was perceived as safer, suggesting that interactions that stimulate positive and open relations with the robot may have a positive impact on the affective dimension of engagement. Indeed, when the empathic robot modulates its behavior according to the user’s, the interaction seems more effective than interaction with a neutral robot in improving engagement and positive emotions in public-service contexts.
Motion Trajectory Estimation of a Flying Object and Optimal Reduced Impact Catching by a Planar Manipulator
Min Set Paing, Enock William Nshama, N. Uchiyama
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223443
Throwing and catching are fundamental motions for human beings and may be applied in advanced human–robot collaborative tasks. Since catching is more difficult for a robot than throwing, this study deals with reduced-impact catching of a flying object by a planar manipulator. The estimation of the object's trajectory is improved by a Kalman filter, and least-squares fitting is proposed to accurately predict the catching time, position, and velocity for the manipulator. To achieve reduced-impact catching, minimizing the total impact force in the x- and y-directions is formulated as an optimization problem. A fifth-degree non-periodic B-spline curve is implemented to achieve smooth and continuous trajectories in the joint space. The effectiveness of the proposed approaches is demonstrated by simulation and experiment.
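The least-squares stage of such a pipeline can be sketched concretely: fit a ballistic arc to height observations, then solve the fitted parabola for the time and velocity at the catch height. This is an illustrative sketch of that stage only; the Kalman-filter refinement and the paper's exact formulation are omitted, and the gravity constant and catch height in the test are assumed values.

```python
import numpy as np

def fit_trajectory(t, y):
    """Least-squares fit of a ballistic arc y(t) = a*t^2 + b*t + c to height
    samples; returns the coefficients (a, b, c)."""
    return np.polyfit(t, y, 2)

def predict_catch(coeffs, y_catch):
    """Predict the catching time and vertical velocity at height y_catch,
    taking the later (descending) root of the fitted parabola."""
    a, b, c = coeffs
    roots = np.roots([a, b, c - y_catch])
    t_catch = max(r.real for r in roots if abs(r.imag) < 1e-9)
    v_catch = 2.0 * a * t_catch + b   # derivative of the fitted arc
    return t_catch, v_catch
```

The predicted velocity at the catch point is what makes reduced-impact catching possible: the manipulator can be commanded to match a fraction of that velocity at contact, shrinking the relative velocity and hence the impact force.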
Development and Evaluation of Mixed Reality Co-eating System: Sharing the Behavior of Eating Food with a Robot Could Improve Our Dining Experience
Ayaka Fujii, Kanae Kochigami, Shingo Kitagawa, K. Okada, M. Inaba
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223518
Eating with others enhances our dining experience, improves socialization, and has some health benefits. Although many people do not want to eat alone, the number of people who eat alone in Japan is increasing because of the difficulty of matching mealtimes and places with others. In this paper, we develop a mixed reality (MR) system for co-eating with a robot. In this system, a robot and an MR headset are connected, enabling users to observe the robot putting a food image into its mouth, as if eating. We conducted an experiment to evaluate the developed system with users at least 13 years old. Experimental results show that the users enjoyed their meal more and found their food more delicious when the robot ate with them than when the robot only talked without eating. Furthermore, they ate more when the robot ate, suggesting that a robot could influence people’s eating behavior.