Object recognition using multiple instance learning with unclear object teaching
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333694
Yasuto Tamura, Hun-ok Lim
We propose an object recognition method for service robots that operates under the constraint of uncertain object teaching by humans. Previous object recognition methods required a large number of prepared training images and also required that those images not contain complex backgrounds. However, for robots to perform daily tasks, they should be able to recognize objects despite unclear object teaching by humans. To mitigate the effect of background features on object recognition, our proposed method classifies local features extracted from video images according to their saliency. In this paper, we demonstrate the efficacy of the proposed method in recognizing target objects despite unclear teaching by the user.
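The abstract does not detail the saliency computation or the multiple-instance learning step, but the core idea of suppressing background features can be sketched as follows. This is a minimal illustration, assuming a spectral-residual saliency map and ORB keypoints from OpenCV; the detector choice and threshold are illustrative assumptions, not the authors' implementation.

    # Sketch: keep only local features that fall on salient regions,
    # so cluttered backgrounds contribute less to the training data.
    import cv2
    import numpy as np

    def salient_features(frame, saliency_threshold=0.5):
        # Spectral-residual saliency (OpenCV contrib); values in [0, 1].
        saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
        ok, saliency_map = saliency.computeSaliency(frame)
        if not ok:
            return [], None

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        orb = cv2.ORB_create()
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        if descriptors is None:
            return [], None

        h, w = saliency_map.shape[:2]
        kept_kp, kept_desc = [], []
        for kp, desc in zip(keypoints, descriptors):
            x = min(int(kp.pt[0]), w - 1)
            y = min(int(kp.pt[1]), h - 1)
            if saliency_map[y, x] >= saliency_threshold:
                kept_kp.append(kp)
                kept_desc.append(desc)
        return kept_kp, np.array(kept_desc)

The retained descriptors would then serve as instances for the multiple-instance learning stage named in the title.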
{"title":"Object recognition using multiple instance learning with unclear object teaching","authors":"Yasuto Tamura, Hun-ok Lim","doi":"10.1109/ROMAN.2015.7333694","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333694","url":null,"abstract":"We propose an object recognition method for service robots under the constraint of uncertain object teaching by humans. In previous object recognition methods, the training phase required a large number of prepared images and also required the training data to not have a complex background. However, for robots to perform daily tasks, they should be able to recognize objects despite unclear object teaching by humans. In order to mitigate the effect of features in the background on object recognition, our proposed method classifies local features based on saliency from video images. In this paper, we demonstrate the efficacy of the proposed method in recognizing target objects despite unclear teaching by the user.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124121887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perception matters! Engagement in task orientated social robotics
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333665
L. J. Corrigan, Christina Basedow, Dennis Küster, Arvid Kappas, Christopher E. Peters, Ginevra Castellano
Engagement in task-orientated social robotics is a complex phenomenon, consisting of both task and social elements. Previous work in this area tends to focus on these aspects in isolation, without considering the positive or negative effects one might have on the other. We explore both, in an attempt to understand how engagement with the task might affect the social relationship with the robot, and vice versa. In this paper, we describe the analysis of participant self-report data collected during an exploratory pilot study used to evaluate users' “perception of engagement”. We discuss how the results of our analysis suggest that ultimately, it was the users' own perception of the robot's characteristics, such as friendliness, helpfulness, and attentiveness, that led to sustained engagement with both the task and the robot.
{"title":"Perception matters! Engagement in task orientated social robotics","authors":"L. J. Corrigan, Christina Basedow, Dennis Küster, Arvid Kappas, Christopher E. Peters, Ginevra Castellano","doi":"10.1109/ROMAN.2015.7333665","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333665","url":null,"abstract":"Engagement in task orientated social robotics is a complex phenomenon, consisting of both task and social elements. Previous work in this area tends to focus on these aspects in isolation without consideration for the positive or negative effects one might cause the other. We explore both, in an attempt to understand how engagement with the task might effect the social relationship with the robot, and vice versa. In this paper, we describe the analysis of participant self-report data collected during an exploratory pilot study used to evaluate users' “perception of engagement”. We discuss how the results of our analysis suggest that ultimately, it was the users' own perception of the robots' characteristics such as friendliness, helpfulness and attentiveness which led to sustained engagement with both the task and robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132373019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How motor speed of a robot face can influence the "older" user's perception of facial expression?
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333596
Adeline Chanseau, K. Lohan, R. Aylett
The aim of this paper is to give a short overview of how facial expressions enhance interaction with elderly people. Two studies were conducted. The first was an online questionnaire that gathered information on which types of expressions were more appropriate, and the second was an experiment in which participants met the robot. The results suggested a correlation between facial expressions and interaction enhancement. Participants preferred a certain motor speed depending on the facial expression presented, which shows that speed influences how expressions, and beyond them emotions, are understood. Finally, this paper presents some ways of enhancing interaction with the robot.
{"title":"How motor speed of a robot face can influence the \"older\" user's perception of facial expression?","authors":"Adeline Chanseau, K. Lohan, R. Aylett","doi":"10.1109/ROMAN.2015.7333596","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333596","url":null,"abstract":"The aim of this paper is to give a short overview on how facial expressions enhance interaction with elderly people. Two studies were done. The first one was an online questionnaire that gave information on what type of expressions were more appropriate and the second one was an experiment in which participants met the robot. This study's results suggested a correlation between facial expressions and interaction enhancement. Participants preferred a certain motor speed according to the facial expression presented which shows that speed has an influence on understanding expressions and beyond this, emotions. In the end, this paper presents some ways of enhancing interaction with the robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122957667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The cost-effectiveness of a robot measuring vital signs in a rural medical practice
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333668
E. Broadbent, Josephine R. Orejana, H. Ahn, J. Xie, P. Rouse, B. MacDonald
Robots have been proposed as a way to reduce the cost of providing healthcare in rural settings, but little research has yet tested this. This study investigated the feasibility and cost-effectiveness of a robot measuring routine vital signs in a family medicine clinic in a rural setting. The length of patient consultations was compared before (N = 85 patients) and after (N = 48 patients) a robot was deployed in the clinic. A Cafero touchscreen robot took the patient's vital signs prior to the consultation and transferred the results to the medical professional's computer. Time savings were calculated in New Zealand dollar terms and compared with the costs of the robot and its maintenance. Results showed that consultation lengths were cut by 18% on average (3 minutes and 13 seconds). If 20% of the clinic's annual consultations were augmented with the robot, this would translate to total annual savings of NZ$19,075. The annual cost of the robot was calculated to be NZ$9,400 over 5 years. Present-value calculations yield a benefit-cost ratio of 2.3. These results support the cost-effectiveness of the robot in a rural medical clinic. Further research is needed to improve the services provided by the robot and to test it in a larger trial.
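As a rough sanity check on the reported figures, the benefit-cost structure can be sketched as follows. The abstract does not state the discount rate, cost timing, or consultation volume behind the paper's present-value calculation, so the rate and horizon below are illustrative assumptions rather than the authors' inputs.

    # Sketch: present-value benefit-cost ratio from the abstract's figures.
    # Assumed: annual benefit of NZ$19,075 and annualized cost of NZ$9,400,
    # both over a 5-year horizon, discounted at an assumed 8% per year.
    def present_value(annual_amount, rate, years):
        return sum(annual_amount / (1.0 + rate) ** t for t in range(1, years + 1))

    ANNUAL_BENEFIT_NZD = 19075.0   # from the abstract
    ANNUAL_COST_NZD = 9400.0       # from the abstract
    DISCOUNT_RATE = 0.08           # assumption, not stated in the abstract
    YEARS = 5

    pv_benefits = present_value(ANNUAL_BENEFIT_NZD, DISCOUNT_RATE, YEARS)
    pv_costs = present_value(ANNUAL_COST_NZD, DISCOUNT_RATE, YEARS)
    print(f"BCR ~ {pv_benefits / pv_costs:.2f}")  # ~2.0 under these assumptions

Under these simple assumptions the ratio is about 2.0; the paper's reported 2.3 presumably reflects cost timing or other inputs not given in the abstract.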
{"title":"The cost-effectiveness of a robot measuring vital signs in a rural medical practice","authors":"E. Broadbent, Josephine R. Orejana, H. Ahn, J. Xie, P. Rouse, B. MacDonald","doi":"10.1109/ROMAN.2015.7333668","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333668","url":null,"abstract":"Robots have been proposed to reduce the costs of the provision of healthcare in rural settings, but as yet little research has tested this. This study investigated the feasibility and cost-effectiveness of a robot measuring routine vital signs in a family medicine clinic in a rural setting. The length of patient consultations was compared before (N = 85 patients) and after a robot was deployed in the clinic (N = 48 patients). A Cafero touchscreen robot took the patient's vital signs prior to the consultation and transferred the results to the medical professional's computer. Time-savings were calculated in New Zealand dollar terms and compared to the costs of the robot and its maintenance. Results showed that consultation lengths were cut by 18% on average (3 minutes and 13 seconds). If 20% of the clinics' annual consultations were augmented with the robot this translates to a total annual savings of NZ$19075. The annual cost of the robot was calculated to be NZ$9400 overs 5 years. Present value calculations of Benefit Cost result in a Benefit Cost ratio of 2.3. These results support the cost-effectiveness of the robot in a rural medical clinic. Further research is needed to improve the services provided by the robot and test it in a larger trial.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125547006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual friction model for control of cane robot
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333583
Shotaro Nakagawa, Shunki Itadera, Y. Hasegawa, K. Sekiyama, T. Fukuda, P. Di, Jian Huang, Qiang Huang
A cane-type robot, called the intelligent cane, has been developed to support elderly people while they walk. By supporting part of the user's body weight, the cane robot aims to reduce the load applied to the user's affected leg. Therefore, while the affected leg is the support leg, it is preferable that the cane robot stop so that it can support the user sufficiently. In our previous work, the cane robot was controlled based on the horizontal component of the force applied to it and the moment around the vertical axis. In this paper, a virtual friction force, proportional to the vertical component of the applied force, is proposed to improve the walking-assistance capability of the cane robot. In addition, the virtual friction coefficients are adjusted based on the user's state, which is inferred using a laser range finder. With the proposed method, the cane robot moves easily during the double-support phase, stops during the healthy-leg support phase, and supports the user reliably during the affected-leg support phase.
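The abstract describes the key relationship (a virtual friction force proportional to the vertical load, with coefficients switched by gait phase) without giving the control law itself. The following is a minimal admittance-style sketch under those assumptions; the coefficient values, damping term, and phase names are illustrative, not the paper's.

    # Sketch: virtual friction for a cane robot (admittance-style).
    # The robot resists motion with a friction force mu * F_vertical;
    # only the excess horizontal force produces a velocity command.
    import numpy as np

    # Assumed virtual friction coefficients per inferred gait phase.
    MU = {"double_support": 0.05, "healthy_leg": 0.6, "affected_leg": 0.8}
    DAMPING = 40.0  # N*s/m, illustrative admittance damping

    def velocity_command(f_horizontal, f_vertical, gait_phase):
        """Map the force applied to the cane to a planar velocity command."""
        mu = MU[gait_phase]
        friction = mu * max(f_vertical, 0.0)   # virtual friction magnitude
        magnitude = np.linalg.norm(f_horizontal)
        if magnitude <= friction:              # within static friction: stop
            return np.zeros(2)
        direction = f_horizontal / magnitude
        return (magnitude - friction) / DAMPING * direction

    # Example: pushing with 20 N while leaning with 150 N of vertical load.
    print(velocity_command(np.array([20.0, 0.0]), 150.0, "double_support"))

Switching mu per phase reproduces the behaviour described in the abstract: a small coefficient lets the cane move freely in double support, while larger coefficients make it hold its ground when either single leg is in stance.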
{"title":"Virtual friction model for control of cane robot","authors":"Shotaro Nakagawa, Shunki Itadera, Y. Hasegawa, K. Sekiyama, T. Fukuda, P. Di, Jian Huang, Qiang Huang","doi":"10.1109/ROMAN.2015.7333583","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333583","url":null,"abstract":"A cane-type robot called intelligent cane has been developed to support the elderly during walking. By supporting a part of a user's body weight, the cane robot aims to reduce a load applied to a user's affected leg. Therefore, while the user's affected leg is a support leg, it is preferable that the cane robot stops to sufficiently support the user. In our previous work, the cane robot is controlled based on horizontal component of force applied to the cane robot and moment around a vertical axis. In this paper, virtual friction force, which is proportional to vertical component of force, is proposed to improve a walking assistance capability of the cane robot. In addition, virtual frictional coefficients are arranged based on the user's state inferred by a laser range finder. By employing the proposed method, the cane robot moves easily in the both legs support phase, stops in the healthy leg support phase, and supports the user reliably in the affected leg support phase.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116243101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing a receptionist robot: Effect of voice and appearance on anthropomorphism
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333573
G. Trovato, J. J. G. Ramos, Helio Azevedo, A. Moroni, Silvia Magossi, H. Ishii, R. Simmons, A. Takanishi
Robots are possible candidates for performing tasks as helpers in activities of daily living in the future, and working as a receptionist is one possible employment. However, how a receptionist robot should look, sound, and behave needs to be investigated carefully in order to design a robot that is accepted and perceived positively by ordinary users. This paper describes a study on the anthropomorphism of a receptionist robot made for Brazilian people, depending on the appearance and the voice of the receptionist. The experiment was preceded by a preliminary survey about people's expectations regarding receptionists. In the main experiment, Brazilian participants interacted with a conversational agent and with a humanoid robot through a video conference. The two receptionists differed not only in physical appearance but also in voice, which could sound either human-like or robotic. The two receptionists gave the participants directions to rooms where they evaluated the receptionists through questionnaires covering anthropomorphism and uncanniness, among other concepts. The results gathered from both experiments provide useful hints for designing a receptionist robot.
{"title":"Designing a receptionist robot: Effect of voice and appearance on anthropomorphism","authors":"G. Trovato, J. J. G. Ramos, Helio Azevedo, A. Moroni, Silvia Magossi, H. Ishii, R. Simmons, A. Takanishi","doi":"10.1109/ROMAN.2015.7333573","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333573","url":null,"abstract":"Robots are possible candidates for performing tasks as helpers in activities of daily living in the future: working as a receptionist is one possible employment. However, the way the receptionist robot should appear, sound and behave needs to be investigated carefully, in order to design a robot which is accepted and perceived in a positive way by common users. This paper describes a study on anthropomorphism of a receptionist robot made for Brazilian people depending on the appearance and on the voice of the receptionist. This experiment was preceded by a preliminary survey about expectation of people regarding receptionists. The main experiment consisted in having Brazilian people interacting with a conversational agent and with a humanoid robot through a video conference. The two receptionists are not only different in physical appearance, but in the sound of the voice, too, which can be either human-like or robotic sound. The two receptionists gave indications to the participants to reach rooms where they could evaluate the receptionists through questionnaires concerning anthropomorphism and uncanniness among other concepts. The results gathered from both experiments provide useful hints to design a receptionist robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131185529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One-shot assistance estimation from expert demonstrations for a shared control wheelchair system
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333600
Ayse Küçükyilmaz, Y. Demiris
An emerging research problem in the field of assistive robotics is the design of methodologies that allow robots to provide human-like assistance to users. Within the rehabilitation domain in particular, a grand challenge is to program a robot to mimic the operation of an occupational therapist, intervening with the user when necessary so as to improve the therapeutic power of the assistive robotic system. We propose a method to estimate assistance policies from expert demonstrations in order to provide human-like intervention during navigation in a powered wheelchair setup. For this purpose, we constructed a setting in which a human offers assistance to the user over a haptic shared control system. The robot learns from human assistance demonstrations while the user actively drives the wheelchair in an unconstrained environment. We train a Gaussian process regression model to learn assistance commands given past and current actions of the user and the state of the environment. The results indicate that the model can estimate human assistance after only a single demonstration, i.e., in one shot, so that the robot can help the user by selecting the appropriate assistance in a human-like fashion.
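The abstract names the learning model (Gaussian process regression mapping past and current user actions and the environment state to assistance commands) but not its exact features or kernel. A minimal sketch of that mapping, using scikit-learn with an assumed RBF-plus-noise kernel and an assumed feature layout, might look like this:

    # Sketch: learn an assistance command from one haptic demonstration.
    # X rows: [recent user actions ..., current user action, environment state ...]
    # y: assistance recorded from the human expert (e.g., a haptic guidance value).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def fit_assistance_model(X_demo, y_demo):
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
        model = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        model.fit(X_demo, y_demo)  # a single demonstration = one trajectory of samples
        return model

    def assistance(model, x_now):
        mean, std = model.predict(x_now.reshape(1, -1), return_std=True)
        # The predictive std could gate intervention: assist only when confident.
        return mean[0], std[0]

    # Illustrative shapes: 200 timesteps of 6-D features from one demonstration.
    X_demo = np.random.randn(200, 6)
    y_demo = np.random.randn(200)
    model = fit_assistance_model(X_demo, y_demo)
    print(assistance(model, np.random.randn(6)))

The confidence-gating comment is an assumption on our part; the abstract does not say how (or whether) predictive uncertainty is used to decide when to intervene.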
{"title":"One-shot assistance estimation from expert demonstrations for a shared control wheelchair system","authors":"Ayse Küçükyilmaz, Y. Demiris","doi":"10.1109/ROMAN.2015.7333600","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333600","url":null,"abstract":"An emerging research problem in the field of assistive robotics is the design of methodologies that allow robots to provide human-like assistance to the users. Especially within the rehabilitation domain, a grand challenge is to program a robot to mimic the operation of an occupational therapist, intervening with the user when necessary so as to improve the therapeutic power of the assistive robotic system. We propose a method to estimate assistance policies from expert demonstrations to present human-like intervention during navigation in a powered wheelchair setup. For this purpose, we constructed a setting, where a human offers assistance to the user over a haptic shared control system. The robot learns from human assistance demonstrations while the user is actively driving the wheelchair in an unconstrained environment. We train a Gaussian process regression model to learn assistance commands given past and current actions of the user and the state of the environment. The results indicate that the model can estimate human assistance after only a single demonstration, i.e. in one-shot, so that the robot can help the user by selecting the appropriate assistance in a human-like fashion.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129976223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Needs analysis and benefit description of robotic arms for daily support
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333691
Hideyuki Tanaka, Yujin Wakita, Y. Matsumoto
For assistive robots to be used in daily life, it is important to analyze the daily life of users before development and to enable users to understand the benefit of the robot after development. In this paper, we focus on robotic arms for daily support and first present an analysis of daily life from the viewpoint of “hand use.” Specifications of robotic arms, such as the DoF and payload, can be determined based on this analysis. We then show how to describe the benefit of robotic arms. A prototype database for assistive devices is created to present the benefit of assistive robots together with other information, such as who can use them, how to use them, and where to use them. We utilize the ICF (International Classification of Functioning, Disability and Health), approved by the WHO, for this analysis and description.
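The abstract outlines the kind of record such a database would hold (device benefit, who can use it, how, and where, linked to ICF categories) without specifying a schema. A minimal sketch of one possible record structure, with hypothetical field names and example ICF codes, is:

    # Sketch: one possible record for an assistive-device benefit database.
    # Field names and ICF codes shown here are illustrative, not the paper's schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AssistiveDeviceRecord:
        name: str                       # e.g., a robotic arm for daily support
        dof: int                        # degrees of freedom from the needs analysis
        payload_kg: float               # payload requirement from the needs analysis
        icf_activity_codes: List[str]   # ICF activity/participation categories addressed
        who_can_use: str                # target user description
        how_to_use: str                 # operating method (joystick, voice, etc.)
        where_to_use: str               # environment (home, facility, ...)
        benefits: List[str] = field(default_factory=list)

    record = AssistiveDeviceRecord(
        name="daily-support robotic arm (example)",
        dof=6, payload_kg=1.5,
        icf_activity_codes=["d550 Eating", "d560 Drinking"],
        who_can_use="users with limited upper-limb function",
        how_to_use="joystick operation from a wheelchair",
        where_to_use="home dining table",
        benefits=["eat and drink without caregiver assistance"],
    )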
{"title":"Needs analysis and benefit description of robotic arms for daily support","authors":"Hideyuki Tanaka, Yujin Wakita, Y. Matsumoto","doi":"10.1109/ROMAN.2015.7333691","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333691","url":null,"abstract":"In order to let assistive robots be utilized in daily lives, it is important to analyze the daily life of users before development, and to enable users to understand the benefit of the robot after development. In this paper, we focus on the robotic arms for daily support, and firstly show the analysis of daily life in the viewpoint of “hand use.” The specification of robotic arms such as the DoF and payload can be determined based on the analysis. Then we show how to describe the benefit of robotic arms. A prototype of database for assistive device is created to show the benefit of assistive robots together with other information such as who can use, how to use, where to use. We utilize ICF approved by the WHO for such analysis and description.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132103020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Working together with industrial robots: Experiencing robots in a production environment
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333641
Thomas Meneweger, D. Wurhofer, Verena Fuchsberger, M. Tscheligi
This paper illustrates how workers in a semiconductor factory experience working together with industrial robots. Applying a narrative interview approach, we elicited reports on workers' personal experiences of working with these robots. This approach enabled us to learn how daily work with the robots is experienced, including workers' attitudes and the inherent meanings they attach to the robots. Following a thematic analysis approach, we analyzed the collected interview data and identified rivalry with the robots, adaptation to the robots' behavior, perceived reasonability, and knowledge acquisition as salient aspects of human-robot interaction in the factory. Our findings indicate how human-robot cooperation can be improved in terms of enhancing workers' everyday experiences with robots, for example by providing a feeling of control or emphasizing human competences.
{"title":"Working together with industrial robots: Experiencing robots in a production environment","authors":"Thomas Meneweger, D. Wurhofer, Verena Fuchsberger, M. Tscheligi","doi":"10.1109/ROMAN.2015.7333641","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333641","url":null,"abstract":"This paper illustrates how workers in a semiconductor factory experience working together with industrial robots. Applying a narrative interview approach, we evoked reports on workers' personal experiences of working together with these robots. This specific approach enabled us to gain knowledge of how daily work with these robots is experienced, including workers' attitudes and inherent meanings regarding the robots. Following a thematic analysis approach, we analyzed the collected interview data and identified rivalry with the robots, adaption towards the robot's behavior, perceived reasonability and knowledge acquisition as salient aspects regarding human-robot interaction in the factory. Our findings indicate how human-robot cooperation can be improved in terms of enhancing workers' everyday experiences with robots, for example, by providing a feeling of control or emphasizing human competences.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128307165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Method for placing a rope in a target shape and its application to a clove hitch
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333617
Masaru Takizawa, S. Kudoh, T. Suehiro
This paper describes a method for generating hand trajectories that enable a robot to place a rope on a table in a target shape. In many aspects of rope-work, e.g., making knots, hitches, and bends, it is important to control not only the topological relations (e.g., a tangle of string) but also the shape (e.g., the position and size). The shape of the rope on the table is the starting stage of the rope-work, in that it affects the result, e.g., the position and size of a knot. Thus, hand trajectories must be generated automatically from the target rope shape when a robot arm executes rope-work. To generate trajectories from a tabletop target shape, it is necessary to consider the positional relationship between the robot hand and the touchdown point, i.e., the point at which the rope is laid down on the table. In this study, we propose a rope-shape model that can be used in limited situations and derive hand trajectories from the model in a simple manner. The parameter needed for the model is easily identified. An experiment of making a loop was conducted to verify the effectiveness and limitations of the proposed method. As further verification, the method was applied to making a clove hitch with a dual-arm robot. In the starting stage of tying the clove hitch, two loops were created in a specified shape, and the clove hitch was successfully tied.
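The abstract does not give the rope-shape model or its single parameter, so the following is only a geometric sketch of the general idea: sample the target tabletop curve, and place the hand above and ahead of each touchdown point by an assumed fixed offset (standing in for the model parameter the paper identifies).

    # Sketch: hand trajectory for laying a rope along a target tabletop curve.
    # The hand-to-touchdown offset is an assumed stand-in for the single
    # model parameter mentioned in the abstract.
    import numpy as np

    def loop_curve(radius=0.10, n=100):
        """Target shape: a circular loop of the given radius on the table (z = 0)."""
        t = np.linspace(0.0, 2.0 * np.pi, n)
        return np.stack([radius * np.cos(t), radius * np.sin(t), np.zeros(n)], axis=1)

    def hand_trajectory(curve, hand_height=0.06, lead_offset=0.03):
        """Place the hand hand_height above the table and lead_offset ahead of
        the touchdown point along the curve tangent."""
        tangents = np.gradient(curve, axis=0)
        tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
        hand = curve + lead_offset * tangents
        hand[:, 2] = hand_height
        return hand

    trajectory = hand_trajectory(loop_curve())
    print(trajectory[:3])

For the clove hitch in the paper, two such loops in a specified shape would be laid out before the hitch itself is tied; the values above (10 cm loop, 6 cm hand height, 3 cm lead) are purely illustrative.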
{"title":"Method for placing a rope in a target shape and its application to a clove hitch","authors":"Masaru Takizawa, S. Kudoh, T. Suehiro","doi":"10.1109/ROMAN.2015.7333617","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333617","url":null,"abstract":"This paper describes a method for generating hand trajectories for a robot to enable it to place a rope on a table in a target shape. In many aspects of rope-work, e.g., making knots, hitches, and bends, it is important to control not only the topological relations (e.g., a tangle of string), but also to control the shape (e.g., the position, size, and so on). The shape of the rope on a table is the starting stage of the rope-work in that it affects the result of the rope-work, e.g., the position and size of a knot. Thus, it is necessary to generate hand trajectories automatically according to the target rope-shape when executing rope-work with a robot arm. To generate trajectories from a tabletop target shape, it is necessary to consider a positional relationship between the robot hand and the touchdown point, which is the point that the rope is placed on the table. In this study we propose a rope-shape model which can be used in limited situations, and derive hand trajectories from the model in a simple manner. The parameter needed for the model is easily identified. An experiment of making a loop was conducted to verify the effectiveness and limitations of the proposed method. As further verification, the method was applied to making a clove hitch using a dual-arm robot. In the starting stage of tying a clove hitch, two loops were created in a specified shape, and the clove hitch was successfully tied.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130664771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}