{"title":"一种用于增强物体放置的多模态方法","authors":"P. Srimal, A. Jayasekara","doi":"10.1109/NCTM.2017.7872821","DOIUrl":null,"url":null,"abstract":"Voice commands have been used as the basic method of interaction between humans and robots over the years. Voice interaction is natural and require no additional technical knowledge. But while using voice commands humans frequently use uncertain information. In the case of object manipulation on a table, frequently used uncertain terms “Left”, “Right”, “Middle”, “Front”…etc. These terms fail to depict an exact location on the table and the interpretation is governed by the robots point of view. Depending solely on vocal cues is not ideal as it requires the users to explain the exact location with more words and phrases making the interaction process cumbersome and less human like. However, using hand gestures to pinpoint the location is as natural as using the voice commands and frequently used when manipulating items on a surface. When compared to voice commands use of hand gestures is a more direct and less cumbersome approach. But when used alone hand gestures can result in errors while extracting the pointed location making the user dissatisfied. This paper proposes a multi-modal interaction method which uses hand gestures combined with voice commands to interpret uncertain information when placing an object on a table. Two fuzzy inference systems have been used to interpret the uncertain terms related to the two axes of the table. The proposed system has been implemented on an assistive robot platform. Experiments have been conducted to analyze the behaviour of the system.","PeriodicalId":343372,"journal":{"name":"2017 6th National Conference on Technology and Management (NCTM)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"A multi-modal approach for enhancing object placement\",\"authors\":\"P. Srimal, A. Jayasekara\",\"doi\":\"10.1109/NCTM.2017.7872821\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Voice commands have been used as the basic method of interaction between humans and robots over the years. Voice interaction is natural and require no additional technical knowledge. But while using voice commands humans frequently use uncertain information. In the case of object manipulation on a table, frequently used uncertain terms “Left”, “Right”, “Middle”, “Front”…etc. These terms fail to depict an exact location on the table and the interpretation is governed by the robots point of view. Depending solely on vocal cues is not ideal as it requires the users to explain the exact location with more words and phrases making the interaction process cumbersome and less human like. However, using hand gestures to pinpoint the location is as natural as using the voice commands and frequently used when manipulating items on a surface. When compared to voice commands use of hand gestures is a more direct and less cumbersome approach. But when used alone hand gestures can result in errors while extracting the pointed location making the user dissatisfied. This paper proposes a multi-modal interaction method which uses hand gestures combined with voice commands to interpret uncertain information when placing an object on a table. Two fuzzy inference systems have been used to interpret the uncertain terms related to the two axes of the table. 
The proposed system has been implemented on an assistive robot platform. Experiments have been conducted to analyze the behaviour of the system.\",\"PeriodicalId\":343372,\"journal\":{\"name\":\"2017 6th National Conference on Technology and Management (NCTM)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 6th National Conference on Technology and Management (NCTM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NCTM.2017.7872821\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 6th National Conference on Technology and Management (NCTM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCTM.2017.7872821","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A multi-modal approach for enhancing object placement
Voice commands have been used as the basic method of interaction between humans and robots over the years. Voice interaction is natural and requires no additional technical knowledge. However, when using voice commands, humans frequently convey uncertain information. When manipulating objects on a table, users often employ vague terms such as "left", "right", "middle" and "front". These terms do not specify an exact location on the table, and their interpretation depends on the robot's point of view. Relying solely on vocal cues is therefore not ideal, as it forces users to describe the exact location with additional words and phrases, making the interaction cumbersome and less human-like. Using hand gestures to indicate a location is as natural as using voice commands and is frequently done when manipulating items on a surface; compared to voice commands, gestures are a more direct and less cumbersome approach. However, when used alone, hand gestures can introduce errors in extracting the pointed location, leaving the user dissatisfied. This paper proposes a multi-modal interaction method that combines hand gestures with voice commands to interpret uncertain information when placing an object on a table. Two fuzzy inference systems are used to interpret the uncertain terms associated with the two axes of the table. The proposed system has been implemented on an assistive robot platform, and experiments have been conducted to analyze its behaviour.
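The abstract does not detail the fuzzy inference systems, so the following is only a minimal sketch, under stated assumptions, of how one of the two single-axis interpreters might look: trapezoidal membership functions over a normalized left-right axis for the terms "left", "middle" and "right", centroid defuzzification to a crisp coordinate, and a simple weighted blend with the location extracted from the pointing gesture. The membership shapes, the names trap, interpret_term and fuse, and the term_weight parameter are illustrative assumptions, not the authors' implementation; a second, analogous interpreter would cover the depth axis (e.g. "front", "middle", "back").

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership: rises over [a, b], is 1 on [b, c], falls over [c, d]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0) if b > a else (x >= a).astype(float)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0) if d > c else (x <= d).astype(float)
    return np.minimum(rise, fall)

# Hypothetical membership functions over the table's normalized left-right axis
# (0.0 = left edge, 1.0 = right edge, from the robot's point of view).
TERMS_X = {
    "left":   lambda x: trap(x, 0.0, 0.0, 0.1, 0.5),
    "middle": lambda x: trap(x, 0.2, 0.45, 0.55, 0.8),
    "right":  lambda x: trap(x, 0.5, 0.9, 1.0, 1.0),
}

def interpret_term(term, resolution=201):
    """Turn a spoken spatial term into a crisp x-coordinate via centroid defuzzification."""
    xs = np.linspace(0.0, 1.0, resolution)
    mu = TERMS_X[term](xs)
    return float(np.sum(xs * mu) / np.sum(mu))

def fuse(term, pointed_x, term_weight=0.5):
    """Blend the voice-derived estimate with the location extracted from the pointing gesture.

    term_weight is a hypothetical confidence parameter; the paper does not state
    how the two modalities are weighted against each other.
    """
    return term_weight * interpret_term(term) + (1.0 - term_weight) * pointed_x

if __name__ == "__main__":
    # e.g. the user says "put it on the left" while pointing at x = 0.30
    print(round(interpret_term("left"), 3))  # voice cue alone
    print(round(fuse("left", 0.30), 3))      # voice cue corrected by the gesture
```

Centroid defuzzification is a common default for Mamdani-style fuzzy systems; the paper may well use different membership functions, defuzzification, or fusion rules than this sketch assumes.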