Programming Language Support for Multisensor Data Fusion: The Splash Approach
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144942
Soonhyun Noh, Cheonghwa Lee, Myungsun Kim, Seongsoo Hong
We present the Splash programming framework to support the effective implementation of multisensor data fusion. Multisensor data fusion has been widely exploited in autonomous machines since it outperforms single-sensor algorithms in terms of accuracy, reliability, and robustness. Since developers have long lacked programming language support for multisensor data fusion, we offer a dedicated Splash language construct along with formal semantics for it. Specifically, we analyze the structural characteristics of multisensor data fusion algorithms and derive the technical issues that the language construct must tackle. We then give a detailed account of the language construct along with its formal semantics. Finally, we validate its utility and effectiveness by applying it to a lane-keeping assist system.
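The abstract does not reproduce Splash syntax, but the core problem any fusion construct must express, aligning samples from independently timed sensor streams before fusing them, can be sketched in plain Python. Everything here (`SensorBuffer`, `fuse`, the 50 ms tolerance) is an illustrative assumption, not a Splash API.

```python
# Illustrative sketch (not Splash syntax): time-aligned fusion of two
# independently sampled sensor streams, the pattern a dedicated fusion
# language construct must capture.
from collections import deque

class SensorBuffer:
    """Keeps recent (timestamp, value) samples from one sensor."""
    def __init__(self, maxlen=100):
        self.samples = deque(maxlen=maxlen)

    def push(self, t, value):
        self.samples.append((t, value))

    def nearest(self, t):
        """Return the sample closest in time to t, or None if empty."""
        return min(self.samples, key=lambda s: abs(s[0] - t), default=None)

def fuse(camera, lidar, t, tolerance=0.05):
    """Fuse the two streams at time t only if both have a sample within
    `tolerance` seconds of t; otherwise no fusion is possible."""
    c, l = camera.nearest(t), lidar.nearest(t)
    if c is None or l is None:
        return None
    if abs(c[0] - t) > tolerance or abs(l[0] - t) > tolerance:
        return None  # streams too far out of sync at t
    return {"t": t, "camera": c[1], "lidar": l[1]}

camera, lidar = SensorBuffer(), SensorBuffer()
camera.push(0.98, "frame_0098")
lidar.push(1.01, "scan_0101")
print(fuse(camera, lidar, t=1.0))  # both within 50 ms -> fused record
```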
{"title":"Programming Language Support for Multisensor Data Fusion: The Splash Approach*","authors":"Soonhyun Noh, Cheonghwa Lee, Myungsun Kim, Seongsoo Hong","doi":"10.1109/UR49135.2020.9144942","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144942","url":null,"abstract":"We present the Splash programming framework to support the effective implementation of multisensor data fusion. Multisensor data fusion has been widely exploited in autonomous machines since it outperforms algorithms using only a single sensor, in terms of accuracy, reliability and robustness. Knowing that developers have long been lacking programming language support for multisensor data fusion, we offer a dedicated Splash language construct along with formal semantics for multisensor data fusion. Specifically, we analyze the structural characteristics of multisensor data fusion algorithms and derive technical issues that the language construct must tackle. We then give a detailed account of the language construct along with its formal semantics. Finally, we validate its utility and effectiveness via its application to a lane keeping assist system.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132837446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prominent Attribute Modification using Attribute Dependent Generative Adversarial Network
Pub Date: 2020-04-01 | DOI: 10.1109/UR49135.2020.9144907
N. Islam, Sungmin Lee, Jaebyung Park
Modifying facial images to exhibit desired attributes is an important though challenging task in computer vision, where the aim is to modify single or multiple attributes of a face image. Existing methods are either attribute-independent approaches, where the modification is done in the latent representation, or attribute-dependent approaches. Attribute-independent methods are limited in performance because they require paired data for changing the desired attributes. Moreover, the attribute-independent constraint may cause a loss of information and hence fail to generate the required attributes in the face image. In contrast, attribute-dependent approaches are effective because they can modify the required features while preserving the remaining information in the given image. However, they are sensitive and require careful model design to generate high-quality results. To address this problem, we propose an attribute-dependent face modification approach based on two generators and two discriminators that utilize both binary and real-valued representations of the attributes and, in return, generate high-quality attribute modification results. Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while keeping other facial details intact.
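The abstract does not give the architecture of the two-generator, two-discriminator model, but the attribute-conditioning pattern it relies on can be sketched for a single generator in plain PyTorch. All layer sizes, the attribute count, and the class names below are assumptions for illustration, not the paper's design.

```python
# Minimal sketch of attribute-conditioned generation (assumed shapes and
# layer sizes; not the paper's architecture). A binary attribute vector
# is tiled over the image and concatenated channel-wise, so the
# generator sees both pixels and target attributes.
import torch
import torch.nn as nn

N_ATTRS = 5  # e.g. CelebA attributes such as "Smiling", "Blond_Hair"

class AttrGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + N_ATTRS, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # output image in [-1, 1]
        )

    def forward(self, img, attrs):
        # Broadcast the attribute vector to a per-pixel feature map.
        b, _, h, w = img.shape
        attr_map = attrs.view(b, N_ATTRS, 1, 1).expand(b, N_ATTRS, h, w)
        return self.net(torch.cat([img, attr_map], dim=1))

gen = AttrGenerator()
img = torch.randn(8, 3, 64, 64)                    # batch of face images
attrs = torch.randint(0, 2, (8, N_ATTRS)).float()  # binary targets
edited = gen(img, attrs)                           # (8, 3, 64, 64)
```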
{"title":"Prominent Attribute Modification using Attribute Dependent Generative Adversarial Network","authors":"N. Islam, Sungmin Lee, Jaebyung Park","doi":"10.1109/UR49135.2020.9144907","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144907","url":null,"abstract":"Modifying the facial images with desired attributes is important, though challenging tasks in computer vision, where it aims to modify single or multiple attributes of the face image. Some of the existing methods are either based on attribute independent approaches where the modification is done in the latent representation or attribute dependent approaches. The attribute independent methods are limited in performance as they require the desired paired data for changing the desired attributes. Secondly, the attribute independent constraint may result in the loss of information and, hence, fail in generating the required attributes in the face image. In contrast, the attribute dependent approaches are effective as these approaches are capable of modifying the required features along with preserving the information in the given image. However, attribute dependent approaches are sensitive and require a careful model design in generating high-quality results. To address this problem, we propose an attribute dependent face modification approach. The proposed approach is based on two generators and two discriminators that utilize the binary as well as the real representation of the attributes and, in return, generate high-quality attribute modification results. Experiments on the CelebA dataset show that our method effectively performs the multiple attribute editing with preserving other facial details intactly.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124593507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Point Cloud-Based Method for Automatic Groove Detection and Trajectory Generation of Robotic Arc Welding Tasks
Pub Date: 2020-04-01 | DOI: 10.1109/UR49135.2020.9144861
Rui Peng, D. Navarro-Alarcon, Victor Wu, Wen Yang
In this paper, in pursuit of high-efficiency robotic arc welding, we propose a method based on point clouds acquired by an RGB-D sensor. The method consists of two parts: welding groove detection and 3D welding trajectory generation. The welding scene is represented as a 3D point cloud. By focusing on the geometric features of the welding groove, the detection algorithm adapts well to different workpieces with a V-type welding groove. A 3D welding trajectory comprising 6-DOF poses along the groove is then generated for the robotic manipulator. With an acceptable trajectory generation error, the manipulator can drive the welding torch along the trajectory and execute welding tasks. We also present details of the integrated robotic system. Experimental results demonstrate the practical value of the presented welding robot system.
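The abstract does not detail the detection algorithm; one plausible geometric reading of "V-type groove detection" is fitting the two groove faces as planes and taking their intersection line as the weld seam. The Open3D sketch below illustrates that idea on a synthetic cloud; the thresholds and the test data are assumptions, not the paper's method.

```python
# Geometric sketch of V-groove detection (illustrative, not the paper's
# algorithm): fit the two groove faces with RANSAC planes, then take
# their intersection line as the weld seam direction.
import numpy as np
import open3d as o3d

def detect_groove_seam(pcd):
    # First plane (one groove face).
    plane1, inliers1 = pcd.segment_plane(distance_threshold=0.002,
                                         ransac_n=3, num_iterations=1000)
    rest = pcd.select_by_index(inliers1, invert=True)
    # Second plane (the other face) from the remaining points.
    plane2, _ = rest.segment_plane(distance_threshold=0.002,
                                   ransac_n=3, num_iterations=1000)
    n1, n2 = np.array(plane1[:3]), np.array(plane2[:3])
    seam_dir = np.cross(n1, n2)  # direction of the intersection line
    return seam_dir / np.linalg.norm(seam_dir)

# Synthetic V-groove: two planar faces meeting along the x-axis.
t = np.random.rand(2000, 2)
face1 = np.c_[t[:, 0], t[:, 1], t[:, 1]]    # plane z = +y
face2 = np.c_[t[:, 0], -t[:, 1], t[:, 1]]   # plane z = -y
cloud = o3d.geometry.PointCloud(
    o3d.utility.Vector3dVector(np.vstack([face1, face2])))
print(detect_groove_seam(cloud))  # ~ [+-1, 0, 0]
```

Sampling waypoints along this line and orienting the torch along the bisector of the two face normals would then yield the kind of 6-DOF poses the trajectory generation step requires.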
{"title":"A Point Cloud-Based Method for Automatic Groove Detection and Trajectory Generation of Robotic Arc Welding Tasks","authors":"Rui Peng, D. Navarro-Alarcon, Victor Wu, Wen Yang","doi":"10.1109/UR49135.2020.9144861","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144861","url":null,"abstract":"In this paper, in order to pursue high-efficiency robotic arc welding tasks, we propose a method based on point cloud acquired by an RGB-D sensor. The method consists of two parts: welding groove detection and 3D welding trajectory generation. The actual welding scene could be displayed in 3D point cloud format. Focusing on the geometric feature of the welding groove, the detection algorithm is capable of adapting well to different welding workpieces with a V-type welding groove. Meanwhile, a 3D welding trajectory involving 6-DOF poses of the welding groove for robotic manipulator motion is generated. With an acceptable error in trajectory generation, the robotic manipulator could drive the welding torch to follow the trajectory and execute welding tasks. In this paper, details of the integrated robotic system are also presented. Experimental results prove application value of the presented welding robotic system.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131078777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Experiments with a Low-Cost Single-Motor Modular Aquatic Robot
Pub Date: 2020-02-01 | DOI: 10.1109/UR49135.2020.9144872
G. Knizhnik, Mark H. Yim
We present a novel design for a low-cost robotic boat powered by a single actuator, useful for both modular and swarming applications. The boat uses the conservation of angular momentum and passive flippers to convert the motion of a single motor into an adjustable paddling motion for propulsion and steering. We develop design criteria for modularity and swarming and present a prototype implementing these criteria. We identify significant mechanical sensitivities with the presented design, theorize about the cause of the sensitivities, and present an improved design for future work.
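The paddling mechanism can be made concrete with a back-of-the-envelope relation (our illustration; the paper's dynamic model is not given in the abstract): absent external yaw torque, the angular momentum of rotor plus hull is conserved, so accelerating the internal rotor one way swings the hull, and the passive flippers attached to it, the other way.

```latex
% Illustrative relation (not taken from the paper): conservation of
% angular momentum about the yaw axis for a rotor-plus-hull system.
\[
  I_{\mathrm{rotor}}\,\dot{\omega}_{\mathrm{rotor}}
  + I_{\mathrm{hull}}\,\dot{\omega}_{\mathrm{hull}} = 0
  \;\Longrightarrow\;
  \dot{\omega}_{\mathrm{hull}}
  = -\frac{I_{\mathrm{rotor}}}{I_{\mathrm{hull}}}\,\dot{\omega}_{\mathrm{rotor}}
\]
```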
{"title":"Design and Experiments with a Low-Cost Single-Motor Modular Aquatic Robot","authors":"G. Knizhnik, Mark H. Yim","doi":"10.1109/UR49135.2020.9144872","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144872","url":null,"abstract":"We present a novel design for a low-cost robotic boat powered by a single actuator, useful for both modular and swarming applications. The boat uses the conservation of angular momentum and passive flippers to convert the motion of a single motor into an adjustable paddling motion for propulsion and steering. We develop design criteria for modularity and swarming and present a prototype implementing these criteria. We identify significant mechanical sensitivities with the presented design, theorize about the cause of the sensitivities, and present an improved design for future work.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131688758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Markerless Deep Learning-based 6 Degrees of Freedom Pose Estimation for Mobile Robots using RGB Data
Pub Date: 2020-01-16 | DOI: 10.1109/UR49135.2020.9144789
Linh Kästner, D. Dimitrov, Jens Lambrecht
Augmented Reality has been the subject of various integration efforts in industry due to its ability to enhance human-machine interaction and understanding. Neural networks have achieved remarkable results in computer vision, which bears great potential to assist and facilitate an enhanced Augmented Reality experience. However, most neural networks are computationally intensive and demand substantial processing power, and are thus not suitable for deployment on Augmented Reality devices. In this work, we propose a method to deploy state-of-the-art neural networks for real-time 3D object localization on Augmented Reality devices. As a result, we provide a more automated method for calibrating AR devices with mobile robotic systems. To accelerate the calibration process and enhance the user experience, we focus on fast 2D detection approaches that extract the 3D pose of the object quickly and accurately from 2D input alone. The results are integrated into an Augmented Reality application for intuitive robot control and sensor data visualization. For the 6D annotation of 2D images, we developed an annotation tool which is, to our knowledge, the first such open-source tool available. We achieve feasible results that are generally applicable to any AR device, making this work promising for further research on combining computationally demanding neural networks with Internet of Things devices.
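A standard way to recover a 6-DOF pose from 2D input alone (our illustration; the paper's pipeline may differ) is to detect 2D keypoints of the object with a network and solve a Perspective-n-Point problem against the known 3D model points. All coordinates and the camera intrinsics below are made-up values for demonstration.

```python
# 2D-to-6DOF recovery via Perspective-n-Point (illustrative; not
# necessarily the paper's pipeline). Given 2D keypoint detections and
# the matching 3D model points, solvePnP yields rotation + translation.
import numpy as np
import cv2

# 3D keypoints of the object in its own frame (assumed model, meters).
object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                       [0, 0, 0.1], [0.1, 0.1, 0]], dtype=np.float64)
# Corresponding 2D detections from the network (assumed pixels).
image_pts = np.array([[320, 240], [400, 238], [322, 165],
                      [318, 300], [402, 168]], dtype=np.float64)
# Pinhole intrinsics (assumed calibration, no lens distortion).
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    print("object position in camera frame:", tvec.ravel())
```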
{"title":"A Markerless Deep Learning-based 6 Degrees of Freedom Pose Estimation for Mobile Robots using RGB Data","authors":"Linh Kästner, D. Dimitrov, Jens Lambrecht","doi":"10.1109/UR49135.2020.9144789","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144789","url":null,"abstract":"Augmented Reality has been subject to various integration efforts within industries due to its ability to enhance human machine interaction and understanding. Neural networks have achieved remarkable results in areas of computer vision, which bear great potential to assist and facilitate an enhanced Augmented Reality experience. However, most neural networks are computationally intensive and demand huge processing power, thus are not suitable for deployment on Augmented Reality devices. In this work, we propose a method to deploy state of the art neural networks for real time 3D object localization on augmented reality devices. As a result, we provide a more automated method of calibrating the AR devices with mobile robotic systems. To accelerate the calibration process and enhance user experience, we focus on fast 2D detection approaches which are extracting the 3D pose of the object fast and accurately by using only 2D input. The results are implemented into an Augmented Reality application for intuitive robot control and sensor data visualization. For the 6D annotation of 2D images, we developed an annotation tool, which is, to our knowledge, the first open source tool to be available. We achieve feasible results which are generally applicable to any AR device, thus making this work promising for further research in combining high demanding neural networks with Internet of Things devices.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114964355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}