
2020 17th International Conference on Ubiquitous Robots (UR): Latest Publications

Programming Language Support for Multisensor Data Fusion: The Splash Approach*
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144942
Soonhyun Noh, Cheonghwa Lee, Myungsun Kim, Seongsoo Hong
We present the Splash programming framework to support the effective implementation of multisensor data fusion. Multisensor data fusion has been widely exploited in autonomous machines since it outperforms algorithms that use only a single sensor in terms of accuracy, reliability, and robustness. Since developers have long lacked programming language support for multisensor data fusion, we offer a dedicated Splash language construct together with formal semantics for multisensor data fusion. Specifically, we analyze the structural characteristics of multisensor data fusion algorithms and derive the technical issues that the language construct must address. We then give a detailed account of the language construct along with its formal semantics. Finally, we validate its utility and effectiveness by applying it to a lane keeping assist system.
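The abstract does not show Splash's syntax, but the kind of stream-synchronization bookkeeping that such a fusion construct is meant to hide can be illustrated with a small, generic sketch. The class, its tolerance parameter, and the callback below are hypothetical illustrations, not part of the Splash language:

```python
# A minimal, hypothetical sketch (not the Splash API) of the bookkeeping a
# dedicated fusion construct abstracts away: pairing samples from two sensor
# streams whose timestamps fall within a tolerance window before handing
# them to a fusion callback.
from collections import deque

class ApproximateTimeFuser:
    def __init__(self, tolerance_s, on_fused):
        self.tolerance_s = tolerance_s   # max timestamp gap for a valid pair
        self.on_fused = on_fused         # user-supplied fusion callback
        self.queues = (deque(), deque()) # one buffer per sensor stream

    def push(self, stream_idx, timestamp, sample):
        """Buffer a sample from stream 0 or 1 and try to emit a fused pair."""
        self.queues[stream_idx].append((timestamp, sample))
        self._try_fuse()

    def _try_fuse(self):
        q0, q1 = self.queues
        while q0 and q1:
            t0, s0 = q0[0]
            t1, s1 = q1[0]
            if abs(t0 - t1) <= self.tolerance_s:
                q0.popleft()
                q1.popleft()
                self.on_fused(min(t0, t1), s0, s1)
            elif t0 < t1:
                q0.popleft()   # stale sample on stream 0, drop it
            else:
                q1.popleft()   # stale sample on stream 1, drop it

# Usage: fuse camera and lidar samples whose timestamps differ by <= 30 ms.
fuser = ApproximateTimeFuser(0.03, lambda t, cam, lidar: print(t, cam, lidar))
fuser.push(0, 0.00, "cam_frame_0")
fuser.push(1, 0.01, "lidar_scan_0")   # paired with cam_frame_0 and emitted
```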
Cited by: 1
Prominent Attribute Modification using Attribute Dependent Generative Adversarial Network
Pub Date: 2020-04-01 DOI: 10.1109/UR49135.2020.9144907
N. Islam, Sungmin Lee, Jaebyung Park
Modifying facial images with desired attributes is an important yet challenging task in computer vision, where the goal is to change single or multiple attributes of a face image. Existing methods are either attribute-independent approaches, in which the modification is performed in the latent representation, or attribute-dependent approaches. Attribute-independent methods are limited in performance because they require paired data for changing the desired attributes. Moreover, the attribute-independent constraint may cause a loss of information and hence fail to generate the required attributes in the face image. In contrast, attribute-dependent approaches are effective, as they can modify the required features while preserving the information in the given image. However, attribute-dependent approaches are sensitive and require careful model design to generate high-quality results. To address this problem, we propose an attribute-dependent face modification approach. The proposed approach is based on two generators and two discriminators that utilize both the binary and the real-valued representation of the attributes and, in return, generate high-quality attribute modification results. Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while keeping other facial details intact.
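As a rough illustration of attribute-conditioned generation in general, not the two-generator, two-discriminator architecture proposed in the paper, the following PyTorch sketch conditions a generator on a target attribute vector and gives the discriminator an auxiliary attribute-prediction head; all layer sizes and names are hypothetical:

```python
# A rough, hypothetical sketch of attribute-conditioned generation; the
# fully connected layout and sizes are illustrative only and are NOT the
# architecture proposed in the paper.
import torch
import torch.nn as nn

class AttrGenerator(nn.Module):
    def __init__(self, img_dim=64 * 64 * 3, attr_dim=5, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + attr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, img_dim), nn.Tanh(),
        )

    def forward(self, img, attrs):
        # Concatenate the flattened image with the target attribute vector.
        x = torch.cat([img.flatten(1), attrs], dim=1)
        return self.net(x).view_as(img)

class AttrDiscriminator(nn.Module):
    def __init__(self, img_dim=64 * 64 * 3, attr_dim=5, hidden=512):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.real_fake = nn.Linear(hidden, 1)         # adversarial head
        self.attr_head = nn.Linear(hidden, attr_dim)  # attribute prediction head

    def forward(self, img):
        h = self.body(img.flatten(1))
        return self.real_fake(h), self.attr_head(h)

# Usage: edit a batch of images toward binary target attribute codes.
imgs = torch.randn(8, 3, 64, 64)
target_attrs = torch.randint(0, 2, (8, 5)).float()
edited = AttrGenerator()(imgs, target_attrs)
validity, pred_attrs = AttrDiscriminator()(edited)
```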
Cited by: 1
A Point Cloud-Based Method for Automatic Groove Detection and Trajectory Generation of Robotic Arc Welding Tasks
Pub Date: 2020-04-01 DOI: 10.1109/UR49135.2020.9144861
Rui Peng, D. Navarro-Alarcon, Victor Wu, Wen Yang
In this paper, to achieve efficient robotic arc welding, we propose a method based on point clouds acquired by an RGB-D sensor. The method consists of two parts: welding groove detection and 3D welding trajectory generation. The actual welding scene can be displayed in 3D point cloud format. By focusing on the geometric features of the welding groove, the detection algorithm adapts well to different workpieces with a V-type welding groove. A 3D welding trajectory consisting of 6-DOF poses along the welding groove is then generated for robotic manipulator motion. With an acceptable trajectory generation error, the robotic manipulator can drive the welding torch to follow the trajectory and execute welding tasks. Details of the integrated robotic system are also presented, and experimental results demonstrate the application value of the presented welding robotic system.
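The paper's detection and trajectory-generation pipeline is not spelled out in the abstract; the simplified numpy sketch below only illustrates the general idea of extracting a groove path from a point cloud by slicing it along an assumed weld direction and taking the deepest point of each V-shaped cross-section. The slice width, axis conventions, and synthetic workpiece are made-up assumptions:

```python
# A simplified, hypothetical sketch of V-groove localization from a point
# cloud: slice the cloud along the presumed weld direction (x), take the
# lowest point (minimum z) in each slice as the groove bottom, and chain
# those points into a rough welding path. The paper's actual detection and
# 6-DOF trajectory generation go well beyond this heuristic.
import numpy as np

def groove_trajectory(points, slice_width=0.005):
    """points: (N, 3) array of x, y, z coordinates in meters."""
    x_min, x_max = points[:, 0].min(), points[:, 0].max()
    edges = np.arange(x_min, x_max + slice_width, slice_width)
    waypoints = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_slice = points[(points[:, 0] >= lo) & (points[:, 0] < hi)]
        if len(in_slice) == 0:
            continue
        # For a V-groove, the deepest point of the cross-section
        # approximates the groove centerline.
        waypoints.append(in_slice[np.argmin(in_slice[:, 2])])
    return np.array(waypoints)

# Usage with a synthetic V-shaped workpiece surface.
xs = np.random.uniform(0.0, 0.2, 5000)
ys = np.random.uniform(-0.05, 0.05, 5000)
zs = np.abs(ys)              # V-profile: depth proportional to |y|
cloud = np.column_stack([xs, ys, zs])
path = groove_trajectory(cloud)   # waypoints near y ≈ 0, the groove bottom
```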
Cited by: 10
Design and Experiments with a Low-Cost Single-Motor Modular Aquatic Robot
Pub Date: 2020-02-01 DOI: 10.1109/UR49135.2020.9144872
G. Knizhnik, Mark H. Yim
We present a novel design for a low-cost robotic boat powered by a single actuator, useful for both modular and swarming applications. The boat uses the conservation of angular momentum and passive flippers to convert the motion of a single motor into an adjustable paddling motion for propulsion and steering. We develop design criteria for modularity and swarming and present a prototype implementing these criteria. We identify significant mechanical sensitivities with the presented design, theorize about the cause of the sensitivities, and present an improved design for future work.
Cited by: 6
A Markerless Deep Learning-based 6 Degrees of Freedom Pose Estimation for Mobile Robots using RGB Data
Pub Date: 2020-01-16 DOI: 10.1109/UR49135.2020.9144789
Linh Kästner, D. Dimitrov, Jens Lambrecht
Augmented Reality has been the subject of various integration efforts within industry due to its ability to enhance human-machine interaction and understanding. Neural networks have achieved remarkable results in computer vision and bear great potential to assist and facilitate an enhanced Augmented Reality experience. However, most neural networks are computationally intensive and demand substantial processing power, and are thus not suitable for deployment on Augmented Reality devices. In this work, we propose a method to deploy state-of-the-art neural networks for real-time 3D object localization on Augmented Reality devices. As a result, we provide a more automated method of calibrating AR devices with mobile robotic systems. To accelerate the calibration process and enhance the user experience, we focus on fast 2D detection approaches that extract the 3D pose of the object quickly and accurately using only 2D input. The results are integrated into an Augmented Reality application for intuitive robot control and sensor data visualization. For the 6D annotation of 2D images, we developed an annotation tool, which is, to our knowledge, the first such open-source tool available. We achieve feasible results that are generally applicable to any AR device, making this work promising for further research on combining highly demanding neural networks with Internet of Things devices.
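A common way to obtain a 6-DOF pose from purely 2D detections, consistent with but not necessarily identical to the paper's pipeline, is to feed detected 2D keypoints and their known 3D model coordinates to a PnP solver. The sketch below uses OpenCV's solvePnP with made-up keypoints and camera intrinsics:

```python
# An illustrative sketch (not the paper's exact pipeline) of recovering a
# 6-DOF pose from 2D detections: once a fast 2D detector predicts the image
# locations of known object keypoints, a PnP solve yields rotation and
# translation. The keypoint values and intrinsics below are made up.
import numpy as np
import cv2

# 3D keypoints of the object in its own frame (e.g., corners of a 20 cm cube).
object_points = np.array([
    [0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.2, 0.2, 0.0], [0.0, 0.2, 0.0],
    [0.0, 0.0, 0.2], [0.2, 0.0, 0.2],
], dtype=np.float64)

# Corresponding 2D detections from the (hypothetical) 2D detector, in pixels.
image_points = np.array([
    [320.0, 240.0], [400.0, 242.0], [402.0, 320.0], [322.0, 318.0],
    [318.0, 180.0], [398.0, 182.0],
], dtype=np.float64)

# Pinhole camera intrinsics assumed known from calibration.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation matrix
    print("rotation:\n", R)
    print("translation (m):", tvec.ravel())
```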
Cited by: 3