
Latest publications from the 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)

Pushing ROS towards the Dark Side: A ROS-based Co-Simulation Architecture for Mixed-Reality Test Systems for Autonomous Vehicles
M. Zofka, Lars Töttel, Maximilian Zipfl, Marc Heinrich, Tobias Fleck, P. Schulz, Johann Marius Zöllner
Validation and verification of autonomous vehicles remains an unsolved problem. Although virtual approaches promise a cost-efficient and reproducible solution, a comprehensive and realistic representation of the real-world traffic domain is required in order to make valuable statements about the performance of a highly automated driving (HAD) function. Models from different domain experts offer a repository of such representations. However, these models must be linked together for an extensive and uniform mapping of the real-world traffic domain for HAD performance assessment. We therefore propose the concept of a co-simulation architecture built upon the Robot Operating System (ROS) both for coupling and integrating different domain expert models and for immersing and stimulating real pedestrians as well as AD systems within a common test system. This enables a unified way of generating ground truth for the performance assessment of multi-sensorial AD systems. We demonstrate the applicability of the ROS-powered co-simulation by coupling behavior models in our mixed-reality environment.
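The coupling idea, independent domain models exchanging messages over named topics, can be illustrated with a minimal in-process publish/subscribe bus. This is a plain-Python sketch of the pattern, not the actual ROS API; the topic name and message content are hypothetical:

```python
class TopicBus:
    """Minimal publish/subscribe bus mimicking ROS-style topic coupling."""

    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        # Register a callback for a topic (like rospy.Subscriber).
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        # Deliver a message to all subscribers of the topic.
        for cb in self._subs.get(topic, []):
            cb(msg)


# Couple a hypothetical pedestrian behavior model to a sensor-stimulation
# component: the stimulator consumes poses published by the behavior model.
bus = TopicBus()
perceived = []
bus.subscribe("/sim/pedestrian/pose", perceived.append)   # stimulation side
bus.publish("/sim/pedestrian/pose", {"x": 1.0, "y": 2.0})  # behavior model side
```

In the actual architecture, each expert model would run as its own ROS node and the transport would carry typed messages, but the decoupling principle is the same.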
Citations: 6
Large-Scale UAS Traffic Management (UTM) Structure
D. Sacharny, T. Henderson, Michael Cline
The advent of large-scale Unmanned Aircraft Systems (UAS) exploitation for urban tasks, such as delivery, has led to a great deal of research and development in the UAS Traffic Management (UTM) domain. The general approach at this time is to define a grid network for the area of operation, and then have UAS Service Suppliers (USS) pairwise deconflict any overlapping grid elements for their flights. Moreover, this analysis is performed on arbitrary flight paths through the airspace, and thus may impose a substantial computational burden in order to ensure strategic deconfliction (that is, no two flights are ever closer than the minimum required separation). However, the biggest drawback to this approach is the impact of contingencies on UTM operations. For example, if one UAS slows down, or goes off course, then strategic deconfliction is no longer guaranteed, and this can have a disastrous snowballing effect on a large number of flights. We propose a lane-based approach which not only allows a one-dimensional strategic deconfliction method, but provides structural support for alternative contingency handling methods with minimal impact on the overall UTM system. Methods for lane creation, path assignment through lanes, flight strategic deconfliction, and contingency handling are provided here.
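Under a lane-based structure, strategic deconfliction reduces to a one-dimensional scheduling problem along each lane. The following is a hedged sketch of that idea, not the authors' algorithm; it assumes all flights traverse the lane at the same speed, so a fixed entry-time headway guarantees the minimum required separation:

```python
def deconflict_lane(requested_entries, headway):
    """Greedy 1D strategic deconfliction for a single lane.

    requested_entries: requested lane-entry times (seconds).
    headway: minimum time separation between consecutive entries.
    Returns scheduled entry times, each delayed as little as possible
    so that consecutive flights are at least `headway` apart.
    """
    scheduled = []
    for t in sorted(requested_entries):
        if scheduled and t < scheduled[-1] + headway:
            t = scheduled[-1] + headway  # push back just enough
        scheduled.append(t)
    return scheduled


# Three flights request entry at t = 0, 1 and 10 s with a 5 s headway:
# the second is delayed to t = 5, the third needs no delay.
plan = deconflict_lane([0.0, 1.0, 10.0], 5.0)
```

A contingency (one UAS slowing down) then only requires re-running this per-lane schedule downstream, rather than re-deconflicting arbitrary 3D paths pairwise.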
Citations: 5
A Gamma Filter for Positive Parameter Estimation
F. Govaers, Hosam Alqaderi
In many data fusion applications, the parameter of interest only takes positive values. For example, the goal might be to estimate a distance or to count instances of certain items. Optimal data fusion should then model the system state as a positive random variable, whose probability density function is restricted to the positive real axis. However, classical approaches based on normal densities fail here, in particular whenever the variance of the likelihood is large compared to the mean. In this paper, we consider modeling such random parameters with a Gamma distribution, since its support is positive and it is the maximum entropy distribution for such variables. For the Bayesian recursion, an approximate moment matching approach is proposed. An example within the framework of an autonomous simulation and further numerical considerations demonstrate the feasibility of the approach.
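The moment-matching step can be sketched as follows: given the first two moments of a (generally non-Gamma) posterior, choose the Gamma shape α and rate β that reproduce them, using mean = α/β and variance = α/β². The paper's full recursion may differ; this shows only the matching:

```python
def gamma_moment_match(mean, var):
    """Fit a Gamma(alpha, beta) density (shape/rate parametrization)
    to a given mean and variance:
        mean = alpha / beta,  var = alpha / beta**2
    =>  beta = mean / var,   alpha = mean * beta.
    Requires mean > 0 and var > 0."""
    beta = mean / var
    alpha = mean * beta
    return alpha, beta


# A posterior with mean 2.0 and variance 0.5 is approximated by
# Gamma(alpha=8, beta=4), which has exactly those two moments.
alpha, beta = gamma_moment_match(2.0, 0.5)
```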
Citations: 5
Automatic Discovery of Motion Patterns that Improve Learning Rate in Communication-Limited Multi-Robot Systems
Taeyeong Choi, Theodore P. Pavlic
Learning in robotic systems is largely constrained by the quality of the training data available to a robot learner. Robots may have to make multiple, repeated expensive excursions to gather this data or have humans in the loop to perform demonstrations to ensure reliable performance. The cost can be much higher when a robot embedded within a multi-robot system must learn from the complex aggregate of the many robots that surround it and may react to the learner’s motions. In our previous work [1], [2], we considered the problem of Remote Teammate Localization (ReTLo), where a single robot in a team uses passive observations of a nearby neighbor to accurately infer the position of robots outside of its sensory range even when robot-to-robot communication is not allowed in the system. We demonstrated a communication-free approach to show that the rearmost robot can use motion information of a single robot within its sensory range to predict the positions of all robots in the convoy. Here, we expand on that work with Selective Random Sampling (SRS), a framework that improves the ReTLo learning process by enabling the learner to actively deviate from its trajectory in ways that are likely to lead to better training samples and consequently gain accurate localization ability with fewer observations. By adding diversity to the learner’s motion, SRS simultaneously improves the learner’s predictions of all other teammates and thus can achieve similar performance as prior methods with less data.
Citations: 1
Continuous Fusion of IMU and Pose Data using Uniform B-Spline
Haohao Hu, Johannes Beck, M. Lauer, C. Stiller
In this work, we present a uniform B-spline based continuous fusion approach, which fuses the motion data from an inertial measurement unit and the pose data from a visual localization system accurately, efficiently and continuously. Currently, in the domain of robotics and autonomous driving, most ego motion fusion approaches are filter based or pose graph based. Filter-based approaches such as the Kalman filter or the particle filter usually require many parameters to be set carefully, which is a considerable overhead. Besides that, filter-based approaches can only fuse data in a time-forward direction, which is a major disadvantage when processing asynchronous data. Since pose-graph-based approaches only fuse pose data, the inertial measurement unit data must first be integrated to estimate the corresponding pose data, which however introduces accumulated error into the fusion system. Additionally, both filter-based and pose-graph-based approaches only provide discrete fusion results, which may decrease the accuracy of subsequent data processing steps. Since such fusion is generally needed for robots and automated driving vehicles, a major goal is to make it more accurate, robust, efficient and continuous. Therefore, in this work, we address this problem and apply the axis-angle rotation representation, Rodrigues' formula and a uniform B-spline implementation to solve the ego motion fusion problem continuously.
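For the position part, a uniform cubic B-spline segment can be evaluated with the standard basis matrix, which is what makes the trajectory representation continuous and differentiable between control poses. This is a minimal sketch of that evaluation only; the paper's orientation fusion additionally uses the axis-angle representation and Rodrigues' formula, which are omitted here:

```python
import numpy as np

def cubic_bspline_point(ctrl, u):
    """Evaluate a uniform cubic B-spline segment at u in [0, 1).

    ctrl: 4 consecutive control points, shape (4, d).
    Uses the standard uniform cubic basis matrix M (scaled by 1/6)."""
    M = np.array([[-1.0,  3.0, -3.0, 1.0],
                  [ 3.0, -6.0,  3.0, 0.0],
                  [-3.0,  0.0,  3.0, 0.0],
                  [ 1.0,  4.0,  1.0, 0.0]]) / 6.0
    U = np.array([u**3, u**2, u, 1.0])     # monomial basis
    return U @ M @ np.asarray(ctrl, dtype=float)
```

With collinear control points 0, 1, 2, 3 the segment starts at (p0 + 4*p1 + p2)/6 = 1 and passes through 1.5 at u = 0.5, illustrating the smoothing (non-interpolating) nature of B-spline control points.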
Citations: 0
Sparse Magnetometer-Free Real-Time Inertial Hand Motion Tracking
Aaron Grapentin, Dustin Lehmann, Ardjola Zhupa, T. Seel
Hand motion tracking is a key technology in several applications including ergonomic workplace assessment, human-machine interaction and neurological rehabilitation. Recent technological solutions are based on inertial measurement units (IMUs). They are less obtrusive than exoskeleton-based solutions and overcome the line-of-sight restrictions of optical systems. The number of sensors is crucial for usability, unobtrusiveness, and hardware cost. In this paper, we present a real-time capable, sparse solution for hand motion tracking that requires only five IMUs, one on each of the distal finger segments and one on the back of the hand, in contrast to a recently proposed full-setup solution with 16 IMUs. The method uses only gyroscope and accelerometer readings and avoids magnetometer readings, which enables unrestricted use in indoor environments and near ferromagnetic materials and electronic devices. We use a moving horizon estimation (MHE) approach that exploits kinematic constraints to track motions and performs long-term stable heading estimation. The proposed method is validated experimentally using a recently developed sensor system. The estimated and the actual hand motion are found to be in qualitatively good agreement, and the estimates are long-term stable. The root-mean-square deviation between the fingertip position estimates of the sparse and the full setup is found to be in the range of 1 cm. The method is hence highly suitable for unobtrusive and non-restrictive motion tracking in a range of applications.
Citations: 4
Nonlinear von Mises–Fisher Filtering Based on Isotropic Deterministic Sampling
Kailai Li, F. Pfaff, U. Hanebeck
We present a novel deterministic sampling approach for von Mises–Fisher distributions of arbitrary dimensions. Following the idea of the unscented transform, samples of configurable size are drawn isotropically on the hypersphere while preserving the mean resultant vector of the underlying distribution. Based on these samples, a von Mises–Fisher filter is proposed for nonlinear estimation of hyperspherical states. Compared with existing von Mises–Fisher-based filtering schemes, the proposed filter exhibits superior hyperspherical tracking performance.
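On the unit sphere S², one way to realize an isotropic sample set that preserves the mean resultant vector is to place n samples evenly in azimuth on a cone around the mean direction, with the cone angle chosen from the distribution's mean resultant length (for S², A(κ) = coth κ − 1/κ). This is a hedged sketch of the construction idea for the 3D case only, not necessarily the paper's exact scheme for arbitrary dimensions:

```python
import numpy as np

def vmf_deterministic_samples(mu, kappa, n=6):
    """Deterministic samples on S^2 for a von Mises-Fisher(mu, kappa).

    Places n samples (n >= 2) evenly in azimuth on a cone of polar
    angle theta around mu, with cos(theta) equal to the distribution's
    mean resultant length A(kappa) = coth(kappa) - 1/kappa (kappa > 0),
    so the sample mean equals A(kappa) * mu."""
    mu = np.asarray(mu, dtype=float)
    mu = mu / np.linalg.norm(mu)
    a = 1.0 / np.tanh(kappa) - 1.0 / kappa   # mean resultant length on S^2
    theta = np.arccos(a)
    # Build an orthonormal basis (mu, e1, e2).
    e1 = np.cross(mu, [1.0, 0.0, 0.0])
    if np.linalg.norm(e1) < 1e-8:            # mu parallel to x-axis
        e1 = np.cross(mu, [0.0, 1.0, 0.0])
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    phis = 2.0 * np.pi * np.arange(n) / n    # evenly spaced azimuths
    return np.array([np.cos(theta) * mu
                     + np.sin(theta) * (np.cos(p) * e1 + np.sin(p) * e2)
                     for p in phis])

samples = vmf_deterministic_samples([0.0, 0.0, 1.0], kappa=5.0, n=6)
```

Because the azimuthal components of evenly spaced samples cancel, the empirical mean resultant vector is exactly A(κ)·μ, mirroring the unscented-transform idea of matching a moment of the underlying distribution.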
Citations: 6
Online 3D Frontier-Based UGV and UAV Exploration Using Direct Point Cloud Visibility
Jason L. Williams, Shu Jiang, M. O'Brien, Glenn Wagner, E. Hernández, Mark Cox, Alex Pitt, R. Arkin, N. Hudson
While robots have long been proposed as a tool to reduce human personnel’s exposure to danger in subterranean environments, these environments also present significant challenges to the development of such robots. Fundamental to this challenge is the problem of autonomous exploration. Frontier-based methods have been a powerful and successful approach to exploration, but complex 3D environments remain a challenge when online operation is required. This paper presents a new approach that addresses the complexity of operating in 3D by directly modelling the boundary between observed free and unobserved space (the frontier), rather than utilising dense 3D volumetric representations. By avoiding a representation involving a single map, it also achieves scalability to problems where Simultaneous Localisation and Mapping (SLAM) loop closures are essential. The approach enabled a team of seven ground and air robots to autonomously explore the DARPA Subterranean Challenge Urban Circuit, jointly traversing over 8 km in a complex, communication-denied environment.
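The classical frontier concept that this work builds on, the boundary between observed free space and unobserved space, is easiest to see on a 2D occupancy grid. Note that the paper explicitly avoids such volumetric maps and extracts the boundary directly from point cloud visibility; the grid version below is only an illustration of the underlying idea:

```python
import numpy as np

FREE, OCC, UNK = 0, 1, -1   # cell states: observed free, occupied, unobserved

def frontier_cells(grid):
    """Return (row, col) indices of observed-free cells that border
    unobserved space, i.e. the frontier of a 2D occupancy grid."""
    g = np.asarray(grid)
    rows, cols = g.shape
    frontier = []
    for r in range(rows):
        for c in range(cols):
            if g[r, c] != FREE:
                continue
            # 4-connected neighbors: frontier if any neighbor is unknown.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and g[nr, nc] == UNK:
                    frontier.append((r, c))
                    break
    return frontier
```

An exploration planner then sends robots toward frontier cells, since observing them is what shrinks the unobserved region.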
Citations: 20
SemanticVoxels: Sequential Fusion for 3D Pedestrian Detection using LiDAR Point Cloud and Semantic Segmentation
Juncong Fei, Wenbo Chen, Philipp Heidenreich, Sascha Wirges, C. Stiller
3D pedestrian detection is a challenging task in automated driving because pedestrians are relatively small, frequently occluded and easily confused with narrow vertical objects. LiDAR and camera are two commonly used sensor modalities for this task, which should provide complementary information. Unexpectedly, LiDAR-only detection methods tend to outperform multisensor fusion methods in public benchmarks. Recently, PointPainting has been presented to eliminate this performance drop by effectively fusing the output of a semantic segmentation network instead of the raw image information. In this paper, we propose a generalization of PointPainting to be able to apply fusion at different levels. After the semantic augmentation of the point cloud, we encode raw point data in pillars to get geometric features and semantic point data in voxels to get semantic features and fuse them in an effective way. Experimental results on the KITTI test set show that SemanticVoxels achieves state-of-the-art performance in both 3D and bird’s eye view pedestrian detection benchmarks. In particular, our approach demonstrates its strength in detecting challenging pedestrian cases and outperforms current state-of-the-art approaches.
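The PointPainting-style augmentation step, projecting each LiDAR point into the image and appending the segmentation network's per-class scores to the point's raw features, can be sketched as follows. The projection function and array shapes here are assumptions for illustration; a real pipeline additionally handles camera calibration and points that fall outside the image:

```python
import numpy as np

def paint_points(points, seg_scores, project):
    """Simplified PointPainting-style semantic augmentation of a point cloud.

    points:     (N, 4) array of x, y, z, intensity.
    seg_scores: (H, W, C) per-pixel class scores from a segmentation net.
    project:    hypothetical camera model mapping (N, 3) xyz to (N, 2)
                integer pixel coordinates (u, v), assumed in-bounds.
    Returns a (N, 4 + C) array with class scores appended per point."""
    uv = project(points[:, :3])
    scores = seg_scores[uv[:, 1], uv[:, 0]]           # (N, C) lookup at (v, u)
    return np.concatenate([points, scores], axis=1)   # painted points


# Tiny example: 2 points, a 2x2 image with 3 classes, a stub projection.
pts = np.zeros((2, 4))
seg = np.arange(12, dtype=float).reshape(2, 2, 3)
painted = paint_points(pts, seg, lambda xyz: np.array([[0, 0], [1, 1]]))
```

In the paper's generalization, such semantically painted points are then encoded both in pillars (geometric features) and in voxels (semantic features) before fusion.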
Cited: 20
AirMuseum: a heterogeneous multi-robot dataset for stereo-visual and inertial Simultaneous Localization And Mapping
Rodolphe Dubois, A. Eudes, V. Fremont
This paper introduces a new dataset dedicated to multi-robot stereo-visual and inertial Simultaneous Localization And Mapping (SLAM). This dataset consists of five indoor multi-robot scenarios acquired with ground and aerial robots in a former Air Museum at ONERA Meudon, France. Those scenarios were designed to exhibit some specific opportunities and challenges associated to collaborative SLAM. Each scenario includes synchronized sequences between multiple robots with stereo images and inertial measurements. They also exhibit explicit direct interactions between robots through the detection of mounted AprilTag markers [1]. Ground-truth trajectories for each robot were computed using Structure-from-Motion algorithms and constrained with the detection of fixed AprilTag markers placed as beacons on the experimental area. Those scenarios have been benchmarked on state-of-the-art monocular, stereo and visual-inertial SLAM algorithms to provide a baseline of the single-robot performances to be enhanced in collaborative frameworks.
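A detection of a robot-mounted AprilTag gives a direct inter-robot pose constraint: chaining the observer's camera extrinsics, the estimated camera-to-tag pose, and the observed robot's tag extrinsics yields the relative pose between the two robot bodies. A minimal sketch with homogeneous transforms, assuming all frames and function names (not taken from the dataset's tooling):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_A_cam, T_cam_tag, T_B_tag):
    """Relative pose of robot B's body frame expressed in robot A's body frame.

    T_A_cam   : camera extrinsics on robot A (camera frame in A's body frame)
    T_cam_tag : tag pose estimated by the AprilTag detector (tag in camera frame)
    T_B_tag   : mounting extrinsics of the tag on robot B (tag in B's body frame)
    """
    # T_A_B = T_A_cam * T_cam_tag * (T_B_tag)^-1
    return T_A_cam @ T_cam_tag @ np.linalg.inv(T_B_tag)
```

Such constraints can be added as inter-robot factors in a collaborative pose-graph, which is what makes these direct interactions valuable alongside the per-robot stereo and inertial data.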
Cited: 5
Journal
2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)