
Latest publications: 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)

Localization based on multiple visual-metric maps
Adi Sujiwo, E. Takeuchi, Luis Yoichi Morales Saiki, Naoki Akai, Y. Ninomiya, M. Edahiro
This paper presents a fusion of monocular camera-based metric localization, IMU and odometry in the dynamic environments of public roads. We build multiple vision-based maps and use them simultaneously in the localization phase. In the mapping phase, visual maps are built by employing ORB-SLAM together with accurate metric positioning from LiDAR-based NDT scan matching. This external positioning is used to correct the scale drift inherent in all vision-based SLAM methods. Next, in the localization phase, these embedded positions are used to estimate the vehicle pose in metric global coordinates using a monocular camera alone. Furthermore, to increase system robustness, we also propose the use of multiple maps and sensor fusion with odometry and IMU using a particle filter. Experimental testing was performed on public roads over routes of up to 170 km at different times of day to evaluate and compare the localization results of vision-only, GNSS and sensor fusion methods. The results show that the sensor fusion method offers lower average errors than GNSS and better coverage than the vision-only method.
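The abstract describes fusing map-based visual pose fixes with odometry and IMU through a particle filter. Below is a minimal sketch of one such fusion cycle under simplifying assumptions: a planar [x, y, heading] state, Gaussian odometry and map-fix noise, and invented noise levels and function names (predict, update, resample). It illustrates the mechanism only, not the paper's implementation.

```python
# Minimal particle-filter pose fusion sketch; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of particles
particles = np.zeros((N, 3))              # [x, y, yaw] per particle
weights = np.full(N, 1.0 / N)

ODOM_STD = np.array([0.05, 0.05, 0.01])   # assumed odometry noise (m, m, rad)
MAP_POSE_STD = np.array([0.3, 0.3, 0.05]) # assumed visual-map fix noise

def predict(particles, odom_delta):
    """Propagate every particle by the odometry increment plus noise."""
    noise = rng.normal(0.0, ODOM_STD, size=particles.shape)
    return particles + odom_delta + noise

def update(particles, weights, map_pose):
    """Reweight particles by the likelihood of the camera/map pose fix."""
    err = particles - map_pose
    err[:, 2] = (err[:, 2] + np.pi) % (2 * np.pi) - np.pi   # wrap heading
    log_lik = -0.5 * np.sum((err / MAP_POSE_STD) ** 2, axis=1)
    w = weights * np.exp(log_lik - log_lik.max())
    return w / w.sum()

def resample(particles, weights):
    """Resample back to uniform weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One fusion cycle: odometry prediction, then correction by a map-based fix.
particles = predict(particles, odom_delta=np.array([0.4, 0.0, 0.01]))
weights = update(particles, weights, map_pose=np.array([0.45, 0.02, 0.0]))
particles, weights = resample(particles, weights)
print(np.average(particles, axis=0, weights=weights))   # fused pose estimate
```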
{"title":"Localization based on multiple visual-metric maps","authors":"Adi Sujiwo, E. Takeuchi, Luis Yoichi Morales Saiki, Naoki Akai, Y. Ninomiya, M. Edahiro","doi":"10.1109/MFI.2017.8170431","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170431","url":null,"abstract":"This paper presents a fusion of monocular camera-based metric localization, IMU and odometry in dynamic environments of public roads. We build multiple vision-based maps and use them at the same time in localization phase. For the mapping phase, visual maps are built by employing ORB-SLAM and accurate metric positioning from LiDAR-based NDT scan matching. This external positioning is utilized to correct for scale drift inherent in all vision-based SLAM methods. Next in the localization phase, these embedded positions are used to estimate the vehicle pose in metric global coordinates using solely monocular camera. Furthermore, to increase system robustness we also proposed utilization of multiple maps and sensor fusion with odometry and IMU using particle filter method. Experimental testing were performed through public road environment as far as 170 km at different times of day to evaluate and compare localization results of vision-only, GNSS and sensor fusion methods. The results show that sensor fusion method offers lower average errors than GNSS and better coverage than vision-only one.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115498409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
3D reconstruction of line features using multi-view acoustic images in underwater environment
Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Y. Tamura, A. Yamashita, H. Asama
In order to understand the underwater environment, it is essential to use sensing methodologies able to perceive the three-dimensional (3D) information of the explored site. Sonar sensors are commonly employed in underwater exploration. This paper presents a novel methodology able to retrieve 3D information of underwater objects. The proposed solution employs an acoustic camera, which represents the next generation of sonar sensors, to extract and track the lines of underwater objects, which are used as visual features for the image processing algorithm. In this work, we concentrate on artificial underwater environments, such as dams and bridges. In these structured environments, line segments are preferred over point features, as they can represent structural information more effectively. We also developed a method for automatic extraction and correspondence matching of line features. Our approach enables 3D measurement of underwater objects from arbitrary viewpoints based on an extended Kalman filter (EKF). The probabilistic method allows computing the 3D reconstruction of underwater objects even in the presence of uncertainty in the control input of the camera's movements. Experiments have been performed in real environments. The results showed the effectiveness and accuracy of the proposed solution.
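The reconstruction above relies on an extended Kalman filter fed by acoustic-camera observations. The sketch below shows only a generic EKF correction step for a 3D point observed as range and azimuth from a sensor at the origin; the measurement model, noise values and prior are assumptions, and the paper's actual line-feature parameterisation is not reproduced.

```python
# Generic EKF measurement update for a 3D point (illustrative values only).
import numpy as np

def h(x):
    """Assumed measurement model: range and azimuth of point x = [X, Y, Z]."""
    return np.array([np.linalg.norm(x), np.arctan2(x[1], x[0])])

def H_jacobian(x):
    """Analytic Jacobian of h with respect to the 3D point."""
    X, Y, Z = x
    r = np.linalg.norm(x)
    rho2 = X**2 + Y**2
    return np.array([[X / r,     Y / r,    Z / r],
                     [-Y / rho2, X / rho2, 0.0  ]])

def ekf_update(x, P, z, R):
    """Standard EKF correction of state x and covariance P with measurement z."""
    Hx = H_jacobian(x)
    S = Hx @ P @ Hx.T + R
    K = P @ Hx.T @ np.linalg.inv(S)
    y = z - h(x)
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi     # wrap azimuth residual
    return x + K @ y, (np.eye(3) - K @ Hx) @ P

x0 = np.array([2.0, 1.0, 0.5])            # prior point estimate (assumed)
P0 = np.diag([0.5, 0.5, 1.0])             # large vertical uncertainty
R = np.diag([0.02**2, np.deg2rad(1.0)**2])
z = np.array([2.4, np.deg2rad(28.0)])     # a simulated range/azimuth reading
print(ekf_update(x0, P0, z, R))
```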
{"title":"3D reconstruction of line features using multi-view acoustic images in underwater environment","authors":"Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Y. Tamura, A. Yamashita, H. Asama","doi":"10.1109/MFI.2017.8170447","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170447","url":null,"abstract":"In order to understand the underwater environment, it is essential to use sensing methodologies able to perceive the three dimensional (3D) information of the explored site. Sonar sensors are commonly employed in underwater exploration. This paper presents a novel methodology able to retrieve 3D information of underwater objects. The proposed solution employs an acoustic camera, which represents the next generation of sonar sensors, to extract and track the line of the underwater objects which are used as visual features for the image processing algorithm. In this work, we concentrate on artificial underwater environments, such as dams and bridges. In these structured environments, the line segments are preferred over the points feature, as they can represent structure information more effectively. We also developed a method for automatic extraction and correspondences matching of line features. Our approach enables 3D measurement of underwater objects using arbitrary viewpoints based on an extended Kalman filter (EKF). The probabilistic method allows computing the 3D reconstruction of underwater objects even in presence of uncertainty in the control input of the camera's movements. Experiments have been performed in real environments. Results showed the effectiveness and accuracy of the proposed solution.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125055790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Development of robot manipulation technology in ROS environment
Dong-Eon Kim, Dongju Park, Jeong-Hwan Moon, Ki-Seo Kim, Jin‐Hyun Park, Jangmyung Lee
A new manipulation strategy has been proposed to grasp various objects stably using a dual-arm robotic system in the ROS environment. The grasping pose of the dual arm is determined by the shape of the object, which is identified by the pan/tilt camera. For stable grasping of the object, an operability index of the dual-arm robot (OPIND) has been defined using the current values applied to the motors for the given grasping pose. When analyzing the motion of a manipulator, the manipulability index of both arms is derived from the Jacobian to represent the relationship between the joint velocity vector and the workspace velocity vector, which has an elliptical range representing the ease of manipulation. Through the experiments, the OPIND-applied and non-applied states of the dual-arm robotic system have been compared to each other.
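The manipulability index mentioned above is derived from the Jacobian; a common form is the Yoshikawa measure w = sqrt(det(J J^T)). The sketch below evaluates it for an assumed planar 2-link arm; the link lengths and joint angles are illustrative, and the paper's current-based OPIND is not reproduced here.

```python
# Yoshikawa manipulability of an assumed planar 2-link arm (illustrative only).
import numpy as np

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    """Planar 2-link Jacobian mapping joint rates to end-effector velocity."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    """w = sqrt(det(J J^T)): the volume of the velocity ellipsoid."""
    return np.sqrt(np.linalg.det(J @ J.T))

for q2 in np.deg2rad([10.0, 45.0, 90.0, 170.0]):
    J = jacobian_2link(q1=np.deg2rad(30.0), q2=q2)
    print(f"elbow angle {np.rad2deg(q2):6.1f} deg -> w = {manipulability(J):.4f}")
```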
{"title":"Development of robot manipulation technology in ROS environment","authors":"Dong-Eon Kim, Dongju Park, Jeong-Hwan Moon, Ki-Seo Kim, Jin‐Hyun Park, Jangmyung Lee","doi":"10.1109/MFI.2017.8170364","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170364","url":null,"abstract":"A new manipulation strategy has been proposed to grasp various objects stably using a dual-arm robotic system in the ROS environment. The grasping pose of the dual-arm has been determined depending upon the shape of the objects which is identified by the pan/tilt camera. For the stable grasping of the object, an operability index of the dual-arm robot (OPIND) has been defined by using the current values applied to the motors for the given grasping pose. When analyzing the motion of a manipulator, the manipulability index of both arms has been derived from the Jacobian to represent the relationship between the joint velocity vector and the workspace velocity vector, which has an elliptical range representing easiness to work with. Through the experiments, the OPIND applied state and the non — applied state of the dual-arm robotic system have been compared to each to other.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128338714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Wearable gesture control of agile micro quadrotors
Yunho Choi, Inhwan Hwang, Songhwai Oh
Quadrotor unmanned aerial vehicles (UAVs) have seen a surge of use in various applications due to their structural simplicity and high maneuverability. However, conventional control methods using joysticks prevent novices from becoming accustomed to maneuvering quadrotors in a short time. In this paper, we suggest the use of a wearable device, such as a smart watch, as a new remote controller for a quadrotor. The user's command is recognized as a gesture from the 9-DoF inertial measurement unit (IMU) of the wearable device using a recurrent neural network (RNN) with long short-term memory (LSTM) cells. Our implementation also makes it possible to align the heading of the quadrotor with the heading of the user. Nine different gestures are supported, and the trained RNN is used for real-time gesture recognition to control a micro quadrotor. The proposed system exploits the available sensors in the wearable device and the quadrotor as much as possible to make the gesture-based control intuitive. We have experimentally validated the performance of the proposed system using a Samsung Gear S smart watch and a Crazyflie Nano Quadcopter.
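A minimal sketch of an LSTM-based gesture classifier over 9-DoF IMU windows, in the spirit of the RNN described above. The layer sizes, the 100-sample window, and the mapping to nine gesture classes are assumptions for illustration, not the paper's architecture.

```python
# Illustrative LSTM gesture classifier over IMU windows (assumed sizes).
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    def __init__(self, n_features=9, hidden=64, n_classes=9):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, 9) IMU samples
        out, _ = self.lstm(x)             # out: (batch, time, hidden)
        return self.head(out[:, -1, :])   # classify from the last time step

model = GestureLSTM()
window = torch.randn(1, 100, 9)           # one 100-sample IMU window (assumed)
logits = model(window)
print(logits.argmax(dim=1))               # predicted gesture index
```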
{"title":"Wearable gesture control of agile micro quadrotors","authors":"Yunho Choi, Inhwan Hwang, Songhwai Oh","doi":"10.1109/MFI.2017.8170439","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170439","url":null,"abstract":"Quadrotor unmanned aerial vehicles (UAVs) have seen a surge of use in various applications due to its structural simplicity and high maneuverability. However, conventional control methods using joysticks prohibit novices from getting used to maneuvering quadrotors in short time. In this paper, we suggest the use of a wearable device, such as a smart watch, as a new remote-controller for a quadrotor. The user's command is recognized as gestures using the 9-DoF inertial measurement unit (IMU) of a wearable device through a recurrent neural network (RNN) with long short-term memory (LSTM) cells. Our implementation also makes it possible to align the heading of a quadrotor with the heading of the user. Our implementation allows nine different gestures and the trained RNN is used for real-time gesture recognition for controlling a micro quadrotor. The proposed system exploits available sensors in a wearable device and a quadrotor as much as possible to make the gesture-based control intuitive. We have experimentally validated the performance of the proposed system by using a Samsung Gear S smart watch and a Crazyflie Nano Quadcopter.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128849876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Detection and classification of stochastic features using a multi-Bayesian approach
J. J. Steckenrider, T. Furukawa
This paper introduces a multi-Bayesian framework for detection and classification of features in environments abundant with error-inducing noise. This approach takes advantage of Bayesian correction and classification in three distinct stages. The corrective scheme described here extracts useful but highly stochastic features from a data source, whether vision-based or otherwise, to aid in higher-level classification. Unlike conventional methods, these features' uncertainties are characterized so that test data can be correctively cast into the feature space with probability distribution functions that can be integrated over class decision boundaries created by a quadratic Bayesian classifier. The proposed approach is specifically formulated for road crack detection and characterization, which is one of the potential applications. For test images assessed with this technique, ground truth was estimated accurately and consistently with effective Bayesian correction, showing a 25% improvement in recall rate over standard classification. Application to road cracks demonstrated successful detection and classification in a practical domain. The proposed approach is extremely effective in characterizing highly probabilistic features in noisy environments when several correlated observations are available either from multiple sensors or from data sequentially obtained by a single sensor.
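The final classification stage above is a quadratic Bayesian classifier: a Gaussian model per class with its own mean and covariance, decided by the largest posterior log-density. A minimal sketch follows, with invented two-dimensional class statistics and priors; it shows the decision rule only, not the paper's corrective feature pipeline.

```python
# Quadratic Bayesian (Gaussian) decision rule with toy class statistics.
import numpy as np

def quadratic_bayes_predict(x, means, covs, priors):
    """Return the class index maximising log prior + Gaussian log-likelihood."""
    scores = []
    for mu, S, p in zip(means, covs, priors):
        diff = x - mu
        _, logdet = np.linalg.slogdet(S)
        log_lik = -0.5 * (diff @ np.linalg.solve(S, diff) + logdet)
        scores.append(np.log(p) + log_lik)
    return int(np.argmax(scores))

# Two toy classes (e.g. "crack" vs "no crack") with different covariances.
means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
covs  = [np.eye(2) * 0.5,      np.diag([1.5, 0.3])]
priors = [0.7, 0.3]
print(quadratic_bayes_predict(np.array([1.6, 0.9]), means, covs, priors))
```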
{"title":"Detection and classification of stochastic features using a multi-Bayesian approach","authors":"J. J. Steckenrider, T. Furukawa","doi":"10.1109/MFI.2017.8170421","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170421","url":null,"abstract":"This paper introduces a multi-Bayesian framework for detection and classification of features in environments abundant with error-inducing noise. This approach takes advantage of Bayesian correction and classification in three distinct stages. The corrective scheme described here extracts useful but highly stochastic features from a data source, whether vision-based or otherwise, to aid in higher-level classification. Unlike conventional methods, these features' uncertainties are characterized so that test data can be correctively cast into the feature space with probability distribution functions that can be integrated over class decision boundaries created by a quadratic Bayesian classifier. The proposed approach is specifically formulated for road crack detection and characterization, which is one of the potential applications. For test images assessed with this technique, ground truth was estimated accurately and consistently with effective Bayesian correction, showing a 25% improvement in recall rate over standard classification. Application to road cracks demonstrated successful detection and classification in a practical domain. The proposed approach is extremely effective in characterizing highly probabilistic features in noisy environments when several correlated observations are available either from multiple sensors or from data sequentially obtained by a single sensor.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130625631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
UJI RobInLab's approach to the Amazon Robotics Challenge 2017
A. P. Pobil, Majd Kassawat, A. J. Duran, M. Arias, N. Nechyporenko, Arijit Mallick, E. Cervera, Dipendra Subedi, Ilia Vasilev, D. Cardin, Emanuele Sansebastiano, Ester Martínez-Martín, A. Morales, Gustavo A. Casañ, A. Arenal, B. Goriatcheff, C. Rubert, G. Recatalá
This paper describes the approach taken by the team from the Robotic Intelligence Laboratory at Jaume I University to the Amazon Robotics Challenge 2017. The goal of the challenge is to automate pick and place operations in unstructured environments, specifically the shelves in an Amazon warehouse. RobInLab's approach is based on a Baxter Research robot and a customized storage system. The system's modular architecture, based on ROS, allows communication between two computers, two Arduinos and the Baxter. It integrates 9 hardware components along with 10 different algorithms to accomplish the pick and stow tasks. We describe the main components and pipelines of the system, along with some experimental results.
{"title":"UJI RobInLab's approach to the Amazon Robotics Challenge 2017","authors":"A. P. Pobil, Majd Kassawat, A. J. Duran, M. Arias, N. Nechyporenko, Arijit Mallick, E. Cervera, Dipendra Subedi, Ilia Vasilev, D. Cardin, Emanuele Sansebastiano, Ester Martínez-Martín, A. Morales, Gustavo A. Casañ, A. Arenal, B. Goriatcheff, C. Rubert, G. Recatalá","doi":"10.1109/MFI.2017.8170448","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170448","url":null,"abstract":"This paper describes the approach taken by the team from the Robotic Intelligence Laboratory at Jaume I University to the Amazon Robotics Challenge 2017. The goal of the challenge is to automate pick and place operations in unstructured environments, specifically the shelves in an Amazon warehouse. RobInLab's approach is based on a Baxter Research robot and a customized storage system. The system's modular architecture, based on ROS, allows communication between two computers, two Arduinos and the Baxter. It integrates 9 hardware components along with 10 different algorithms to accomplish the pick and stow tasks. We describe the main components and pipelines of the system, along with some experimental results.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"329 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133084234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Design of multiple classifier systems based on testing sample pairs
Gaochao Feng, Deqiang Han, Yi Yang, Jiankun Ding
A new multiple classifier system (MCS) is proposed based on CTSP (classification based on Testing Sample Pairs), a practical and efficient classification method. However, the original output of CTSP consists only of crisp class labels. To make better use of the information provided by the classifier, in this paper the output of CTSP is modeled using membership functions. Then, the fuzzy-cautious ordered weighted averaging approach with evidential reasoning (FCOWA-ER) is used to combine the membership functions originating from the different member classifiers. Experimental results show that the proposed MCS can effectively improve classification performance.
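FCOWA-ER itself is not reproduced here. As a plain stand-in that only shows the general shape of such a fusion, the sketch below normalises the membership vectors reported by several member classifiers, combines them by simple averaging, and decides by argmax; the membership values are toy numbers and the combination rule is not the paper's.

```python
# Generic membership-vector fusion stand-in (not FCOWA-ER); toy values only.
import numpy as np

def fuse_memberships(membership_list):
    """Average normalised membership vectors from the member classifiers."""
    M = np.array([m / np.sum(m) for m in membership_list])
    return M.mean(axis=0)

# Membership degrees over 3 classes reported by 3 member classifiers.
m1 = np.array([0.7, 0.2, 0.1])
m2 = np.array([0.5, 0.4, 0.1])
m3 = np.array([0.3, 0.6, 0.1])
fused = fuse_memberships([m1, m2, m3])
print(fused, "-> class", int(np.argmax(fused)))
```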
{"title":"Design of multiple classifier systems based on testing sample pairs","authors":"Gaochao Feng, Deqiang Han, Yi Yang, Jiankun Ding","doi":"10.1109/MFI.2017.8170429","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170429","url":null,"abstract":"A new multiple classifier system (MCS) is proposed based on CTSP (classification based on Testing Sample Pairs), which is a kind of applicable and efficient classification method. However, the original output form of the CTSP is only crisp class labels. To make use of the information provided by the classifier, in this paper, the output of CTSP is modeled using the membership function. Then, the fuzzy-cautious ordered weighted averaging approach with evidential reasoning (FCOWA-ER) is used to combine the membership functions originated from different member classifiers. It is shown by experimental results that the proposed MCS effectively can improve the classification performance.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133502058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On state estimation and fusion with elliptical constraints
Qiang Liu, N. Rao
We consider tracking of a target with elliptical nonlinear constraints on its motion dynamics. The state estimates are generated by sensors and sent over long-haul links to a remote fusion center for fusion. We show that the constraints can be projected onto the known ellipse and hence incorporated into the estimation and fusion process. In particular, two methods based on (i) direct connection to the center, and (ii) shortest distance to the ellipse are discussed. A tracking example is used to illustrate the tracking performance using projection-based methods with various fusers in a lossy long-haul tracking environment.
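Method (i), projecting the estimate onto the ellipse along the ray from the ellipse centre, has a closed form. A minimal sketch follows, assuming an axis-aligned ellipse with known centre and semi-axes (all values illustrative); the shortest-distance variant (ii) instead requires scalar root-finding and is not shown.

```python
# Project a point onto an axis-aligned ellipse along the centre-to-point ray.
import numpy as np

def project_via_center(p, center, a, b):
    """Scale the centre-to-point ray so it lands exactly on the ellipse
    (x/a)^2 + (y/b)^2 = 1 expressed in the ellipse's own axes."""
    d = p - center
    t = 1.0 / np.sqrt((d[0] / a) ** 2 + (d[1] / b) ** 2)
    return center + t * d

center = np.array([0.0, 0.0])
a, b = 5.0, 2.0                           # semi-axes of the constraint ellipse
estimate = np.array([4.0, 3.0])           # unconstrained state estimate
constrained = project_via_center(estimate, center, a, b)
print(constrained)                        # lies on the ellipse by construction
print((constrained[0] / a) ** 2 + (constrained[1] / b) ** 2)   # ~1.0
```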
{"title":"On state estimation and fusion with elliptical constraints","authors":"Qiang Liu, N. Rao","doi":"10.1109/MFI.2017.8170411","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170411","url":null,"abstract":"We consider tracking of a target with elliptical nonlinear constraints on its motion dynamics. The state estimates are generated by sensors and sent over long-haul links to a remote fusion center for fusion. We show that the constraints can be projected onto the known ellipse and hence incorporated into the estimation and fusion process. In particular, two methods based on (i) direct connection to the center, and (ii) shortest distance to the ellipse are discussed. A tracking example is used to illustrate the tracking performance using projection-based methods with various fusers in a lossy long-haul tracking environment.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134310673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
3D handheld scanning based on multiview 3D registration using Kinect Sensing device
Shirazi Muhammad Ayaz, Danish Khan, M. Y. Kim
This paper describes the implementation of a 3D handheld scanning approach based on Kinect. Users can acquire 3D scans at a very fast rate with real-time scanning devices such as the Kinect. These devices have been utilized in several applications, but the scanned 3D data lacks accuracy and reliability, which makes their use a difficult task. This research proposes a 3D handheld scanning approach based on the Kinect device, which renders 3D point cloud data for different views and registers them using visual navigation and ICP. This research also compares several ICP variants with the proposed method. The proposed approach can be used for 3D modeling applications, especially in the medical domain. Experiments and results demonstrate the feasibility of the proposed approach to generate reliable 3D reconstructions from the Kinect's point clouds.
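Registration of successive Kinect views hinges on ICP. The sketch below shows a single point-to-point ICP iteration (nearest-neighbour correspondences followed by a closed-form SVD/Kabsch alignment) on toy point clouds; real pipelines iterate this, reject outliers, and would be seeded with the visual-navigation pose mentioned above.

```python
# One point-to-point ICP iteration on toy clouds (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """Match each source point to its nearest target point, then compute the
    best-fit rotation R and translation t via the Kabsch/SVD solution."""
    _, idx = cKDTree(target).query(source)          # nearest-neighbour matches
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

rng = np.random.default_rng(1)
target = rng.uniform(-1.0, 1.0, size=(500, 3))      # toy "previous view" cloud
c, s = np.cos(0.05), np.sin(0.05)                   # small in-plane rotation
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.03, 0.0, 0.0]) # misaligned "new view"
R, t = icp_step(source, target)
print(R)                                            # roughly undoes Rz
print(t)
```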
{"title":"3D handheld scanning based on multiview 3D registration using Kinect Sensing device","authors":"Shirazi Muhammad Ayaz, Danish Khan, M. Y. Kim","doi":"10.1109/MFI.2017.8170450","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170450","url":null,"abstract":"This paper describes the implementation of a 3D handheld scanning approach based on Kinect. User may get the 3D scans at a very fast rate using real time scanning devices like Kinect. These devices have been utilized in several applications, but the scanning lacks in the accuracy and reliability of the 3D data, which makes their employment a difficult task. This research proposed the 3D handheld scanning approach based on Kinect device which renders the 3D point cloud data for different views and registers them using visual navigation and ICP. This research also compares several ICP variants with the proposed method. The proposed approach can be used for the 3D modeling applications especially in medical domain. Experiments and results demonstrate the feasibility of the proposed approach to generate reliable 3D reconstructions from the Kinect's point clouds.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134350749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A nearest neighbour ensemble Kalman Filter for multi-object tracking
Fabian Sigges, M. Baum
In this paper, we present an approach to Multi-Object Tracking (MOT) that is based on the Ensemble Kalman Filter (EnKF). The EnKF is a standard algorithm for data assimilation in high-dimensional state spaces that is mainly used in the geosciences, but has so far attracted little attention for object tracking problems. In our approach, the Optimal Subpattern Assignment (OSPA) distance is used to cope with unlabeled noisy measurements, and robust covariance estimation is performed using FastMCD to deal with possible outliers due to false detections. The algorithm is evaluated and compared against a global nearest neighbour Kalman Filter (NNKF) and a recently proposed JPDA-Ensemble Kalman Filter (JPDA-EnKF) in a simulated scenario with multiple objects and false detections.
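A minimal sketch of the stochastic EnKF measurement update that underlies the tracker above: the gain is built from ensemble sample covariances and each ensemble member is corrected with a perturbed observation. The state layout, observation matrix and noise values are assumptions, and the paper's OSPA association and FastMCD covariance steps are not shown.

```python
# Stochastic EnKF measurement update for a single toy object (assumed setup).
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, H, z, R):
    """ensemble: (N, n) state members; H: (m, n) observation matrix;
    z: (m,) measurement; R: (m, m) measurement noise covariance."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (N - 1)                         # sample state covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    zs = z + rng.multivariate_normal(np.zeros(len(z)), R, size=N)  # perturbed obs
    innov = zs - ensemble @ H.T
    return ensemble + innov @ K.T

# Toy example: state [x, y, vx, vy], position-only measurement.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
R = np.eye(2) * 0.1
ensemble = rng.normal([5.0, 3.0, 1.0, 0.0], 1.0, size=(200, 4))
updated = enkf_update(ensemble, H, z=np.array([5.4, 2.9]), R=R)
print(updated.mean(axis=0))                       # corrected state estimate
```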
{"title":"A nearest neighbour ensemble Kalman Filter for multi-object tracking","authors":"Fabian Sigges, M. Baum","doi":"10.1109/MFI.2017.8170433","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170433","url":null,"abstract":"In this paper, we present an approach to Multi-Object Tracking (MOT) that is based on the Ensemble Kalman Filter (EnKF). The EnKF is a standard algorithm for data assimilation in high-dimensional state spaces that is mainly used in geosciences, but has so far only attracted little attention for object tracking problems. In our approach, the Optimal Subpattern Assignment (OSPA) distance is used for coping with unlabeled noisy measurements and a robust covariance estimation is done using FastMCD to deal with possible outliers due to false detections. The algorithm is evaluated and compared against a global nearest neighbour Kalman Filter (NNKF) and a recently proposed JPDA-Ensemble Kalman Filter (JPDA-EnKF) in a simulated scenario with multiple objects and false detections.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132922472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4