Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485297
Guillaume Bresson, R. Aufrère, R. Chapuis
This paper presents a real-time decentralized monocular SLAM process. To our knowledge, it is the first decentralized SLAM system for vehicles that uses only proprioceptive sensors and a single camera. A new architecture has been built to cope with the problems raised by a decentralized scheme. Special care has been taken with the data incest problem, which is solved by a substate system. Network aspects and computational time are also considered. By using an Extended Kalman Filter and sending only essential information, we make our decentralized algorithm suitable for a large number of vehicles. We also introduce a way to estimate the distance between vehicles and thus to register the maps built by each vehicle in a common frame. The approach was tested in a realistic simulator with different trajectories.
Title: Real-time Decentralized Monocular SLAM. Published in: 2012 12th International Conference on Control Automation Robotics & Vision (ICARCV).
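The fusion step in such a decentralized scheme must avoid counting the same information twice once estimates circulate between vehicles (data incest). The paper solves this with a substate system; a standard alternative remedy is covariance intersection, sketched below as a minimal illustration (the numbers and the trace-scan weight selection are illustrative assumptions, not the paper's method):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=99):
    """Fuse two estimates whose cross-correlation is unknown.

    Covariance intersection stays consistent even if the two inputs
    secretly share information, which is the data-incest hazard in
    decentralized fusion. The weight w is chosen by a trace scan.
    """
    best = None
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    for w in np.linspace(0.01, 0.99, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    _, x_fused, P_fused = best
    return x_fused, P_fused

# Two vehicles hold estimates of the same quantity (numbers are made up).
x1, P1 = np.array([2.0, 1.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([2.4, 0.8]), np.diag([4.0, 1.0])
x, P = covariance_intersection(x1, P1, x2, P2)
```

The fused covariance never claims more certainty than the unknown cross-correlation allows, which is why the technique is popular when full bookkeeping of shared information is too expensive.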
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485400
Jianbin Qiu, G. Feng
This paper studies the problem of robust H∞ output feedback control for a class of continuous-time Takagi-Sugeno (T-S) fuzzy affine dynamic systems with parametric uncertainties and input constraints. Based on a smooth piecewise quadratic Lyapunov function combined with S-procedure and some matrix inequality convexification techniques, some new results are developed for static output feedback H∞ controller synthesis of the underlying continuous-time T-S fuzzy affine systems. It is shown that the controller gains can be obtained by solving a set of linear matrix inequalities. Finally, a simulation example is provided to illustrate the application of the proposed methods.
Title: Control of continuous-time T-S fuzzy affine dynamic systems via piecewise Lyapunov functions.
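The LMI conditions used for the controller synthesis above generalize the classical quadratic Lyapunov certificate. As a minimal numpy-only sketch of that underlying certificate (not the paper's piecewise construction, which requires an SDP solver), the Lyapunov equation can be solved in vectorized Kronecker form:

```python
import numpy as np

def lyapunov_certificate(A, Q=None):
    """Solve A.T @ P + P @ A = -Q via the vectorized (Kronecker) form.

    A positive definite solution P certifies stability of dx/dt = A x;
    LMI-based synthesis extends this search to unknown controller gains.
    """
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    # vec(A.T P) = (I kron A.T) vec(P);  vec(P A) = (A.T kron I) vec(P).
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    vecP = np.linalg.solve(M, -Q.reshape(n * n))
    # With Q symmetric the unique solution is symmetric, so numpy's
    # row-major reshape agrees with the column-stacking vec convention.
    P = vecP.reshape(n, n)
    return 0.5 * (P + P.T)  # symmetrize against round-off

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # stable: eigenvalues -1 and -2
P = lyapunov_certificate(A)
```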
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485403
D. Cheng, Yin Zhao
A class of control systems that emerge from dynamic games is considered. Using the semi-tensor product of matrices, the set of strategies can be described as a set of matrices. The dynamics of such systems can then be converted from logical dynamics into standard discrete-time dynamic systems, so the classical techniques for control systems become applicable. The semi-tensor product formulation of such systems is investigated, along with some related optimal control problems.
Title: Game-based control systems: A semi-tensor product formulation.
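The semi-tensor product behind this formulation is compact to state: for A of size m by n and B of size p by q, A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) with t = lcm(n, p), which reduces to the ordinary product when the dimensions match. A minimal sketch (the XNOR structure matrix is a standard textbook example of encoding a logical map, not taken from this paper):

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product of two matrices.

    Generalizes matrix multiplication to arbitrary dimensions and
    reduces to A @ B when A.shape[1] == B.shape[0].
    """
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# A Boolean strategy as a canonical vector: "True" is delta_2^1.
delta1 = np.array([[1.0], [0.0]])
# Structure matrix of logical equivalence (XNOR), columns ordered
# (T,T), (T,F), (F,T), (F,F).
M = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
# Fixing the first argument to True: XNOR(True, x) = x, i.e. identity.
out = stp(M, delta1)
```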
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485182
T. T. Hoang, P. M. Duong, N. T. T. Van, D. A. Viet, T. Q. Vinh
This paper presents the implementation of a perceptual system for a mobile robot. The system is designed and installed with modern sensors and multi-point communication channels. The goal is to equip the robot with a high level of perception to support a wide range of navigation applications, including Internet-based telecontrol, semi-autonomy, and full autonomy. To handle uncertainties in the acquired data, a sensor fusion model is developed in which heterogeneous measurements, including odometry, compass heading, and laser range data, are combined to obtain an estimate that is optimal in a statistical sense. The combination is carried out by an extended Kalman filter. Experimental results indicate that the resulting robot localization is enhanced and sufficient for navigation tasks.
Title: Development of an EKF-based localization algorithm using compass sensor and LRF.
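The kind of EKF fusion described can be sketched with a unicycle odometry prediction and an absolute compass-heading update. All models, noise values, and the angle-wrapped innovation below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Unicycle odometry prediction for state x = [px, py, theta]."""
    px, py, th = x
    x_new = np.array([px + v * dt * np.cos(th),
                      py + v * dt * np.sin(th),
                      th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],   # Jacobian of the motion model
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_new, F @ P @ F.T + Q

def ekf_update_compass(x, P, z, r):
    """Fuse an absolute heading measurement z (radians, variance r)."""
    H = np.array([[0.0, 0.0, 1.0]])
    # Wrap the innovation so 359 deg vs 1 deg is a 2 deg error, not 358.
    y = np.array([np.arctan2(np.sin(z - x[2]), np.cos(z - x[2]))])
    S = H @ P @ H.T + r
    K = P @ H.T / S
    return x + (K @ y).ravel(), (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.1
Q = np.diag([1e-3, 1e-3, 1e-3])
x, P = ekf_predict(x, P, v=1.0, w=0.0, dt=1.0, Q=Q)
x, P = ekf_update_compass(x, P, z=0.2, r=0.05)
```

A laser-range update would follow the same pattern with a feature- or scan-based measurement model in place of H.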
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485230
Jun Xu, Lihua Xie, Nitish Khanna, Wei Hong Chee
A multi-purpose 3D simulator named HNMSim (hybrid networked multi-agent system (MAS) simulator), with applications in cooperative control, is presented in this paper. Built on USARSim (Unified System for Automation and Robot Simulation) [37], Unreal Engine [33], LabView [17], Matlab [20], and OMNeT++ [22], the simulator creates a high-fidelity simulation environment for networked MAS. To support research and education, we provide three interfaces, based on LabView, Matlab, and OMNeT++, respectively. The system structure (hardware-in-the-loop simulation) and design methodology are also briefly described, and the network simulation is highlighted. Furthermore, we demonstrate its wide applicability to networked MAS through distributed formation control, distributed coverage control, and search problems using multiple UAVs/UGVs.
Title: Cooperative control in HNMSim — A 3D hybrid networked MAS simulator.
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485156
Junjie Yan, Zhiwei Zhang, Zhen Lei, Dong Yi, S. Li
Liveness detection is an indispensable safeguard for reliable face recognition and has recently received considerable attention. In this paper we propose three scenic clues, namely non-rigid motion, face-background consistency, and image banding effect, to conduct accurate and efficient face liveness detection. The non-rigid motion clue captures facial motions that only a genuine face can exhibit, such as blinking; an image alignment approach based on low-rank matrix decomposition is designed to extract this non-rigid motion. The face-background consistency clue assumes that the motions of the face and the background are highly consistent for fake facial photos but weakly consistent for genuine faces; this consistency serves as an efficient liveness clue and is measured by a GMM-based motion detection method. The image banding effect reflects the imaging quality defects introduced when a fake face is reproduced, and can be detected by wavelet decomposition. By fusing these three clues, we obtain a thorough set of clues for liveness detection. The proposed method achieves 100% accuracy on the Idiap print-attack database and the best performance on a self-collected face anti-spoofing database.
Title: Face liveness detection by exploring multiple scenic clues.
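The banding clue lends itself to a compact illustration: a one-level 2D Haar decomposition and the fraction of signal energy in the detail subbands. This is a generic wavelet sketch on synthetic data; the paper's actual features and thresholds are not specified here:

```python
import numpy as np

def haar_detail_ratio(img):
    """One-level 2D Haar transform; returns the fraction of energy in
    the detail (high-frequency) subbands, where periodic banding from
    recapture or printing shows up."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation subband
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    detail = np.sum(lh**2) + np.sum(hl**2) + np.sum(hh**2)
    return detail / (detail + np.sum(ll**2))

flat = np.full((64, 64), 128.0)                # smooth region: no detail energy
bands = 128.0 + 50.0 * (np.arange(64) % 2)     # alternating-row banding pattern
banded = np.tile(bands[:, None], (1, 64))
ratio_flat = haar_detail_ratio(flat)
ratio_banded = haar_detail_ratio(banded)
```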
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485308
Sascha Schrader, Markus Dambek, Adrian Block, Stefan Brending, D. Nakath, Falko Schmid, J. V. D. Ven
In this paper we introduce a way of tracking people in an indoor environment across multiple cameras with overlapping as well as non-overlapping fields of view. To do so, we use our distribution model, SpARTA, and an extended Tracking-Learning-Detection algorithm. A major advantage over other systems is that each camera node learns the tracked person and builds a database of positive and negative examples in real time. With these datasets we are able to distinguish different people across different nodes. The learned data is shared across nodes so that the nodes improve each other while tracking. In the main part we present an experimental validation of the system. Finally, we show that distributing the tracking data considerably improves tracking across multiple nodes under partial occlusion of the tracked object.
Title: A distributed online learning tracking algorithm.
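The per-node learning can be sketched in the Tracking-Learning-Detection style: each node keeps positive and negative example patches and scores a candidate by its relative similarity, and exchanging templates between nodes is what the shared database amounts to. All data below is synthetic and illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches, mapped to [0, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return 0.5 * (np.mean(a * b) + 1.0)

def relative_similarity(patch, pos, neg):
    """TLD-style confidence: nearest-positive similarity weighed against
    nearest-negative similarity. Values near 1 mean 'looks like the target'."""
    sp = max(ncc(patch, p) for p in pos)
    sn = max(ncc(patch, n) for n in neg)
    return sp / (sp + sn + 1e-9)

rng = np.random.default_rng(0)
target = rng.normal(size=(15, 15))
clutter = [rng.normal(size=(15, 15)) for _ in range(5)]

# Node A has learned the target; node B contributes extra negatives,
# standing in for the template exchange between camera nodes.
pos_a = [target + 0.05 * rng.normal(size=(15, 15))]
neg_shared = clutter[:3] + clutter[3:]

conf_target = relative_similarity(target, pos_a, neg_shared)
conf_clutter = relative_similarity(rng.normal(size=(15, 15)), pos_a, neg_shared)
```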
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485332
Huiling Xu, Zhiping Lin, A. Makur
This paper is concerned with the problem of robust unbiased H∞ filtering for uncertain two-dimensional (2-D) systems described by the Fornasini-Marchesini local state-space second model. The parameter uncertainties are assumed to be norm-bounded in both the state and measurement equations. The concept of robust unbiased filtering is first introduced into uncertain 2-D systems. A necessary and sufficient condition for the existence of robust unbiased 2-D H∞ filters is derived based on the rank condition of the given system matrices. A method is then proposed for the design of robust unbiased H∞ filters for uncertain 2-D systems using a linear matrix inequality (LMI) technique. The main advantage of the proposed method is that it can be applied to unstable uncertain 2-D systems while existing robust 2-D H∞ filtering approaches only work for robust stable uncertain 2-D systems. An illustrative example is also provided and comparison with existing results is made.
Title: Robust unbiased H∞ filtering for uncertain two-dimensional systems.
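For context, the Fornasini-Marchesini second (FM-II) model the filter targets is a recursion over a 2-D index grid rather than over time alone. A minimal autonomous simulation is sketched below; the matrices are chosen small so the response decays and are not taken from the paper:

```python
import numpy as np

def simulate_fm2(A1, A2, C, x_bound, N):
    """Simulate the autonomous Fornasini-Marchesini second model
    x(i+1, j+1) = A1 x(i, j+1) + A2 x(i+1, j),  y = C x,
    from boundary states along the axes i = 0 and j = 0."""
    n = A1.shape[0]
    X = np.zeros((N, N, n))
    X[0, :, :] = x_bound   # boundary condition on the i = 0 axis
    X[:, 0, :] = x_bound   # boundary condition on the j = 0 axis
    for i in range(N - 1):
        for j in range(N - 1):
            X[i + 1, j + 1] = A1 @ X[i, j + 1] + A2 @ X[i + 1, j]
    return X, X @ C.T

A1 = np.array([[0.2, 0.1], [0.0, 0.3]])
A2 = np.array([[0.1, 0.0], [0.2, 0.1]])
C = np.array([[1.0, 1.0]])
X, Y = simulate_fm2(A1, A2, C, x_bound=np.array([1.0, 1.0]), N=12)
```

A 2-D filter estimates x(i, j) from y(i, j) over this grid; the paper's contribution is choosing the filter gains so that the estimate stays unbiased and meets an H∞ bound under norm-bounded uncertainty.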
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485315
C. Antonya
The gaze point and gaze line, measured with an eye tracking device, can be used in various interaction interfaces, such as mobile robot programming in an immersive virtual environment. Path generation for the robot should not require tedious eye gestures; rather, it should be inferred from the context. The obtained trajectory, whose quality depends on the precision of the estimated gaze point, could allow physically disabled people to steer a wheelchair with their eyes. The goal of this study is to assess the accuracy of gaze point computation based on eye tracking in an immersive virtual environment. The point in space where the directions of the left and right eyes converge gives a measure of the distance to the gazed object. This distance is needed whenever the user wants to indicate a point in space, or when two or more selectable objects are placed one behind the other. In this work, several experiments were conducted to assess the accuracy of convergence point detection in space.
Title: Accuracy of gaze point estimation in immersive 3D interaction interface based on eye tracking.
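The convergence point the study measures is the point of closest approach of the two gaze rays, which has a closed form. A sketch, with illustrative eye positions and target:

```python
import numpy as np

def convergence_point(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two gaze rays
    (origin p, direction d): the 3D point the eyes converge on."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Minimize |p1 + t1 d1 - (p2 + t2 d2)| over the ray parameters.
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    e, f = d1 @ r, d2 @ r
    denom = a * c - b * b   # tends to 0 for near-parallel rays (gaze at infinity)
    t1 = (b * f - c * e) / denom
    t2 = (a * f - b * e) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Eyes 6.5 cm apart, both looking at a point 1 m ahead; the left ray
# carries a small vertical perturbation so the rays do not intersect.
left = np.array([-0.0325, 0.0, 0.0])
right = np.array([0.0325, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
g = convergence_point(left, target - left + np.array([0.0, 1e-3, 0.0]),
                      right, target - right)
```

Because the baseline between the eyes is small, the recovered depth degrades quickly with angular noise at larger distances, which is exactly the accuracy question the study investigates.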
Pub Date: 2012-12-01. DOI: 10.1109/ICARCV.2012.6485296
S. R. U. N. Jafri, Zhao Li, A. A. Chandio, R. Chellali
This paper presents a multi-robot simultaneous localization and mapping (SLAM) framework for a team of robots with unknown initial poses. The proposed solution runs feature-based Rao-Blackwellised particle filter (RBPF) SLAM on each robot, which works in an unknown environment equipped only with a 2D range sensor and a communication module. To represent the environment compactly, line and corner (point) features are used. By sharing and comparing the distinct feature-based maps of each robot, a global map with known poses is formed without any physical meeting among the robots. The approach is easily applicable to distributed or centralized robotic systems, with simple data handling and reduced computational cost.
Title: Laser only feature based multi robot SLAM.
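The line features used for the compact map can be extracted from laser points by a total-least-squares fit; a minimal PCA-based sketch on synthetic wall points (segmentation of the scan into candidate segments is assumed already done, and the wall equation is made up):

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit to 2D scan points via PCA.

    Returns (centroid, unit direction, rms error): a compact line
    feature suitable for map building and map-to-map matching."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    direction = vt[0]                                    # principal axis
    rms = np.sqrt(np.mean(((points - c) @ vt[1]) ** 2))  # residual across the line
    return c, direction, rms

# Simulated laser hits along a wall y = 0.5 x + 1 with small noise.
rng = np.random.default_rng(1)
xs = np.linspace(0.0, 4.0, 40)
pts = np.column_stack([xs, 0.5 * xs + 1.0 + 0.01 * rng.normal(size=40)])
c, d, rms = fit_line(pts)
slope = d[1] / d[0]
```

Comparing such (centroid, direction) pairs between robots is one way the distinct maps can be aligned without the robots ever meeting.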