Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343058
Xinwu Liang, Hesheng Wang, Weidong Chen
In this paper, the uncalibrated visual servoing problem of robot manipulators with motor dynamics is addressed for the fixed-camera configuration. A new adaptive image-space visual servoing strategy is presented to handle uncertainties in the camera intrinsic and extrinsic parameters, the robot kinematic and dynamic parameters, and the motor dynamic parameters. To deal with the nonlinear dependence of the image Jacobian matrix on the unknown parameters, the proposed scheme is developed based on the concept of the depth-independent interaction matrix. In this way, the camera parameters and the robot kinematic parameters in the closed-loop dynamics can be linearly parameterized so that adaptive laws can be designed to estimate them on-line. Adaptive algorithms are also developed to estimate the unknown robot dynamic and motor dynamic parameters. A stability analysis based on Lyapunov theory shows asymptotic convergence of the image errors for both rigid-link robot dynamics and full motor dynamics. Simulation results on a two-link planar robot manipulator illustrate the performance of the proposed scheme.
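The linear parameterization mentioned above is what makes on-line estimation possible. A minimal sketch of the generic ingredient, a gradient adaptive law for a model `y = W(x) @ theta` with unknown `theta`, is shown below; the regressor, gain, and simulation data are invented for illustration and are not the paper's actual controller:

```python
import numpy as np

# Toy illustration of on-line estimation for a linearly parameterized
# model y = W(x) @ theta with unknown theta (hypothetical setup).
def adaptive_estimate(W_seq, y_seq, n_params, gain=0.05):
    """Gradient adaptive law: theta_hat += gain * W^T (y - W @ theta_hat)."""
    theta_hat = np.zeros(n_params)
    for W, y in zip(W_seq, y_seq):
        err = y - W @ theta_hat       # prediction error (image-space analogue)
        theta_hat += gain * W.T @ err # update driven by the error
    return theta_hat

rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7, 0.3])        # unknown parameters
W_seq = [rng.standard_normal((2, 3)) for _ in range(2000)]
y_seq = [W @ theta_true for W in W_seq]        # noise-free measurements
theta_hat = adaptive_estimate(W_seq, y_seq, 3)
```

With persistently exciting data the estimate converges to the true parameter vector; the paper's actual laws are coupled with the closed-loop robot dynamics and proved stable via Lyapunov analysis.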
Title: "Uncalibrated fixed-camera visual servoing of robot manipulators by considering the motor dynamics" (2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI))
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343042
L. Tamás, A. Majdik
This paper presents preliminary results of ongoing work on heterogeneous point-feature estimation from different types of sensors, including a structured-light camera, a stereo camera, and a custom 3D laser range finder. The main goal is to compare the performance of different types of local descriptors in an indoor office environment. Several types of 3D features were evaluated on different datasets, including the output of an enhanced stereo image-processing algorithm. From the extracted features, correspondences were determined between two different recording positions for each type of sensor. These correspondences were filtered, and the resulting feature correspondences were benchmarked across the different datasets. Finally, an open-access dataset is proposed for public evaluation of the presented algorithms.
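The match-then-filter step can be sketched with a common stand-in: brute-force nearest-neighbour matching with Lowe's ratio test. The toy 2-D descriptors and the 0.8 threshold below are invented; the paper's actual features are 3D local descriptors:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Return (i, j) pairs where desc_a[i]'s nearest neighbour in desc_b
    passes the ratio test (best distance clearly below second best)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:  # keep only distinctive matches
            matches.append((i, int(best)))
    return matches

# Hypothetical descriptors from two recording positions.
desc_b = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
desc_a = np.array([[0.1, 0.0], [9.9, 0.1]])
print(match_descriptors(desc_a, desc_b))  # → [(0, 0), (1, 1)]
```

Ambiguous features (whose two nearest neighbours are similarly distant) are discarded, which is one simple way to realize the correspondence filtering the abstract mentions.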
Title: "Heterogeneous feature based correspondence estimation"
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6342995
Z. Xue, Xianyi Zeng, L. Koehl, Yan Chen
In our daily life, much of the information related to the tactile properties of natural objects can also be perceived by our eyes. This is because our brain has a very sophisticated mechanism that associates memories obtained from different sensory channels and makes them work as a whole. In the present study, given a set of representative textile fabrics, a number of standardized sensory experiments were carried out by a panel of trained textile experts under real-touch, video, and image conditions, respectively. A novel algorithm based on rough set theory and fuzzy set theory is proposed to investigate, from the evaluation data, the extent to which the tactile properties of a fabric can be interpreted through specific visual representations of the corresponding apparel product. The obtained results confirm that most of the tactile information can be perceived correctly by the assessors through either the video or the image displays, with better performance in the video scenarios. Finally, based on the analysis results, suggestions are put forward for modifying the visual displays to better illustrate a fabric's tactile properties in a non-haptic environment.
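A minimal sketch of a fuzzy inclusion (subsethood) measure, the kind of quantity the title refers to, is given below. This is the classic Kosko-style formula, not necessarily the paper's exact measure, and the membership values are made up:

```python
import numpy as np

def inclusion_degree(a, b):
    """Kosko-style subsethood: degree to which fuzzy set a is contained in b,
    computed as sum(min(a_i, b_i)) / sum(a_i)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.minimum(a, b).sum() / a.sum())

# Hypothetical membership degrees for one fabric attribute (e.g. "soft"),
# as judged under real-touch vs. video conditions.
touch = [0.9, 0.7, 0.2]
video = [0.8, 0.7, 0.4]
print(round(inclusion_degree(touch, video), 3))  # → 0.944
```

A value near 1 would indicate that the tactile judgement is almost fully recoverable from the visual one, which is the kind of relation the study quantifies.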
Title: "Development of a fuzzy inclusion measure for investigating relations between visual and tactile perception of textile products"
In this paper, a novel surveillance system, the thermal omnidirectional vision system, is introduced, which is robust to illumination changes and has a global field of view. According to the characteristics of the proposed system, a rotating adaptive Haar wavelet transform is developed for human tracking in thermal omnidirectional vision. The proposed feature can effectively handle the non-isotropic distortion of catadioptric omnidirectional vision (COV). For robust tracking, we develop an adaptive particle filter based on a rotational kinematic model, which can handle various movements, including rapid movement. Owing to the rotational kinematic model, the proposed tracking algorithm can also deal well with short-term occlusion. Finally, a series of experiments verifies the effectiveness of the proposed rotating adaptive Haar wavelet transform and the rotational-kinematic-model-based adaptive particle filter for human tracking in thermal omnidirectional vision.
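The predict/update/resample loop behind such a tracker is the standard bootstrap particle filter. The sketch below uses a 1-D toy state and invented noise levels; the paper's filter instead propagates particles through a rotational kinematic model in the omnidirectional image:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, motion, measurement, meas_std=0.5):
    # Predict: propagate particles through the (kinematic) motion model + noise.
    particles = particles + motion + rng.normal(0, 0.2, size=particles.shape)
    # Update: reweight by the measurement likelihood.
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample: draw a fresh, equally weighted particle set.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(-5, 5, 500)       # initial belief: uniform
weights = np.full(500, 1 / 500)
state = 0.0
for _ in range(20):
    state += 0.3                          # true target moves at a constant rate
    particles, weights = particle_filter_step(particles, weights, 0.3, state)
estimate = particles.mean()
```

Because the belief is a particle cloud rather than a single Gaussian, the filter can ride out short gaps in the measurement, which is the mechanism behind the short-term-occlusion robustness claimed above.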
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343068
Yazhe Tang, Youfu Li, Tianxiang Bai, Xiaolong Zhou
Title: "Rotating adaptive Haar wavelet transform for human tracking in thermal omnidirectional vision"
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343004
Aashish Sheshadri, K. Peterson, H. Jones, W. Whittaker
LIDAR-only and camera-only approaches to global localization in planetary environments have relied heavily on the availability of elevation data. The low resolution of available DEMs limits the accuracy of these methods. The availability of new high-resolution planetary imagery motivates the rover localization method presented here. The method correlates terrain appearance with orthographic imagery. A rover generates a colorized 3D model of the local terrain using a panorama of camera and LIDAR data. This model is orthographically projected onto the ground plane to create a template image. The template is then correlated with available satellite imagery to determine the rover location. No prior elevation data is necessary. Experiments in simulation demonstrate 2 m accuracy. The method is robust to 30° differences in lighting angle between satellite and rover imagery.
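The correlate-template-against-satellite step can be sketched with plain normalized cross-correlation. The images below are random stand-ins; the paper's templates are orthographic projections of colorized LIDAR models:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def locate(template, satellite):
    """Exhaustively score every placement of template inside the satellite
    image and return the best-scoring (row, col) offset."""
    th, tw = template.shape
    H, W = satellite.shape
    scores = {(r, c): ncc(template, satellite[r:r + th, c:c + tw])
              for r in range(H - th + 1) for c in range(W - tw + 1)}
    return max(scores, key=scores.get)

rng = np.random.default_rng(2)
satellite = rng.random((20, 20))            # hypothetical orbital image
template = satellite[7:12, 3:8].copy()      # rover's projected view, truth (7, 3)
print(locate(template, satellite))          # → (7, 3)
```

Zero-mean normalization is what buys tolerance to brightness differences, the same property that helps the method cope with lighting-angle changes between orbital and rover imagery.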
Title: "Position estimation by registration to planetary terrain"
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343040
Lei Wang, Simon X. Yang, M. Biglarbegian
This paper presents a new path-planning method for mobile robots in unknown environments. The proposed algorithm is a hybrid of fuzzy logic and neural networks, and hence benefits from the strengths of both techniques. For modeling the mobile robot, the proposed system adopts Braitenberg's automata models, originally developed for simple agents. Each wheel of the robot is represented by a bio-inspired neuron of a neural network, where each wheel receives different sensor inputs delivering signals through either excitatory or inhibitory synapses. Training of the neural network weights is achieved automatically through a fuzzy system developed to adjust the weight between each synapse and neuron of the network. To assess the performance of the developed algorithm, simulation results are presented. They show that the proposed method can successfully navigate the robot to the target and turn the robot at corners by given desired angles. The methodology proposed herein improves the Braitenberg navigation scheme and offers insights into using biologically inspired systems for path planning.
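The Braitenberg-style sensor-to-wheel coupling can be sketched in a few lines. This is the textbook crossed-excitation vehicle, not the paper's trained network, and all weights are invented:

```python
# Each wheel is a simple neuron summing weighted sensor inputs:
# crossed excitatory synapses plus direct inhibitory ones, so a stimulus
# on one side speeds up the opposite wheel and the robot turns toward it.
def wheel_speeds(left_sensor, right_sensor, base=1.0, w_excite=0.8, w_inhibit=0.4):
    left = base + w_excite * right_sensor - w_inhibit * left_sensor
    right = base + w_excite * left_sensor - w_inhibit * right_sensor
    return left, right

# Target stronger on the right sensor -> left wheel spins faster,
# so the robot steers to the right, toward the target.
l, r = wheel_speeds(0.1, 0.9)
print(l, r)  # → 1.68 0.72
```

In the paper, the fixed weights of such a coupling are replaced by values tuned on-line by the fuzzy system, which is what lets the robot also execute specified corner turns.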
Title: "A fuzzy logic based bio-inspired system for mobile robot navigation"
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343011
A. Al-Jawad, M. R. Adame, M. Romanovas, M. Hobert, W. Maetzler, M. Trächtler, K. Möller, Y. Manoli
The Timed Up and Go (TUG) is a clinical test widely used to measure balance and mobility, e.g., in Parkinson's disease (PD). The test comprises a sequence of functional activities: sit-to-stand, a 3-meter walk, a 180° turn, walking back, another turn, and sitting down on the chair. Conventionally, a stopwatch is used to score the test by measuring the time patients with PD need to perform it. Here, we present an instrumented TUG using a wearable inertial sensor unit attached to the lower back, which automates an assessment that is otherwise performed manually by visual observation and a stopwatch. The developed algorithm is based on Dynamic Time Warping (DTW) for multi-dimensional time series and is applied with an augmented feature set for the detection and duration assessment of turn state transitions, while a 1-dimensional DTW is used to detect the sit-to-stand and stand-to-sit phases. The feature set is a 3-dimensional vector consisting of the angular velocity, the derived angle, and features from Linear Discriminant Analysis (LDA). The algorithm was tested on 10 healthy individuals and 20 patients with PD (10 patients each in the early and late disease phases). The tests demonstrate that the developed technique can successfully extract the timing of the sit-to-stand, both turns, and the stand-to-sit transitions in the TUG test.
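The 1-dimensional core of DTW, which the paper extends to multi-dimensional feature vectors, is a short dynamic program. This is the classic unit-step formulation, shown here on toy sequences:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic 1-D dynamic time warping distance with a unit step pattern:
    D[i, j] = |x_i - y_j| + min(D[i-1, j], D[i, j-1], D[i-1, j-1])."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# The second sequence is a time-stretched copy of the first, so the
# warped distance is zero even though the lengths differ.
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0]))  # → 0.0
```

This tolerance to local time stretching is exactly what makes DTW suitable for matching movement templates (e.g., a turn) against recordings of patients who execute the same motion at different speeds.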
Title: "Using multi-dimensional dynamic time warping for TUG test instrumentation with inertial sensors"
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343029
Yudong Luo, Yantao Shen
An integrated micro-biomanipulation system that can perform mind-controlled biomanipulation at the micro scale is presented in this paper. The system connects a non-invasive electroencephalogram (EEG) device with a high-precision automated micromanipulator through a high-speed network. The operator's manipulation intent, measured by the EEG, can effectively drive the micromanipulator to perform 2-D manipulation of bio-samples at the micro scale. During manipulation, the trace of the mind-driven movement signal is monitored by a custom-built high-precision position sensing detector (PSD) interface unit. In addition, the topographical properties of all 14 EEG channels corresponding to the operator's 2-D mind movements are plotted and preliminarily analyzed. Experimental results validate the performance of the developed network-enabled, mind-controlled micro-biomanipulation system. Further work will focus on using the system to investigate the neurobiofeedback mechanisms and manipulation behaviors of the human brain during micro-biomanipulation and microassembly, so as to facilitate the development of high-efficiency engineering strategies at the micro/nano level.
Title: "Mind-controlled micro-biomanipulation with position sensing feedback: System integration and validation"
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343020
Sebastian Rockel, Denis Klimentjew, Jianwei Zhang
In the field of robotics, typical single-robot systems encounter limits when executing complex tasks. Today's systems often lack flexibility and interoperability, especially when interaction between participants is necessary. Nevertheless, well-developed systems exist both for robotics and for the cognitive and distributed domain; what is missing is the common link between the two. This work deals with the foundations and methods of a middle layer that joins a multi-agent system with a multi-robot system in a generic way. A prototype consisting of a multi-agent system, a multi-robot system, and the middle layer is presented and evaluated. Its purpose is to combine high-level cognitive models and information distribution with robot-focused abilities such as navigation and reactive-behavior-based artificial intelligence. This enables the assignment of various scenarios to a team of mobile robots.
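The middle-layer idea can be sketched as a thin translator that maps abstract agent-level messages onto concrete robot-level command calls. The message fields and the robot interface below are invented for illustration; the paper's layer is far richer:

```python
# Hypothetical sketch: a generic adapter between a multi-agent system
# (which speaks in abstract messages) and a multi-robot system
# (which exposes per-robot command callables).
class MiddleLayer:
    def __init__(self, robots):
        self.robots = robots  # robot_id -> command callable

    def dispatch(self, agent_message):
        """Translate one agent-level message into a robot-level command."""
        robot = self.robots[agent_message["robot"]]
        return robot(agent_message["action"], agent_message.get("params", {}))

log = []

def fake_robot(action, params):
    """Stand-in for a robot-side command endpoint; records what it receives."""
    log.append((action, params))
    return "ok"

layer = MiddleLayer({"r1": fake_robot})
result = layer.dispatch({"robot": "r1", "action": "navigate",
                         "params": {"x": 2, "y": 3}})
```

Keeping the translation in one generic layer means neither side needs to know the other's internals, which is the interoperability argument the abstract makes.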
Title: "A multi-robot platform for mobile robots — A novel evaluation and development approach with multi-agent technology"
Pub Date: 2012-11-12 · DOI: 10.1109/MFI.2012.6343005
Peijiang Yuan, Tianmiao Wang, Fucun Ma, Maozhen Gong
This paper describes the design and simulation of an aircraft drilling end-effector (ADEE) based on bionics, intended mainly for robotic drilling on aircraft surfaces and inner structures. The whole system is composed of six modules controlled by a master-slave architecture. Automation Studio programming software provides a programming pattern to control the hardware system. Experiments show that the ADEE can greatly enhance the quality and precision of drilled holes compared with manual drilling. We use Matlab and Simulink to simulate the drilling process and compare the results of the non-PID drilling system (UPDS) with those of the PID drilling system (PDS). The experiments demonstrate that the drilling results of the PDS are clearly better than those of the UPDS. Finally, we apply pulsed thermography to investigate drilling defects.
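The PDS-versus-UPDS comparison can be illustrated with a toy closed-loop simulation. The first-order plant, the gains, and the step target below are invented and are not the paper's drilling dynamics; the point is only that the PID loop reaches the setpoint while the uncontrolled system does not:

```python
def simulate(kp=0.0, ki=0.0, kd=0.0, target=1.0, steps=200, dt=0.05):
    """Step response of a toy first-order plant pos' = u - 0.5*pos,
    driven by a PID controller (all gains zero = open loop)."""
    pos, integ, prev_err = 0.0, 0.0, target
    for _ in range(steps):
        err = target - pos
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        pos += dt * (u - 0.5 * pos)   # hypothetical damped plant
    return pos

open_loop = simulate()                      # no controller: never moves
pid = simulate(kp=2.0, ki=1.0, kd=0.1)     # PID: settles near the target
```

The integral term removes the steady-state error, which is the qualitative reason a PID-regulated drilling system outperforms an unregulated one in tracking a commanded feed.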
Title: "A design and simulation of aircraft drilling end-effector based on bionics"