Validating a simulation of a single ray based laser scanner used in mobile robot applications
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698418
M. Emde, J. Roßmann
Digital prototypes and simulation technologies are widely used in the development of new technical systems. They allow cost- and time-efficient tests in all stages of development. Sensor simulation is an important aspect of many simulation scenarios dealing with robotics. This paper focuses on the validation process of a newly developed single ray based laser scanner simulation and provides insight into how virtual testbeds accelerate the development process. The work is motivated by the development of a localization unit for an exploration rover containing a space-qualified laser scanner.
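As a rough illustration of what a single ray based scanner simulation and its validation against reference data can involve, the following Python sketch casts one ray per scan angle against planar geometry, adds Gaussian range noise, and compares the result with a reference scan. The ray-plane model, noise level, and error metrics are assumptions for illustration, not the implementation described in the paper.

```python
import numpy as np

def simulate_scan(origin, angles, planes, sigma=0.01, max_range=30.0):
    """Cast one ray per scan angle against a set of planes (n, d with n.x = d)
    and return noisy range measurements, emulating a single ray laser scanner."""
    ranges = np.full(len(angles), max_range)
    for i, a in enumerate(angles):
        direction = np.array([np.cos(a), np.sin(a), 0.0])
        for n, d in planes:
            denom = np.dot(n, direction)
            if abs(denom) < 1e-9:
                continue  # ray parallel to this plane
            t = (d - np.dot(n, origin)) / denom
            if 0.0 < t < ranges[i]:
                ranges[i] = t
    return ranges + np.random.normal(0.0, sigma, len(angles))  # additive range noise

def validate(simulated, reference):
    """Compare simulated and reference scans with simple error statistics."""
    err = simulated - reference
    return {"mean_error": float(np.mean(err)), "rmse": float(np.sqrt(np.mean(err ** 2)))}

# Example: scan a wall 5 m ahead and compare against the exact reference ranges.
angles = np.linspace(-np.pi / 4, np.pi / 4, 91)
planes = [(np.array([1.0, 0.0, 0.0]), 5.0)]   # the plane x = 5
sim = simulate_scan(np.zeros(3), angles, planes)
print(validate(sim, 5.0 / np.cos(angles)))
```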
{"title":"Validating a simulation of a single ray based laser scanner used in mobile robot applications","authors":"M. Emde, J. Roßmann","doi":"10.1109/ROSE.2013.6698418","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698418","url":null,"abstract":"Digital prototypes and simulation technologies are widely used in the development of new technical systems. They allow cost- and time-efficient tests in all stages of development. Sensor simulation is an important aspect in many simulation scenarios dealing with robotics. This paper focuses on the validation process of a newly developed single ray based laser scanner simulation and provides an insight into the accelerated development processes through the use of virtual testbeds. The work is motivated by the development of a localization unit for an exploration rover containing a space qualified laser scanner.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"292 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123708719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-segment continuum robot shape estimation using passive cable displacement
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698415
W. Rone, P. Ben-Tzvi
This paper describes a state estimation model for a multi-segment continuum robot that utilizes the displacement of passive cables embedded along the robot's length to estimate its overall shape. As continuum robots are used in activities outside a laboratory environment, methods of measuring their shape configuration in real-time will be necessary to ensure robust closed-loop control. However, because these robots deform along their entire length and lack discrete joints at which primary displacements take place, conventional approaches to sensing joint displacement (e.g., encoders) are inappropriate. Furthermore, elasticity plays a key role in determining the resulting shape of the continuum robot, instead of the mechanics-independent kinematic configuration frequently seen in rigid-link robotics. In order to enable accurate estimates of a continuum robot's shape, the measured displacements of passive cables are utilized to detect the change in shape of the continuum robot. An optimization is used with a static model based on the principle of virtual power to map these cable displacements into the resulting continuum robot configuration. This state estimation model was implemented numerically in MATLAB and validated on an experimental test platform.
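The mapping from cable displacements to shape can be pictured with the small Python sketch below, which substitutes a planar constant-curvature kinematic model for the paper's virtual-power static model and solves a least-squares problem for per-segment curvatures. The segment length, cable offset, and cable routing (cable j terminating at segment j) are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

SEG_LEN = 0.1          # backbone length of each segment [m] (assumed)
CABLE_OFFSET = 0.01    # radial offset of every cable [m] (assumed)
N_SEG = 3              # cable j is assumed to terminate at the end of segment j

def predicted_displacements(curvatures):
    """Planar constant-curvature stand-in for the static model: a cable offset by r
    in a segment of curvature k shortens by r*k*L, and cable j runs through
    segments 1..j before terminating."""
    return np.array([
        sum(-CABLE_OFFSET * k * SEG_LEN for k in curvatures[: j + 1])
        for j in range(N_SEG)
    ])

def estimate_shape(measured):
    """Recover per-segment curvatures from measured passive-cable displacements."""
    residual = lambda kappa: predicted_displacements(kappa) - measured
    return least_squares(residual, x0=np.zeros(N_SEG)).x

# Example: a robot bent more strongly near the base than at the tip.
true_kappa = np.array([3.0, 2.0, 1.0])                       # rad/m per segment
print(estimate_shape(predicted_displacements(true_kappa)))   # ~[3, 2, 1]
```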
{"title":"Multi-segment continuum robot shape estimation using passive cable displacement","authors":"W. Rone, P. Ben-Tzvi","doi":"10.1109/ROSE.2013.6698415","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698415","url":null,"abstract":"This paper describes a state estimation model for a multi-segment continuum robot that utilizes the displacement of passive cables embedded along the robot's length to estimate its overall shape. As continuum robots are used in activities outside a laboratory environment, methods of measuring their shape configuration in real-time will be necessary to ensure robust closed-loop control. However, because these robots deform along their entire length and lack discrete joints at which primary displacements take place, conventional approaches to sensing joint displacement (e.g., encoders) are inappropriate. Furthermore, elasticity plays a key role in determining the resulting shape of the continuum robot, instead of the mechanics-independent kinematic configuration frequently seen in rigid-link robotics. In order to enable accurate estimates of a continuum robot's shape, the measured displacements of passive cables are utilized to detect the change in shape of the continuum robot. An optimization is used with a static model based on the principle of virtual power to map these cable displacements into the resulting continuum robot configuration. This state estimation model was implemented numerically in MATLAB and validated on an experimental test platform.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"435 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116011977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic human brain MRI volumetric analysis technique using EM-algorithm
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698422
M. Nazari, Y. P. Singh
The paper presents an automated volumetric analysis of human brain MR images, applicable in many settings, based on the Expectation-Maximization (EM) algorithm. The analysis involves labeling voxels, counting them, and calculating tissue volumes. Voxel labeling requires segmentation of the brain magnetic resonance image, which is most commonly performed on the basis of voxel intensities. A widely used segmentation approach fits a Gaussian Mixture Model (GMM) with the EM algorithm, and the same model can be used to determine tissue class labels and volumes. Experimental results are provided for automated segmentation of male and female subjects, together with normal tissue-class volumes, to verify the correctness of the automated volumetric analysis and to support statistical inference for diagnostic applications.
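A minimal Python sketch of the GMM/EM pipeline described above: fit a mixture to voxel intensities, label each voxel with its most likely component, and convert voxel counts to tissue volumes. The number of tissue classes, the intensity-based class ordering, and the voxel size are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def tissue_volumes(intensities, voxel_volume_mm3, n_tissues=3, seed=0):
    """Fit a GMM to brain-voxel intensities with EM, label each voxel with its most
    likely tissue class, and convert voxel counts to volumes.
    Class order (e.g., CSF/GM/WM) follows the sorted component means (assumption)."""
    gmm = GaussianMixture(n_components=n_tissues, random_state=seed)
    labels = gmm.fit_predict(intensities.reshape(-1, 1))
    order = np.argsort(gmm.means_.ravel())          # sort classes by mean intensity
    volumes = {}
    for rank, comp in enumerate(order):
        count = int(np.sum(labels == comp))
        volumes[f"class_{rank}"] = count * voxel_volume_mm3 / 1000.0  # cm^3
    return volumes

# Example with synthetic intensities from three tissue classes (1 mm^3 voxels).
rng = np.random.default_rng(0)
fake = np.concatenate([rng.normal(m, 10, n) for m, n in [(40, 5000), (110, 8000), (170, 7000)]])
print(tissue_volumes(fake, voxel_volume_mm3=1.0))
```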
{"title":"Automatic human brain MRI volumetric analysis technique using EM-algorithm","authors":"M. Nazari, Y. P. Singh","doi":"10.1109/ROSE.2013.6698422","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698422","url":null,"abstract":"The paper presents automated volumetric analysis of human brain MR images for many applications based on the Expectation-maximization (EM) algorithm. It involves voxel labeling, counting, and calculating tissues volume. The voxel labeling requires the brain magnetic resonance image segmentation which is most commonly performed based on voxels intensity signals. A widely used method for segmentation is by creating a Gaussian Mixture Model (GMM) through the EM algorithm and the same can be used to find the tissues, class label and volumes. The experimental results are provided for volumetric analysis of automated segmentation of male and female subjects as well as normal volumes of tissue classes for verifying correctness of automated volumetric analysis and statistical inference for diagnostic applications.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116443795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient framework for autonomous underwater vehicle extended sensor networks for pipeline monitoring
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698430
I. Jawhar, N. Mohamed, J. Al-Jaroodi, S. Zhang
Considerable advances have taken place in sensor technology, which have led to smaller, less expensive sensing devices with greater processing, sensing, storage, and communication capabilities. Consequently, many environmental, commercial, and military applications have emerged for wireless sensor networks (WSNs). Such WSNs can be used in the important field of oil, gas, and water pipeline monitoring. In this type of WSN, due to the nature of the monitored structure, the nodes are lined up in a linear form, making a special class of these networks, which we defined in a previous paper as Linear Sensor Networks (LSNs). This paper focuses on using LSNs to monitor underwater pipelines, where data is collected from the sensor nodes (SNs) and transmitted to a surface sink using an autonomous underwater vehicle (AUV). In turn, the surface sink can transmit the data to the network control center (NCC) using the communication infrastructure available in the corresponding region (e.g., WiMAX, cellular, GPRS, or satellite communication). We name this network architecture an AUV-based LSN (ALSN). The AUV is used because a pure multihop approach, which routes the data all the way along a linear network that can extend for hundreds or even thousands of kilometers, can be very costly in terms of energy dissipation, thereby reducing the effective lifetime of the network. With the AUV-based approach, the SNs can use a significantly smaller transmission range. Furthermore, the strategy reduces interference between SN transmissions caused by hidden-terminal and collision problems, which would be expected if a pure multihop approach were used. Finally, different AUV movement strategies are offered and analysed under various network conditions with respect to important system metrics such as average data packet end-to-end delay and delivery ratio.
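The energy argument for the AUV can be sketched with a first-order radio model; the constants, packet size, and distances below are illustrative assumptions, not values from the paper.

```python
# First-order radio-energy comparison (illustrative): relaying a packet over many
# multihop links along the pipeline versus one short-range uplink to a passing AUV.
E_ELEC = 50e-9        # electronics energy per bit [J/bit] (assumed)
EPS_AMP = 100e-12     # amplifier energy per bit per m^2 [J/bit/m^2] (assumed)
PACKET_BITS = 2000

def tx_energy(distance_m, bits=PACKET_BITS):
    return bits * (E_ELEC + EPS_AMP * distance_m ** 2)

def rx_energy(bits=PACKET_BITS):
    return bits * E_ELEC

def multihop_energy(total_length_m, hop_m):
    """Energy drawn from the SNs to relay one packet end-to-end along the pipeline."""
    hops = int(total_length_m / hop_m)
    return hops * tx_energy(hop_m) + (hops - 1) * rx_energy()

def auv_energy(uplink_m):
    """Energy drawn from the originating SN when a nearby AUV collects the packet."""
    return tx_energy(uplink_m)

# A 100 km pipeline with 500 m hops vs. a 20 m uplink to the AUV.
print(f"multihop: {multihop_energy(100_000, 500):.3e} J")
print(f"AUV:      {auv_energy(20):.3e} J")
```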
{"title":"An efficient framework for autonomous underwater vehicle extended sensor networks for pipeline monitoring","authors":"I. Jawhar, N. Mohamed, J. Al-Jaroodi, S. Zhang","doi":"10.1109/ROSE.2013.6698430","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698430","url":null,"abstract":"Considerable advances have taken place in the area of sensor technology, which have lead smaller, less expensive sensing devices with higher processing, sensing, storage, and communication capabilities. Consequently, many environmental, commercial, and military applications have emerged for wireless sensor networks (WSNs). Such WSNs can be used in the important field of oil, gas, and water pipeline monitoring. In this type of WSNs, due to the nature of the monitored structure, the nodes are lined up in a linear form, making a special class of these networks; We defined these in a previous paper as Linear Sensor Networks (LSNs). This paper focuses on using LSNs to monitor underwater pipelines where data is collected from the sensor nodes (SNs) and transmitted to a surface sink using an autonomous underwater vehicle (AUV). In turn, the surface sink can transmit the data to the network control center (NCC) using the communication infrastructure that is available in the corresponding region (e.g. WiMAX, cellular, GPRS, satellite communication, etc.) We name this network architecture an AUV-based LSNs (ALSNs). The use of the AUV is due to the fact that a pure multihop approach to route the data all the way along the linear network which can extend for hundreds or even thousands of kilometers can be very costly from an energy dissipation point of view, thereby reducing the effective lifetime of the network. With this approach a significantly smaller transmission range can be used by the SNs. Furthermore, the strategy provides for reduced interference between the SN transmissions that can be caused by hidden terminal and collision problems, that would be expected if a pure multihop approach is used. Finally, different AUV movement strategies are offered and analysed under various network conditions with respect to the performance of important system metrics such as average data packet end-to-end delay and delivery ratio.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126472001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
6-DOF pose estimation of a Portable Navigation Aid for the visually impaired
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698439
A. Tamjidi, C. Ye, Soonhac Hong
In this paper, we present a 6-DOF pose estimation method for a Portable Navigation Aid for the visually impaired. The navigation aid uses a single 3D camera, the SwissRanger SR4000, for both pose estimation and object/obstacle detection. The SR4000 provides intensity and range data of the scene. These data are processed simultaneously to estimate the camera's egomotion, which is then used as the motion model by an Extended Kalman Filter (EKF) to track the visual features maintained in a local map. In order to create correct feature correspondences between images, a 3-point RANSAC (RANdom SAmple Consensus) process is devised to identify the inliers among the feature correspondences established with SIFT (Scale Invariant Feature Transform) descriptors. Only the inliers are used to update the EKF's state. Additional inliers identified with the updated state are then located and used to perform another state update. The EKF integrates the egomotion into the camera's pose in the world coordinate frame with a relatively small error. Since the camera's y coordinate can be measured as the distance between the camera and the floor plane, it is used as an additional observation in this work. Experimental results indicate that the proposed method yields accurate pose estimates for positioning the visually impaired in an indoor environment.
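The hypothesize-and-verify inlier selection can be sketched as follows in Python: three 3D-3D correspondences (e.g., SIFT matches back-projected with the SR4000 range data) give a closed-form rigid-motion hypothesis, and the hypothesis with the most inliers wins. The inlier threshold, iteration count, and the Kabsch solver are assumptions; the EKF update itself is not shown.

```python
import numpy as np

def rigid_from_3pts(P, Q):
    """Kabsch/Horn closed-form rigid transform (R, t) mapping points P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_3pt(prev_pts, curr_pts, thresh=0.05, iters=200, seed=0):
    """3-point RANSAC over 3D-3D feature correspondences: keep the motion hypothesis
    with the most inliers and return the inlier indices."""
    rng = np.random.default_rng(seed)
    best_inliers = np.array([], dtype=int)
    for _ in range(iters):
        idx = rng.choice(len(prev_pts), size=3, replace=False)
        R, t = rigid_from_3pts(prev_pts[idx], curr_pts[idx])
        err = np.linalg.norm((prev_pts @ R.T + t) - curr_pts, axis=1)
        inliers = np.flatnonzero(err < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```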
{"title":"6-DOF pose estimation of a Portable Navigation Aid for the visually impaired","authors":"A. Tamjidi, C. Ye, Soonhac Hong","doi":"10.1109/ROSE.2013.6698439","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698439","url":null,"abstract":"In this paper, we present a 6-DOF pose estimation method for a Portable Navigation Aid for the visually impaired. The navigation aid uses a single 3D camera-SwissRanger SR4000-for both pose estimation and object/obstacle detection. The SR4000 provides intensity and range data of the scene. These data are simultaneously processed to estimate the camera's egomotion, which is then used as the motion model by an Extended Kalman Filter (EKF) to track the visual features maintained in a local map. In order to create correct feature correspondences between images, a 3-point RANSAC (RANdom SAmple Consensus) process is devised to identify the inliers from the feature correspondences based on the SIFT (Scale Invariant Feature Transform) descriptors. Only the inliers are used to update the EKF's state. Additional inliers caused by the updated state are then located and used to perform another state update. The EKF integrates the egomotion into the camera's pose in the world coordinate with a relatively small error. Since the camera's y coordinate may be measured as the distance between the camera and the floor plane, it is used as an additional observation in this work. Experimental results indicate that the proposed pose estimation method results in accurate pose estimates for positioning the visually impaired in an indoor environment.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129144271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auction-based node selection of optimal and concurrent responses for a risk-aware robotic sensor network
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698432
J. McCausland, R. Abielmona, R. Falcon, A. Crétu, E. Petriu
In this paper, an auction-based node selection technique is considered for a risk-aware Robotic Sensor Network (RSN) applied to Critical Infrastructure Protection (CIP). The goal of this risk-aware RSN is to maintain a secure perimeter around the critical infrastructure, which is best achieved by detecting high-risk network events and mitigating them through a response involving the most suitable robotic nodes. These robotic nodes can operate without a centralized system and select amongst themselves the nodes best suited to a risk mitigation plan. The robotic node that first becomes aware of a high-risk event acts as the auctioneer and advertises the risk mitigation task to the entire network. Each robotic node is responsible for calculating its bid metric (i.e., an availability metric) for the risk mitigation task. We employ fuzzy logic in the bid calculation, which incorporates the battery level, the distance to the event, and redundant coverage to produce an appropriate bid value. The auctioneer considers only the top bidders. The system permits simultaneous mitigation plans to execute on a single RSN by effectively segmenting the network into discrete autonomous groups. Each autonomous group utilizes an evolutionary multi-objective algorithm, the Non-Dominated Sorting Genetic Algorithm (NSGA-II), to optimize the segment's topology to mitigate the risk. The chromosome length is determined by the number of bids received, and the NSGA-II explores separate solution spaces to achieve Pareto-optimal results: it seeks optimal node positions and determines the optimal subset of the bidding robotic nodes to deploy. The NSGA-II produces a set of optimized responses for each network segment, from which a security operator can pick the most suitable one.
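A toy Python sketch of a fuzzy bid calculation over the three stated inputs (battery level, distance to the event, redundant coverage); the membership functions, rules, and zero-order Sugeno-style aggregation are invented for illustration and are not the authors' rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def bid_value(battery_pct, distance_m, redundant_coverage_pct):
    """Toy fuzzy bid: a charged node close to the event whose current sensing area
    is already covered by others makes the strongest bidder."""
    high_battery = tri(battery_pct, 30.0, 100.0, 170.0)
    near_event = tri(distance_m, -60.0, 0.0, 60.0)
    redundant = tri(redundant_coverage_pct, 20.0, 100.0, 180.0)
    # Rule firing strengths (min as fuzzy AND) with output singletons 1.0 / 0.5 / 0.1.
    r_strong = min(high_battery, near_event, redundant)
    r_medium = min(high_battery, near_event)
    r_weak = max(1.0 - high_battery, 1.0 - near_event)
    num = 1.0 * r_strong + 0.5 * r_medium + 0.1 * r_weak
    den = r_strong + r_medium + r_weak + 1e-9
    return num / den

# Example: 90 % battery, 10 m from the event, 80 % redundant coverage.
print(round(bid_value(90.0, 10.0, 80.0), 3))
```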
{"title":"Auction-based node selection of optimal and concurrent responses for a risk-aware robotic sensor network","authors":"J. McCausland, R. Abielmona, R. Falcon, A. Crétu, E. Petriu","doi":"10.1109/ROSE.2013.6698432","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698432","url":null,"abstract":"In this paper, an auction-based node selection technique is considered for a risk-aware Robotic Sensor Network (RSN) applied to Critical Infrastructure Protection (CIP). The goal of this risk-aware RSN is to maintain a secure perimeter around the CIP, which is best maintained by detecting high-risk network events and mitigate them through a response involving the most suitable robotic nodes. These robotic nodes can operate without the use of a centralized system and select amongst themselves the nodes with the best fitness to risk mitigation plan. The robot node that is first aware of a high-risk event becomes an auctioneer. The risk mitigation task is advertised to the entire network. Each robotic node is responsible for calculating their bid metric (i.e. availability metric) for the risk mitigation task. We employ fuzzy logic in the process of the bid calculation, which incorporates the battery level, distance to the event, and redundant coverage to produce an appropriate bid value. The auctioneer only considers the top bidders. The nature of this system is to permit simultaneous mitigation plans to execute on a single RSN by effectively segmenting the network into discrete autonomous groups. Each autonomous group will utilize an evolutionary multi-objective algorithm - the Non-Dominated Sorting Genetic Algorithm (NSGA-II) - to optimize the segment's topology to mitigate the risk. A chromosome length is determined by the number of bids received, but the NSGA-II explored to separate solution spaces to achieve optimal Pareto results. The NSGA-II will seek optimal node positions and determine the optimal set of robotic nodes to utilize of the bids received. The NSGA-II will produce a set of optimized responses for each network segment for a security operator to pick the most suitable response.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114602697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards mobile manipulator safety standards
Pub Date: 2013-10-01 | DOI: 10.1109/rose.2013.6698414
J. Marvel, R. Bostelman
We present an overview of the current (as of Spring 2013) safety standards for industrial robots and automated guided vehicles (AGVs). We also describe how they relate to the safety concerns of mobile manipulators (robot arms mounted on mobile bases) in modern manufacturing. Provisions for the capabilities of mobile manipulators are discussed in relation to the current standards. Several scenarios are presented in which the behavior of a mobile manipulator may be unpredictable or otherwise contrary to the current safety requirements. We also discuss the need for a new class of test artifacts for verifying and validating the functionality of mobile manipulator safety systems in collaborative working environments.
{"title":"Towards mobile manipulator safety standards","authors":"J. Marvel, R. Bostelman","doi":"10.1109/rose.2013.6698414","DOIUrl":"https://doi.org/10.1109/rose.2013.6698414","url":null,"abstract":"We present an overview of the current (as of Spring, 2013) safety standards for industrial robots and automated guided vehicles (AGVs). We also describe how they relate to the safety concerns of mobile manipulators (robot arms mounted on mobile bases) in modern manufacturing. Provisions for the capabilities of mobile manipulators are provided in relationship to the current standards. Several scenarios are presented for which the behavior of a mobile manipulator may be unpredictable or otherwise contrary to the current safety requirements. We also discuss the needs for a new class of test artifacts for verifying and validating the functionality of mobile manipulator safety systems in collaborative working environments.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133506362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phase optimization for control/fusion applications in dynamically composed sensor networks
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698431
S. Zug, André Dietrich, Christoph Steup, Tino Brade, Thomas Petig
The variable acquisition of distributed sensor perceptions promises effective utilization of these data across different applications. Multiple observations enable an increase in the quality of the system output. However, the dynamic composition rules out all off-line optimization approaches, especially for sensor-application scheduling. This paper addresses the need for an online adjustment of periodically operating sensors and fusion/control applications. Based on a number of common goals, e.g., minimizing the variance of sensor data or the age of data sets, we deduce different metrics. For one aspect, the number of input counts, we propose a mathematical description and apply related optimizations. We use an example analysis to illustrate further research goals.
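One way to picture the online phase adjustment is a small search over sensor phase offsets that minimizes the worst-case age of data seen by a periodic fusion task; the periods, the exhaustive search, and the age metric below are illustrative assumptions rather than the metrics derived in the paper.

```python
import itertools

def worst_case_age(sensor_phase, sensor_period, app_period, horizon=300.0):
    """Largest age of the newest sensor sample available when the fusion task fires
    (steady-state view: samples at sensor_phase + k*sensor_period for all integers k)."""
    worst, t = 0.0, 0.0
    while t < horizon:
        last_sample = sensor_phase + ((t - sensor_phase) // sensor_period) * sensor_period
        worst = max(worst, t - last_sample)
        t += app_period
    return worst

def optimize_phases(sensor_periods, app_period, step=1.0):
    """Brute-force per-sensor phase offsets minimizing the summed worst-case data age."""
    grids = [[i * step for i in range(int(p / step))] for p in sensor_periods]
    best, best_cost = None, float("inf")
    for phases in itertools.product(*grids):
        cost = sum(worst_case_age(ph, p, app_period)
                   for ph, p in zip(phases, sensor_periods))
        if cost < best_cost:
            best, best_cost = phases, cost
    return best, best_cost

# Two sensors with 10 ms and 15 ms periods feeding a 30 ms fusion task.
print(optimize_phases([10.0, 15.0], app_period=30.0))
```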
{"title":"Phase optimization for control/fusion applications in dynamically composed sensor networks","authors":"S. Zug, André Dietrich, Christoph Steup, Tino Brade, Thomas Petig","doi":"10.1109/ROSE.2013.6698431","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698431","url":null,"abstract":"The variable acquisition of distributed sensor perception promises an effective utilization of these data across different applications. Multiple observations enable an increase in the quality of the system output. However, the dynamic composition disables all off-line optimization approaches, especially for sensor-application-scheduling. This paper addresses the need for an online adjustment of periodically working sensors and fusion/control applications. Based on a number of common goals - e.g., minimization of the variance of sensor data or the age of data sets - we deduce different metrics. For one aspect, the number of input counts, we propose a mathematical description and apply related optimizations. We use an an example analysis to illustrate further research goals.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"197 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122680965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward multi-stage decoupled visual SLAM system
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698438
Mohamed H. Merzban, M. Abdellatif, Hossam S. Abbas, S. Sessa
SLAM is defined as the simultaneous estimation of a mobile robot's pose and the structure of the surrounding environment. Currently, there is much interest in Visual SLAM, i.e., SLAM with a camera as the main sensor, because the camera is a ubiquitous and affordable sensor. Camera measurements formed by perspective projection are highly nonlinear with respect to the estimated states, leading to a complicated nonlinear estimation problem. In this paper, a novel system is proposed that divides the problem into two parts: local and global motion estimation. This division leads to a simple linear estimation system. In the first stage, local motion parameters (acceleration, velocity, angular acceleration, and orientation) are estimated in the robot's local frame. The robot position and the scene map are then estimated in the second stage in the global frame as global motion parameters. The map is updated at each camera frame and is represented in a relative way to decouple robot pose estimation from map structure estimation. The new system reduces the map correction to a linear optimization problem. Simulation results showed that the proposed system converges and yields accurate results.
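A planar toy in Python of the decoupling idea: stage one (not shown) would estimate local motion in the robot frame, and stage two integrates that motion into a global pose while keeping landmarks relative to the current pose, so map corrections stay linear. The 2D state, the averaging map correction, and the class interface are assumptions for illustration, not the system proposed in the paper.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

class DecoupledSLAM2D:
    """Stage two of a two-stage pipeline: integrate locally estimated motion into a
    global pose while storing landmarks relative to the current robot frame."""
    def __init__(self):
        self.position = np.zeros(2)
        self.heading = 0.0
        self.landmarks_rel = {}            # id -> coordinates in the robot frame

    def integrate_local_motion(self, v_local, dtheta, dt):
        self.position += rot(self.heading) @ (v_local * dt)   # global position update
        self.heading += dtheta
        # Re-express stored landmarks in the new robot frame (a rigid, linear map).
        for lid, p in self.landmarks_rel.items():
            self.landmarks_rel[lid] = rot(-dtheta) @ (p - v_local * dt)

    def observe(self, lid, p_robot_frame):
        """Simple averaging correction of a re-observed relative landmark."""
        if lid in self.landmarks_rel:
            self.landmarks_rel[lid] = 0.5 * (self.landmarks_rel[lid] + p_robot_frame)
        else:
            self.landmarks_rel[lid] = np.array(p_robot_frame, dtype=float)

    def landmark_global(self, lid):
        return self.position + rot(self.heading) @ self.landmarks_rel[lid]
```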
{"title":"Toward multi-stage decoupled visual SLAM system","authors":"Mohamed H. Merzban, M. Abdellatif, Hossam S. Abbas, S. Sessa","doi":"10.1109/ROSE.2013.6698438","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698438","url":null,"abstract":"SLAM is defined as simultaneous estimation of mobile robot pose and structure of the surrounding environment Currently, there is a much interest in Visual SLAM, SLAM with a camera as main sensor, because the camera is an ubiquitous and affordable sensor. Camera measurements formed by perspective projection is highly nonlinear with respect to estimated states, leading to complicated nonlinear estimation problem. In this paper, a novel system is proposed that divides the problem into two parts: local and global motion estimation. This division leads to a simple linear estimation system. In the first stage, local motion parameters (acceleration, velocity, angular acceleration and orientation) are estimated in robot local frame. Robot position and the scene map are then estimated in the second stage in global frame as global motion parameters. Map is updated at each camera frame and is represented in a relative way to decouple robot pose from map structure estimation. The new system simplified the map correction to a linear optimization problem. Simulation results showed that the proposed system converges and yields accurate results.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124947487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new type of highly sensitive strain gage based on microstructured magnetic materials
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698426
C. Fosalau, C. Zet, D. Petrisor, C. Damian
The paper presents the results of an exhaustive study of the properties of a special category of materials, magnetic amorphous microwires (MAWs), in order to reveal their applicability for building sensitive elements that measure strains and deformations. The study focused on the stress-impedance (SI) effect, a variant of the Giant Magnetoimpedance (GMI) effect that occurs in such materials. The goal of the study was to find the optimal conditions for developing a new type of strain gage in order to build a sensitive force measurement device that can be applied in robotic arm control. Following the study, we built a strain gage with a gage factor about 1000 times larger than that of a metallic gage and with a linearity error under 1 % over an operating range of ± 200 ppm.
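For reference, gage factor and linearity figures of this kind can be computed from measured impedance-versus-strain data as sketched below; the synthetic data assume a gage factor near 2000 (roughly 1000 times a metallic gage's factor of about 2) and are not the paper's measurements.

```python
import numpy as np

def gage_factor(strain_ppm, impedance_ohm):
    """Estimate the gage factor GF = (dZ/Z0)/epsilon by a linear fit of the relative
    impedance change against strain, and report the worst-case linearity error."""
    strain = np.asarray(strain_ppm) * 1e-6                 # ppm -> absolute strain
    z = np.asarray(impedance_ohm)
    z0 = z[np.argmin(np.abs(strain))]                      # impedance at ~zero strain
    rel = (z - z0) / z0
    gf, offset = np.polyfit(strain, rel, 1)
    linearity = np.max(np.abs(rel - (gf * strain + offset))) / (rel.max() - rel.min())
    return gf, linearity

# Synthetic stress-impedance response with GF ~ 2000 over +/- 200 ppm (assumed).
eps = np.linspace(-200, 200, 9)                            # strain in ppm
z = 150.0 * (1.0 + 2000.0 * eps * 1e-6)                    # Z = Z0 (1 + GF * strain)
print(gage_factor(eps, z))
```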
{"title":"A new type of highly sensitive strain gage based on microstructured magnetic materials","authors":"C. Fosalau, C. Zet, D. Petrisor, C. Damian","doi":"10.1109/ROSE.2013.6698426","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698426","url":null,"abstract":"The paper presents the results obtained after performing an exhaustive study upon the properties of a special category of materials in form of magnetic amorphous microwires (MAWs), in order to reveal their applicability to build sensitive elements for measuring strains and deformations. The study has been directed to the Stressimpedance (SI) effect, a variant of Giant Magnetoimpedance (GMI) effect that occurs in such materials. The goal of the study was to find the optimal conditions for developing a new type of strain gage in order to build a sensitive force measurement device that would be successfully applied in robotic arm control. Following the study, we built a strain gage possessing a gage factor of about 1000 times larger than that of a metallic gage, with a linearity under 1 % for an operating range of ± 200 ppm.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129698873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}