Depth computation using optical flow and least squares
Pub Date: 2010-12-20 | DOI: 10.1109/SII.2010.5708293
S. Diamantas, Anastasios Oikonomidis, R. Crowder
Depth computation in robotics is an important step towards providing robust and accurate navigation capabilities to a mobile robot. In this paper we examine the problem of depth estimation with a view to its use in parsimonious systems where fast and accurate measurements are critical. For this purpose we combine two methods, namely optical flow and least squares, to infer the depth between a robot and a landmark. In the optical flow method, the variation of the optical flow vector at varying distances and velocities is observed. In the least squares method, snapshots of a landmark are taken from different robot positions. The results of the two methods show that combining optical flow and least squares significantly increases depth estimation accuracy.
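The two depth cues described above can be illustrated with a small numerical sketch: depth from the optical flow of a laterally translating camera, and a least-squares intersection of bearing rays taken from several robot positions. This is only a generic illustration of the two techniques under simplifying assumptions (pure lateral translation, known robot speed, planar geometry); the function names and toy values are ours, not the paper's.

```python
import numpy as np

def depth_from_flow(flow_px_per_s, focal_px, robot_speed):
    # For a camera translating parallel to the image plane with speed V,
    # a point at depth Z moves in the image at u_dot = f * V / Z, so Z = f * V / u_dot.
    return focal_px * robot_speed / flow_px_per_s

def landmark_from_bearings(positions, bearings):
    # Least-squares intersection of bearing rays taken from several robot positions:
    # minimize the squared perpendicular distance to every ray, i.e. solve
    # (sum_i P_i) x = sum_i P_i p_i  with  P_i = I - d_i d_i^T.
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, th in zip(positions, bearings):
        d = np.array([np.cos(th), np.sin(th)])
        P = np.eye(2) - np.outer(d, d)
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Toy usage: landmark at (5, 2), snapshots taken from three robot positions.
landmark = np.array([5.0, 2.0])
positions = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.5])]
bearings = [np.arctan2(d[1], d[0]) for d in (landmark - p for p in positions)]
estimate = landmark_from_bearings(positions, bearings)
print(estimate, np.linalg.norm(estimate - positions[-1]))  # landmark estimate and its depth from the last pose
print(depth_from_flow(flow_px_per_s=40.0, focal_px=800.0, robot_speed=0.2))  # 4.0 m for these toy numbers
```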
{"title":"Depth computation using optical flow and least squares","authors":"S. Diamantas, Anastasios Oikonomidis, R. Crowder","doi":"10.1109/SII.2010.5708293","DOIUrl":"https://doi.org/10.1109/SII.2010.5708293","url":null,"abstract":"Depth computation in robotics is an important step towards providing robust and accurate navigation capabilities to a mobile robot. In this paper we examine the problem of depth estimation with the view to be used in parsimonious systems where fast and accurate measurements are critical. For this purpose we have combined two methods, namely optical flow and least squares in order to infer depth estimates between a robot and a landmark. In the optical flow method the variation of the optical flow vector at varying distances and velocities is observed. In the least squares method snapshots of a landmark are taken from different robot positions. The results of the two methods show that there is a significant increase in depth estimation accuracy by combining optical flow and least squares.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128471791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrated experimental environment for orbital robotic systems, using ground-based and free-floating manipulators
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708310
N. Uyama, H. Lund, Koki Asakimori, Yuki Ikeda, Daichi Hirano, H. Nakanishi, Kazuya Yoshida
On-ground experiments for space robotic systems are essential to validate the constructed robotic system prior to launch. This paper presents an integrated on-ground experimental environment for orbital robotic systems. The experimental environment uses an air-floating testbed to realize a two-dimensional micro-gravity environment on the ground. As manipulation systems, the constructed environment adopts a ground-based manipulator and a free-floating robot. In order to verify the emulated micro-gravity environment, two cases are tested: the impulse-momentum relationship between a ground-based manipulator and a free-floating target, and the conservation of momentum in a free-floating robot. Both results confirm the validity of the constructed experimental environment for on-ground micro-gravity emulation.
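The two verification cases map to elementary relations that can be checked numerically. The sketch below illustrates them under strong simplifications (planar motion, rigid bodies, frictionless air table); the masses and force profile are invented for illustration and are not the paper's experimental values.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 0.5, dt)
force = 2.0 * np.exp(-((t - 0.25) / 0.05) ** 2)   # short contact-force pulse [N]

m_target = 3.0                                    # free-floating target mass [kg]
impulse = np.sum(force) * dt                      # J = integral of F dt
v_final = np.cumsum(force * dt / m_target)[-1]    # integrate F = m dv/dt
print(impulse, m_target * v_final)                # impulse equals the change of momentum

# Momentum conservation for a two-body free-floating robot (base + arm), starting at rest:
# when an internal motor accelerates the arm, the base recoils so total momentum stays zero.
m_base, m_arm = 20.0, 5.0
v_arm = 0.1                                       # arm driven to 0.1 m/s
v_base = -(m_arm * v_arm) / m_base
print(m_base * v_base + m_arm * v_arm)            # total linear momentum remains 0
```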
{"title":"Integrated experimental environment for orbital robotic systems, using ground-based and free-floating manipulators","authors":"N. Uyama, H. Lund, Koki Asakimori, Yuki Ikeda, Daichi Hirano, H. Nakanishi, Kazuya Yoshida","doi":"10.1109/SII.2010.5708310","DOIUrl":"https://doi.org/10.1109/SII.2010.5708310","url":null,"abstract":"On-ground experiment for space robotic system is essential to validate constructed robotic system prior to launch. This paper presents an integrated on-ground experimental environment for orbital robotic system. The experimental environment utilizes an air-floating testbed to realize two-dimensional micro-gravity environment on ground. As manipulation system, the constructed environment adopts a ground-based manipulator and a free-floating robot. In order to verify the emulated micro-gravity environment, two cases are tested: the impulse-momentum relationship between a ground-based manipulator and a free-floating target, and the conservation of momentum in a free-floating robot. Both results conclude the validity of the constructed experimental environment for on-ground micro-gravity emulation.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114862432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision-based localization using active scope camera — Accuracy evaluation for structure from motion in disaster environment
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708296
Michihisa Ishikura, E. Takeuchi, M. Konyo, S. Tadokoro
This paper presents the evaluation results for conventional methods that can be used for vision-based localization. The Active Scope Camera is a very thin snake robot that can be used as a rescue robot in search and rescue missions. Self-position estimation of the Active Scope Camera is important for an efficient search. Nevertheless, adding dedicated sensors for this purpose hinders the movement and maneuvering of the camera through narrow gaps, because such sensors are too big and heavy for the Active Scope Camera. Vision-based localization using a fish-eye camera is therefore a suitable technique for self-position estimation. However, the images obtained with the Active Scope Camera are not of good quality: the materials of objects found in disaster environments and overexposure from the light-emitting diodes embedded at the camera tip affect the matching of feature points. In this paper, the properties of images of disaster sites obtained with the Active Scope Camera and an accuracy evaluation of vision-based localization are described.
{"title":"Vision-based localization using active scope camera — Accuracy evaluation for structure from motion in disaster environment","authors":"Michihisa Ishikura, E. Takeuchi, M. Konyo, S. Tadokoro","doi":"10.1109/SII.2010.5708296","DOIUrl":"https://doi.org/10.1109/SII.2010.5708296","url":null,"abstract":"This paper presents the evaluation results for conventional methods that can be used for vision-based localization. An Active Scope Camera is a very thin snake robot and can be used as a rescue robot for search and rescue missions. Self-position estimation of the Active Scope Camera is important for efficient search. Nevertheless, using sensors for this purpose hinders the movement and maneuvering of the camera through narrow gaps, because sensors are very big and heavy for the Active Scope Camera. Vision-based localization using a fish-eye camera is suitable technique for self-position estimation. However, the images obtained using the Active Scope Camera are not of good quality. The material of objects found in disaster environments and overexposure by light-emitting diodes embedded at the camera tip affects the matching of feature points. In this paper, properties of images of disaster sites obtained using the Active Scope Camera and the accuracy evaluation of vision-based localization are described.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115204387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rhythmic components of spatio-temporally decorrelated EEG signals based Common Spatial Pattern
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708348
M. Mukul, F. Matsuno
This paper addresses the performance of Common Spatial Pattern (CSP) on rhythmic band information selected from temporally decorrelated signals by a zero-phase FIR digital filter. A standard blind source separation (BSS) method is applied to the EOG-corrected EEG signals to make them temporally decorrelated. This work also considers standard CSP with the IEEE-1057 signal reconstruction algorithm for extracting rhythmic information from the EOG-corrected EEG signals. The selected rhythmic band information is further processed by the CSP method for feature extraction. The performance of the proposed method has been evaluated with BCI performance evaluation parameters: classification accuracy (ACC) and Cohen's kappa coefficient (κ).
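For reference, standard CSP feature extraction of the kind applied here reduces to a generalized eigendecomposition of the two class covariance matrices. The sketch below shows only this generic step; the paper's full pipeline (BSS-based EOG correction, zero-phase FIR band selection, IEEE-1057 reconstruction) is not reproduced, and the function names are ours.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples), band-limited EEG.
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))          # normalized spatial covariance
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem  ca w = lambda (ca + cb) w; extreme eigenvalues give
    # the filters that maximize variance for one class while minimizing it for the other.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T                       # (2 * n_pairs, n_channels)

def csp_features(trial, filters):
    # Log-variance features commonly fed to the classifier after spatial filtering.
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```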
{"title":"Rhythmic components of spatio-temporally decorrelated EEG signals based Common Spatial Pattern","authors":"M. Mukul, F. Matsuno","doi":"10.1109/SII.2010.5708348","DOIUrl":"https://doi.org/10.1109/SII.2010.5708348","url":null,"abstract":"This paper addresses the performance of Common Spatial Pattern (CSP) on rhythmic band information selected from temporally decorrelated signals by zero-phase FIR digital filter. The standard blind source separation (BSS) method is applied to the EOG corrected EEG signals to make the EOG corrected EEG signals temporally decorrelated. This work considers the standard CSP with IEEE-1057 signal reconstruction algorithm for extraction of rhythmic information from the EOG corrected EEG signals too. The selected rhythmic band information is further processed by the CSP method for the feature extraction. The performance of the proposed method has been evaluated by BCI performance evaluation parameters: classification accuracy (ACC) and Cohen's Kappa co-efficient (k).","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114324545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamics simulation of a neuromuscular model of ankle clonus for neurophysiological education by a leg-shaped haptic simulator
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708347
T. Kikuchi, K. Oda
We have proposed a leg-shaped haptic simulator (Leg-Robot) that provides an environment for young physical therapists to learn rehabilitation and diagnosis techniques with haptic information that mimics a real patient with abnormal motor function, for example after stroke or spinal cord injury. In previous studies, we also proposed a control method to produce a clonus-like movement in the Leg-Robot. However, it was a purely engineered approach and did not sufficiently reflect neurological knowledge of the disability. In order to enhance the usefulness of the Leg-Robot as an educational system for learning the neurophysiological mechanisms of abnormal activities in disabled patients, we have developed a new, neurologically knowledge-based control method to simulate and control a clonus-like movement of the Leg-Robot. In this paper, the dynamic simulation of the proposed clonus model with the Open Dynamics Engine (ODE) is mainly discussed.
{"title":"Dynamics simulation of a neuromuscular model of ankle clonus for neurophysiological education by a leg-shaped haptic simulator","authors":"T. Kikuchi, K. Oda","doi":"10.1109/SII.2010.5708347","DOIUrl":"https://doi.org/10.1109/SII.2010.5708347","url":null,"abstract":"We have proposed a leg-shaped haptic simulator (Leg-Robot) that provides an environment for young physical therapist to learn rehabilitation techniques or diagnosis techniques with haptic information which mimics a real patient with abnormal motor function, for example, stroke or spinal cord injury. In previous studies, we also proposed a control method to develop a clonus-like movement for the Leg-Robot. However, it was a completely engineered approach and does not reflect neurological knowledge of the disability sufficiently. In order to enhance profitability of the Leg-Robot as an educational system to learn neurophysiological mechanism of abnormal activities of disabled patients, we newly developed neurologically knowledge-based control method to simulate and control a clonus-like movement of the Leg-Robot. In this paper, dynamic simulation with Open Dynamics Engine (ODE) of a proposed clonus model is mainly discussed.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125031035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realization and analysis of giant-swing motion using Q-Learning
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708353
Nozomi Toyoda, T. Yokoyama, Naoki Sakai, T. Yabuta
Many research papers have reported studies on sports robots that realize giant-swing motion. However, almost all of these robots were controlled using trajectory planning methods, and few robots have realized giant-swing motion by learning. Consequently, in this study, we attempted to construct a humanoid robot that realizes giant-swing motion by Q-learning, a reinforcement learning technique. The significant aspect of our study is that almost no robot model is constructed beforehand; the robot learns giant-swing motion only through interaction with the environment during simulations. Our implementation faced several problems, such as imperfect perception of the velocity state and robot posture issues caused by using only the arm angle. However, our real robot realized giant-swing motion by averaging the Q values and by using rewards (the absolute foot angle and the angular velocity of the arm) in the simulated learning data; the sampling time was 250 ms. Furthermore, the feasibility of generalizing the learning to realize selective motion in the forward and backward rotational directions was investigated; it was revealed that generalization of the learning is feasible as long as it does not interfere with the robot's motions.
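A generic tabular Q-learning loop of the kind referred to above is sketched below; the environment interface (reset/actions/step), the state-action discretization, and the hyperparameters are illustrative placeholders, not the paper's robot model or settings.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    # Q maps (state, action) pairs to action values; unseen pairs default to 0.
    Q = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            actions = env.actions(s)
            if random.random() < eps:                     # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])
            s2, r, done = env.step(a)                     # reward r, e.g. built from foot angle and arm angular velocity
            best_next = 0.0 if done else max(Q[(s2, a_)] for a_ in env.actions(s2))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])   # Q-learning update
            s = s2
    return Q
```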
{"title":"Realization and analysis of giant-swing motion using Q-Learning","authors":"Nozomi Toyoda, T. Yokoyama, Naoki Sakai, T. Yabuta","doi":"10.1109/SII.2010.5708353","DOIUrl":"https://doi.org/10.1109/SII.2010.5708353","url":null,"abstract":"Many research papers have reported studies on sports robots that realize giant-swing motion. However, almost all these robots were controlled using trajectory planning methods, and few robots realized giant-swing motion by learning. Consequently, in this study, we attempted to construct a humanoid robot that realizes giant-swing motion by Q-learning, a reinforcement learning technique. The significant aspect of our study is that few robotic models were constructed beforehand; the robot learns giant-swing motion only by interaction with the environment during simulations. Our implementation faced several problems such as imperfect perception of the velocity state and robot posture issues caused by using only the arm angle. However, our real robot realized giant-swing motion by averaging the Q value and by using rewards — the absolute angle of the foot angle and the angular velocity of the arm angle-in the simulated learning data; the sampling time was 250 ms. Furthermore, the feasibility of generalization of learning for realizing selective motion in the forward and backward rotational directions was investigated; it was revealed that the generalization of learning is feasible as long as it does not interfere with the robot's motions.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132397486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual properties of vibrotactile material texture: Effects of amplitude changes and stimuli beneath detection thresholds
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708356
S. Okamoto, Yoji Yamada
We investigate the perceptual properties of vibrotactile material textures: in particular, the effects of changes in vibrotactile amplitudes and of vibratory stimuli that are beneath detection thresholds. Existing knowledge of the perceptual properties of vibrotactile gratings does not hold for vibrotactile material textures, because material textures are composed of many more frequency components than grating textures. We experimentally investigate the perceptual properties of wood and sandpaper textures through lossy data compression of the textures. Lossy data compression is suitable for manipulating many independent variables of material textures while maintaining their quality. By quantizing the vibratory amplitudes, we find that the subjective quality of the textures does not change until the number of quantization steps is reduced to 12. This means that the data size of material textures can be reduced by up to approximately 75% while maintaining their quality. By cutting off the frequency components whose amplitudes are beneath the shifted-threshold curve, we find that even subliminal stimuli affect perception. Developers of vibrotactile material textures should be aware of these perceptual properties.
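The reported 75% reduction is consistent with simple bit-counting: 12 quantization steps fit in 4 bits, versus, say, 16-bit raw samples. The sketch below shows a plain uniform quantizer and that arithmetic; the paper's actual compression scheme may differ in detail.

```python
import numpy as np

def quantize(signal, n_steps):
    # Uniformly quantize vibration amplitudes to n_steps levels over the signal's range
    # (the simplest possible quantizer, used here only to illustrate the idea).
    lo, hi = signal.min(), signal.max()
    q = np.round((signal - lo) / (hi - lo) * (n_steps - 1))
    return q / (n_steps - 1) * (hi - lo) + lo

bits_original = 16                           # e.g. 16-bit acceleration samples (assumed)
bits_quantized = int(np.ceil(np.log2(12)))   # 12 levels fit in 4 bits
print(1 - bits_quantized / bits_original)    # ~0.75 size reduction, matching the reported figure
```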
{"title":"Perceptual properties of vibrotactile material texture: Effects of amplitude changes and stimuli beneath detection thresholds","authors":"S. Okamoto, Yoji Yamada","doi":"10.1109/SII.2010.5708356","DOIUrl":"https://doi.org/10.1109/SII.2010.5708356","url":null,"abstract":"We investigate the perceptual properties of vibro-tactile material textures: in particular, the effects of changes in vibrotactile amplitudes and of vibratory stimuli that are beneath detection thresholds. Existing knowledge of the perceptual properties of vibrotactile gratings does not hold for vibrotactile material textures because material textures are composed of many more frequency components than those of grating textures. We experimentally investigate perceptual properties of wood and sandpaper textures through lossy data compression of the textures. Lossy data compression is suitable for manipulating many independent variables of material textures while maintaining their qualities. By quantization of vibratory amplitudes, we find that subjective quality of textures does not change until the number of quantization steps reaches 12. This represents that data sizes of material textures are reduced by up to approximately 75% while maintaining their qualities. By cutting off the frequency components whose amplitudes are beneath the shifted-threshold curve, we find that even subliminal stimuli affect perception. Developers of vibro-tactile material textures should be aware of these perceptual properties.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131157566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Orchard traveling UGV using particle filter based localization and inverse optimal control
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708297
K. Kurashiki, T. Fukao, Kenji Ishiyama, Tsuyoshi Kamiya, N. Murakami
The authors previously proposed an Unmanned Ground Vehicle (UGV) for orchards as a base platform for autonomous robot systems performing tasks such as monitoring, pesticide spraying, and harvesting. To control a UGV in a semi-natural environment, accurate self-localization and a control law that is robust to the large disturbances caused by rough terrain are the first priorities. In this paper, a self-localization algorithm based on a 2D laser range finder and a particle filter is proposed. A robust nonlinear control law and a path regeneration algorithm that the authors previously proposed for underactuated mobile robots are combined with the localization method and applied to a drive-by-wire experimental vehicle. Excellent experimental results were obtained for traveling through a real orchard: the standard deviation of the lateral control error was less than 15 cm.
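A minimal particle-filter localization step of the kind described above is sketched below; the motion-noise values and the scan_likelihood callback (matching a 2D laser scan against a map) are assumed placeholders, not the authors' implementation.

```python
import numpy as np

def pf_step(particles, weights, odom, scan, scan_likelihood, motion_noise=(0.02, 0.02, 0.01)):
    # particles: (N, 3) array of [x, y, theta]; odom: (dx, dy, dtheta) in the robot frame.
    n = len(particles)
    dx, dy, dth = odom
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    # Predict: apply the odometry increment in each particle's frame, plus noise.
    particles[:, 0] += c * dx - s * dy + np.random.normal(0, motion_noise[0], n)
    particles[:, 1] += s * dx + c * dy + np.random.normal(0, motion_noise[1], n)
    particles[:, 2] += dth + np.random.normal(0, motion_noise[2], n)
    # Update: weight each particle by how well the laser scan matches the map from that pose.
    weights = weights * np.array([scan_likelihood(p, scan) for p in particles])
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, n, p=weights)
        particles, weights = particles[idx].copy(), np.full(n, 1.0 / n)
    return particles, weights
```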
{"title":"Orchard traveling UGV using particle filter based localization and inverse optimal control","authors":"K. Kurashiki, T. Fukao, Kenji Ishiyama, Tsuyoshi Kamiya, N. Murakami","doi":"10.1109/SII.2010.5708297","DOIUrl":"https://doi.org/10.1109/SII.2010.5708297","url":null,"abstract":"The authors previously proposed an Unmanned Ground Vehicle (UGV) in an orchard as a base platform for autonomous robot systems for performing tasks such as monitoring, pesticide spraying, and harvesting. To control a UGV in a semi-natural environment, accurate self-localization and a control law that is robust under large disturbances from rough terrain are the first priorities. In this paper, a self-localization algorithm consisting of a 2D laser range finder and the particle filter is proposed. A robust nonlinear control law and a path regeneration algorithm that the authors proposed for underactuated mobile robots are combined with the localization method and applied to a drive-by-wire experimental vehicle. Excellent experimental results were obtained for traveling through a real orchard. The standard deviation of the control error in the lateral direction was less than 15cm.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131271188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time prediction of fall and collision of tracked vehicle for remote-control support
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708298
Ken Sakurada, Shihoko Suzuki, K. Ohno, E. Takeuchi, S. Tadokoro, Akihiko Hata, Naoki Miyahara, K. Higashi
This paper describes a new method that predicts falls and collisions in real time in order to support the remote control of a tracked vehicle with sub-tracks. A tracked vehicle has a high ability to traverse rough terrain. However, it is difficult for an operator at a remote site to control the vehicle's moving direction and speed. Hence, we propose a new path evaluation system based on measurements of the environmental shapes around the vehicle. In this system, candidate paths are generated from operator inputs and terrain information. To evaluate the traversability of a path, we estimate the pose of the robot on the path and its contact points with the ground. The best combination of translational and rotational velocity is then chosen.
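Candidate paths of the kind described above can be generated by rolling out pairs of translational and rotational velocity with a unicycle model; the sketch below shows only this generation step with illustrative values, and each path would then be scored by the predicted pose and ground-contact evaluation.

```python
import numpy as np

def candidate_paths(pose, v_set, w_set, horizon=2.0, dt=0.1):
    # pose: (x, y, theta); v_set / w_set: candidate translational and rotational velocities
    # (in practice derived from the operator's joystick input).
    paths = []
    for v in v_set:
        for w in w_set:
            x, y, th = pose
            pts = []
            for _ in range(int(horizon / dt)):     # forward rollout of a unicycle model
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                th += w * dt
                pts.append((x, y, th))
            paths.append(((v, w), pts))
    return paths

# Example: 3 speeds x 5 turn rates = 15 candidate paths to evaluate against the terrain map.
paths = candidate_paths((0.0, 0.0, 0.0), v_set=[0.2, 0.4, 0.6], w_set=np.linspace(-0.5, 0.5, 5))
print(len(paths))
```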
{"title":"Real-time prediction of fall and collision of tracked vehicle for remote-control support","authors":"Ken Sakurada, Shihoko Suzuki, K. Ohno, E. Takeuchi, S. Tadokoro, Akihiko Hata, Naoki Miyahara, K. Higashi","doi":"10.1109/SII.2010.5708298","DOIUrl":"https://doi.org/10.1109/SII.2010.5708298","url":null,"abstract":"This thesis describes a new method that in real time predicts fall and collision in order to support remote control of a tracked vehicle with sub-tracks. A tracked vehicle has high ability of getting over rough terrain. However, it is difficult for an operator at a remote place to control the vehicle's moving direction and speed. Hence, we propose a new path evaluation system based on the measurement of environmental shapes around the vehicle. In this system, the candidate paths are generated by operator inputs and terrain information. For evaluating the traversability of the path, we estimate the pose of the robot on the path and contact points with the ground. Then, the combination of translational and rotational velocity is chosen.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117185010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Manipulation of an irregularly shaped object by two mobile robots
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708328
Zhaojia Liu, Hiromasa Kamogawa, J. Ota
Fast transition from a stable initial state to a stable handling state is important when multiple mobile robots manipulate and transport a heavy and bulky object. In this paper, a cooperative system consisting of two mobile robots was designed to realize such a fast transition. A gripper robot grasps and lifts an object from one side to provide enough space for a lifter robot to lift the object. The fast transition can be formulated as an optimization problem, and we propose an algorithm to realize it based on the designed cooperative system. The experimental results illustrate the validity of the proposed method.
{"title":"Manipulation of an irregularly shaped object by two mobile robots","authors":"Zhaojia Liu, Hiromasa Kamogawa, J. Ota","doi":"10.1109/SII.2010.5708328","DOIUrl":"https://doi.org/10.1109/SII.2010.5708328","url":null,"abstract":"Fast transition from a stable initial state to a stable handling state is important when multiple mobile robots manipulate and transport a heavy and bulky object. In this paper, a cooperative system consisting of two mobile robots was designed to realize fast transition. A gripper robot grasps and lifts an object from one side to provide enough space for a lifter robot to lift the object. Fast transition can be formulated as an optimization problem. We propose an algorithm to realize fast transition based on the cooperative system designed. The experimental results illustrate the validity of the proposed method.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"280 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115425540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}