Pub Date: 2010-12-20 | DOI: 10.1109/SII.2010.5708293
Title: Depth computation using optical flow and least squares
Authors: S. Diamantas, Anastasios Oikonomidis, R. Crowder
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: Depth computation is an important step toward providing robust and accurate navigation capabilities to a mobile robot. In this paper we examine the problem of depth estimation with a view to its use in parsimonious systems where fast and accurate measurements are critical. For this purpose we combine two methods, optical flow and least squares, to infer the depth between a robot and a landmark. In the optical flow method, the variation of the optical flow vector is observed at varying distances and velocities. In the least squares method, snapshots of a landmark are taken from different robot positions. The results show that combining optical flow and least squares significantly increases depth-estimation accuracy.
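The two estimators named in this abstract can be sketched in a few lines. Below is a minimal illustration, not the authors' implementation: a simplified pinhole relation between depth and translational optical flow, and a least-squares intersection of bearing rays taken from several robot positions. All function names and the measurement model are assumptions for illustration.

```python
import math

def depth_from_flow(robot_speed, focal_px, flow_px_per_s):
    """Depth from translational optical flow, Z = f * V / u_dot, for a
    camera translating parallel to the image plane (simplified model)."""
    return focal_px * robot_speed / flow_px_per_s

def depth_least_squares(positions, bearings):
    """Least-squares intersection of bearing rays from robot positions
    (x_i, y_i) toward a static landmark. Each ray satisfies
    sin(t)*x - cos(t)*y = sin(t)*x_i - cos(t)*y_i; the 2x2 normal
    equations are solved in closed form."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), t in zip(positions, bearings):
        s, c = math.sin(t), math.cos(t)
        r = s * xi - c * yi
        a11 += s * s; a12 += -s * c; a22 += c * c
        b1 += s * r;  b2 += -c * r
    det = a11 * a22 - a12 * a12
    x = (a22 * b1 - a12 * b2) / det
    y = (a11 * b2 - a12 * b1) / det
    return x, y
```

With more than two snapshots the same normal equations average out bearing noise, which is the intuition behind combining flow-based estimates with a least-squares refinement.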
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708310
Title: Integrated experimental environment for orbital robotic systems, using ground-based and free-floating manipulators
Authors: N. Uyama, H. Lund, Koki Asakimori, Yuki Ikeda, Daichi Hirano, H. Nakanishi, Kazuya Yoshida
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: On-ground experiments are essential for validating a space robotic system prior to launch. This paper presents an integrated on-ground experimental environment for orbital robotic systems. The environment uses an air-floating testbed to realize a two-dimensional micro-gravity environment on the ground. As manipulation systems, it adopts a ground-based manipulator and a free-floating robot. To verify the emulated micro-gravity environment, two cases are tested: the impulse-momentum relationship between a ground-based manipulator and a free-floating target, and the conservation of momentum in a free-floating robot. Both results confirm the validity of the constructed environment for on-ground micro-gravity emulation.
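Both verification cases reduce to textbook mechanics and can be sketched numerically. The masses, forces, and durations below are invented for illustration, not taken from the paper: an impulse J = F * t changes a free-floating target's velocity by exactly J / m, and equal-and-opposite internal forces in a free-floating two-body system leave total momentum unchanged.

```python
def impulse_momentum_check(mass, force, duration, steps=1000):
    """Integrate a constant contact force on a free-floating mass and
    return (integrated velocity change, impulse / mass); the two agree."""
    dt = duration / steps
    v = 0.0
    for _ in range(steps):
        v += (force / mass) * dt        # Newton's second law, explicit Euler
    return v, force * duration / mass

def internal_actuation_momentum(m1, m2, force, duration, steps=1000):
    """Equal and opposite internal forces (e.g. a joint of a free-floating
    robot) must conserve total linear momentum; returns the final total."""
    dt = duration / steps
    v1 = v2 = 0.0
    for _ in range(steps):
        v1 += (force / m1) * dt         # action on body 1
        v2 += (-force / m2) * dt        # reaction on body 2
    return m1 * v1 + m2 * v2            # total momentum (starts at zero)
```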
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708296
Title: Vision-based localization using active scope camera — Accuracy evaluation for structure from motion in disaster environment
Authors: Michihisa Ishikura, E. Takeuchi, M. Konyo, S. Tadokoro
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: This paper presents evaluation results for conventional methods that can be used for vision-based localization. The Active Scope Camera is a very thin snake robot that can be used as a rescue robot for search and rescue missions. Self-position estimation of the Active Scope Camera is important for efficient search. However, dedicated sensors hinder the movement and maneuvering of the camera through narrow gaps, because they are too big and heavy for the Active Scope Camera. Vision-based localization using a fish-eye camera is a suitable technique for self-position estimation, but the images obtained with the Active Scope Camera are of poor quality: the materials of objects found in disaster environments and overexposure by the light-emitting diodes embedded at the camera tip affect the matching of feature points. This paper describes the properties of images of disaster sites obtained with the Active Scope Camera and an accuracy evaluation of vision-based localization.
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708348
Title: Rhythmic components of spatio-temporally decorrelated EEG signals based Common Spatial Pattern
Authors: M. Mukul, F. Matsuno
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: This paper addresses the performance of Common Spatial Pattern (CSP) on rhythmic band information selected from temporally decorrelated signals by a zero-phase FIR digital filter. A standard blind source separation (BSS) method is applied to the EOG-corrected EEG signals to make them temporally decorrelated. This work also considers standard CSP with the IEEE-1057 signal reconstruction algorithm for extracting rhythmic information from the EOG-corrected EEG signals. The selected rhythmic band information is further processed by the CSP method for feature extraction. The performance of the proposed method is evaluated with standard BCI performance measures: classification accuracy (ACC) and Cohen's kappa coefficient (k).
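The zero-phase FIR filtering mentioned in this abstract is commonly obtained by running a causal FIR filter forward and then backward over the signal, which cancels the filter's phase delay. A minimal stdlib sketch follows; the filter taps and test signal are illustrative and are not the authors' pipeline.

```python
def fir(x, h):
    """Causal FIR convolution: y[n] = sum_k h[k] * x[n-k], with x[n<0] = 0."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def zero_phase(x, h):
    """Forward-backward filtering: the second, time-reversed pass cancels
    the phase delay introduced by the first, leaving zero net phase shift."""
    forward = fir(x, h)
    backward = fir(forward[::-1], h)
    return backward[::-1]
```

SciPy's `filtfilt` implements the same idea with better edge handling; away from the edges, a DC signal passes through this unity-gain filter unchanged, while each pass squares the filter's magnitude response.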
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708347
Title: Dynamics simulation of a neuromuscular model of ankle clonus for neurophysiological education by a leg-shaped haptic simulator
Authors: T. Kikuchi, K. Oda
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: We have proposed a leg-shaped haptic simulator (Leg-Robot) that provides an environment for young physical therapists to learn rehabilitation and diagnosis techniques with haptic information that mimics a real patient with abnormal motor function caused, for example, by stroke or spinal cord injury. In previous studies, we also proposed a control method that produces a clonus-like movement for the Leg-Robot. However, it was a purely engineered approach and did not sufficiently reflect neurological knowledge of the disability. To enhance the usefulness of the Leg-Robot as an educational system for learning the neurophysiological mechanisms behind the abnormal activities of disabled patients, we newly developed a control method based on neurological knowledge to simulate and control a clonus-like movement of the Leg-Robot. This paper mainly discusses dynamic simulation of the proposed clonus model with the Open Dynamics Engine (ODE).
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708353
Title: Realization and analysis of giant-swing motion using Q-Learning
Authors: Nozomi Toyoda, T. Yokoyama, Naoki Sakai, T. Yabuta
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: Many research papers have reported sports robots that realize giant-swing motion. However, almost all of these robots were controlled using trajectory planning methods, and few realized giant-swing motion by learning. In this study, we therefore attempted to construct a humanoid robot that realizes giant-swing motion by Q-learning, a reinforcement learning technique. A significant aspect of our study is that almost no robotic model was constructed beforehand; the robot learns giant-swing motion only through interaction with the environment during simulations. Our implementation faced several problems, such as imperfect perception of the velocity state and robot posture issues caused by using only the arm angle. Nevertheless, our real robot realized giant-swing motion by averaging the Q values and by using rewards (the absolute foot angle and the angular velocity of the arm) in the simulated learning data; the sampling time was 250 ms. Furthermore, we investigated whether the learning generalizes to selective motion in the forward and backward rotational directions; the results revealed that generalization is feasible as long as it does not interfere with the robot's motions.
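Tabular Q-learning of the kind used in this work can be illustrated on a toy chain task. The environment, rewards, and hyperparameters below are invented for illustration and are unrelated to the giant-swing setup; only the update rule, Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a)), is the standard algorithm.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Learn to walk right along a chain; reaching the last state pays 1."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # q[state][0=left, 1=right]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            if rng.random() < epsilon:
                a = rng.randrange(2)                    # explore
            else:
                a = 0 if q[s][0] > q[s][1] else 1       # exploit
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            bootstrap = 0.0 if s2 == n_states - 1 else gamma * max(q[s2])
            q[s][a] += alpha * (r + bootstrap - q[s][a])  # Q-learning update
            s = s2
    return q
```

After training, the greedy policy prefers "right" in every non-terminal state, with values decaying by gamma per step to the goal.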
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708356
Title: Perceptual properties of vibrotactile material texture: Effects of amplitude changes and stimuli beneath detection thresholds
Authors: S. Okamoto, Yoji Yamada
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: We investigate the perceptual properties of vibrotactile material textures: in particular, the effects of changes in vibrotactile amplitude and of vibratory stimuli beneath detection thresholds. Existing knowledge of the perceptual properties of vibrotactile gratings does not hold for vibrotactile material textures, because material textures comprise many more frequency components than grating textures. We experimentally investigate the perceptual properties of wood and sandpaper textures through lossy data compression, which is suitable for manipulating many independent variables of material textures while maintaining their quality. By quantizing vibratory amplitudes, we find that the subjective quality of textures does not change until the number of quantization steps is reduced to 12. This means that the data sizes of material textures can be reduced by up to approximately 75% while maintaining their quality. By cutting off frequency components whose amplitudes lie beneath the shifted-threshold curve, we find that even subliminal stimuli affect perception. Developers of vibrotactile material textures should be aware of these perceptual properties.
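The amplitude quantization described here is straightforward to sketch. Twelve uniform steps need ceil(log2(12)) = 4 bits per sample, which against an assumed 16-bit original is in the ballpark of the roughly 75% size reduction reported; the test signal and the 16-bit assumption are mine, not the paper's.

```python
import math

def quantize(samples, steps):
    """Uniform amplitude quantization of a signal to a fixed number of
    levels spanning its own min-max range."""
    lo, hi = min(samples), max(samples)
    step = (hi - lo) / (steps - 1)
    return [lo + round((s - lo) / step) * step for s in samples]

def compression_ratio(steps, original_bits=16):
    """Fraction of the per-sample bit budget saved by coarser amplitudes."""
    return 1.0 - math.ceil(math.log2(steps)) / original_bits
```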
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708297
Title: Orchard traveling UGV using particle filter based localization and inverse optimal control
Authors: K. Kurashiki, T. Fukao, Kenji Ishiyama, Tsuyoshi Kamiya, N. Murakami
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: The authors previously proposed an Unmanned Ground Vehicle (UGV) for orchards as a base platform for autonomous robot systems performing tasks such as monitoring, pesticide spraying, and harvesting. To control a UGV in a semi-natural environment, accurate self-localization and a control law that is robust to the large disturbances of rough terrain are the first priorities. In this paper, a self-localization algorithm based on a 2D laser range finder and a particle filter is proposed. A robust nonlinear control law and a path regeneration algorithm that the authors proposed for underactuated mobile robots are combined with the localization method and applied to a drive-by-wire experimental vehicle. Excellent experimental results were obtained while traveling through a real orchard: the standard deviation of the lateral control error was less than 15 cm.
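The particle filter localization step can be sketched in one dimension. The motion model, noise levels, and landmark geometry below are invented for illustration; the paper works with a 2D laser range finder, but the predict-weight-resample loop is the same.

```python
import math
import random

def particle_filter_1d(moves, measurements, n=500, motion_sd=0.2,
                       meas_sd=0.2, seed=1):
    """1D localization against a landmark at the origin: predict particles
    with a noisy motion model, weight them by the range-measurement
    likelihood, resample, and return the mean estimate."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 10.0) for _ in range(n)]
    for u, z in zip(moves, measurements):
        # predict: apply the commanded motion plus process noise
        particles = [p + u + rng.gauss(0.0, motion_sd) for p in particles]
        # update: Gaussian likelihood of the measured range to the landmark
        w = [math.exp(-0.5 * ((z - p) / meas_sd) ** 2) for p in particles]
        # resample proportionally to weight
        particles = rng.choices(particles, weights=w, k=n)
    return sum(particles) / n
```

Starting from a uniform prior over 10 m, five motion-plus-measurement cycles are enough to collapse the particle cloud onto the true pose.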
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708344
Title: A grasp criterion for robot hands considering multiple aspects of tasks and hand mechanisms
Authors: M. Sato, Seiji Sugiyama, T. Yoshikawa
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: In this paper, a task-oriented grasp criterion for robot hands is discussed. Grasp criteria proposed in the past aim at stable grasping and at tasks that require adequate forces and moments on the grasped object. These criteria do not consider other parameters, such as the positions and velocities of the grasped object or the mechanical properties of robot hands. We propose a task-oriented grasp criterion that evaluates the feasibility of the task while considering hand mechanisms. Our proposed method finds the grasping position by considering the efficiency of the grasped object's positions, velocities, and forces. As a result, this criterion can select grasping positions similar to those chosen by humans.
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708354
Title: Attitude control system of micro satellite RISING-2
Authors: Kazufumi Fukuda, T. Nakano, Y. Sakamoto, T. Kuwahara, Kazuya Yoshida, Y. Takahashi
Venue: 2010 IEEE/SICE International Symposium on System Integration
Abstract: This paper summarizes the attitude control system of the 50-kg micro satellite RISING-2, now under development by Tohoku University and Hokkaido University. The main mission of RISING-2 is Earth surface observation with 5-m resolution using a Cassegrain telescope with a 10-cm diameter and a 1-m focal length. Accurate attitude control, with direction errors of less than 0.1 deg and angular velocity errors of less than 0.02 deg/s, is required to realize this observation. In addition, because the power consumption of the science units is larger than expected, the actuators must operate with sufficiently low power. The attitude control system realizes 3-axis stabilization for the observation by means of star sensors, gyro sensors, sun attitude sensors, and reaction wheels. In this paper, the attitude control law of RISING-2 is analyzed to keep the power of the reaction wheels under the limit. The simulation is based on component specifications and includes noise data for components still under development. The simulation results show that, with the RISING-2 attitude control system, the pointing error is less than 0.1 deg most of the time.
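The constraint analyzed here, meeting pointing requirements while keeping actuator effort under a limit, can be illustrated with a single-axis sketch: a PD attitude law on a rigid body with a hard torque cap. The gains, inertia, and cap below are invented; the flight system uses reaction wheels and a full 3-axis model.

```python
def saturated_pd_pointing(theta0, inertia=1.0, kp=4.0, kd=4.0,
                          tau_max=0.1, dt=0.01, steps=3000):
    """Single-axis rigid body under a PD attitude law with a hard torque
    cap, integrated with explicit Euler; returns the final angle error."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        tau = -(kp * theta + kd * omega)        # PD control torque
        tau = max(-tau_max, min(tau_max, tau))  # actuator (wheel) limit
        omega += (tau / inertia) * dt           # angular acceleration
        theta += omega * dt
    return theta
```

Even with the torque capped far below what the unsaturated law would command, the error still converges; the cap mainly lengthens the initial slew, which is the trade the paper's power-limited analysis is exploring.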