Accelerometer detection in a camera view based on feature point tracking
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708367
Y. Maki, S. Kagami, K. Hashimoto
We have been working on detecting an object equipped with an accelerometer among many moving objects in a camera view. Our previous method relied on knowledge of the specific appearance of the moving objects, such as their colors, which limited its practical use. In this paper, to reduce this dependency on prior knowledge of object appearance as much as possible, we apply our method to natural feature points detected in the camera view. To handle numerous feature points in real time, we propose a method to speed up the computation. We also investigate a method to cope with natural features that are not always tracked continuously. Experimental results show that the proposed method is successfully applied to natural feature points detected and tracked in real time.
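A minimal sketch of the kind of matching this abstract implies: each tracked feature's image trajectory is differentiated twice and compared against the accelerometer signal, and the best-correlating feature is taken as the accelerometer-equipped object. The abstract does not specify the similarity measure, so the zero-lag normalized cross-correlation below is an assumption, as is the synchronized, equal-rate sampling.

```python
import numpy as np

def best_matching_feature(accel_mag, feature_tracks, dt):
    """Pick the feature track whose image-plane acceleration correlates
    best with the accelerometer magnitude signal.

    accel_mag      : (T,) accelerometer magnitude samples
    feature_tracks : list of (T, 2) arrays of tracked image positions
    dt             : sampling interval (assumed shared and synchronized)
    """
    best_idx, best_score = -1, -np.inf
    for i, track in enumerate(feature_tracks):
        # Second finite difference approximates image-plane acceleration.
        acc = np.diff(track, n=2, axis=0) / dt**2
        acc_mag = np.linalg.norm(acc, axis=1)
        a = accel_mag[:len(acc_mag)]
        # Normalized cross-correlation at zero lag.
        a0, b0 = a - a.mean(), acc_mag - acc_mag.mean()
        denom = np.linalg.norm(a0) * np.linalg.norm(b0)
        score = a0 @ b0 / denom if denom > 0 else 0.0
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score
```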
{"title":"Accelerometer detection in a camera view based on feature point tracking","authors":"Y. Maki, S. Kagami, K. Hashimoto","doi":"10.1109/SII.2010.5708367","DOIUrl":"https://doi.org/10.1109/SII.2010.5708367","url":null,"abstract":"We have been working on detecting an object with an accelerometer from many moving objects in a camera view. Our previous method relied on knowledges of specific appearance of the moving objects such as colors, which limited the practical use. In this paper, in order to reduce this dependency on the knowledges of object appearance as much as possible, we apply our method to natural feature points detected in the camera view. To deal with numerous feature points in real time, we propose a method to speed up the computation. We also investigate a method to cope with natural features that are not always tracked continuously. Experimental results show that the proposed method is successfully applied to natural feature points detected and tracked in real time.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124703582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence evaluation of wheel surface profile on traversability of planetary rovers
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708303
M. Sutoh, Tsuyoshi Ito, K. Nagatani, Kazuya Yoshida
Planetary rovers play a significant role in surface exploration of the Moon and Mars. However, because of wheel slippage, the wheels of planetary rovers can get stuck in loose soil, which can cause an exploration mission to fail. To avoid slippage and increase the wheels' drawbar pull, the wheels of planetary rovers typically have parallel fins, called lugs, on their surface. In this study, we conducted experiments using two-wheeled testbeds in a sandbox to quantitatively confirm the influence of lugs on the traversability of planetary rovers. In this paper, we report the results of these experiments and discuss the influence of lugs on traversability.
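For reference, the slip ratio commonly used in this literature (a standard terramechanics definition, stated here as background rather than quoted from the paper) relates wheel circumferential speed to actual forward speed:

$$ s = 1 - \frac{v}{r\omega}, \qquad 0 \le s \le 1 \text{ (driving)}, $$

where $v$ is the rover's forward velocity, $r$ the wheel radius, and $\omega$ the wheel angular velocity; $s = 0$ means pure rolling and $s = 1$ means the wheel spins in place. Drawbar pull is the net forward force the wheel generates after soil resistance is subtracted, and lugs are intended to raise it at a given slip.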
{"title":"Influence evaluation of wheel surface profile on traversability of planetary rovers","authors":"M. Sutoh, Tsuyoshi Ito, K. Nagatani, Kazuya Yoshida","doi":"10.1109/SII.2010.5708303","DOIUrl":"https://doi.org/10.1109/SII.2010.5708303","url":null,"abstract":"Planetary rovers play a significant role in surface explorations on the Moon and/or Mars. However, because of wheel slippage, the wheels of planetary rovers can get stuck in loose soil, and the exploration mission may fail because of this situation. To avoid slippage and increase the wheels' drawbar pull, the wheels of planetary rovers typically have parallel fins called lugs on their surface. In this study, we conducted experiments using two-wheeled testbeds in a sandbox to provide a quantitative confirmation of the influence of lugs on the traversability of planetary rovers. In this paper, we report the results of the experiments, and discuss the influence of lugs on the traversability of planetary rovers.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121553621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision-based human state estimation to control an intelligent passive walker
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708316
S. Taghvaei, Y. Hirata, K. Kosuge
The motion of a passive-type intelligent walker is controlled based on visual estimation of the user's motion state. The controlled walker supports the user while standing up and prevents the user from falling. The visual motion analysis detects and tracks the user's upper-body parts and localizes them in 3D using a stereo-vision approach. From these data, the user's state is estimated as seated, standing up, falling, or walking. The controller activates the servo brakes in the sitting, standing, and falling situations as a support, to ensure both the comfort and safety of the user. The estimation methods are evaluated experimentally on a passive-type intelligent walker called "RT Walker," which is equipped with servo brakes.
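A hypothetical sketch of the state-to-brake policy the abstract describes; the state names and the specific braking levels are our assumptions for illustration, not the paper's controller.

```python
from enum import Enum, auto

class UserState(Enum):
    SEATED = auto()
    STANDING_UP = auto()
    WALKING = auto()
    FALLING = auto()

def brake_command(state: UserState) -> float:
    """Return a servo-brake level in [0, 1] for the estimated user state."""
    if state in (UserState.SEATED, UserState.STANDING_UP):
        return 1.0   # lock the walker so it serves as a stable support
    if state is UserState.FALLING:
        return 1.0   # brake hard to arrest the fall
    return 0.0       # walking: release brakes, walker moves passively
```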
{"title":"Vision-based human state estimation to control an intelligent passive walker","authors":"S. Taghvaei, Y. Hirata, K. Kosuge","doi":"10.1109/SII.2010.5708316","DOIUrl":"https://doi.org/10.1109/SII.2010.5708316","url":null,"abstract":"The motion of a passive-type intelligent walker is controlled based on visual estimation of the motion state of the user. The controlled walker would be a support for standing up and prevent the user from falling down. The visual motion analysis detects and tracks the human upper body parts and localizes them in 3D using a stereovision approach. Using this data the user's state is estimated to be whether seated, standing up, falling down or walking. The controller activates the servo brakes in sitting, standing and falling situations as a support to insure both comfort and safety of the user. Estimation methods are experimented with a passive-type intelligent walker referred to as “RT Walker”, equipped with servo brakes.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129714461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D measurement of a surface point using a high-speed projector-camera system for augmented reality games
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708306
Kota Toma, S. Kagami, K. Hashimoto
In this paper, we describe high-speed 3D position and normal measurement of a specified point of interest using a high-speed projector-camera system. A special cross-line pattern is adaptively projected onto the point of interest; the pattern reflected from the object surface is captured by the camera and analyzed to obtain position and normal information. The experimental results show that the estimated normal directions were correct and stable except for occasional large errors due to instantaneous detection failures. Based on this measurement system, we describe a game system that achieves real-time interaction with an unstructured 3D real environment.
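The normal estimate follows naturally from the cross-line pattern: the two projected stripes trace two curves on the surface, and the normal at their crossing can be taken as the cross product of the two tangent directions. A minimal sketch under that assumption (the paper's actual estimator is not given in the abstract; variable names are ours):

```python
import numpy as np

def surface_point_and_normal(line_a_pts, line_b_pts):
    """Estimate the 3D position and unit normal at the crossing of two
    projected stripes, given triangulated 3D points along each stripe.

    line_a_pts, line_b_pts : (N, 3) arrays of 3D points on each stripe.
    """
    def principal_dir(pts):
        # Tangent direction via least-squares line fit (first PCA axis).
        centered = pts - pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[0]

    ta, tb = principal_dir(line_a_pts), principal_dir(line_b_pts)
    normal = np.cross(ta, tb)
    normal /= np.linalg.norm(normal)
    # Crossing point approximated as the midpoint of the two centroids.
    point = 0.5 * (line_a_pts.mean(axis=0) + line_b_pts.mean(axis=0))
    return point, normal
```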
{"title":"3D measurement of a surface point using a high-speed projector-camera system for augmented reality games","authors":"Kota Toma, S. Kagami, K. Hashimoto","doi":"10.1109/SII.2010.5708306","DOIUrl":"https://doi.org/10.1109/SII.2010.5708306","url":null,"abstract":"In this paper, we describe high-speed 3D position and normal measurement of a specified point of interest using a high-speed projector-camera systems. A special cross line pattern is adaptively projected onto the point of interest and the reflected pattern on an object surface is captured by the camera and analyzed to obtain the position and normal information. The experimental results show that the estimated normal directions were apparently correct and stable except for occasional large errors due to instantaneous detection failures. Based on this measurement system, we describe a game system by which real-time interaction with non-structured 3D real environment is achieved.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"1941 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128024615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D-microfluidic device to remove zona pellucida fabricated by Mask-less exposure technology
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708335
Y. Yamanishi, T. Nakano, Yu Sawada, K. Itoga, T. Okano, F. Arai
This paper presents a novel method of three-dimensional fabrication using mask-less exposure equipment, together with a three-dimensional microfluidic application for cell manipulation. Grayscale data directly control the height of the exposed photoresist without using any mask. A three-dimensional microchannel with heights ranging from 0 to 200 μm was successfully fabricated simply by using the low-cost exposure system. We succeeded in removing the zona pellucida of oocytes passed through the 3D microchannel, whose cross section is gradually restricted along the path so as to apply mechanical stimuli to the surface of the oocyte in every direction. This microfluidic chip enables effective, high-throughput peeling of oocytes without damaging them.
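A toy illustration of the grayscale-to-height idea. The real dose-to-height response of a photoresist is nonlinear and must be calibrated experimentally; the linear mapping below is purely an assumption for illustration.

```python
def gray_to_height_um(gray: int, max_height_um: float = 200.0) -> float:
    """Map an 8-bit grayscale value to a target resist height in micrometers,
    assuming (for illustration only) a linear dose-to-height response."""
    if not 0 <= gray <= 255:
        raise ValueError("gray must be an 8-bit value")
    return max_height_um * gray / 255.0

# Example: mid-gray maps to roughly half the channel depth.
print(gray_to_height_um(128))  # ~100.4 um
```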
{"title":"3D-microfluidic device to remove zona pellucida fabricated by Mask-less exposure technology","authors":"Y. Yamanishi, T. Nakano, Yu Sawada, K. Itoga, T. Okano, F. Arai","doi":"10.1109/SII.2010.5708335","DOIUrl":"https://doi.org/10.1109/SII.2010.5708335","url":null,"abstract":"This paper presents a novel method of three-dimensional fabrication using Mask-less exposure equipment and a three dimensional microfluidic application for the cell manipulation. The grayscale data can directly control the height of the photoresist to be exposed without using any mask. Three-dimensional microchannel was successfully fabricated simply by using the low cost exposure system with the height range of 0–200 μm. We have succeeded in removing the zona pellucida of oocyte passing through the 3D-microchannel whose cross section is gradually restricted along the path to provide mechanical stimuli on the surface of the oocyte in every direction. This microfluidic chip contributes to the effective high throughput of the peeled oocyte without damaging them.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128435336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attitude control system of micro satellite RISING-2
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708354
Kazufumi Fukuda, T. Nakano, Y. Sakamoto, T. Kuwahara, Kazuya Yoshida, Y. Takahashi
This paper summarizes the attitude control system of the 50-kg micro satellite RISING-2, which is under development by Tohoku University and Hokkaido University. The main mission of RISING-2 is Earth surface observation with 5-m resolution using a Cassegrain telescope with a 10-cm diameter and a 1-m focal length. Accurate attitude control, with pointing errors of less than 0.1 deg and angular velocity errors of less than 0.02 deg/s, is required to realize this observation. In addition, because the power consumption of the science units is larger than expected, the actuators must operate on sufficiently low power. The attitude control system realizes 3-axis stabilization for the observation by means of star sensors, gyro sensors, sun sensors, and reaction wheels. In this paper, the attitude control law of RISING-2 is analyzed with the goal of keeping the power of the reaction wheels under the limit. The simulation is based on component specifications and includes noise data for components that are still under development. The simulation results show that, with the RISING-2 attitude control system, the pointing error is less than 0.1 deg most of the time.
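The reaction wheel power constraint can be stated compactly (a standard relation given as background, not necessarily the paper's exact formulation): the mechanical power drawn by wheel $i$ is approximately

$$ P_i \approx \tau_i\,\omega_i, \qquad \sum_i |P_i| \le P_{\max}, $$

where $\tau_i$ is the commanded torque, $\omega_i$ the wheel spin rate, and $P_{\max}$ the power budget allotted to the wheels. Since $\omega_i$ grows as the wheels absorb momentum, a torque command that is affordable at low wheel speed can violate the budget later, which is why the control law has to be analyzed against the limit over the whole maneuver.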
{"title":"Attitude control system of micro satellite RISING-2","authors":"Kazufumi Fukuda, T. Nakano, Y. Sakamoto, T. Kuwahara, Kazuya Yoshida, Y. Takahashi","doi":"10.1109/SII.2010.5708354","DOIUrl":"https://doi.org/10.1109/SII.2010.5708354","url":null,"abstract":"This paper summarizes the attitude control system of the 50-kg micro satellite RISING-2, which is now under development by the Tohoku University and Hokkaido University. The main mission of the RISING-2 is Earth surface observations with 5-m resolution using a Cassegrain telescope with 10-cm diameter and 1-m focal length. Accurate attitude control capability with less than 0.1 deg direction errors and less than 0.02 deg/s angular velocity errors is required to realize this observation. In addition, because of the larger power consumption of the science units than expected, actuators must be operated with sufficiently low power. The attitude control system realizes 3-axis stabilization for the observation by means of star sensors, gyro sensors, sun attitude sensors and reaction wheels. In this paper the attitude control law of the RISING-2 is analyzed to keep the power of reaction wheels under the limit. This simulation is based on component specifications and also includes noise data of the components which are under development. The simulation results show that the pointing error is less than 0.1 deg in most time with the RISING-2 attitude control system.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133651256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gesture-world environment technology for mobile manipulation
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708323
K. Hoshino, Takuya Kasahara, Naoki Igo, Motomasa Tomida, T. Mukai, Kinji Nishi, Hajime Kotani
The aim of this paper is to propose technology that allows people to control robots by means of everyday gestures, without using sensors or controllers. The hand pose estimation we propose reduces the number of image features per data set to 64, which makes the construction of a large-scale database possible. This also makes it possible to estimate the 3D hand poses of unspecified users with individual differences, without sacrificing estimation accuracy. Specifically, the proposed system constructs in advance a large database comprising three elements: hand joint information including the wrist, low-order proportion information on the hand images indicating the rough hand shape, and hand pose data consisting of 64 image features per data set. To estimate a hand pose, the system first performs coarse screening to select similar data sets from the database based on three hand proportions of the input image, and then performs a detailed search for the data set most similar to the input image based on the 64 image features. Using subjects with varying hand poses, we performed joint angle estimation with a database of 750,000 hand pose data sets, achieving roughly the same average estimation error as our previous system, about 2 degrees. However, the standard deviation of the estimation error was smaller than in our previous system of roughly 30,000 data sets: down from 26.91 degrees to 14.57 degrees for the index finger PIP joint, and from 15.77 degrees to 10.28 degrees for the thumb. We were thus able to confirm an improvement in estimation accuracy, even for unspecified users. Furthermore, the processing speed, using a notebook PC of normal specifications and a compact high-speed camera, was about 80 fps or more, including image capture, hand pose estimation, CG rendering of the estimation result, and robot control.
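The two-stage lookup described above maps naturally onto a coarse filter followed by a nearest-neighbor search. A minimal sketch, assuming Euclidean distance in both stages and a coarse candidate count `k` of our choosing (the abstract specifies neither):

```python
import numpy as np

def estimate_pose(props3, feats64, db_props, db_feats, db_joints, k=1000):
    """Two-stage hand pose lookup.

    props3   : (3,)  hand-proportion descriptor of the input image
    feats64  : (64,) image-feature vector of the input image
    db_props : (N, 3)  proportion descriptors stored in the database
    db_feats : (N, 64) image-feature vectors stored in the database
    db_joints: (N, J)  joint angles (including wrist) stored per data set
    k        : number of candidates kept by the coarse screening stage
    """
    # Stage 1: coarse screening on the three hand proportions.
    coarse = np.linalg.norm(db_props - props3, axis=1)
    candidates = np.argpartition(coarse, k)[:k]
    # Stage 2: detailed search on the 64 image features.
    fine = np.linalg.norm(db_feats[candidates] - feats64, axis=1)
    best = candidates[np.argmin(fine)]
    return db_joints[best]
```

The coarse stage keeps the per-frame cost manageable: with 750,000 entries, only the cheap 3D comparison runs over the whole database, and the 64-dimensional comparison runs over the `k` survivors.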
{"title":"Gesture-world environment technology for mobile manipulation","authors":"K. Hoshino, Takuya Kasahara, Naoki Igo, Motomasa Tomida, T. Mukai, Kinji Nishi, Hajime Kotani","doi":"10.1109/SII.2010.5708323","DOIUrl":"https://doi.org/10.1109/SII.2010.5708323","url":null,"abstract":"The aim of this paper is to propose the technology to allow people to control robots by means of everyday gestures without using sensors or controllers. The hand pose estimation we propose reduces the number of image features per data set to 64, which makes the construction of a large-scale database possible. This has also made it possible to estimate the 3D hand poses of unspecified users with individual differences without sacrificing estimation accuracy. Specifically, the system we propose involved the construction in advance of a large database comprising three elements: hand joint information including the wrist, low-order proportional information on the hand images to indicate the rough hand shape, and hand pose data comprised of 64 image features per data set. To estimate a hand pose, the system first performs coarse screening to select similar data sets from the database based on the three hand proportions of the input image, and then performed a detailed search to find the data set most similar to the input images based on 64 image features. Using subjects with varying hand poses, we performed joint angle estimation using our hand pose estimation system comprised of 750,000 hand pose data sets, achieving roughly the same average estimation error as our previous system, about 2 degrees. However, the standard deviation of the estimation error was smaller than in our previous system having roughly 30,000 data sets: down from 26.91 degrees to 14.57 degrees for the index finger PIP joint and from 15.77 degrees to 10.28 degrees for the thumb. We were thus able to confirm an improvement in estimation accuracy, even for unspecified users. Further, the processing speed, using a notebook PC of normal specifications and a compact high-speed camera, was about 80 fps or more, including image capture, hand pose estimation, and CG rendering and robot control of the estimation result.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128366689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A grasp criterion for robot hands considering multiple aspects of tasks and hand mechanisms
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708344
M. Sato, Seiji Sugiyama, T. Yoshikawa
In this paper, a task-oriented grasp criterion for robot hands is discussed. Grasp criteria proposed in the past address stable grasping and tasks that require adequate forces and moments on the grasped object. These criteria do not consider other parameters, such as the positions and velocities of the grasped object or the mechanical properties of the robot hand. We propose a task-oriented grasp criterion that evaluates the feasibility of the task while taking the hand mechanism into account. The proposed method finds the grasping position by considering how efficiently the object's required positions, velocities, and forces can be realized. As a result, the criterion can select grasping positions similar to those humans choose.
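One common way to formalize this class of criterion (our illustration of the idea, not necessarily the paper's exact formulation) is to ask whether the wrench set the task requires is contained in the set the hand can realize through its Jacobian under actuator limits:

$$ \mathcal{F}_{\text{task}} \subseteq \mathcal{F}_{\text{hand}} = \left\{\, f : \tau = J^{\mathsf{T}} f,\ |\tau_i| \le \tau_{i,\max} \,\right\}, $$

where $J$ is the hand Jacobian at the candidate grasp and $\tau$ the joint torques. A grasp is then scored by how efficiently (for example, with how small joint torques and velocities) the required object forces and motions can be produced, which is what distinguishes a task-oriented criterion from a purely stability-based one.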
{"title":"A grasp criterion for robot hands considering multiple aspects of tasks and hand mechanisms","authors":"M. Sato, Seiji Sugiyama, T. Yoshikawa","doi":"10.1109/SII.2010.5708344","DOIUrl":"https://doi.org/10.1109/SII.2010.5708344","url":null,"abstract":"In this paper, a task-oriented grasp criterion for robot hands is discussed. The objectives of grasp criteria proposed in the past are stable grasping and the task that require adequate forces and moments on the grasping object. These criteria do not consider the other parameters such as positions and velocities on the grasping object and the mechanical property of robot hands. We propose a task-oriented grasp criterion that evaluates the feasibility of the task with considering hand mechanisms. Our proposed method find the grasping position by considering the efficiency of grasping object's positions, velocities, and forces. As a result, this criterion could detect the grasping position like humans do.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132097856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a human type legged robot with roller skates
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708312
K. Itabashi, M. Kumagai
Some walking robots use wheels together with their legs. Among these, robots with passive wheels generate propulsive force through specially designed periodic leg motions. The authors previously proposed a special axle mechanism that can change its curvature to track a designed path for propulsion. The mechanism demonstrated not only straight-line motion but also curved motion and a pivoting motion that is unique to the method. However, that robot did not have enough stiffness for further quantitative investigation, so a new bipedal walking robot was developed for this work. The developed robot performs roller walking with the designed forward movement. The roller-walk method for a biped robot is described briefly, followed by the design and implementation of the robot. The idea of using Bézier curves for the motion trajectory is also introduced. Experimental results are described and shown in an accompanying video.
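Since the paper adopts Bézier curves for the motion trajectory, a cubic Bézier evaluator is the natural building block. The sketch below shows how a periodic leg sweep could be sampled; the control-point values are placeholders, not the paper's parameters.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1].
    p0..p3 are control points; p0 and p3 are the curve's endpoints."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

# Placeholder control points for one lateral leg sweep (illustrative values).
p0, p1, p2, p3 = (np.array(p) for p in ([0.00, 0.10], [0.05, 0.18],
                                        [0.15, 0.18], [0.20, 0.10]))
path = np.array([cubic_bezier(p0, p1, p2, p3, t)
                 for t in np.linspace(0.0, 1.0, 50)])
```

Bézier curves are convenient here because the endpoints and end tangents are set directly by the control points, so successive sweep segments can be joined smoothly.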
{"title":"Development of a human type legged robot with roller skates","authors":"K. Itabashi, M. Kumagai","doi":"10.1109/SII.2010.5708312","DOIUrl":"https://doi.org/10.1109/SII.2010.5708312","url":null,"abstract":"There are types of walking robots using wheels with their legs. Among those, the robots with passive wheels generate propulsive force by specially designed periodic leg motions. The authors had proposed to use a special axle mechanism that can change its curvature to track a designed path for propulsion. The mechanism showed not only straightforward motion but also curved motion and pivoting motion that is unique to the method. However, the robot did not have enough stiffness for further quantitative investigation. Therefore a new bipedal walking robot was developed for the work. The developed robot could perform the roller walking with the designed forward movement. The method of the roller walk of a biped robot is described briefly followed by the design and implementation of the robot. An idea to use Bézier curve for motion trajectory is also introduced. Experimental results are also described and shown in an accompanied video.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123989811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous flight of small helicopter with real-time camera calibration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708332
Takanori Matsukawa, S. Arai, K. Hashimoto
We propose a real-time camera calibration method for the autonomous flight of a small helicopter. Our purpose is to control a small helicopter automatically using cameras fixed on the ground. We use calibrated cameras, uncalibrated cameras, and a small helicopter that carries no sensors. The proposed method finds correspondences between image features in the images of a calibrated camera and an uncalibrated camera, and estimates the extrinsic parameters of the cameras in real time using a particle filter. We evaluate the utility of the proposed method through experiments, comparing real-time calibration by typical Gauss-Newton bundle adjustment against the proposed method in small-helicopter flight experiments. The proposed method achieves autonomous flight of the small helicopter in a situation where flight cannot be achieved with typical Gauss-Newton bundle adjustment.
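A compact sketch of the particle-filter idea as we reconstruct it from the abstract: particles are candidate extrinsic poses, weighted by the reprojection error of corresponded features. The pose parameterization, noise model, and resampling rule below are our assumptions.

```python
import numpy as np

def pf_update(particles, weights, project, matches, sigma=2.0, noise=0.01):
    """One particle-filter step for extrinsic camera parameters.

    particles : (N, 6) candidate poses (3 rotation + 3 translation params)
    weights   : (N,) current particle weights, summing to 1
    project   : project(pose, X) -> (M, 2) pixel coords of 3D points X
    matches   : (X, x_obs) with X (M, 3) world points, x_obs (M, 2) pixels
    """
    X, x_obs = matches
    # Diffuse particles (random-walk motion model for a quasi-static camera).
    particles = particles + np.random.normal(0.0, noise, particles.shape)
    # Reweight by reprojection error under a Gaussian pixel-noise model.
    for i, pose in enumerate(particles):
        err = np.linalg.norm(project(pose, X) - x_obs, axis=1)
        weights[i] *= np.exp(-0.5 * np.sum(err**2) / sigma**2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < len(weights) / 2:
        idx = np.random.choice(len(weights), len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

Unlike Gauss-Newton bundle adjustment, which can diverge from a poor linearization point, the particle set maintains multiple pose hypotheses, which is consistent with the robustness the abstract reports.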
{"title":"Autonomous flight of small helicopter with real-time camera calibration","authors":"Takanori Matsukawa, S. Arai, K. Hashimoto","doi":"10.1109/SII.2010.5708332","DOIUrl":"https://doi.org/10.1109/SII.2010.5708332","url":null,"abstract":"We propose a real-time camera calibration method for an autonomous flight of a small helicopter. Our purpose is to control a small helicopter automatically by using cameras fixed on the ground. We use calibrated cameras, un-calibrated cameras, and the small helicopter that is not attached with any sensors. The proposed method finds correspondences between image features in the two images of a calibrated camera and an un-calibrated camera, and estimates the extrinsic parameters of cameras using a particle filter in real time. We evaluate a utility of the proposed method by experiments. We compare real-time calibration by typical bundle adjustment with a Gauss Newton method to the proposed method in the experiment of the small helicopter flight. The autonomous flight of the small helicopter can be achieved by the proposed method in the situation that a flight of a helicopter can not be achieved with typical bundle adjustment with a Gauss Newton method.","PeriodicalId":334652,"journal":{"name":"2010 IEEE/SICE International Symposium on System Integration","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124581066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}