Daniele E. Domenichelli, Silvio Traversaro, L. Muratore, A. Rocchi, F. Nori, L. Natale
The software development cycle in the robotic research environment is hectic and heavily driven by project or paper deadlines. Developers have little time available for packaging the C/C++ code they write and for developing and maintaining build systems and continuous integration tools. Research projects are joint efforts of different groups working remotely and asynchronously. The typical solution is to rely on binary distributions and/or large repositories that compile all software and dependencies. This approach hinders code sharing and reuse and often leads to repositories whose inter-dependencies are difficult to manage. Following many years of experience leading software integration in research projects, we developed YCM, a tool that supports our best practices for addressing these issues. YCM is a set of CMake scripts that provides (1) build system support, to develop and package software libraries and components, and (2) superbuild deployment, to prepare and distribute sets of packages in source form as a single meta-build. In this paper we describe YCM and report on our experience adopting it as a tool for managing software repositories in large research projects.
"A Build System for Software Development in Robotic Academic Collaborative Environments," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00014
Yizhe Zhang, Lianjun Li, M. Ripperger, J. Nicho, M. Veeraraghavan, A. Fumagalli
This paper describes an industrial robotics application, named Gilbreth, for picking up objects of different types from a moving conveyor belt and sorting the objects into bins according to type. The environment, which consists of a moving conveyor belt, a break beam sensor, a 3D Kinect camera sensor, a UR10 industrial robot arm with a vacuum gripper, and different object types, such as gears, pulleys, and piston rods, was inspired by the NIST ARIAC competition. A first version of the Gilbreth application was implemented leveraging many ROS and ROS-I packages. Gazebo was used to simulate the environment, and six external ROS nodes were implemented to execute the required functions. Experimental measurements of CPU usage and processing times of the ROS nodes were obtained. Object recognition required the longest processing times, on par with the time required for the robot arm to execute its movement between four poses: pick approach, pick, pick retreat, and place. A need for enhancing the performance of object recognition and Gazebo simulation was identified.
"Gilbreth: A Conveyor-Belt Based Pick-and-Sort Industrial Robotics Application," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00012
In this article, we describe a trajectory planning problem for a 6-DOF robotic manipulator arm that carries an ultra-wideband (UWB) radar sensor with synthetic aperture (SAR). The resolution depends on the trajectory and velocity profile of the sensor head. The constraints can be modelled as an optimization problem to obtain a feasible, collision-free target trajectory of the end-effector of the manipulator arm in Cartesian coordinates that minimizes observation time. For 3D reconstruction, the target is observed in multiple height slices. For through-the-wall radar, the sensor can be operated in sliding mode for scanning larger areas. For IED inspection, the spotlight mode is preferred, constantly pointing the antennas towards the target to obtain maximum azimuth resolution.
S. Brüggenwirth, F. Rial, "Robotic Control for Cognitive UWB Radar," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00063
In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partially viewed objects and incomplete object modelling, well suited for unknown symmetric objects in cluttered scenarios, followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model and then fit an SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used, together with the parameters of the SQs and the specification of the gripper, to select the best direction of approach and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 robot and real-time results are presented. Initial results indicate that, though the method is based on simplistic shape information, it outperforms other learning-based grasping algorithms that also work in clutter in terms of time-efficiency and accuracy.
A. Makhal, F. Thomas, A. P. Gracia, "Grasping Unknown Objects in Clutter by Superquadric Representation," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00062
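The mirroring step described in the abstract can be illustrated with a minimal sketch: reflecting an observed partial point cloud across an assumed symmetry plane to approximate the occluded half of the object. The function name and the choice of plane are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mirror_partial_cloud(points, plane_point, plane_normal):
    """Reflect a partial point cloud across an assumed symmetry plane.

    points: (N, 3) array of observed surface points.
    plane_point, plane_normal: a point on the symmetry plane and its normal.
    Returns the observed points plus their mirror images, approximating
    the full, symmetric object before a shape (e.g. superquadric) is fit.
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Signed distance of each point from the plane along the normal.
    d = (points - plane_point) @ n
    # Reflection: move each point twice its signed distance back through the plane.
    mirrored = points - 2.0 * np.outer(d, n)
    return np.vstack([points, mirrored])

# Example: points seen on one side of the y-z plane, mirrored through x = 0.
partial = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.5]])
full = mirror_partial_cloud(partial, plane_point=np.zeros(3), plane_normal=[1.0, 0.0, 0.0])
print(full)
```

In practice the symmetry plane would be estimated from the visible geometry rather than assumed known.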
Before beginning any robot task, users must position the robot's base, a task that currently depends entirely on user intuition. While slight perturbation is tolerable for robots with moveable bases, correcting the problem is imperative for fixed-base robots if some essential task sections are out of reach. For mobile manipulation robots, it is necessary to decide on a specific base position before beginning manipulation tasks. This paper presents Reuleaux, an open source library for robot reachability analysis and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations of a whole robot or only the arm can be efficiently determined. This can be applied both to statically mounted robots, where the position of the robot and workpiece ensures the maximum amount of work performed, and to mobile robots, where the maximum amount of workable area can be reached. The methods were tested on robots of different specifications and evaluated for tasks in simulation and real-world environments. Evaluation results indicate that Reuleaux significantly outperformed prior methods in terms of time-efficiency and range of applicability.
A. Makhal, Alex K. Goins, "Reuleaux: Robot Base Placement by Reachability Analysis," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00028
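The core idea of reachability-based base placement can be sketched very simply: score each candidate base position by how many task points the arm can reach from it, then pick the best. The sketch below stands in for the library's inverse-kinematics check with a planar reachable annulus; all names and parameters are illustrative assumptions, not Reuleaux's API.

```python
import numpy as np

def reachability_score(base_xy, task_points, r_min, r_max):
    """Fraction of task points inside the arm's reachable annulus.

    A crude stand-in for a full IK check: a planar two-link arm with
    link lengths l1, l2 reaches radii in [|l1 - l2|, l1 + l2].
    """
    d = np.linalg.norm(task_points - base_xy, axis=1)
    return float(np.mean((d >= r_min) & (d <= r_max)))

def best_base(candidates, task_points, r_min=0.2, r_max=1.0):
    """Pick the candidate base placement that reaches the most task points."""
    scores = [reachability_score(c, task_points, r_min, r_max) for c in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

tasks = np.array([[1.0, 0.0], [0.8, 0.3], [1.2, -0.2]])
grid = np.array([[0.0, 0.0], [0.5, 0.0], [2.5, 0.0]])  # candidate base positions
base, score = best_base(grid, tasks)
print(base, score)
```

A real reachability map additionally discretizes orientation and stores per-voxel reachability indices, which is what makes the lookup fast at placement time.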
Yosuke Konno, Masayuki Tanaka, M. Okutomi, Y. Yanagawa, Koichi Kinoshita, M. Kawade, Yuki Hasegawa
In this paper, we propose an accurate plane estimation method for 3D point cloud data acquired by ToF cameras. The proposed method is based on an error model of the ToF camera derived from its acquisition geometry and noise characteristics. We utilize the distance between a point and the plane along the ray direction: the ray-directional distance. We give a brief approximation of a noise model for the ToF camera and utilize it as an adaptive weighting on the ray-directional distances. We formulate the plane estimation problem as the minimization of the weighted ray-directional distances. Experimental results demonstrate that the proposed method can outperform conventional plane estimation methods.
"Accurate Plane Estimation Based on the Error Model of Time-of-Flight Camera," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00064
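A minimal sketch of the weighted plane-fitting idea: fit a plane by weighted total least squares, down-weighting points the noise model deems less reliable. Note this uses ordinary orthogonal point-to-plane distance rather than the paper's ray-directional distance, and the distance-based weights are a stand-in for the paper's ToF noise model.

```python
import numpy as np

def fit_plane_weighted(points, weights):
    """Weighted total-least-squares plane fit.

    Minimizes the weighted sum of squared orthogonal point-to-plane
    distances. Returns (centroid, unit normal).
    """
    w = weights / weights.sum()
    centroid = w @ points
    centered = (points - centroid) * np.sqrt(w)[:, None]
    # The plane normal is the right singular vector with the smallest
    # singular value of the weighted, centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centroid, vt[-1]

# Noisy samples of the plane z = 0; far points are down-weighted as a
# crude stand-in for a distance-dependent ToF noise model.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0.0, 0.01, 200)])
wts = 1.0 / (1.0 + np.linalg.norm(pts, axis=1))
c, n = fit_plane_weighted(pts, wts)
print(abs(n[2]))  # close to 1: recovered normal aligned with the z-axis
```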
Michael Wagner, Stefan B. Liu, Andrea Giusti, M. Althoff
We consider two fundamental problems in control of robot manipulators: dynamic scaling of trajectories and collision detection using proprioceptive sensors. While most existing methods approach these problems by assuming accurate knowledge of the robot dynamics, we relax this assumption and account for uncertain model parameters and external disturbances. Our approach is based on the use of a recently proposed interval-arithmetic-based recursive Newton-Euler algorithm. This algorithm enables the efficient numerical computation of over-approximative sets of torques/forces arising from uncertain model parameters. The over-approximative nature of these sets is exploited in this work in order to provide a formally robust trajectory scaling and collision detection strategy. The effectiveness of the proposed approaches has been verified by means of experiments on a 6 degrees-of-freedom robot manipulator with uncertain dynamics.
"Interval-Arithmetic-Based Trajectory Scaling and Collision Detection for Robots with Uncertain Dynamics," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00015
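The interval-arithmetic idea behind the over-approximative torque sets can be illustrated with a toy example: propagating an uncertain link mass through a single gravity-torque term yields a guaranteed interval that encloses every feasible torque, and a measured torque outside that interval signals a collision. This is a didactic sketch, not the recursive Newton-Euler algorithm of the paper, and the numbers are invented.

```python
import math

class Interval:
    """Minimal closed-interval arithmetic: [lo, hi] bounds on a scalar."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, x):
        return self.lo <= x <= self.hi

# Gravity torque of one link, tau = m * g * l * cos(q), with the mass m
# only known to lie in [1.9, 2.1] kg (fixed geometry g, l, q): the
# resulting interval over-approximates every feasible torque value.
m = Interval(1.9, 2.1)
k = 9.81 * 0.5 * math.cos(0.3)          # g * l * cos(q), exactly known
tau = m * Interval(k, k)
print(tau.lo, tau.hi)
```

A collision detector would flag any measured joint torque falling outside `[tau.lo, tau.hi]`, with no false alarms caused by the parameter uncertainty itself.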
In this paper, we propose a test platform for a person carrier robot to measure the impacts on and position changes of users over various types of terrain while riding the robot. The robot is built from a passenger wheelchair and a motor drive system, and Bluetooth modules are installed in a PC and an MCU (microcontroller unit) so that movement commands can be transmitted and received. The motor drive system accepts analog input signals, whereas the MCU that receives movement commands from the PC outputs PWM (pulse-width modulation) signals, which are digital, so it cannot drive the motor drive system directly. The PWM signal output by the MCU is therefore converted into an analog signal through an RC filter; this signal is transmitted to the motor driver, enabling the motor to be driven. Simulations and experiments were conducted on the completed person carrier robot.
Youngjae Yun, Donghyeon Seo, Dong Han Kim, "The Designed of Four-Wheeled Person Carrier Robot System," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00071
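The PWM-to-analog conversion described above is a first-order RC low-pass filter: for a PWM period much shorter than the RC time constant, the filter output settles near duty × Vcc, the analog level the motor driver expects. A small simulation sketch (all component values invented for illustration):

```python
import numpy as np

def rc_filter_pwm(duty, vcc, f_pwm, r, c, t_end, dt=1e-6):
    """Simulate a first-order RC low-pass smoothing a PWM square wave.

    duty: PWM duty cycle in [0, 1]. The steady-state mean output is
    approximately duty * vcc. Forward-Euler integration of
    dv/dt = (v_in - v) / (R * C).
    """
    t = np.arange(0.0, t_end, dt)
    v_in = vcc * ((t * f_pwm) % 1.0 < duty)   # PWM square wave
    v_out = np.zeros_like(t)
    for i in range(1, len(t)):
        v_out[i] = v_out[i - 1] + dt * (v_in[i - 1] - v_out[i - 1]) / (r * c)
    return v_out

# 40% duty at 20 kHz into R = 10 kOhm, C = 1 uF (time constant 10 ms):
# after ~10 time constants the output sits near 0.4 * 5.0 = 2.0 V with
# only millivolt-level ripple, because the PWM period (50 us) is far
# shorter than RC.
v = rc_filter_pwm(duty=0.4, vcc=5.0, f_pwm=20e3, r=10e3, c=1e-6, t_end=0.1)
print(v[-1])
```

The design trade-off is the usual one: a larger RC gives less ripple on the analog line but a slower response to command changes.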
David Pérez-Morales, Olivier Kermorgant, S. D. Quijada, P. Martinet
This paper addresses the perpendicular and parallel parking problems of car-like vehicles, for both forward and reverse maneuvers in a single trial, by extending the work presented in [1] with a multi-sensor-based controller using a weighted control scheme. The perception problem is discussed briefly, considering a Velodyne VLP-16 and a SICK LMS151 as the sensors providing the required exteroceptive information. The results obtained from simulations and real experimentation for different parking scenarios show the validity and potential of the proposed approach. Furthermore, it is shown that, despite the need to handle several constraints for collision avoidance, the required computation time of the proposed approach is small enough for online use.
"Laser-Based Control Law for Autonomous Parallel and Perpendicular Parking," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00018
Luis F. Contreras-Samame, Olivier Kermorgant, P. Martinet
Efficient mapping in mobile robotics may involve the participation of several agents. In this context, this article presents a framework for collaborative mapping applied to outdoor environments using a decentralized approach. The mapping approach uses range measurements from a 3D lidar moving in six degrees of freedom. Each robot performs a local SLAM, and the maps are then merged when communication is available between the mobile units. This allows building a global map and improving the state estimation of each agent. Experimental results are presented in which partial maps of the same environment are aligned and merged coherently in spite of the noise in the lidar measurements.
"Efficient Decentralized Collaborative Mapping for Outdoor Environments," 2018 Second IEEE International Conference on Robotic Computing (IRC). DOI: 10.1109/IRC.2018.00017
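The merging step described in this last abstract reduces, once the inter-map transform has been estimated, to expressing one robot's local map in the other's frame and concatenating. A minimal sketch, assuming the rigid transform (R, t) is already known (e.g. from a scan-alignment method such as ICP, which this sketch does not perform):

```python
import numpy as np

def merge_maps(map_a, map_b, R, t):
    """Merge two local point-cloud maps.

    (R, t) is the rigid transform taking points expressed in map_b's
    frame into map_a's frame. Returns the concatenated global map
    expressed in map_a's frame.
    """
    b_in_a = map_b @ R.T + t    # apply p' = R p + t to every row
    return np.vstack([map_a, b_in_a])

# 2D example: map B's frame is rotated 90 degrees and shifted w.r.t. map A's.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([1.0, 2.0])
map_a = np.array([[0.0, 0.0], [1.0, 0.0]])
map_b_local = np.array([[0.5, 0.5]])
merged = merge_maps(map_a, map_b_local, R, t)
print(merged)
```

In a decentralized setting each robot would run this merge whenever a peer's map (and a relative-pose estimate) arrives over the communication link, so no central map server is required.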