Title: Multi-person human-robot interaction system for android robot
Authors: Yutaka Kondo, K. Takemura, J. Takamatsu, T. Ogasawara
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708321
Abstract: Communication with multiple people is more common than one-to-one communication, so we developed a system for multi-person communication. Extending our previously developed real-time gesture planning method, we propose gesture adjustment suited to users' demands through parameterization, together with gaze motion planning that can address multiple people and adapt a gesture to the location of the talker and/or an object. We implemented the proposed motion planning method on the android robot Actroid-SIT. The components of the system (i.e., the input/output processes and the interaction-rule selection) are connected to each other via a key-value store, an Internet technology that provides parallelism and scalability. We conducted multi-person HRI experiments with over 500 subjects in total. In our HRI system, the induction rate of communication exceeded 60% thanks to parameterization, and the residence time of communication was also longer thanks to interruptibility.
Title: Development of the autonomous hydraulic excavator prototype using 3-D information for motion planning and control
Authors: Hiroshi Yamamoto, M. Moteki, Hui Shao, Kenzi Ootuki, Y. Yanagisawa, Y. Sakaida, A. Nozue, T. Yamaguchi, S. Yuta
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708300
Abstract: Civil engineering work still involves many dangerous and grueling tasks, so improving work environments and ensuring safety are challenges facing this field. The development of construction machines is also essential to prepare for the aging of construction workers and the shortage of young experienced workers in the near future. This research project was conducted to overcome these problems by developing basic technologies based on three-dimensional information and by realizing autonomous operation of hydraulic excavators, a typical general-purpose construction machine. We have implemented a prototype autonomous hydraulic excavator that performs soil excavation and loading work under basic conditions. The achieved work speed and finished-product precision were almost the same as those of normal work by humans.
Title: Development of a laser processing head to inspect and repair the damage inside of a half-inch pipe
Authors: Mamiko Ito, A. Naganawa, K. Oka, K. Sunakoda
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708313
Abstract: In industrial factories and power generation plants, many drain pipes are used to transport various fluids from one location to another, and these pipes have to be regularly inspected for maintenance purposes. Widely used inspection methods include ultrasonic flaw detection and eddy-current testing. Recently, these pipes have been inspected from the inside using endoscopic devices, e.g., CCD devices and optical fiberscopes. However, when damage is detected inside a pipe, the pipe is often plugged or replaced because conventional endoscopic devices cannot repair the damage. Therefore, we have developed a laser processing head that attaches to the tip of an optical fiberscope for slender pipes with diameters as small as 12 mm. The laser processing head has a mirror inside a moving sleeve; its main function is to redirect the endoscopic images and the laser beam using this mirror. In this paper, we describe the structure of the laser processing head, which is driven by ultrasonic actuators, and the results of its movement tests.
Title: System integration: Development of a global network communication protocol
Authors: Sarah Cosentino, M. Zecca
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708324
Abstract: In the era of globalization and standardization, and from the perspective of industrial system integration, implementing a network framework able to interconnect any type of device while supporting both real-time (RT) and time-uncritical communication is becoming a must. This paper proposes an original, entirely software-based communication framework that supports RT communication over standard Ethernet, and compares it with the most relevant commercially available RT protocols in terms of overall cost and performance.
Title: Integration of a sub-crawlers' autonomous control in Quince highly mobile rescue robot
Authors: E. Rohmer, K. Ohno, Tomoaki Yoshida, K. Nagatani, Eiji Koyanagi, S. Tadokoro
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708305
Abstract: Rapid information gathering during the initial stage of an investigation is an important process in disasters. However, this task can be very risky for human rescue crews when a building's infrastructure has been compromised or the environment has been contaminated by nuclear, biological, or chemical weapons. To develop robots that can enter such sites instead of humans, several areas of robotics need to be addressed and integrated into a common robotic platform. In this paper, we describe the modular, interoperable, and extensible hardware and software architecture of Quince, a highly mobile crawler-type rescue robot with four independent sub-crawlers. To improve Quince's navigability, we developed and integrated a semi-autonomous control algorithm that helps the remote operator drive Quince while the flippers autonomously adjust to the environment. The robot is thus able to overcome obstacles and steps without special training of the operator. We present here the software integration and the control strategy of the flippers using the embedded basic version of Quince.
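The semi-autonomous scheme above lets the operator steer while the flippers conform to the terrain. A toy version of such an adjustment rule follows; the geometry, the 60-degree joint limit, and the step-sensing model are invented for illustration and are not Quince's actual control law:

```python
import math


def target_flipper_angle(step_height, lookahead, max_angle=math.radians(60)):
    """Pitch angle (rad) that would lay a front flipper against a step of
    `step_height` metres sensed `lookahead` metres ahead of the track.
    All parameters here are assumptions, not Quince's published values."""
    # Raise the flipper toward the top of the obstacle...
    angle = math.atan2(step_height, lookahead)
    # ...but never beyond the mechanical range of the flipper joint.
    return max(-max_angle, min(max_angle, angle))
```

A rule of this shape captures why the operator needs no special training: the flipper posture is derived from sensed terrain rather than commanded joint by joint.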
Title: Clamp grasping and insertion task automation for automobile industry
Authors: Kyong-Mo Koo, Xin Jiang, A. Konno, M. Uchiyama
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708341
Abstract: Robots are now extending their abilities from simple repetitive tasks to complex assembly tasks, supporting human life activities and advanced manufacturing automation. Manufacturing automation requires narrower assembly tolerances than human life support does. In manufacturing automation, insertion tasks are the most frequently used primitive tasks; they are simple, yet impossible for robots without calibration by high-precision sensory devices. Laser displacement sensors are faster, more robust, and more precise than other measuring devices, and this high precision allows robots to use them for calibration and feature extraction. This paper addresses how to find the hole position and the insertion direction vector from the acquired point clouds. Experiments demonstrate the automated precision insertion task performed by manipulators.
Title: A hand glass interface to explore 3D virtual space
Authors: Kazuyuki Shishido, Y. Tsumaki
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708308
Abstract: In this paper, a novel exploration system for 3D virtual space, named "the hand glass interface", is proposed. The proposed system provides both scalable exploration and intuitive operability. It includes a novel hand glass display and a viewpoint changing system, which was proposed in our previous work. The hand glass display consists of four parts: a mini LCD, a micro trackball mouse, a motion sensor, and a gripper. The operator can easily handle the hand glass interface in one hand, and its motions are mapped to those of either the viewpoint or the gaze point. In addition, the directional relationship between the virtual and real worlds is always fixed, so the operator can intuitively explore the 3D virtual space using directional cues from the real world. A fundamental experiment demonstrates the system's feasibility.
Title: Fabrication of ionic polymer-metal actuator of microcantilever type
Authors: I. Shimizu, K. Kikuchi, S. Tsuchitani
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708338
Abstract: In this study, in order to develop a fabrication process for microcantilever-type ionic polymer-metal composite (IPMC) actuators, we evaluated the actuation characteristics of a thin-film IPMC. Although its displacement was smaller than that of IPMCs fabricated from commercial ionic polymer (Nafion) film, actuation of the thin-film IPMCs was confirmed, and it depended on the thickness of the Nafion layer. We expect the realization of MEMS devices integrated with thin-film micro IPMCs fabricated by the proposed method.
Title: Why does a power assist robot system reduce the weight of an object lifted with it? the preliminary results
Authors: S. Rahman, R. Ikeura, Ishibashi Shinsuke, S. Hayakawa, H. Sawai
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708318
Abstract: A power assist robot system reduces the weight of an object lifted with it. However, the root causes of the reduced heaviness, as well as the factors affecting it, are still unknown. Knowledge of these root causes and factors could be used to modulate the interactions between the human user and the robot when lifting objects. This paper investigated the reasons and factors behind the reduced heaviness of objects lifted with a power assist system. We hypothesized that weight perception due to inertia might differ from that due to gravity when lifting an object with a power assist system, because the actual weight and the perceived weight were different. Subjects lifted objects manually and with power assist separately. We compared load forces and motion features for the manually lifted objects with those for the power-assisted objects and found that the load force and its rate, the velocity, and the acceleration were lower for the power-assisted objects. We also noticed time delays in force sensing, position sensing, the servomotor, etc. for the power-assisted objects, but not for the manually lifted ones, and we assumed that these delays were responsible for the reduced heaviness. Finally, we propose using these findings to develop human-friendly power assist devices for manipulating heavy objects in industry, which would help improve and modulate interactions between users and robots.
Title: Intuitive and direct teaching system of multi-fingered hand-arm robot for grasping task
Authors: Dongbo Zhou, Y. Aiyama
Venue: 2010 IEEE/SICE International Symposium on System Integration
Pub Date: 2010-12-01 | DOI: 10.1109/SII.2010.5708343
Abstract: This paper presents an activity teaching and playback system for a multi-fingered hand-arm robot. The hand's activity is taught through a data glove fixed to the manipulator: the manipulator's motion is determined by the operator's gloved hand, and force is determined by external miniature sensors mounted on the operator's fingertips. Because direct teaching is used, the teaching parameters are determined by the operator's intuition, and the motions of the manipulator and the hand can be taught simultaneously. The system is simple and easy to use. For the playback phase, a special force playback formula is proposed to compensate for the inaccuracy of the data glove. The system is expected to be applied to grasping tasks.