Multi-agent Cooperation Using Snow-Drift Evolutionary Game Model: Case Study in Foraging Task
Ahmad Esmaeili, Zahra Ghorrati, E. Matson
Cooperation is often considered one of the key, yet ill-defined, concepts that differentiate multi-agent systems from related fields such as distributed computing. One popular benchmark for verifying the effectiveness of cooperation algorithms is the multi-agent foraging task. Various approaches have been proposed, among which Markov-game-based methods are widely used, although they cannot guarantee that the group selects a consistent equilibrium. In this paper, an evolutionary-game-based method is proposed, in which the interactions among the agents are modeled as a snow-drift game so as to evolve the evolutionarily stable strategy (ESS) and bring the maximal reward to the group of agents. Simulations verify the efficiency of the proposed algorithm.
{"title":"Multi-agent Cooperation Using Snow-Drift Evolutionary Game Model: Case Study in Foraging Task","authors":"Ahmad Esmaeili, Zahra Ghorrati, E. Matson","doi":"10.1109/IRC.2018.00065","DOIUrl":"https://doi.org/10.1109/IRC.2018.00065","url":null,"abstract":"Cooperation is often considered as one of the key and unclear concepts, which differentiates multi-agent systems from other related fields such as distributed computing. One of the popular benchmarks for the verification of the effectiveness of various cooperation algorithms is multi-agent foraging task. Different approaches have been proposed among which Markov game based ones are widely used, though they could not select consistent equilibrium for the group. In this paper, an evolutionary game based method is proposed. In this method, the interactions among the agents are modeled by snow-drift game to evolve the evolutionary stable strategy (ESS) and bring the maximal reward for the group of agents. The simulation verified the efficiency of the proposed algorithm.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130880068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative Goal Distribution in Distributed Multiagent Systems
Sujin Park, Sang-Gyu Park, Hyeonggun Lee, Minji Hyun, Eunsuh Lee, Jeonghyeon Ahn, Lauren Featherstun, Yongho Kim, E. Matson
Distributed multiagent systems consist of multiple agents that perform related tasks. In such systems, tasks are distributed amongst the agents by an operator based on shared information, which includes not only each agent's capabilities but also its state, the goal's state, and conditions in the surrounding environment. Distributed multiagent systems are usually constrained by uncertain information about nearby agents and by limited network availability for transferring information to the operator. Given these constraints, a better-designed system might allow agents to distribute tasks on their own. This paper proposes a goal distribution strategy for collaborative distributed multiagent systems in which agents distribute tasks amongst themselves. In this strategy, a goal model is shared amongst all participating agents, enabling them to synchronize in order to achieve complex goals that require sequential execution. Agents in this system can transfer information over the network to which all other agents belong. The approach was tested and verified using the StarCraft II API introduced by Blizzard and Google DeepMind.
{"title":"Collaborative Goal Distribution in Distributed Multiagent Systems","authors":"Sujin Park, Sang-Gyu Park, Hyeonggun Lee, Minji Hyun, Eunsuh Lee, Jeonghyeon Ahn, Lauren Featherstun, Yongho Kim, E. Matson","doi":"10.1109/IRC.2018.00066","DOIUrl":"https://doi.org/10.1109/IRC.2018.00066","url":null,"abstract":"Distributed multiagent systems consist of multiple agents which perform related tasks. In this kind of system, the tasks are distributed amongst the agents by an operator based on shared information. The information used to assign tasks includes not only agent's capability, but also agent's state, the goal's state, and conditions from the surrounding environments. Distributed multi agent systems are usually constrained by uncertain information about nearby agents, and by limited network availability to transfer information to the operator. Given these constraints of using an operator, a better designed system might allow agents to distribute tasks on their own. This paper proposes a goal distribution strategy for collaborative distributed multi agent systems where agents distribute tasks amongst themselves. In this strategy, a goal model is shared amongst all participating agents, enabling them to synchronize in order to achieve complex goals that require sequential executions. Agents in this system are capable of transferring information over the network where all others belong to. The approach was tested and verified using StarCraft II APIs, introduced by Blizzard and Google Deepmind.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132021435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open-Finger: Mobile Application Platform Enhanced by Physical Finger
Hiroaki Tobita, Hirotaka Saitoh
We introduce Open-Finger, which integrates a smartphone with a physical finger. Smartphones are widely used for communication and entertainment, and their characteristic features are an even surface and a few buttons, so our interaction with them is quite simple and limited. In contrast, we explore attaching a physical finger to a smartphone. A real finger has many capabilities, such as pointing and touching: for example, we use our fingers to point at something or someone, to move something, or to count. We can use the same features for interactions between us and our smartphones. Thus, the finger approach makes smartphones more intuitive and familiar for novice and elderly users who are not adept at manipulating them. In this paper, we describe our design concepts, prototype implementation, and application possibilities.
{"title":"Open-Finger: Mobile Application Platform Enhanced by Physical Finger","authors":"Hiroaki Tobita, Hirotaka Saitoh","doi":"10.1109/IRC.2018.00041","DOIUrl":"https://doi.org/10.1109/IRC.2018.00041","url":null,"abstract":"We introduce our Open-Finger that integrates the smartphone with a physical finger. Smartphones are widely used for communication and entertainment, and have characteristic features such an even surface and a few buttons. Our interaction with them is quite simple and really limited. In contrast, we have found a way to use a physical finger attached to a smartphone. A real finger has many capabilities such as pointing and touching. For example, we use our finger to point at something or someone, to move something, or to count a number. We can also use such features for interactions between us and our smartphones. Thus, the finger approach makes smartphone more intuitive and familiar for novice and elderly users who are not good at manipulating smartphone. In this paper, we describe our design concepts, prototype implementation and application possibilities.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122706589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Environment-Dependent Depth Enhancement with Multi-modal Sensor Fusion Learning
Kuya Takami, Taeyoung Lee
This paper presents a new learning-based multimodal sensing paradigm within a probabilistic framework to improve the depth image measurements of an RGB-D camera. The proposed approach uses an RGB-D camera and a laser range finder to produce an improved depth image via convolutional neural network (CNN) approximation within a probabilistic inference framework. Synchronized RGB-D and laser measurements are collected in an environment to train a model, which is then used to improve depth image accuracy and extend sensor range. The model exploits additional RGB information, which contains depth cues, to enhance the accuracy of pixel-level measurements. A computationally efficient implementation of the CNN allows the model to train while exploring an unknown area. The approach yields depth images containing spatial information far beyond the suggested operational limits: we demonstrate a nearly three-fold depth range extension (3.5 m to 10 m) while maintaining the camera's accuracy at the maximum range, and the mean absolute error is reduced by a factor of six relative to the original depth image. The efficacy of this approach is demonstrated in an unstructured office space.
{"title":"Environment-Dependent Depth Enhancement with Multi-modal Sensor Fusion Learning","authors":"Kuya Takami, Taeyoung Lee","doi":"10.1109/IRC.2018.00049","DOIUrl":"https://doi.org/10.1109/IRC.2018.00049","url":null,"abstract":"This paper presents a new learning based multimodal sensing paradigm within a probabilistic framework to improve the depth image measurements of an RGB-D camera. The proposed approach uses an RGB-D camera and laser range finder to provide an improved depth image using convolutional neural network (CNN) approximation within a probabilistic inference framework. Synchronized RGB-D and laser measurements are collected in an environment to train a model, which is then used for depth image accuracy improvements and sensor range extension. The model exploits additional RGB information, which contains depth cues, to enhance the accuracy of pixel level measurements. A computationally efficient implementation of the CNN allows the model to train while exploring an unknown area to provide improved depth image measurements. The approach yields depth images containing spatial information far beyond the suggested operational limits. We demonstrate a nearly three-fold depth range extension (3:5m to 10m) while maintaining similar camera accuracy at the maximum range. The mean absolute error is also reduced from the original depth image by a factor of six. The efficacy of this approach is demonstrated in an unstructured office space.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114290498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Quadrotor 3D Mapping and Exploration Using Exact Occupancy Probabilities
Evan Kaufman, Kuya Takami, Zhuming Ai, Taeyoung Lee
This paper deals with aerial exploration of an unknown three-dimensional environment, where Bayesian probabilistic mapping is integrated with a stochastic motion planning scheme to minimize map uncertainty in an optimal fashion. We utilize the popular occupancy grid map representation, with the goal of determining the occupancy probabilities of evenly spaced grid cells in 3D by fusing measurements from multiple depth sensors with realistic capabilities. The 3D exploration problem is decomposed into 3D mapping and 2D motion planning for efficient real-time implementation. This is achieved by projecting important aspects of the 3D map onto 2D maps, where the predicted level of map uncertainty, measured by Shannon entropy, provides an exploration policy that governs robotic motion. Both the mapping and exploration algorithms are demonstrated in numerical simulations and quadrotor flight experiments.
{"title":"Autonomous Quadrotor 3D Mapping and Exploration Using Exact Occupancy Probabilities","authors":"Evan Kaufman, Kuya Takami, Zhuming Ai, Taeyoung Lee","doi":"10.1109/IRC.2018.00016","DOIUrl":"https://doi.org/10.1109/IRC.2018.00016","url":null,"abstract":"This paper deals with the aerial exploration for an unknown three-dimensional environment, where Bayesian probabilistic mapping is integrated with a stochastic motion planning scheme to minimize the map uncertainties in an optimal fashion. We utilize the popular occupancy grid mapping representation, with the goal of determining occupancy probabilities of evenly-spaced grid cells in 3D with sensor fusion from multiple depth sensors with realistic sensor capabilities. The 3D exploration problem is decomposed into 3D mapping and 2D motion planning for efficient real-time implementation. This is achieved by projecting important aspects of the 3D map onto 2D maps, where a predicted level of map uncertainty, known as Shannon's entropy, provides an exploration policy that governs robotic motion. Both mapping and exploration algorithms are demonstrated with both numerical simulations and quadrotor flight experiments.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122207717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collective Behavior Acquisition of Real Robotic Swarms Using Deep Reinforcement Learning
T. Yasuda, K. Ohkura
Swarm robotic systems (SRS) are a type of multi-robot system in which robots operate without any form of centralized control. The most popular approach to SRS is the so-called ad hoc or behavior-based approach, in which the desired collective behavior is obtained by manually designing the behavior of individual robots in advance. In the principled or automatic design approach, by contrast, a general methodology for developing appropriate collective behavior is adopted. This paper investigates a deep reinforcement learning approach to collective behavior acquisition in swarm robotic systems. Robots collect information in parallel and share their experience to accelerate learning. We conduct experiments with real swarm robots and evaluate the learning performance in a scenario where the robots repeatedly travel between two landmarks.
{"title":"Collective Behavior Acquisition of Real Robotic Swarms Using Deep Reinforcement Learning","authors":"T. Yasuda, K. Ohkura","doi":"10.1109/IRC.2018.00038","DOIUrl":"https://doi.org/10.1109/IRC.2018.00038","url":null,"abstract":"Swarm robotic systems are a type of multi-robot systems, in which robots operate without any form of centralized control. The most popular approach for SRS is the so-called ad hoc or behavior-based approach; desired collective behavior is obtained by manually by designing the behavior of individual robot in advance. On the other hand, in the principled or automatic design approach, a certain general methodology for developing appropriate collective behavior is adopted. This paper investigates a deep reinforcement learning approach to collective behavior acquisition of swarm robotics systems. Robots are expected to collect information in parallel and share their experience for accelerating the learning. We conduct real swarm robot experiments and evaluate the learning performance in a scenario where robots consecutively travel between two landmarks.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128047552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a Well-Founded Software Component Model for Cyber-Physical Control Systems
J. Malenfant
Cyber-physical control systems (CPCS), and their instantiation as autonomous robotic control architectures, are notoriously difficult to specify, implement, test, validate, and verify. In this paper, we propose to integrate hybrid systems, realized as hybrid automata and DEVS simulation models, within a full-fledged and well-founded software component model tailored for CPCS. We present how the resulting comprehensive modeling tool can support the different phases of software development to provide more reliable, more robust, and more adaptable CPCS. The key concept is to provide components with a modeling and simulation capability that seamlessly supports the software development process, from initial model-in-the-loop validations to actual system verification at deployment time.
{"title":"Towards a Well-Founded Software Component Model for Cyber-Physical Control Systems","authors":"J. Malenfant","doi":"10.1109/IRC.2018.00055","DOIUrl":"https://doi.org/10.1109/IRC.2018.00055","url":null,"abstract":"Cyber-physical control systems (CPCS), and their instantiation as autonomous robotic control architectures, are notoriously difficult to specify, implement, test, validate and verify. In this paper, we propose to integrate hybrid systems and their declension as hybrid automata and DEVS simulation models within a full-fledged and well-founded software component model tailored for CPCS. We present how the resulting comprehensive modeling tool can support the different phases of the software development to provide more reliable, more robust and more adaptable CPCS. The key concept is to provide components with a modeling and simulation capability that seamlessly support the software development process, from model-in-the-loop initial validations, until deployment time actual system verification.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132626457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sound Identification for Fire-Fighting Mobile Robots
Eli M. Baum, Mario Harper, Ryan Alicea, Camilo Ordonez
A structure engulfed in flames poses extreme danger to fire-fighting personnel as well as to any people trapped inside. A companion robot assisting the fire-fighters could help speed up the search for humans while reducing the fire-fighters' risk. However, robots operating in these environments must cope with very low visibility caused by heavy smoke, debris, and unstructured terrain. This paper develops an audio classification algorithm to identify sounds relevant to fire-fighting, such as people in distress (baby cries, screams, coughs), structural failure (wood snapping, glass breaking), fire, fire trucks, and crowds. The outputs of the classifier are used as alerts for the fire-fighters or to modify the configuration of a robot capable of navigating unstructured terrain. The approach extracts an array of features from audio recordings and employs a single-hidden-layer feed-forward neural network for classification. The simple network structure runs on limited hardware and achieves an overall classification accuracy of 85.7%.
{"title":"Sound Identification for Fire-Fighting Mobile Robots","authors":"Eli M. Baum, Mario Harper, Ryan Alicea, Camilo Ordonez","doi":"10.1109/IRC.2018.00020","DOIUrl":"https://doi.org/10.1109/IRC.2018.00020","url":null,"abstract":"A structure engulfed in flames can pose an extreme danger for fire-fighting personnel as well as any people trapped inside. A companion robot to assist the fire-fighters could potentially help speed up the search for humans while reducing risk for the fire-fighters. However, robots operating in these environments need to be able to operate in very low visibility conditions because of the heavy smoke, debris and unstructured terrain. This paper develops an audio classification algorithm to identify sounds relevant to fire-fighting such as people in distress (baby cries, screams, coughs), structural failure (wood snapping, glass breaking), fire, fire trucks, and crowds. The outputs of the classifier are then used as alerts for the fire-fighter or to modify the configuration of a robot capable of navigating unstructured terrain. The approach used extracts an array of features from audio recordings and employs a single hidden layer, feed forward neural network for classification. The simplicity in network structure enables performance on limited hardware and obtains classification results with an overall accuracy of 85.7%.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114988196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Reference Architecture for Deploying Component-Based Robot Software and Comparison with Existing Tools
N. Hochgeschwender, G. Biggs, H. Voos
This article discusses the problem of deploying component-based software for a robotic system, including both initial deployment and re-deployment at run time to account for changing requirements and conditions. We begin by evaluating a set of tools used for all or part of the deployment activity: the OMG DEPL specification, Chef, Ansible, Salt, Puppet, roslaunch, and the Orocos Deployer/ROCK. These tools were chosen to cover a range of capabilities and styles. The evaluation identifies a set of core roles found in the deployment activity, and based on this we propose a reference architecture for a set of tools that cover the deployment activity. This reference architecture provides a foundation for future work in developing and evaluating deployment tools.
{"title":"A Reference Architecture for Deploying Component-Based Robot Software and Comparison with Existing Tools","authors":"N. Hochgeschwender, G. Biggs, H. Voos","doi":"10.1109/IRC.2018.00026","DOIUrl":"https://doi.org/10.1109/IRC.2018.00026","url":null,"abstract":"This article discusses the problem of deploying component-based software for a robotic system, including both the initial deployment and re-deployment at run-time to account for changing requirements and conditions. We begin by evaluating a set of tools used for all or part of the deployment activity. The evaluated tools are the OMG DEPL specification, Chef, Ansible, Salt, Puppet, roslaunch and Orocos Deployer/ROCK. These tools were chosen to cover a range of capabilities and styles. The evaluation identifies a set of core roles found in the deployment activity, and based on this we propose a reference architecture for a set of tools that satisfy the deployment activity. This reference architecture provides a foundation for future work in developing and evaluating tools that can be used in deployment.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122430584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Internet of Things: Technology to Enable the Elderly
Chan-Gun Lee, S. Park, Yoonha Jung, Youngji Lee, Mariah Mathews
The purpose of this project is to integrate IoT technology into the homes of elderly people who live alone, using simple, inexpensive, accessible devices and open-source software. Using technologies such as the Raspberry Pi (RPi), Open Source Computer Vision (OpenCV), and a Node.js web server, services can be provided to watch over an unaccompanied elderly person. This paper presents five services: opening the door via facial recognition with a servo motor, detecting motion and sending alarms to family members, reading real-time indoor temperatures, remotely toggling the light switch on or off, and measuring the amount of trash in a selected trash bin. All functions are controlled by an Android application that can be customized to the specific visual needs of the user. This project proposes solutions that help the elderly benefit from user-friendly IoT technology and allow notifications to be shared with family members, which can provide peace of mind.
{"title":"Internet of Things: Technology to Enable the Elderly","authors":"Chan-Gun Lee, S. Park, Yoonha Jung, Youngji Lee, Mariah Mathews","doi":"10.1109/IRC.2018.00075","DOIUrl":"https://doi.org/10.1109/IRC.2018.00075","url":null,"abstract":"The purpose of this project is to integrate IoT technology into the homes of the elderly that live alone using simple, inexpensive, accessible devices and open source software. Using technology such as Raspberry Pi (RPi), Open Source Computer Vision (OpenCV), and Node.js web server, actions can be controlled to supervise an unaccompanied elderly person. There are five services in this paper: opening the door via facial recognition with a servo motor, detecting motion and sending alarms to their family members, getting real-time indoor temperatures, remotely toggling the light switch on or off, and measuring the amount of trash in a selected trash bin. All functions are controlled by an Android application that can be customized depending on the specific visual needs of the user. This project proposes solutions to help the elderly benefit from user-friendly IoT technology. The solutions allow for notifications to be shared with family members, which can provide peace of mind.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124105057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}