Tactile stimuli from amniotic fluid guides the development of somatosensory cortex with hierarchical structure using human fetus simulation
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652530
R. Sasaki, Yasunori Yamada, Yuki Tsukahara, Y. Kuniyoshi
This paper investigates the environmental factors that structure a model of the somatosensory cortex in the fetus. Previous studies showed that the somatosensory cortex can develop through interactions between the body and the environment. However, it remains unclear how the somatosensory cortex develops through environmental interactions, and in particular which aspects of the environment contribute to this development. To examine the role of the environment, we used computer simulations to emulate fetal tactile stimuli as input to a learning model of the proposed somatosensory cortex. First, we verified that the proposed somatosensory cortex model is consistent with biological properties. We then identified the key factor of the uterine environment for organizing the somatosensory cortex. As a result, the somatosensory cortex could not be organized well without amniotic fluid. Our results show that the fluid resistance provided by amniotic fluid contributes to the development of the fetal somatosensory cortex in the uterine environment.
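To make the input-driven cortical learning described above concrete, here is a minimal sketch assuming a self-organizing-map-style model trained on simulated tactile vectors; the map size, learning rate, and the random placeholder tactile samples are illustrative assumptions, not the authors' somatosensory cortex model.

    import numpy as np

    # Hypothetical sketch: a self-organizing map (SOM) receiving simulated
    # tactile input vectors, standing in for the proposed somatosensory
    # cortex model. Sizes, parameters, and data are illustrative only.
    rng = np.random.default_rng(0)
    n_units, n_skin_sensors = 10 * 10, 64          # 10x10 cortical sheet, 64 tactile channels
    weights = rng.random((n_units, n_skin_sensors))
    grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)

    def som_step(x, weights, lr=0.1, sigma=2.0):
        """One SOM update for a single tactile input vector x."""
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
        dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)        # grid distance to the BMU
        h = np.exp(-dist2 / (2 * sigma ** 2))[:, None]         # neighborhood kernel
        weights += lr * h * (x - weights)                      # pull weights toward the input
        return weights

    # "With amniotic fluid" vs "without" would differ only in the statistics
    # of the simulated tactile input fed to this loop.
    for _ in range(1000):
        x = rng.random(n_skin_sensors)                         # placeholder tactile sample
        weights = som_step(x, weights)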
{"title":"Tactile stimuli from amniotic fluid guides the development of somatosensory cortex with hierarchical structure using human fetus simulation","authors":"R. Sasaki, Yasunori Yamada, Yuki Tsukahara, Y. Kuniyoshi","doi":"10.1109/DEVLRN.2013.6652530","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652530","url":null,"abstract":"This paper describes the environmental factor for structuring somatosensory cortex model in fetus. Previous studies showed that somatosensory cortex could develop through interactions with body and environment. However, it remains unclear how the somatosensory develops through environmental interactions, especially what environments contribute to the development. To verify the environment, we applied computer simulations to emulate tactile stimuli of fetus as input for a learning model of proposed somatosensory cortex model. First, we verified proposed somatosensory cortex model is plausible for biological properties And then, we verified the important factor of uterine environment to organize the somatosensory cortex. In result, somatosensory cortex could not be organized well without amnionic fluid. Our results show that fluid resistance derived from aminionic fluid contributed to develop fetus somatosensory cortex in uterine evironment.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116390101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating dynamic properties of objects from appearance
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652532
Walter A. Talbott, Tingfan Wu, J. Movellan
To interact with objects effectively, a robot can use model-based or model-free control approaches. The superior performance typical of model-based control comes at the cost of developing or learning an accurate model of the system to be controlled. In this paper, we suggest an approach that generates models for novel objects based on visual features of those objects. These models can then be used for anticipatory control. We demonstrate this approach by replicating an infant experiment on a pneumatic humanoid robot. Infants seem to use visual information to estimate the mass of rods, and when presented with a rod with an unexpected length-to-mass relationship, they produce a large overcompensating arm movement compared to when handling an object of expected mass. Our replication shows that the visual model-based control approach qualitatively reproduces the behavior observed in the infant experiment, whereas a popular model-free approach, PID control, does not.
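The contrast between model-free feedback control and anticipatory, visually informed control can be illustrated with a toy example. This is a minimal sketch using a hypothetical 1-D lifting task: the dynamics, gains, and the mass-from-length visual model are assumptions for illustration, not the authors' pneumatic robot or their learned models.

    import numpy as np

    # Hypothetical 1-D lifting task: a mass must be raised to a target height.
    # Compares a PD controller with one that adds an anticipatory feedforward
    # term computed from a mass estimate based on appearance (rod length).
    dt, g = 0.01, 9.81

    def mass_from_length(length_m, density_per_m=1.0):
        """Assumed visual model: mass proportional to rod length."""
        return density_per_m * length_m

    def simulate(true_mass, visual_mass, use_model, steps=400,
                 kp=50.0, kd=20.0, target=0.10):
        x, v = 0.0, 0.0
        prev_err = target - x
        peak = 0.0
        for _ in range(steps):
            err = target - x
            u = kp * err + kd * (err - prev_err) / dt
            if use_model:
                u += visual_mass * g          # anticipatory gravity compensation
            prev_err = err
            a = (u - true_mass * g) / true_mass
            v += a * dt
            x += v * dt
            peak = max(peak, x)
        return peak

    rod_len = 0.3
    expected = simulate(mass_from_length(rod_len), mass_from_length(rod_len), True)
    # Unexpected rod: looks long (heavy) but is actually light -> overshoot,
    # analogous to the infants' overcompensating arm movement.
    unexpected = simulate(0.1, mass_from_length(rod_len), True)
    print("peak height, expected mass:  ", round(expected, 3))
    print("peak height, unexpected mass:", round(unexpected, 3))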
{"title":"Estimating dynamic properties of objects from appearance","authors":"Walter A. Talbott, Tingfan Wu, J. Movellan","doi":"10.1109/DEVLRN.2013.6652532","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652532","url":null,"abstract":"To interact with objects effectively, a robot can use model-based or model-free control approaches. The superior performance typical of model-based control comes at the cost of developing or learning an accurate model of the system to be controlled. In this paper, we suggest an approach that generates models for novel objects based on visual features of those objects. These models can then be used for anticipatory control. We demonstrate this approach by replicating an infant experiment on a pneumatic humanoid robot. Infants seem to use visual information to estimate the mass of rods, and when they are presented a rod with an unexpected length-to-mass relationship, infants produce a large overcompensating arm movement when compared to an object with an expected mass. Our replication shows that the visual model-based control approach qualitatively replicates the behavior observed in the infant experiment, whereas a popular model-free approach, PID control, does not.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128689176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intrinsically motivated reinforcement learning in socio-economic systems: The dynamical analysis
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652522
A. Zgonnikov, I. Lubashevsky
We conduct a theoretical analysis of the effects of intrinsic motivation on learning dynamics. We study a simple example of a single agent adapting to an unknown environment; the agent is biased by the desire to take those actions about which she has little information. We show that intrinsic motivation may destabilize a learning process that is stable for a purely rational agent, inducing periodic oscillations. Most interestingly, we discover that the opposite effect may arise as well: cyclic learning dynamics can be stabilized by high levels of intrinsic motivation. Based on the presented results, we argue that the effects of human intrinsic motivation in particular, and bounded rationality in general, may be dominant in complex socio-economic systems and therefore deserve close attention in formal models of such systems.
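A minimal sketch of the kind of intrinsically biased action selection discussed above, assuming a simple multi-armed choice setting with a count-based novelty bonus; the payoffs, the bonus form, and all parameters are illustrative and do not reproduce the paper's dynamical model.

    import numpy as np

    # Hypothetical sketch: a single agent choosing among K actions, with action
    # values biased by an intrinsic bonus for actions it has little information
    # about (here, a simple count-based novelty bonus).
    rng = np.random.default_rng(1)
    K = 4
    true_payoff = np.array([1.0, 0.8, 0.5, 0.2])
    Q = np.zeros(K)
    counts = np.ones(K)                 # start at 1 to avoid division by zero
    alpha, beta, tau = 0.1, 0.5, 0.2    # learning rate, intrinsic weight, temperature

    for t in range(2000):
        scores = Q + beta / np.sqrt(counts)   # intrinsic motivation boosts rarely tried actions
        p = np.exp(scores / tau)
        p /= p.sum()
        a = rng.choice(K, p=p)
        r = true_payoff[a] + 0.1 * rng.standard_normal()
        counts[a] += 1
        Q[a] += alpha * (r - Q[a])            # standard value update

    print("visit counts:  ", counts.astype(int))
    print("learned values:", np.round(Q, 2))

Raising beta strengthens the pull toward poorly explored actions, which is the lever whose stabilizing or destabilizing effect the paper analyzes.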
{"title":"Intrinsically motivated reinforcement learning in socio-economic systems: The dynamical analysis","authors":"A. Zgonnikov, I. Lubashevsky","doi":"10.1109/DEVLRN.2013.6652522","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652522","url":null,"abstract":"We conduct a theoretical analysis of the effects of intrinsic motivation on learning dynamics. We study a simple example of a single agent adapting to unknown environment; the agent is biased by the desire to take those actions she has little information about. We show that the intrinsic motivation may induce the instability (namely, periodic oscillations) of the learning process that is stable in case of rational agent. Most interestingly, we discover that the opposite effect may arise as well: the cyclic learning dynamics is stabilized by high levels of agent intrinsic motivation. Based on the presented results we argue that the effects of human intrinsic motivation in particular and bounded rationality in general may appear dominant in complex socio-economic systems and therefore deserve much attention in the formal models of such systems.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134006268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning the rules of a game: Neural conditioning in human-robot interaction with delayed rewards
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652572
Andrea Soltoggio, R. F. Reinhart, A. Lemme, Jochen J. Steil
Learning in human-robot interaction, as in human-to-human situations, is characterised by noisy stimuli, variable timing of stimuli and actions, and delayed rewards. A recent model of neural learning, based on modulated plasticity, suggested the use of rare correlations and eligibility traces to model conditioning in real-world situations with uncertain timing. The current study tests neural learning with rare correlations in a realistic human-robot teaching scenario. The humanoid robot iCub learns the rules of the game rock-paper-scissors while playing with a human tutor. The tutor's feedback is often delayed, missing, or at times even incorrect. Nevertheless, the neural system learns with great robustness and similar performance both in simulation and in robotic experiments. The results demonstrate the efficacy of the plasticity rule based on rare correlations in implementing robotic neural conditioning.
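A minimal sketch of the plasticity scheme named above: Hebbian coincidences that are rare (well above the typical level) leave an eligibility trace, and a sparse, delayed, possibly incorrect reward later converts the trace into a weight change. The network sizes, threshold, decay, and reward schedule are illustrative assumptions, not the paper's implementation.

    import numpy as np

    # Hypothetical sketch of modulated plasticity with rare correlations and
    # eligibility traces under delayed, noisy reward.
    rng = np.random.default_rng(2)
    n_pre, n_post = 50, 10
    w = 0.01 * rng.standard_normal((n_post, n_pre))
    trace = np.zeros_like(w)
    decay, threshold, lr = 0.95, 2.0, 0.05

    for step in range(500):
        pre = rng.random(n_pre)                 # placeholder presynaptic activity
        post = np.tanh(w @ pre)                 # postsynaptic activity
        hebb = np.outer(post, pre)
        # "rare correlations": keep only coincidences well above the typical level
        rare = hebb * (hebb > hebb.mean() + threshold * hebb.std())
        trace = decay * trace + rare            # eligibility trace bridges the delay
        if step % 20 == 0:                      # sparse, delayed feedback from the tutor
            reward = rng.choice([1.0, 0.0, -0.5], p=[0.6, 0.3, 0.1])
            w += lr * reward * trace            # reward-modulated update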
{"title":"Learning the rules of a game: Neural conditioning in human-robot interaction with delayed rewards","authors":"Andrea Soltoggio, R. F. Reinhart, A. Lemme, Jochen J. Steil","doi":"10.1109/DEVLRN.2013.6652572","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652572","url":null,"abstract":"Learning in human-robot interaction, as well as in human-to-human situations, is characterised by noisy stimuli, variable timing of stimuli and actions, and delayed rewards. A recent model of neural learning, based on modulated plasticity, suggested the use of rare correlations and eligibility traces to model conditioning in real-world situations with uncertain timing. The current study tests neural learning with rare correlations in a human-robot realistic teaching scenario. The humanoid robot iCub learns the rules of the game rock-paper-scissors while playing with a human tutor. The feedback of the tutor is often delayed, missing, or at times even incorrect. Nevertheless, the neural system learns with great robustness and similar performance both in simulation and in robotic experiments. The results demonstrate the efficacy of the plasticity rule based on rare correlations in implementing robotic neural conditioning.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123878309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grounded lexicon acquisition — Case studies in spatial language
Pub Date : 2013-11-04 | DOI: 10.1109/DevLrn.2013.6652534
Michael Spranger
This paper discusses grounded acquisition experiments of increasing complexity. Humanoid robots acquire English spatial lexicons from robot tutors. We identify how various spatial language systems, such as projective, absolute and proximal can be learned. The proposed learning mechanisms do not rely on direct meaning transfer or direct access to world models of interlocutors. Finally, we show how multiple systems can be acquired at the same time.
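The idea of learning word meanings without direct meaning transfer can be illustrated with a cross-situational association sketch: the learner only sees which words occur in which situations and which interactions succeed. The word list, category names, and scoring rule are hypothetical simplifications, not the paper's language-game mechanism.

    from collections import defaultdict

    # Hypothetical sketch of cross-situational word/meaning association:
    # strengthen word-category pairs that co-occur in successful interactions.
    assoc = defaultdict(float)

    def observe(utterance_words, candidate_categories, success):
        for word in utterance_words:
            for cat in candidate_categories:
                assoc[(word, cat)] += 1.0 if success else -0.1

    # toy interactions (words and spatial categories are illustrative)
    observe(["left"], ["LEFT_OF", "NEAR"], success=True)
    observe(["left"], ["LEFT_OF", "FAR"], success=True)
    observe(["near"], ["NEAR", "LEFT_OF"], success=True)
    observe(["near"], ["NEAR", "RIGHT_OF"], success=True)

    def best_meaning(word):
        scores = {c: s for (w, c), s in assoc.items() if w == word}
        return max(scores, key=scores.get)

    print(best_meaning("left"), best_meaning("near"))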
{"title":"Grounded lexicon acquisition — Case studies in spatial language","authors":"Michael Spranger","doi":"10.1109/DevLrn.2013.6652534","DOIUrl":"https://doi.org/10.1109/DevLrn.2013.6652534","url":null,"abstract":"This paper discusses grounded acquisition experiments of increasing complexity. Humanoid robots acquire English spatial lexicons from robot tutors. We identify how various spatial language systems, such as projective, absolute and proximal can be learned. The proposed learning mechanisms do not rely on direct meaning transfer or direct access to world models of interlocutors. Finally, we show how multiple systems can be acquired at the same time.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124871028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning to reproduce fluctuating behavioral sequences using a dynamic neural network model with time-varying variance estimation mechanism
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652545
Shingo Murata, Jun Namikawa, H. Arie, J. Tani, S. Sugano
This study shows that a novel type of recurrent neural network model can learn to reproduce fluctuating training sequences by inferring their stochastic structures. The network learns to predict not only the mean of the next input state, but also its time-varying variance. The network is trained through maximum likelihood estimation using the gradient descent method, with the likelihood expressed as a function of both the predicted mean and variance. To evaluate the performance of the model, we first tested, in a numerical experiment, its ability to reproduce fluctuating training sequences generated by a known dynamical system perturbed by Gaussian noise with state-dependent variance. Our analysis showed that the network can reproduce the sequences by predicting the variance correctly. Furthermore, a second experiment showed that a humanoid robot equipped with the network can learn to reproduce fluctuating tutoring sequences by inferring the latent stochastic structures hidden in the sequences.
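The training objective described above can be sketched with a tiny stand-in model: a predictor outputs both a mean and a state-dependent variance and is trained by gradient descent on the Gaussian negative log-likelihood. A linear two-head predictor replaces the paper's recurrent network, and the toy data and parameters are purely illustrative.

    import numpy as np

    # Minimal sketch of maximum-likelihood training with predicted mean and
    # variance: minimize mean(0.5*log(var) + 0.5*(y - mu)^2 / var).
    rng = np.random.default_rng(3)

    N = 2000
    x = rng.random(N)                                  # "state" in [0, 1]
    true_std = 0.1 + 0.4 * x                           # state-dependent noise level
    y = 0.9 * x + 0.3 + true_std * rng.standard_normal(N)

    w_mu = b_mu = w_lv = b_lv = 0.0                    # mean head and log-variance head
    lr = 0.05

    for _ in range(3000):
        mu = w_mu * x + b_mu
        var = np.exp(w_lv * x + b_lv)                  # log parametrization keeps var > 0
        err = y - mu
        dmu = -err / var                               # gradient of the NLL w.r.t. mu
        dlv = 0.5 * (1.0 - err ** 2 / var)             # gradient w.r.t. log-variance
        w_mu -= lr * np.mean(dmu * x); b_mu -= lr * np.mean(dmu)
        w_lv -= lr * np.mean(dlv * x); b_lv -= lr * np.mean(dlv)

    print("mean head:", round(w_mu, 2), round(b_mu, 2))   # should approach 0.9 and 0.3
    for xq in (0.2, 0.8):
        pred_std = np.exp(0.5 * (w_lv * xq + b_lv))
        print(f"x={xq}: predicted std {pred_std:.2f} vs true {0.1 + 0.4 * xq:.2f}")

The key point, as in the abstract, is that the loss itself rewards a correct variance estimate: overconfident predictions are penalized through the err^2/var term, underconfident ones through log(var).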
{"title":"Learning to reproduce fluctuating behavioral sequences using a dynamic neural network model with time-varying variance estimation mechanism","authors":"Shingo Murata, Jun Namikawa, H. Arie, J. Tani, S. Sugano","doi":"10.1109/DEVLRN.2013.6652545","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652545","url":null,"abstract":"This study shows that a novel type of recurrent neural network model can learn to reproduce fluctuating training sequences by inferring their stochastic structures. The network learns to predict not only the mean of the next input state, but also its time-varying variance. The network is trained through maximum likelihood estimation by utilizing the gradient descent method, and the likelihood function is expressed as a function of both the predicted mean and variance. In a numerical experiment, in order to evaluate the performance of the model, we first tested its ability to reproduce fluctuating training sequences generated by a known dynamical system that were perturbed by Gaussian noise with state-dependent variance. Our analysis showed that the network can reproduce the sequences by predicting the variance correctly. Furthermore, the other experiment showed that a humanoid robot equipped with the network can learn to reproduce fluctuating tutoring sequences by inferring latent stochastic structures hidden in the sequences.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125033140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot learning simultaneously a task and how to interpret human instructions
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652523
Jonathan Grizou, M. Lopes, Pierre-Yves Oudeyer
This paper presents an algorithm to bootstrap shared understanding in a human-robot interaction scenario where the user teaches a robot a new task using teaching instructions yet unknown to it. In such cases, the robot needs to estimate simultaneously what the task is and the associated meaning of the instructions received from the user. For this work, we consider a scenario where a human teacher uses initially unknown spoken words whose associated, unknown meaning is either feedback (good/bad) or guidance (go left, right, ...). We present computational results, within an inverse reinforcement learning framework, showing that a) it is possible to learn the meaning of unknown and noisy teaching instructions and a new task at the same time, b) it is possible to reuse the acquired knowledge about instructions when learning new tasks, and c) even if the robot initially knows some of the instructions' meanings, the use of extra unknown teaching instructions improves learning efficiency.
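The joint-estimation idea can be sketched with a toy Bayesian filter over combined hypotheses (candidate task, meaning of each unknown word), updated from observed action/word pairs. The three candidate tasks, the two made-up words, feedback-only meanings, and the fixed noise level are simplifying assumptions; this is not the paper's full inverse-reinforcement-learning machinery.

    import itertools
    import numpy as np

    # Hypothetical sketch: maintain a belief over (task, word-meaning assignment)
    # and update it from (action, word) observations with a noisy teacher.
    rng = np.random.default_rng(4)

    tasks = ["reach_A", "reach_B", "reach_C"]          # candidate tasks
    words = ["bla", "plop"]                             # unknown spoken words
    meanings = ["correct", "incorrect"]
    hypotheses = [(t, dict(zip(words, m)))
                  for t in tasks
                  for m in itertools.permutations(meanings)]
    belief = np.ones(len(hypotheses)) / len(hypotheses)
    noise = 0.1                                         # teacher sometimes mislabels

    def update(belief, action, word):
        like = np.empty(len(hypotheses))
        for i, (task, meaning_of) in enumerate(hypotheses):
            action_is_correct = (action == task)
            fits = (meaning_of[word] == "correct") == action_is_correct
            like[i] = 1 - noise if fits else noise
        belief = belief * like
        return belief / belief.sum()

    # simulated interaction: the true task is reach_A and "bla" means correct
    for _ in range(20):
        action = rng.choice(tasks)                      # learner tries an action
        correct = (action == "reach_A")
        word = "bla" if correct != (rng.random() < noise) else "plop"
        belief = update(belief, action, word)

    for (task, meaning_of), p in zip(hypotheses, belief):
        print(task, meaning_of, round(float(p), 3))

After a handful of noisy interactions the belief concentrates on the hypothesis that jointly explains both the task and the word meanings, which is the bootstrapping effect the abstract describes.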
{"title":"Robot learning simultaneously a task and how to interpret human instructions","authors":"Jonathan Grizou, M. Lopes, Pierre-Yves Oudeyer","doi":"10.1109/DEVLRN.2013.6652523","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652523","url":null,"abstract":"This paper presents an algorithm to bootstrap shared understanding in a human-robot interaction scenario where the user teaches a robot a new task using teaching instructions yet unknown to it. In such cases, the robot needs to estimate simultaneously what the task is and the associated meaning of instructions received from the user. For this work, we consider a scenario where a human teacher uses initially unknown spoken words, whose associated unknown meaning is either a feedback (good/bad) or a guidance (go left, right, ...). We present computational results, within an inverse reinforcement learning framework, showing that a) it is possible to learn the meaning of unknown and noisy teaching instructions, as well as a new task at the same time, b) it is possible to reuse the acquired knowledge about instructions for learning new tasks, and c) even if the robot initially knows some of the instructions' meanings, the use of extra unknown teaching instructions improves learning efficiency.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123400594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explaining neonate facial imitation from the sensory alignment in the superior colliculus
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652544
Alexandre Pitti, Y. Kuniyoshi, M. Quoy, P. Gaussier
We propose a developmental scenario for explaining neonatal imitation. We hypothesize that the early maturation of the superior colliculus (SC) during the fetal period may strongly contribute to the construction of the social brain. We highlight two potentially important mechanisms in the SC: (1) the spatial topological organization of the unisensory modalities and (2) the sensory alignment between these different modalities. We build a neural model of the SC that learns from fetal facial tissue and from the fetal eyes, and we show a preference for face-like patterns.
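The alignment mechanism can be illustrated with a small toy: two 1-D topographic maps (say, facial touch and vision) are stimulated at spatially corresponding locations, and plain Hebbian learning on the cross-modal connections yields a roughly diagonal, aligned mapping. Map sizes, the Gaussian stimulus model, and the learning rule are illustrative assumptions, not the paper's SC model.

    import numpy as np

    # Hypothetical sketch of cross-modal topographic alignment via Hebbian
    # co-activation of two maps driven by spatially coincident events.
    rng = np.random.default_rng(5)
    n = 20                                   # units per map
    pos = np.arange(n)
    W = np.zeros((n, n))                     # visual-to-tactile cross-modal weights
    lr, sigma = 0.01, 1.5

    for _ in range(5000):
        loc = rng.uniform(0, n - 1)          # one event stimulates both maps at ~the same place
        tactile = np.exp(-(pos - loc) ** 2 / (2 * sigma ** 2))
        visual = np.exp(-(pos - loc) ** 2 / (2 * sigma ** 2))
        W += lr * np.outer(tactile, visual)  # Hebbian co-activation
    # (a full model would also bound the weights, e.g. with normalization or decay)

    offset = np.abs(np.argmax(W, axis=1) - pos)
    print("mean misalignment (map units):", offset.mean())   # close to 0 -> aligned maps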
{"title":"Explaining neonate facial imitation from the sensory alignment in the superior colliculus","authors":"Alexandre Pitti, Y. Kuniyoshi, M. Quoy, P. Gaussier","doi":"10.1109/DEVLRN.2013.6652544","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652544","url":null,"abstract":"We propose a developmental scenario for explaining neonatal imitation. We hypothesize that the early maturation of the superior colliculus (SC) at the fetal period may strongly contribute to the construction of the social brain. We underly two mechanisms in SC potentially important which are (1) spatial topological organization of the unisensory modalities and (2) the conformed sensory alignment between these different modalities. We make a neural model of SC learning from a fetus facial tissues and from the fetus eyes and we show preference for facelike patterns.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"355 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122801452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a robotic model of the mirror neuron system
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652549
Kristína Rebrová, Matej Pechác, I. Farkaš
Action understanding undoubtedly involves visual representations. However, linking the observed action with the corresponding motor category might facilitate processing and provide a mechanism to “step into the shoes” of the observed agent. Such a principle might also be very useful for a cognitive robot, allowing it to link an observed action with its own motor repertoire in order to understand the observed scene. A recent account of action understanding based on computational modeling suggests that it depends on mutual interaction between visual and motor areas. We present a multi-layer connectionist model of the action understanding circuitry and mirror neurons, emphasizing the bidirectional flow of activation between visual and motor areas. To accomplish the mapping between the two high-level modal representations, we developed a bidirectional activation-based learning algorithm inspired by the supervised, biologically plausible GeneRec algorithm. We implemented our model in a simulated iCub robot that learns a grasping task. In two experiments we show the function of the two topmost layers of our model. We also discuss further steps to extend the functionality of our model.
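A minimal sketch of a GeneRec-style, bidirectional activation-based update on a tiny input-hidden-output network: a "minus" phase settles with the input only, a "plus" phase settles with the target clamped on the output, and weights change in proportion to the phase difference of activities. The network size, the toy one-hot task, and the settling schedule are illustrative assumptions, not the paper's model or training data.

    import numpy as np

    # Hypothetical GeneRec-style sketch: minus phase (free output) vs plus
    # phase (target clamped), weights updated from the phase difference.
    rng = np.random.default_rng(6)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    n_in, n_hid, n_out = 4, 8, 2
    W1 = 0.1 * rng.standard_normal((n_hid, n_in))    # input -> hidden
    W2 = 0.1 * rng.standard_normal((n_out, n_hid))   # hidden -> output (W2.T used top-down)
    lr = 0.2

    X = np.eye(n_in)                                  # four one-hot "percepts"
    T = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])    # target motor codes

    def settle(x, y_clamp=None, steps=5):
        y = np.zeros(n_out) if y_clamp is None else y_clamp
        h = sigmoid(W1 @ x)
        for _ in range(steps):
            h = sigmoid(W1 @ x + W2.T @ y)            # bottom-up plus top-down input
            if y_clamp is None:
                y = sigmoid(W2 @ h)
        return h, y

    for epoch in range(2001):
        sse = 0.0
        for x, t in zip(X, T):
            h_minus, y_minus = settle(x)                         # minus phase: free output
            h_plus, _ = settle(x, y_clamp=t.astype(float))       # plus phase: clamped output
            W2 += lr * np.outer(t - y_minus, h_minus)            # GeneRec output update
            W1 += lr * np.outer(h_plus - h_minus, x)             # GeneRec hidden update
            sse += np.sum((t - y_minus) ** 2)
        if epoch % 500 == 0:
            print(f"epoch {epoch}: sum-squared error {sse:.3f}")  # should decrease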
{"title":"Towards a robotic model of the mirror neuron system","authors":"Kristína Rebrová, Matej Pechác, I. Farkaš","doi":"10.1109/DEVLRN.2013.6652549","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652549","url":null,"abstract":"Action understanding undoubtedly involves visual representations. However, linking the observed action with the respective motor category might facilitate processing and provide us with the mechanism to “step into the shoes” of the observed agent. Such principle might be very useful also for a cognitive robot allowing it to link the observed action with its own motor repertoire in order to understand the observed scene. A recent account on action understanding based on computational modeling methodology suggests that it depends on mutual interaction between visual and motor areas. We present a multi-layer connectionist model of action understanding circuitry and mirror neurons, emphasizing the bidirectional activation flow between visual and motor areas. To accomplish the mapping between two high-level modal representations we developed a bidirectional activation-based learning algorithm inspired by a supervised, biologically plausible GeneRec algorithm. We implemented our model in a simulated iCub robot that learns a grasping task. Within two experiments we show the function of the two topmost layers of our model. We also discuss further steps to be done to extend the functionality of our model.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122866437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing learnability — The case for reduced dimensionality
Pub Date : 2013-11-04 | DOI: 10.1109/DEVLRN.2013.6652571
N. Kuppuswamy, C. Harris
In this work, the notion of reduced dimensionality and its relevance for systems undergoing development is examined. The motor control theories of degree-of-freedom change, optimal control, and motor primitives are related using the framework of control dimensionality reduction. Based on this relationship, we propose a developmental approach based on progressively utilising representations of the system of increasingly higher dimension. A simulated planar two-link arm model is then used to demonstrate the effect of using reduced-dimensional models for control; comparisons on step and sinusoidal tasks show a progressive, task-dependent decrease in error. Arguments are presented for why such a strategy might be essential, from an evolutionary perspective, for the tractable developmental acquisition of motor control.
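The core idea of controlling through a reduced command space can be sketched as follows: joint velocities of a planar two-link arm are restricted to dq = B z, where B maps k reduced commands to the 2 joints, and a proportional controller is projected into that space. Going from k = 1 to k = 2 removes the residual error, echoing the progressive-dimensionality argument. The synergy matrices, gain, and target are illustrative assumptions, not the paper's simulation.

    import numpy as np

    # Hypothetical sketch: reduced-dimensional kinematic control of a 2-joint arm.
    def run(B, q_target, steps=500, dt=0.01, gain=5.0):
        q = np.zeros(2)                          # joint angles of the two-link arm
        for _ in range(steps):
            dq_desired = gain * (q_target - q)   # proportional joint-space controller
            z = np.linalg.pinv(B) @ dq_desired   # best reduced command for that motion
            q += (B @ z) * dt                    # only motions in range(B) are realizable
        return np.linalg.norm(q_target - q)

    q_target = np.array([0.8, -0.4])
    B1 = np.array([[1.0], [1.0]])                # one synergy: joints move together
    B2 = np.eye(2)                               # full two-dimensional command space
    print("residual error, k=1:", round(run(B1, q_target), 3))
    print("residual error, k=2:", round(run(B2, q_target), 3))

With k = 1 the arm can only reach the projection of the target onto the synergy subspace, so some tasks retain a large error while others are nearly solved, which is one way the task-dependent error decrease described above can arise.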
{"title":"Developing learnability — The case for reduced dimensionality","authors":"N. Kuppuswamy, C. Harris","doi":"10.1109/DEVLRN.2013.6652571","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652571","url":null,"abstract":"In this work, the notion of reduced dimensionality and its relevance for systems undergoing development is examined. The various motor control theories of degree of freedom change, optimal control, and motor primitives are related using the framework of control dimensionality reduction. Based on their relationship, we propose a developmental approach based on progressively utilising increasingly higher dimension representations of the system. A simulated planar 2 link arm model is then used to demonstrate the effect of utilising reduced dimensional models for control; comparisons on step and sinusoidal tasks are presented showing a progressive decrease in error that is task dependent quantitatively. Arguments are presented for why such a strategy might be essential from an evolutionary perspective for the developmental acquisition motor control in a tractable manner.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115841154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}