Self-Organizing Neural Population Coding for improving robotic visuomotor coordination
Tao Zhou, P. Dudek, Bertram E. Shi
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033393
We present an extension of Kohonen's Self-Organizing Map (SOM) algorithm called the Self-Organizing Neural Population Coding (SONPC) algorithm. The algorithm adapts the neural population encoding of a robot's sensory and motor coordinates online according to the underlying data distribution. By allocating more neurons to the areas of sensory or motor space that are visited more frequently, this representation improves the accuracy of a robot system on a visually guided reaching task. We also propose a Mean Reflection method to address the notorious border effect encountered with SOMs, for the special case where the latent space and data space dimensions are equal.
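The density-matching behavior the abstract relies on (more neurons where inputs arrive more often) is already present in the plain online SOM that SONPC extends. A minimal 1-D-lattice sketch of that baseline update — all names and parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def som_step(weights, x, lr, sigma):
    """One online SOM update on a 1-D lattice of prototypes."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    idx = np.arange(len(weights))
    # Neighborhood function: lattice units near the winner follow it
    h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)
    return weights

rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, size=(10, 2))
for _ in range(2000):
    x = rng.uniform(0.0, 1.0, size=2) ** 2  # inputs concentrated near the origin
    som_step(weights, x, lr=0.1, sigma=1.0)
# Prototypes drift toward the frequently visited (low-valued) region.
```

Running this, the prototype cloud ends up denser near the origin, where the squared-uniform inputs concentrate — the density-following property the paper exploits.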
Natural language generation using automatically constructed lexical resources
Naho Ito, M. Hagiwara
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033329
One practical goal of neural network research is to give machines the ability to converse with humans. This paper proposes a novel natural language generation method using automatically constructed lexical resources. The proposed method employs two lexical resources: Kyoto University's case frame data and Google N-gram data. Word frequency in a case frame can be regarded as being obtained by Hebb's learning rule, and the co-occurrence frequencies in the Google N-gram data can be considered the product of an associative memory. The proposed method takes words as input and generates a sentence from case frames, using the Google N-gram data to account for co-occurrence frequency between words. Because we only use lexical resources that are constructed automatically, the proposed method has high coverage compared to other methods that rely on manually constructed templates. We carried out experiments to examine the quality of the generated sentences and obtained satisfactory results.
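As a toy illustration of the co-occurrence idea — not the authors' system, and with made-up counts standing in for the Google N-gram data — word choice during generation can be reduced to picking the candidate with the highest bigram count after the preceding word:

```python
from collections import defaultdict

# Hypothetical bigram counts standing in for the Google N-gram data
bigram_count = defaultdict(int)
bigram_count.update({("the", "cat"): 9, ("the", "dog"): 3, ("cat", "sat"): 5})

def best_next(prev_word, candidates):
    """Pick the candidate that co-occurs most often after prev_word."""
    return max(candidates, key=lambda w: bigram_count[(prev_word, w)])

choice = best_next("the", ["cat", "dog"])  # "cat": 9 beats 3
```

The real method fills case-frame slots rather than choosing free continuations, but the scoring signal is the same co-occurrence frequency.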
A new algorithm for graph mining
B. Chandra, Shalini Bhaskar
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033330
Mining frequent substructures has gained importance in the recent past. A number of algorithms have been presented for mining undirected graphs. The focus of this paper is on mining frequent substructures in directed labeled graphs, since this has a variety of applications in areas such as biology and web mining. A novel approach using the equivalence class principle is proposed for reducing the size of the graph database that must be processed to find frequent substructures. Candidate substructures are generated by a combination of the L-R join operation with serial and mixed extensions. This avoids missing any candidate substructures, while at the same time generating candidate substructures that have a high probability of becoming frequent.
Adaptive self-protective motion based on reflex control
T. Shimizu, R. Saegusa, Shuhei Ikemoto, H. Ishiguro, G. Metta
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033596
This paper describes a self-protective whole-body control method for humanoid robots. A set of postural reactions is used to create whole-body movements; the reactions are merged to cope with an arbitrary falling direction while allowing the upper limbs to contact obstacles safely. Collision detection is achieved by force sensing. We verified in simulation that our method generates the self-protective motion in real time and reduces the impact energy in multiple situations. We also verified that our system works adequately on the real robot.
Finding dependent and independent components from two related data sets
J. Karhunen, T. Hao
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033257
Independent component analysis (ICA) and blind source separation (BSS) are usually applied to a single data set. Both techniques are nowadays well understood, and several good methods based on somewhat varying assumptions about the data are available. In this paper, we consider an extension of ICA and BSS for separating mutually dependent and independent components from two different but related data sets. This problem is important in practice, because such data sets are common in real-world applications. We propose a new method that first uses canonical correlation analysis (CCA) to detect subspaces of independent and dependent components; standard ICA and BSS methods can then be used for the final separation of these components. The proposed method performs excellently on synthetic data sets for which the assumed data model holds exactly, and provides meaningful results for real-world robot grasping data. The method has a sound theoretical basis, is straightforward to implement, and is computationally not too demanding. Moreover, it has a very important by-product: it clearly improves the separation results provided by the FastICA and UniBSS methods used in our experiments. Not only are the signal-to-noise ratios of the separated sources often clearly higher, but CCA preprocessing also helps FastICA to separate sources that it alone is unable to separate.
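The CCA stage can be sketched with the standard thin-SVD (Björck–Golub) formulation: a leading canonical correlation near one flags a dependent direction shared by the two data sets, while near-zero correlations flag independent ones. The data below are synthetic stand-ins, not the paper's robot grasping data:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations of two data sets via thin SVDs."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    # Singular values of Ux^T Uy are the canonical correlations
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)

rng = np.random.default_rng(1)
n = 2000
shared = rng.laplace(size=(n, 1))  # one source present in both data sets
X = np.c_[shared + 0.05 * rng.normal(size=(n, 1)), rng.normal(size=(n, 1))]
Y = np.c_[shared + 0.05 * rng.normal(size=(n, 1)), rng.normal(size=(n, 1))]
corrs = canonical_correlations(X, Y)
# corrs[0] is near 1 (the dependent subspace); corrs[1] is near 0.
```

After splitting the spaces this way, an ICA method such as FastICA would be run separately within the dependent and independent subspaces, as the paper describes.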
Conditional multi-output regression
Chao Yuan
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033220
In multi-output regression, the goal is to establish a mapping from inputs to multivariate outputs, all of which are usually assumed unknown. In practice, however, some outputs may become available. How can we use this extra information to improve our prediction of the remaining outputs? For example, can we use the job data released today to better predict the house sales data to be released tomorrow? Most previous approaches use a single generative model of the joint predictive distribution of all outputs, from which unknown outputs are inferred conditionally on the known outputs. However, learning such a joint distribution over all outputs is very challenging, and it is also unnecessary if our goal is just to predict each of the unknown outputs. We propose a conditional model that directly models the conditional probability of a target output given both the inputs and all other outputs. A simple generative model is used to infer the other outputs when they are unknown. Both models consist only of standard regression predictors, such as Gaussian processes, which can be easily learned.
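The paper's central point — conditioning on a newly observed output can beat predicting from the inputs alone — is easy to check with ordinary least squares standing in for the Gaussian process predictors (synthetic data, illustrative only):

```python
import numpy as np

def lstsq_mse(features, target):
    """Fit an affine least-squares model and return its training MSE."""
    A = np.c_[features, np.ones(len(features))]
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return float(np.mean((target - A @ coef) ** 2))

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 1))
latent = rng.normal(size=(n, 1))                     # factor coupling the outputs
y1 = x + latent + 0.1 * rng.normal(size=(n, 1))      # output that becomes known
y2 = 2 * x + latent + 0.1 * rng.normal(size=(n, 1))  # target output

mse_inputs_only = lstsq_mse(x, y2)             # predict from inputs alone
mse_conditional = lstsq_mse(np.c_[x, y1], y2)  # also condition on the known output
```

Because `y1` carries information about the latent factor that also drives `y2`, the conditional model's error is far lower, which is exactly the leverage the conditional formulation is designed to capture.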
Sparse analog associative memory via L1-regularization and thresholding
R. Chalasani, J. Príncipe
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033470
The CA3 region of the hippocampus acts as an auto-associative memory and is responsible for the consolidation of episodic memory. Two important characteristics of such a network are the sparsity of the stored patterns and the nonsaturating firing rate dynamics. To construct such a network, we use a maximum a posteriori cost function, regularized with the L1-norm, to change the internal state of the neurons; a linear thresholding function is then used to obtain the desired output firing rate. We show how such a model leads to a more biologically plausible dynamic model that produces a sparse output and recalls with good accuracy when the network is presented with a corrupted input.
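The combination described here — minimizing an L1-regularized quadratic (MAP) cost and passing the internal state through a linear threshold — has the same shape as ISTA-style sparse coding. A hedged sketch with a random matrix standing in for the learned connections (all parameter values are illustrative, not the paper's):

```python
import numpy as np

def sparse_recall(D, y, lam=0.05, lr=0.3, steps=300):
    """Minimize 0.5*||y - D x||^2 + lam*||x||_1 over x >= 0 by
    gradient steps on the quadratic term followed by a linear threshold."""
    x = np.zeros(D.shape[1])
    for _ in range(steps):
        x = x - lr * (D.T @ (D @ x - y))   # gradient of the quadratic (MAP) term
        x = np.maximum(x - lr * lam, 0.0)  # linear thresholding -> sparse rates
    return x

rng = np.random.default_rng(2)
D = rng.normal(size=(100, 50)) / np.sqrt(100)  # stand-in connection matrix
x_true = np.zeros(50)
x_true[[3, 17, 31]] = 1.0                      # sparse stored pattern
y = D @ x_true + 0.01 * rng.normal(size=100)   # corrupted input cue
x_hat = sparse_recall(D, y)
# x_hat is sparse, with its largest entries on the stored pattern's support.
```

The thresholding is linear above zero rather than saturating, mirroring the nonsaturating firing rate dynamics the abstract emphasizes.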
A neuromorphic architecture from single transistor neurons with organic bistable devices for weights
Robert A. Nawrocki, S. Shaheen, R. Voyles
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033256
Artificial intelligence (AI) has made tremendous progress since it was first postulated in the 1950s. However, AI systems are primarily emulated on serial machine hardware, which results in high power consumption, especially when compared to their biological counterparts. Recent interest in neuromorphic architectures aims to emulate biological information processing more directly, achieving substantially lower power consumption for appropriate information processing tasks. We propose a novel way of realizing a neuromorphic architecture, termed a Synthetic Neural Network (SNN), that is modeled after conventional artificial neural networks and incorporates organic bistable devices as circuit elements that resemble the basic operation of a binary synapse. Via computer simulation we demonstrate how a single synthetic neuron, created with only a single transistor, a single bistable device per input, and two resistors, exhibits the behavior of an artificial neuron and approximates the sigmoidal activation function. We also show that, by increasing the number of bistable devices per input, a single neuron can be trained to behave like a Boolean logic AND or OR gate. To validate the efficacy of our design, we present two simulations in which the SNN is used as a pattern classifier of complicated, nonlinear relationships drawn from real-world problems. In the first example, the SNN performs the trained task of directional propulsion due to the water hammer effect with an average error of about 7.2%; the second task, robotic wall following, resulted in an SNN error of approximately 9.6%. Our simulations and analysis are based on the performance of organic electronic elements created in our laboratory.
Application of Cover's theorem to the evaluation of the performance of CI observers
F. Samuelson, David G. Brown
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033334
For any N points arbitrarily located in a d-dimensional space, Thomas Cover popularized and augmented a theorem that gives an expression for the number of the 2^N possible two-class dichotomies of those points that are separable by a hyperplane. Since separation of two-class dichotomies in d dimensions is a common problem addressed by computational intelligence (CI) decision functions or "observers," Cover's theorem provides a benchmark against which CI observer performance can be measured. We demonstrate that the performance of a simple perceptron approaches this ideal, and we show how a single-layer MLP and an SVM fare in comparison. We also show how Cover's theorem can be used to develop a procedure for CI parameter optimization and to serve as a descriptor of CI complexity. Both simulated and microarray genomic data are used.
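The benchmark itself is Cover's counting function: of the 2^N labelings of N points in general position, the number that a hyperplane through the origin in d dimensions can realize is C(N, d) = 2 Σ_{k=0}^{d-1} C(N-1, k). A direct sketch:

```python
from math import comb

def cover_count(n_points, dim):
    """Cover's function C(N, d): dichotomies of N points in general
    position separable by a hyperplane through the origin in d dims."""
    return 2 * sum(comb(n_points - 1, k) for k in range(dim))

# While N <= d, every one of the 2**N dichotomies is separable;
# at N = 2d (the "capacity") exactly half of them are.
all_separable = cover_count(3, 3)   # 8, i.e. all of 2**3
half_separable = cover_count(6, 3)  # 32, i.e. half of 2**6
```

Dividing `cover_count(N, d)` by `2**N` gives the ideal separation probability curve against which the perceptron, MLP, and SVM observers are compared.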
Metamodeling for large-scale optimization tasks based on object networks
L. Werbos, R. Kozma, Rodrigo Silva-Lugo, G. E. Pazienza, P. Werbos
Pub Date: 2011-10-03 | DOI: 10.1109/IJCNN.2011.6033602
Optimization in large-scale networks, such as large logistical networks and electric power grids involving many thousands of variables, is a very challenging task. In this paper, we present the theoretical basis and related experiments involving the development and use of visualization tools and improvements to existing best practices in managing optimization software, as preparation for the use of "metamodeling": the insertion of complex neural networks or other universal nonlinear function approximators into key parts of these complicated and expensive computations. This novel approach has been developed by the new Center for Large-Scale Integrated Optimization and Networks (CLION) at the University of Memphis, TN.