Title: Modeling sensory representations in brain: new methods for studying functional architecture reveal unique spatial patterns
Authors: D.R. Roberts, C. Koutsougeras, R. Nudo, C. Cusick
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404290
Abstract: We review experimental methods used to study cortical functional architecture, including the optical imaging technique. Information gained from studies of stimulus-evoked brain activity will aid our understanding of sensory coding and information processing in central nervous systems and should be incorporated into biologically plausible models of cortical function.

Title: Knowledge-based simulation of regulatory action in lambda phage
Authors: T. Shimada, M. Hagiya, M. Arita, S. Nishizaki, C. Tan
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404274
Abstract: We have developed a knowledge-based, discrete-event simulation system to simulate protein-regulated genetic action in lambda phage. Lambda phage is a virus which infects Escherichia coli (E. coli). Specifically, we simulate the decision between two developmental pathways, lytic growth and lysogenic growth, under conditions such as mutation. The novelty of this work is the employment of two different levels of abstraction in a genetic model for the purpose of achieving greater precision. Our model is composed of a roughly abstracted level for the noncritical parts, which constitute most of the model, and a precisely abstracted level for the critical parts. At the former level, the model is a discrete-event simulation in qualitative representation on a knowledge-based system; at the latter level, it is based on reaction formulae in quantitative representation.

Title: DNA evolutionary linguistics and RNA structure modeling: a computational approach
Authors: T. Yokomori, S. Kobayashi
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404281
Abstract: The authors analyse formal linguistic properties of DNA sequences, establishing a number of language-theoretic results on DNA sequences by computational methods. First, employing a formal language theoretic framework, the authors consider an evolutionary problem of DNA sequences, asking (1) how DNA sequences were initially created and then evolved into a language of a certain complexity, and (2) what primitive constructs were minimally required for the process of evolution. In terms of formal linguistic concepts, the authors present several results that provide their views on these questions at a conceptual level. Based on this formal analysis, the authors then choose a certain type of tree-generating grammar, tree adjunct grammars (TAGs), to attack the problem of modeling the secondary structure of RNA sequences. By proposing an extended model of TAGs, the authors demonstrate the usefulness of these grammars for modeling some typical RNA secondary structures, including "pseudoknots", which suggests that TAG families as RNA grammars have great potential for RNA secondary structure prediction.

Title: KYDON, a self-organized autonomous net: learning model and failure recovery
Authors: J. S. Mertoguno, G. Bourbakis
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404265
Abstract: In this paper, a learning model and a failure recovery approach for an autonomous vision system with a multi-layer architecture, called KYDON, are presented. The KYDON architecture consists of k layers of array processors. The lowest layers compose KYDON's low-level processing group, and the rest compose the higher-level processing groups. The interconnectivity of the processors in each array is based on a full hexagonal mesh structure. The lowest-layer array processors capture images from the environment by employing a 2-D photoarray. The topmost layer deals with image interpretation and understanding. The intermediate layers perform learning and pattern recognition processes to bridge the image information flow from the bottommost layer to the topmost one. KYDON uses graph models to represent and process the knowledge extracted from the image. An important feature of KYDON is that it does not need any host computer or control processor to handle I/O and other miscellaneous tasks. A novel learning model has been developed for KYDON's distributed knowledge base.

Title: Drosophila GRAIL: an intelligent system for gene recognition in Drosophila DNA sequences
Authors: Ying Xu, G. Helt, J. Einstein, G. Rubin, E. Uberbacher
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404269
Abstract: An AI-based system for gene recognition in Drosophila DNA sequences was designed and implemented. The system consists of two main modules, one for coding exon recognition and one for single gene model construction. The exon recognition module finds a coding exon by recognition of its splice junctions (or translation start) and coding potential. The core of this module is a set of neural networks which evaluate an exon candidate for the possibility of being a true coding exon using the "recognized" splice junction (or translation start) and coding signals. The recognition process consists of four steps: generation of an exon candidate pool, elimination of improbable candidates using heuristic rules, candidate evaluation by trained neural networks, and candidate cluster resolution and final exon prediction. The gene model construction module takes as input the clustered exon candidates and builds a "best" possible single gene model using an efficient dynamic programming algorithm. 129 Drosophila sequences consisting of 441 coding exons including 216,358 coding bases were extracted from GenBank and used to build statistical matrices and to train the neural networks. On this training set the system recognized 97% of the coding messages and predicted only 5% false messages. Among the "correctly" predicted exons, 68% match the actual exon exactly and 96% have at least one edge predicted correctly. On an independent test set consisting of 30 Drosophila sequences, the system recognized 96% of the coding messages and predicted 7% false messages.

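The abstract above does not spell out the dynamic programming step, but the standard formulation of "build the best single gene model from scored exon candidates" is a maximum-score chain over non-overlapping intervals. The following sketch is illustrative only (the function name, candidate format, and scoring are hypothetical, not the authors' code):

```python
# Illustrative sketch: pick the highest-scoring chain of mutually
# compatible (non-overlapping, left-to-right) exon candidates.
# Each candidate is (start, end, score), e.g. a neural-net score.

def best_gene_model(candidates):
    cands = sorted(candidates, key=lambda c: c[1])  # sort by end position
    n = len(cands)
    best = [0.0] * n   # best[i]: top total score of a chain ending at exon i
    prev = [-1] * n    # back-pointer for reconstructing the chain
    for i, (s_i, e_i, sc_i) in enumerate(cands):
        best[i] = sc_i
        for j in range(i):
            # exon j is compatible if it ends before exon i starts
            if cands[j][1] < s_i and best[j] + sc_i > best[i]:
                best[i] = best[j] + sc_i
                prev[i] = j
    i = max(range(n), key=lambda k: best[k]) if n else -1
    chain = []
    while i != -1:
        chain.append(cands[i])
        i = prev[i]
    return list(reversed(chain))
```

This naive version is O(n^2) in the number of candidates; a production system would add biological compatibility constraints (reading frame continuity, splice-site phase) to the `if` test.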
Title: Gene classification artificial neural system
Authors: Cathy H. Wu, Hsi-Lien Chen, Sheng-Chih Chen
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404273
Abstract: A gene classification artificial neural system has been developed for rapid annotation of the molecular sequencing data being generated by the Human Genome Project. Three neural networks have been implemented: one full-scale system to classify protein sequences according to PIR (protein identification resources) superfamilies, one system to classify ribosomal RNA sequences into RDP (ribosomal database project) phylogenetic classes, and one pilot system to classify proteins according to Blocks motifs. The sequence encoding schema involved an n-gram hashing method to convert molecular sequences into neural input vectors, an SVD (singular value decomposition) method to compress vectors, and a term weighting method to extract motif information. The neural networks used were three-layered, feed-forward networks that employed backpropagation or counter-propagation learning paradigms. The system runs one to two orders of magnitude faster than existing methods and has a sensitivity of 85 to 100%. The gene classification artificial neural system is available on the Internet, and may be extended into a gene identification system for classifying indiscriminately sequenced DNA fragments.

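The n-gram hashing step the abstract describes maps a variable-length sequence to a fixed-length count vector, one slot per possible n-gram over the residue alphabet; the resulting vectors can then be compressed by SVD. A minimal sketch under that reading (the function name is hypothetical, and the paper's exact hashing and weighting details are not reproduced here):

```python
# Minimal sketch of n-gram encoding for protein sequences: a sequence
# becomes a vector of n-gram counts over the 20**n possible n-grams.
from itertools import product

AMINO = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids

def ngram_vector(seq, n=2):
    # fixed slot index for every possible n-gram, e.g. 400 slots for n=2
    index = {"".join(g): i for i, g in enumerate(product(AMINO, repeat=n))}
    vec = [0] * len(index)
    for i in range(len(seq) - n + 1):
        gram = seq[i:i + n]
        if gram in index:  # skip grams containing non-standard residues
            vec[index[gram]] += 1
    return vec
```

For example, `ngram_vector("ACAC")` yields a 400-slot vector with count 2 for the bigram "AC" and count 1 for "CA". Stacking such vectors for a training set gives the matrix one would factor with SVD to obtain compressed network inputs.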
Title: A genetic algorithm for decomposition type choice in OKFDDs
Authors: R. Drechsler, B. Becker, Nicole Drechsler, A. Jahnke
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404257
Abstract: A genetic algorithm (GA) is applied to find decomposition type lists (DTLs) that minimize the size of ordered Kronecker functional decision diagrams (OKFDDs). OKFDDs are a data structure for representation and manipulation of Boolean functions. The choice of the DTL largely influences the size of the OKFDD, i.e. its size may vary from polynomial to exponential. In Drechsler, Becker, and Jahnke (1995) heuristic methods were presented. In this paper the authors show by experiments that better results can be obtained by using GAs.

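A DTL assigns one decomposition type (Shannon, positive Davio, or negative Davio) to each variable, so the GA searches over strings of length n over a three-letter alphabet. The sketch below shows that search loop generically; the operator choices (one-point crossover, point mutation, truncation selection) and all names are this sketch's assumptions, not the paper's configuration, and the OKFDD size evaluation is abstracted into a caller-supplied `fitness` function since no OKFDD package is assumed here:

```python
# Generic GA over decomposition type lists (DTLs). `fitness` maps a
# DTL (tuple of types, one per variable) to a score to MAXIMIZE --
# in the OKFDD setting, e.g. the negated node count of the diagram.
import random

DECOMP_TYPES = ("S", "pD", "nD")  # Shannon, positive/negative Davio

def evolve_dtl(num_vars, fitness, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.choice(DECOMP_TYPES) for _ in range(num_vars))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, num_vars)   # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.1:             # occasional point mutation
                child[rng.randrange(num_vars)] = rng.choice(DECOMP_TYPES)
            children.append(tuple(child))
        pop = survivors + children
    return max(pop, key=fitness)
```

With a toy fitness that counts Shannon entries, the loop quickly converges toward the all-"S" list, which is the shape of result one checks before plugging in a real OKFDD size evaluator.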
Title: A neural network model of the cortico-hippocampal interplay: contexts and generalization
Authors: A. Bibbig, T. Wennekers, G. Palm
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404282
Abstract: We present computer simulations of a neural network comprising two sensory pathways, each built of preprocessing and associative memory modules perhaps corresponding to a primary and higher sensory area, and a hippocampal area that serves as an integration or fusion zone during learning and retrieval of polymodal information. The network is able to store unimodal details about a complex environment in local assemblies restricted to the corresponding associative memory, whereas a representation of the simultaneous occurrences of several stimuli is constituted and stored in a self-organizing manner in the hippocampal area. This can be viewed as storage of a "particular context". If many stimulus constellations are presented to the network during learning, it may over-learn; that is, the hippocampal area can no longer distinguish particular situations, but instead represents more general contexts or categories that a given environmental situation may belong to. Feedback from the hippocampal region to the association areas can restore particular memories; even when overloaded, it can still act as a threshold-control gate, raising sensitivity in the appropriate cortical regions.

Title: Hybrid nets with variable parameters: a novel approach to fast learning under backpropagation
Authors: Jun Han, C. Moraga
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404277
Abstract: This paper presents a novel approach under regular backpropagation. We introduce hybrid neural nets that have different activation functions for different layers in fully connected feed-forward neural nets. We change the parameters of the activation functions in the hidden layers and the output layer to accelerate learning and to reduce oscillation, respectively. Results on the two-spirals benchmark are presented which are better than any previously published results for backpropagation feed-forward nets using monotone activation functions.

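"Variable parameters" here refers to adapting parameters of the activation functions themselves, not just the weights. A common concrete instance (used as an illustration only; the paper's exact parameterization and update rules are not reproduced) is a logistic activation with a tunable slope, whose gradient with respect to that slope is the extra term fed into backpropagation:

```python
# Logistic activation with a tunable slope (gain) parameter beta.
# In a variable-parameter scheme, beta adapts during training
# alongside the weights instead of staying fixed at 1.
import math

def sigmoid(x, beta=1.0):
    return 1.0 / (1.0 + math.exp(-beta * x))

def d_sigmoid_dbeta(x, beta):
    # d/dbeta [1 / (1 + e^(-beta*x))] = x * y * (1 - y), with y = sigmoid
    y = sigmoid(x, beta)
    return x * y * (1.0 - y)
```

A larger beta sharpens the transition (e.g. `sigmoid(1.0, 4.0) > sigmoid(1.0, 1.0)`), which is one mechanism by which adapting activation parameters can speed up learning on hard benchmarks like the two spirals.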
Title: A new model for the cognitive process. Artificial cognition
Authors: R. Keene
Venue: Proceedings First International Symposium on Intelligence in Neural and Biological Systems (INBS'95)
Pub Date: 1995-05-29
DOI: 10.1109/INBS.1995.404261
Abstract: A theory is presented: that a subsumptive neural system coupled with a semi-randomly connected, teachable neural net will result in cognitive behavior similar to what appears to happen in biological brains. The paper discusses a new theory of what cognition is, and an algorithm for the simulation of cognition. The topics of what the brain appears to do, why the brain provides the functions it does, and how this could be simulated are discussed. The intent is to arrive at a single unified algorithm that covers all functions of the brain.