M. Felleisen, M. Wand, Daniel P. Friedman, B. Duba
Continuation semantics is the traditional mathematical formalism for specifying the semantics of non-local control operations. Modern Lisp-style languages, however, contain advanced control structures like full functional jumps and control delimiters for which continuation semantics is insufficient. We solve this problem by introducing an abstract domain of rests of computations with appropriate operations. Beyond being useful for the problem at hand, these abstract continuations turn out to have applications in a much broader context, e.g., the explication of parallelism, the modeling of control facilities in parallel languages, and the design of new control structures.
"Abstract continuations: a mathematical semantics for handling full jumps." M. Felleisen, M. Wand, Daniel P. Friedman, and B. Duba. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62684
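The "abstract continuation" view is easiest to see when the rest of the computation is reified as an ordinary function. The following sketch is ours, not the paper's, and uses Python for concreteness with illustrative `prompt`/`control` names: a tiny CPS evaluator in which a delimiter simply installs the empty continuation, so a capture operator sees only the rest of the computation up to that delimiter.

```python
# Hedged sketch of delimited control via CPS. The encoding and the
# operator names (prompt/control) are illustrative, not the paper's.

def ev(expr, k):
    tag = expr[0]
    if tag == "const":
        return k(expr[1])
    if tag == "add":
        return ev(expr[1], lambda a: ev(expr[2], lambda b: k(a + b)))
    if tag == "prompt":
        # Delimit: run the body with the empty continuation, then pass
        # the delimited result on to the surrounding continuation k.
        return k(ev(expr[1], lambda v: v))
    if tag == "control":
        # Capture the continuation up to the nearest prompt as a value f;
        # expr[1] is a host function taking (f, k0) for brevity.
        return expr[1](k, lambda v: v)
    raise ValueError(tag)

# (prompt (+ 1 (control (lambda (f) (f (f 0))))))
# f is "add 1 up to the prompt", so f(f(0)) = 2.
prog = ("prompt",
        ("add", ("const", 1),
         ("control", lambda f, k0: k0(f(f(0))))))
print(ev(prog, lambda v: v))  # prints 2
```

Because `prompt` resets the continuation to the identity, the `control` operator captures only the delimited rest of the computation, which is exactly the behavior ordinary continuation semantics cannot express with a single unbounded continuation.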
Combinator reduction is a well-known implementation technique for executing functional programs. In this paper we present a new method for parallel combinator reduction based on viewing combinators simply as “graph mutators.” We show that each combinator in Turner's standard set can be expressed using two primitive operations on a binary graph — one to alter an edge and one to insert a vertex — and four symmetric variants of them. We call these primitive operations graphinators, and present a single 7-step graphinator sequence which implements the reduction rules for all combinators in the set. This sequence allows redexes involving any of the combinators to be reduced in parallel on a SIMD machine. We have implemented a graph reducer on the Connection Machine based on these results, together with a novel execution strategy called prudent evaluation. Preliminary performance results suggest that our implementation does reasonably well, significantly better than previous efforts, but perhaps still not well enough to be practical. Nevertheless, the approach suggests a new way of thinking about program execution, and we have thoughts on how to improve our implementation.
"Graphinators and the duality of SIMD and MIMD." P. Hudak and Eric Mohr. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62714
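To make the "graph mutator" view concrete, here is a hedged toy reducer (our own encoding, not the paper's seven-step graphinator sequence) in which every reduction step uses only the two kinds of primitive mutation described above: overwriting a cell's edges in place, and allocating a fresh application vertex.

```python
# Hedged sketch: graph reduction for I, K, S where cells are mutable
# Python lists, so a redex is rewritten by in-place edge redirection
# (cell[:] = ...) and vertex insertion (app(...)). Sharing is preserved.

def app(f, a): return ["app", f, a]
def comb(c):   return ["comb", c]

def whnf(root):
    """Reduce root to weak head normal form."""
    while True:
        spine, cell = [], root
        while cell[0] == "app":          # unwind the leftmost spine
            spine.append(cell)
            cell = cell[1]
        if cell[0] != "comb":
            return root
        c, n = cell[1], len(spine)
        if c == "I" and n >= 1:
            spine[-1][:] = spine[-1][2]  # I x => x   (edge redirection)
        elif c == "K" and n >= 2:
            spine[-2][:] = spine[-1][2]  # K x y => x (edge redirection)
        elif c == "S" and n >= 3:
            x, y, z = spine[-1][2], spine[-2][2], spine[-3][2]
            # S x y z => (x z)(y z): two fresh vertices, z shared.
            spine[-3][:] = app(app(x, z), app(y, z))
        else:
            return root                  # underapplied: already in WHNF

r = app(app(app(comb("S"), comb("K")), comb("K")), ["num", 42])
print(whnf(r))  # prints ['num', 42]  (S K K behaves as the identity)
```

Because each rule touches the graph only through these uniform mutations, many independent redexes could in principle be rewritten in lockstep, which is the intuition behind running the reducer on a SIMD machine.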
Standard ML includes a set of module constructs that support programming in the large. These constructs extend ML's basic polymorphic type system by introducing the dependent types of Martin-Löf's Intuitionistic Type Theory. This paper discusses the problems involved in implementing Standard ML's modules and describes a practical, efficient solution to these problems. The representations and algorithms of this implementation were inspired by a detailed formal semantics of Standard ML developed by Milner, Tofte, and Harper. The implementation is part of a new Standard ML compiler that is written in Standard ML using the module system.
"An implementation of standard ML modules." David B. MacQueen. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62704
The Milner Calculus is the typed λ-calculus underlying the type system for the programming language ML [Har86] and several other strongly typed polymorphic functional languages such as Miranda [Tur86] and SPS [Wan84]. Mycroft [Myc84] extended the problematical typing rule for recursive definitions and proved that the resulting calculus, termed the Milner-Mycroft Calculus here, is sound with respect to Milner's [Mil78] semantics and that it preserves the principal typing property [DM82] of the Milner Calculus. The extension is of practical significance in typed logic programming languages [MO84] and, more generally, in any language with (mutually) recursive definitions. Mycroft did not, however, solve the decidability problem for typings in this calculus. This was an open problem independently raised also by Meertens [Mee83]. The decidability question was answered in the affirmative only recently by Kfoury et al. [KTU88]. We show that the type inference problems in the Milner and Milner-Mycroft Calculi can be reduced to solving equations and inequations between first-order terms, a problem we term semi-unification. We show that semi-unification problems have most general solutions, in analogy to unification problems, which translates into principal typing properties for the underlying calculi. In contrast to the (essentially) nonconstructive methods of [KTU88], we present functional specifications, which we prove partially correct, for computing the most general solution of semi-unification problems, and we devise a concrete nondeterministic algorithm on a graph-theoretic representation for computing these most general solutions. Finally, we point out some erroneous statements about the efficiency of polymorphic type checking that have persisted in the literature, including an incorrect claim, submitted by ourselves, of polynomial-time type checking in the Milner-Mycroft Calculus.
"Type inference and semi-unification." F. Henglein. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62701
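For readers unfamiliar with the target of the reduction: semi-unification extends ordinary first-order unification, which solves only equations s = t, with inequations s ≤ t asking whether some further instance of s matches t. The equational special case is the familiar algorithm, sketched below with a term encoding of our own choosing.

```python
# Hedged sketch of first-order unification (the special case that
# semi-unification generalizes). Variables are strings; an application
# of constructor f to arguments is the tuple ("f", arg1, ...).

def walk(t, s):
    # Follow variable bindings in substitution s.
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    return t == v or (isinstance(t, tuple) and any(occurs(v, x, s) for x in t[1:]))

def unify(a, b, s):
    """Return an extended substitution solving a = b, or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str):
        return None if occurs(a, b, s) else {**s, a: b}
    if isinstance(b, str):
        return None if occurs(b, a, s) else {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# arrow(a, b) = arrow(int, c) binds a to int and equates b with c.
s = unify(("arrow", "a", "b"), ("arrow", ("int",), "c"), {})
print(s["a"])  # prints ('int',)
```

A semi-unification problem additionally requires, for each inequation, a matching substitution applied only to the left-hand side; computing most general solutions for such systems is the paper's contribution and needs machinery beyond this sketch.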
The goal of this paper is to describe an open-ended type system for Lisp with explicit and full control of bit-level data representations. This description uses a reflective architecture based on a metatype facility. This low-level formalism solves the problem of a harmonious design of a class taxonomy inside a type system. A prototype for this framework has been written in Le-Lisp and is used to build the integrated type and object systems of the EU_LISP proposal.
"An open-ended data representation model for EU_LISP." C. Queinnec and P. Cointe. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62722
Many functional languages have a construction to define inductive data types [Hoa75] (also called general structured types [Pey87], structures [Lan64], datatypes [Mil84], and free algebras [GTWW77]). An inductive definition of a data type can also be seen as a grammar for a language, and the elements of the data type as the phrases of the language. So defining an inductive data type can be seen as introducing an embedded language of values into the programming language. This correspondence is, however, not fully exploited in existing functional languages. The elements can presently only be written in a very restricted form: they are just the parse trees of the elements, written in prefix form. A generalization, which we consider in this paper, is to allow the elements to be written in a more general form. Instead of directly writing the parse trees of the embedded language, we would like to use a more concrete syntactic form and let an automatically generated parser translate the concrete syntactic form to the corresponding parse tree. We think that this is especially useful when we manipulate languages in programs, for example, when implementing compilers, interpreters, program transformation systems, and programming logics. It is also convenient if we want to use the concrete syntax for other kinds of data in a program.
"Concrete syntax for data objects in functional languages." Annika Aasa, Kent Petersson, and Dan Synek. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62688
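The idea can be made concrete with a toy grammar. In the hedged Python sketch below (grammar and constructor names are ours), instead of writing the parse tree ("add", ("num", 1), ("mul", ("num", 2), ("num", 3))) in prefix form, a small parser lets us write the element in concrete syntax as "1+2*3" and translates it to the parse tree.

```python
# Hedged sketch: a recursive-descent parser mapping concrete syntax for
# a tiny inductive datatype of arithmetic expressions to its parse trees.
import re

TOKENS = re.compile(r"\d+|[+*()]")

def parse(src):
    toks = TOKENS.findall(src)
    pos = 0
    def peek():
        return toks[pos] if pos < len(toks) else None
    def eat():
        nonlocal pos
        t = toks[pos]
        pos += 1
        return t
    def expr():                      # expr ::= term ('+' term)*
        t = term()
        while peek() == "+":
            eat()
            t = ("add", t, term())
        return t
    def term():                      # term ::= atom ('*' atom)*
        t = atom()
        while peek() == "*":
            eat()
            t = ("mul", t, atom())
        return t
    def atom():                      # atom ::= number | '(' expr ')'
        if peek() == "(":
            eat()
            t = expr()
            assert eat() == ")", "expected ')'"
            return t
        return ("num", int(eat()))
    tree = expr()
    assert pos == len(toks), "trailing input"
    return tree

print(parse("(1+2)*3"))  # prints ('mul', ('add', ('num', 1), ('num', 2)), ('num', 3))
```

In the setting the paper proposes, such a parser would be generated automatically from the datatype declaration rather than written by hand as here.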
Buckwheat is a working implementation of a functional language on the Encore Multimax multiprocessor. It is based on a heterogeneous abstract machine model consisting of both graph reduction and stack oriented execution. Buckwheat consists of two major components: a compiler and a run-time system. The task of the compiler is to detect the exploitable parallelism in programs written in ALFL, a conventional functional language. The run-time system supports processor scheduling, dynamic typing and storage management. In this paper we describe the organization, execution model, and scheduling policies of the Buckwheat run-time system. A large number of experiments have been performed and we present the results.
"Buckwheat: graph reduction on a shared-memory multiprocessor." B. Goldberg. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62683
This article presents a model of the reflective tower based on the formal semantics of its levels. They are related extensionally by their mutual interpretation and intensionally by reification and reflection. The key points obtained here are: a formal relation between the semantic domains of each level; a formal identification of reification and reflection; the visualisation of intensional snapshots of a tower of interpreters; a formal justification and a generalization of Brown's meta-continuation; a (structural) denotational semantics for a compositional subset of the model; the distinction between making continuations jumpy and pushy; the discovery of the tail-reflection property; and a Scheme implementation of a properly tail-reflective and single-threaded reflective tower. Section 1 presents the new approach taken here: rather than implementing reification and reflection leading to a tower, we consider an infinite tower described by the semantics of each level and relate these by reification and reflection. Meta-circularity then gives sufficient conditions for implementing it. Section 2 investigates some aspects of the environments and control in a reflective tower. An analog of the funarg problem is pointed out, in relation with the correct environment at reification time. Jumpy and pushy continuations are contrasted, and the notions of ephemeral level and proper tail-reflection are introduced. Our approach is compared with related work and after a conclusion, some issues are proposed.
"Intensions and extensions in a reflective tower." O. Danvy and Karoline Malmkjær. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62725
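The jumpy/pushy distinction mentioned above can be shown in miniature once a continuation is reified as a function: invoking a jumpy continuation discards the continuation in force at invocation time, while a pushy one composes with it. A hedged sketch, with names of our own:

```python
# Hedged sketch: jumpy vs. pushy invocation of a captured continuation f,
# where k is the continuation in force at the point of invocation.

def jumpy_invoke(f, v, k):
    return f(v)        # k is discarded: control jumps

def pushy_invoke(f, v, k):
    return k(f(v))     # f's result flows on to k: continuations compose

inc = lambda v: v + 1  # a captured continuation, "add 1"
dbl = lambda v: v * 2  # the current continuation at invocation time

print(jumpy_invoke(inc, 5, dbl))  # prints 6  (dbl discarded)
print(pushy_invoke(inc, 5, dbl))  # prints 12 (6, then doubled)
```

Jumpy invocation is the behavior of Scheme's call/cc continuations; pushy invocation treats the captured continuation as an ordinary composable function, the reading the abstract associates with a functional reflective tower.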
Current programming languages tend to have several different mechanisms that provide parameterization. For example, most languages have both variables, which communicate parameter values primarily within routines, and procedure invocations, which communicate parameter values primarily between routines. Depending on the situation, one mechanism or the other must be used. The mechanisms also tend to involve more than just parameterization; a procedure call, for example, also implies a transfer of control. The principle of orthogonal design for programming languages suggests that it would be desirable to have a single parameterization mechanism that can be used in all situations and that doesn't affect anything but parameterization. We consider what properties such a mechanism would have to have, concluding, for example, that parameters must be nameable, and, more importantly, that all the parameterization facilities that are available in language expressions must also be available in data objects. These properties, in turn, put requirements on the underlying semantic structure of a programming language. We develop a formal system of parameterization with those properties. The system has theoretical power equivalent to the lambda calculus, and is about the same size. It can serve as the basis of all parameterization in a functional programming language, being able to express constructions like procedure call, variable binding, mutual recursion, and module linkage. In addition to being able to express the common parameterization constructions, the uniformity and universality of the system offer improved modularity, extensibility, and simplicity. It is especially useful for applications that need to create new code at runtime.
"A unified system of parameterization for programming languages." J. Lamping. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62724
We show that the problem of partial type inference in the nth-order polymorphic λ-calculus is equivalent to nth-order unification. On the one hand, this means that partial type inference in polymorphic λ-calculi of order 2 or higher is undecidable. On the other hand, higher-order unification is often tractable in practice, and our translation entails a very useful algorithm for partial type inference in the ω-order polymorphic λ-calculus. We present an implementation in λProlog in full.
"Partial polymorphic type inference and higher-order unification." F. Pfenning. Proceedings of the 1988 ACM Conference on LISP and Functional Programming. doi:10.1145/62678.62697