We present relational interpreters for several subsets of Scheme, written in the pure logic programming language miniKanren. We demonstrate these interpreters running "backwards"---that is, generating programs that evaluate to a specified value---and show how the interpreters can trivially generate quines (programs that evaluate to themselves). We demonstrate how to transform environment-passing interpreters written in Scheme into relational interpreters written in miniKanren. We show how constraint extensions to core miniKanren can be used to allow shadowing of the interpreter's primitive forms (using the absent° tree constraint), and to avoid having to tag expressions in the languages being interpreted (using disequality constraints and symbol/number type-constraints), simplifying the interpreters and eliminating the need for parsers/unparsers. We provide four appendices to make the code in the paper completely self-contained. Three of these appendices contain new code: the complete implementation of core miniKanren extended with the new constraints; an extended relational interpreter capable of running factorial and doing list processing; and a simple pattern matcher that uses Dijkstra guards. The other appendix presents our preferred version of code that has been presented elsewhere: the miniKanren relational arithmetic system used in the extended interpreter.
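The paper's quines are generated automatically by running a relational Scheme interpreter backwards; that generation is not shown here, but as a reminder of what a quine is, here is a classic hand-written one, given in JavaScript for illustration.

```javascript
// A quine prints exactly its own source text. The single expression on
// the last line, taken by itself, is one: stringifying the function f
// recovers its source, and wrapping it in '(' ... ')()' rebuilds the
// whole expression.
(function f(){console.log('('+f+')()')})()
```

This relies on `Function.prototype.toString` returning the function's exact source text, which modern engines guarantee.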
"miniKanren, live and untagged: quine generation via relational interpreters (programming pearl)". William E. Byrd, Eric Holk, Daniel P. Friedman. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661105
Many languages include a syntax for declaring programmer-defined structured data types, i.e., structs or records. R6RS supports syntactic record definitions but also allows records to be defined procedurally, i.e., via a set of run-time operations. Indeed, the procedural interface is considered to be the primitive interface, and the syntactic interface is designed to be macro expandable into code that uses the procedural interface. Run-time creation of record types has a potentially significant impact. In particular, record creation, field access, and field mutation cannot generally be open coded, as it can be with syntactically specified records. Often, however, the shape of a record type can be determined statically, and in such a case, performance equivalent to that of syntactically specified record types can be attained. This paper describes an efficient run-time implementation of procedural record types, discusses its overhead, and describes a set of compiler optimizations that eliminate the overhead when record-type information can be determined statically. The optimizations improve the performance of a set of representative benchmark programs by over 20% on average.
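What "procedurally defined" records mean can be sketched as follows (in JavaScript, with invented names; this is not the R6RS API). The record type is a run-time value, and constructors and accessors are created from it by run-time operations, so the accessor must carry a field-index lookup. That indirection is the kind of overhead the paper's optimizations remove when the record shape is known statically.

```javascript
// Invented names, for illustration only: a run-time (procedural) record layer.
function makeRecordType(name, fields) {
  return { name, fields };
}
function makeConstructor(rtd) {
  return (...vals) => ({ rtd, vals });
}
function makeAccessor(rtd, field) {
  const i = rtd.fields.indexOf(field); // resolved once, when the accessor is made
  return (record) => record.vals[i];   // cannot be open coded in general
}

const point = makeRecordType("point", ["x", "y"]);
const makePoint = makeConstructor(point);
const pointX = makeAccessor(point, "x");
console.log(pointX(makePoint(3, 4))); // 3
```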
"A sufficiently smart compiler for procedural records". Andrew W. Keep, R. Dybvig. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661107
This paper describes an approach for compiling Scheme's tail calls and first-class continuations to JavaScript, a dynamic language without those features. Our approach is based on the use of a simple custom virtual machine intermediate representation that is translated to JavaScript. We compare this approach, which is used by the Gambit-JS compiler, to the Replay-C algorithm, used by Scheme2JS (a derivative of Bigloo), and Cheney on the MTA, used by Spock (a derivative of Chicken). We analyse the performance of the three systems with a set of benchmark programs on recent versions of four popular JavaScript VMs (V8, SpiderMonkey, Nitro and Chakra). On the benchmark programs, all systems perform best when executed with V8 and our approach is consistently faster than the others on all VMs. For some VMs and benchmarks our approach is moderately faster than the others (below a factor of 2), but in some cases there is a very large performance gap (with Nitro there is a slowdown of up to 3 orders of magnitude for Scheme2JS, and up to 2 orders of magnitude for Spock).
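The paper's approach compiles through a custom VM intermediate representation; as a point of contrast, a minimal trampoline is a well-known alternative way to get unbounded tail calls on a host language without them: tail calls return thunks, and a driver loop "bounces" them so the JavaScript stack never grows.

```javascript
// Minimal trampoline: keep calling returned thunks until a non-function
// value (the final result) comes back.
function trampoline(thunk) {
  while (typeof thunk === "function") thunk = thunk();
  return thunk;
}

// Mutual tail recursion far deeper than the native JS stack allows.
const even = (n) => (n === 0 ? true : () => odd(n - 1));
const odd = (n) => (n === 0 ? false : () => even(n - 1));

console.log(trampoline(() => even(100000))); // true
```

Trampolining is simple but taxes every tail call with a closure allocation and a dispatch, which is part of why compilers such as those compared in the paper pursue other strategies.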
"Efficient compilation of tail calls and continuations to JavaScript". Eric Thivierge, M. Feeley. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661108
Gradual typing is an approach to integrating static and dynamic type checking within the same language [Siek and Taha 2006]. Given the name "gradual typing", one might think that the most interesting aspect is the type system. It turns out that the dynamic semantics of gradually-typed languages is more complex than the static semantics, with many points in the design space [Wadler and Findler 2009; Siek et al. 2009] and many challenges concerning efficiency [Herman et al. 2007; Hansen 2007; Siek and Taha 2007; Siek and Wadler 2010; Wrigstad et al. 2010; Rastogi et al. 2012]. In this distilled tutorial, we explore the meaning of gradual typing and the challenges to efficient implementation by writing several definitional interpreters and abstract machines in Scheme for the gradually-typed lambda calculus.
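To make the dynamic-semantics point concrete: in a gradually-typed language, a value flowing from dynamically typed code into statically typed code crosses a cast, which must check at run time and signal a cast error on failure. A toy first-order cast, in JavaScript for illustration (this is not the paper's interpreters):

```javascript
// Toy first-order cast. "dynamic" accepts anything; a base type such as
// "number" is checked against the run-time tag, and a mismatch signals
// a cast error.
function cast(value, type) {
  if (type === "dynamic") return value;      // casting to dynamic never fails
  if (typeof value === type) return value;   // e.g. "number", "boolean", "string"
  throw new TypeError(`cast error: ${JSON.stringify(value)} is not a ${type}`);
}

console.log(cast(42, "number"));  // 42
console.log(cast(42, "dynamic")); // 42
// cast("hi", "number") signals a cast error
```

Higher-order casts are where the cited design-space and efficiency questions arise: a cast at a function type cannot be checked eagerly, so the function must be wrapped to check arguments and results at every call.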
"Interpretations of the gradually-typed lambda calculus". Jeremy G. Siek, Ronald Garcia. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661112
Self-reconfigurable, modular robots are distributed mechatronic devices that can change their physical shape; modules are programmed individually but must coordinate across the robot. Programming modular robots is difficult due to the complexity of programming a distributed embedded system with a dynamically evolving topology. We are currently experimenting with programming-language abstractions to help overcome these difficulties. This tutorial describes a few such experiments using Scheme to control simulated modular robots.
"Using scheme to control simulated modular robots". U. Schultz. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661114
Teaching beginners how to program is hard: As knowledge about systematic construction of programs is quite young, knowledge about the didactics of the discipline is even younger and correspondingly incomplete. Developing and refining an introductory-programming course for more than a decade, we have learned that designing a successful course is a comprehensive activity and teachers should consider and question all aspects of a course. We doubt reports of sweeping successes in introductory-programming classes by the use of just one single didactic device---including claims that "switching to Scheme" magically turns a bad course into a good one. Of course, the choice of individual devices (including the use of Scheme) does matter, but for teaching an effective course the whole package counts. This paper describes the basic ideas and insights that have driven the development of our introductory course. In particular, a number of conclusions about effective teaching were not as we had originally expected.
"Form over function: teaching beginners how to construct programs". Michael Sperber, Marcus Crestani. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661113
LAML is a software system that brings XML languages into Scheme as a collection of Scheme functions. The XML languages are defined by XML document type definitions (DTDs). We review the development of LAML during more than a decade, and we collect the experiences from these efforts. The paper describes four substantial applications that have been developed on top of the LAML libraries.
"Scheme on the web and in the classroom: a retrospective about the LAML project". K. Nørmark. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661104
Quasiquotation in Scheme is nearly ideal for implementing programs that generate other programs. These programs lack only the ability to generate fresh bound identifiers, as required to make such code-manipulating programs hygienic, but any Scheme programmer knows how to provide this ability using gensym. In this tutorial we investigate hygienic quasiquotation in Scheme and in languages influenced by Scheme. Stepping back from implementation issues, we first identify the source of the freshness condition in the semantics of a hygienic quasiquotation facility. We then show how gensym is needed to break a meta-circularity in interpreters and compilers for hygienic quasiquotation. Finally, following our recent work, we present a type system for hygienic quasiquotation that supports evaluation under dynamic λ-abstraction, manipulation of open code, a first-class eval function, and mutable state. This tutorial outlines Scheme programs implementing an interpreter, a compiler, and a macro for hygienic quasiquotation.
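The gensym device the abstract refers to is tiny; a counter-based sketch, in JavaScript for illustration. Note the hedge in the comment: a real gensym must return a name that cannot clash with any identifier in the code being generated, whereas this sketch only guarantees distinctness among its own outputs.

```javascript
// Illustrative counter-based gensym: each call yields a name not yet
// produced by gensym itself. Guaranteeing freshness with respect to ALL
// identifiers in the manipulated program requires more care.
let counter = 0;
function gensym(prefix = "g") {
  return `${prefix}_${counter++}`;
}

console.log(gensym("x")); // a fresh name such as "x_0"
console.log(gensym("x")); // a different fresh name
```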
"Hygienic quasiquotation in scheme". Morten Rhiger. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661109
The performance of programs running in JavaScript engines is notoriously difficult to predict. Indeed, JavaScript is a complex language and, due to time constraints and limited engineering resources, all popular virtual machines optimize only a subset of the language. Code that runs outside this (non-obvious) sweet spot can pay huge performance penalties. JavaScript engines generally have at least two modes of operation: one non-optimized and one optimized. Initially, all functions are compiled to run in the non-optimized mode. Heuristics, such as statistical profilers or invocation counters, then trigger the expensive recompilation of hot methods. Methods frequently see a performance improvement of an order of magnitude when they run in optimized mode. It is hence crucial that programs spend their time in optimized code. There are several ways compilers can achieve this:
• Avoid statements that cannot be optimized by the JIT. Indeed, V8 still cannot generate optimized code for all JavaScript constructs.
• Avoid bailouts. Optimized code is generated under the assumption that it will run with dynamic types similar to those seen before. If that assumption fails, the optimized code must be thrown away.
• Make code monomorphic. Optimized code is more efficient if it specializes for fewer dynamic types. Frequently it is possible to reduce the number of types by duplicating functions.
Knowing when and where to apply these tips is almost impossible without proper tool support. In this tutorial I will discuss the listed optimization techniques and present the tools that allow investigation of V8-generated code. In particular, I will focus on V8's tracing flags, which report when methods are (de)optimized or inlined; Hydrogen traces, which represent V8's intermediate representation; and assembly dumps. During the talk I will concentrate on Scheme-to-JavaScript compilation, but all talking points will be of interest to any developer writing JavaScript code.
http://floitsch.blogspot.com/2012/03/optimizing-for-v8-introduction.html
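The "make code monomorphic" tip can be made concrete with a small example (illustrative only; actual optimization and deoptimization behavior is engine- and version-dependent):

```javascript
// Property access inside norm2 can be specialized by the JIT when every
// caller passes objects of the same shape (hidden class).
function norm2(p) {
  return p.x * p.x + p.y * p.y;
}

// Monomorphic call site: every argument has the same shape {x, y}.
for (let i = 0; i < 1000; i++) norm2({ x: i, y: i });

// Polymorphic call site: two shapes ({x, y} vs {y, x, z}) reach the same
// function, forcing a polymorphic inline cache. Duplicating norm2 per
// shape is the "reduce the number of types by duplicating functions" trick.
for (let i = 0; i < 1000; i++) norm2(i % 2 ? { x: i, y: i } : { y: i, x: i, z: 0 });

console.log(norm2({ x: 3, y: 4 })); // 25
```

Object literals with the same property names in a different order get different hidden classes, which is why the second loop mixes shapes even though both objects have `x` and `y`.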
"Optimizing JavaScript code for V8". Florian Loitsch. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661111
AspectScheme is an implementation of the Scheme programming language [5] built on MzScheme [4], providing support for pointcut-and-advice aspect-oriented programming. To use it, place

#lang racket
(require (planet dutchyn/aspectscheme:1:0/aspectscheme))

in your code before using any AspectScheme features.
"AspectScheme: aspects in higher-order languages". Christopher Dutchyn. Scheme and Functional Programming, 2012-09-09. https://doi.org/10.1145/2661103.2661110