
Proceedings of the 11th Workshop on Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems: Latest Publications

Sulong - execution of LLVM-based languages on the JVM: position paper
Manuel Rigger, Matthias Grimmer, H. Mössenböck
For the last decade, the Java Virtual Machine (JVM) has been a popular platform to host languages other than Java. Language implementation frameworks like Truffle allow the implementation of dynamic languages such as JavaScript or Ruby with competitive performance and completeness. However, statically typed languages are still rare on Truffle. We present Sulong, an LLVM IR interpreter that brings all LLVM-based languages, including C, C++, and Fortran, to the JVM in one stroke. Executing these languages on the JVM enables a wide range of future research, including high-performance interoperability between high-level and low-level languages, combination of static and dynamic optimizations, and memory-safe execution of otherwise unsafe and unmanaged languages.
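Sulong itself is built on the Truffle framework and the Graal compiler; as a rough, hypothetical illustration of what interpreting LLVM IR on the JVM involves, the following self-contained Java sketch models two LLVM-style instructions as interpreter nodes executing against a virtual-register file. All class and method names are invented for this sketch and do not reflect the real Sulong or Truffle APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of an LLVM IR interpreter: each IR
// instruction becomes a node that reads and writes virtual registers.
public class MiniLLVMInterpreter {

    interface Instruction {
        void execute(Map<String, Long> registers);
    }

    // %dst = add i64 %lhs, %rhs
    static final class Add implements Instruction {
        final String dst, lhs, rhs;
        Add(String dst, String lhs, String rhs) { this.dst = dst; this.lhs = lhs; this.rhs = rhs; }
        public void execute(Map<String, Long> regs) {
            regs.put(dst, regs.get(lhs) + regs.get(rhs));
        }
    }

    // %dst = mul i64 %lhs, %rhs
    static final class Mul implements Instruction {
        final String dst, lhs, rhs;
        Mul(String dst, String lhs, String rhs) { this.dst = dst; this.lhs = lhs; this.rhs = rhs; }
        public void execute(Map<String, Long> regs) {
            regs.put(dst, regs.get(lhs) * regs.get(rhs));
        }
    }

    public static void main(String[] args) {
        Map<String, Long> registers = new HashMap<>();
        registers.put("%a", 6L);
        registers.put("%b", 7L);
        Instruction[] block = {
            new Add("%t0", "%a", "%b"),   // %t0 = add i64 %a, %b
            new Mul("%t1", "%t0", "%a")   // %t1 = mul i64 %t0, %a
        };
        for (Instruction i : block) {
            i.execute(registers);
        }
        System.out.println("%t1 = " + registers.get("%t1")); // prints 78
    }
}
```

In a Truffle-based implementation such nodes would additionally specialize and be compiled by Graal, which is where the competitive performance mentioned in the abstract comes from.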
Citations: 11
Source language representation of function summaries in static analysis
G. Horváth, Norbert Pataki
Static analysis is a popular method to find bugs. In context-sensitive static analysis the analyzer considers the calling context when evaluating a function call. This approach makes it possible to find bugs that span multiple functions. In order to find those issues the analyzer engine requires information about both the calling context and the callee. Unfortunately the implementation of the callee might only be available in a separate translation unit or module. In these scenarios the analyzer either makes some assumptions about the behavior of the callee (which may be unsound) or conservatively creates a program state that marks every value that might be affected by this function call. In this case the marked value becomes unknown, which implies a significant loss of precision. In order to mitigate this overapproximation, a common approach is to assign a summary to some of the functions, and each time the implementation is not available, use the summary to analyze the effect of the function call. These summaries are in fact approximations of the function implementations that can be used to model some behavior of the called functions in a given context. The best way to represent summaries, however, remains an open question. This paper describes a method for summarising C (or C++) functions in C (or C++) itself. We evaluate the advantages and disadvantages of this approach. It is challenging to use source language representation efficiently due to the compilation model of C/C++. We propose an efficient solution. The emphasis of the paper is on using static analysis to find errors in programs; however, the same approach can be used to optimize programs or for any other task that static analysis is capable of. Our proof-of-concept implementation is available in the upstream version of the Clang compiler.
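The paper's summaries are written in C/C++ and consumed by the Clang Static Analyzer; the Java sketch below only transposes the general idea, namely that a summary is ordinary source code which over-approximates a callee whose body is unavailable, so that the analyzer can interpret the summary instead of giving up. All names and the fallback registry are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy transposition of "function summaries written in the source language":
// when the analyzer cannot see a callee's real body, it analyses a stand-in
// body, written in the same language, that over-approximates the callee.
public class SummaryDemo {

    // Marker used inside summaries for "any value the analysis knows nothing about".
    static Object unknown() { return new Object(); }

    // Hypothetical summary for a library function defined in another
    // translation unit. It is ordinary source code, so the same machinery
    // that handles normal bodies can interpret it.
    static Object externalLookup_summary(Object key) {
        if (key == null) {
            return null;       // summarised fact: null key yields null result
        }
        return unknown();      // otherwise the result is unconstrained
    }

    // Registry consulted when a callee's body is unavailable.
    static final Map<String, Function<Object, Object>> summaries = new HashMap<>();
    static {
        summaries.put("externalLookup", SummaryDemo::externalLookup_summary);
    }

    public static void main(String[] args) {
        // A real analyzer would interpret the summary symbolically; here we
        // simply run it concretely to show the fallback mechanism.
        Object result = summaries.get("externalLookup").apply(null);
        System.out.println("summary says result is " + result);   // prints "null"
    }
}
```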
Citations: 11
Demo of docking: enabling language based dynamic coupling
Magnus Haugom Christensen, E. Jul
This demo shows how two objects that each live within their own world, i.e., they are not in each other's transitive closure of object references, can get to know each other in a well-defined manner using a new language construct. The basic problem is that if two objects are in different worlds, there is no way they can communicate. Our proposed language construct, added to the Emerald programming language, allows objects in close proximity to get to know each other in a well-defined, language-based manner.
Citations: 0
Building object oriented programs out of pieces
Richard A. O'Keefe
This paper presents a technique for assembling Smalltalk programs out of pieces using propositional Horn clauses. The technique allows the dependencies and restrictions of a piece to be stated inside the piece or outside, allowing components from other dialects to be used. The technique is applicable to any OO language allowing class extensions.
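The paper targets Smalltalk; as a rough, hypothetical illustration of how propositional Horn clauses can drive piece selection, the Java sketch below forward-chains over clauses of the form "body implies head" to compute which pieces must be loaded. The clause contents and piece names are invented for the example and are not taken from the paper.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: propositional Horn clauses describe which pieces a
// program needs; forward chaining computes the set of pieces to load.
public class PieceResolver {

    // A Horn clause: if all propositions in 'body' hold, 'head' holds.
    record Clause(Set<String> body, String head) {}

    static Set<String> forwardChain(Set<String> facts, List<Clause> clauses) {
        Set<String> derived = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Clause c : clauses) {
                if (!derived.contains(c.head()) && derived.containsAll(c.body())) {
                    derived.add(c.head());
                    changed = true;
                }
            }
        }
        return derived;
    }

    public static void main(String[] args) {
        List<Clause> clauses = List.of(
            // using OrderedSet requires the Collections-Extensions piece
            new Clause(Set.of("usesOrderedSet"), "loadCollectionsExtensions"),
            // that piece in turn depends on the base Collections piece
            new Clause(Set.of("loadCollectionsExtensions"), "loadCollectionsBase")
        );
        Set<String> result = forwardChain(Set.of("usesOrderedSet"), clauses);
        System.out.println(result);
        // contains usesOrderedSet, loadCollectionsExtensions, loadCollectionsBase
    }
}
```

Because Horn clauses admit this kind of cheap least-fixpoint computation, stating dependencies this way keeps piece resolution both declarative and efficient.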
Citations: 1
MHDeS: deduplicating method handle graphs for efficient dynamic JVM language implementations
Shijie Xu, David Bremner, Daniel Heidinga
A method handle (MH) is a reference to an underlying Java method with potential method type transformations. Multiple inter-connected method handles form a method handle graph (MHG). Together with the Java Virtual Machine (JVM) instruction invokedynamic, the implementation of MHGs is significant for dynamically typed language implementations on the JVM. Addressing the abundance of equivalent MHGs during program runtime, this paper presents an MHG equivalence model and an online Method Handle Deduplication System (MHDeS). The equivalence model determines the equivalence of two MHGs in terms of two kinds of keys, i.e., the MH key and the MHG key, which uniquely identify the transformation of an MH and an MHG, respectively. MHDeS is an implementation of the equivalence model. Instead of creating equivalent MHGs and then detecting their equivalence, MHDeS employs a lightweight structure, the MHG index key, which wraps the constructor parameters of an MH. MHDeS uses a transformation index, fast-path comparison, and filters to speed up the equivalence detection of an MHG index key. Our experimental results with the Computer Language Benchmark Game (CLBG) JRuby micro-indy show that 1) MHDeS with filtering off can speed up the benchmark by 4.67% and reduce memory usage by 7.19% on average; 2) the deduplication result can be affected by the choice of MH transformations for filtering; 3) MHDeS can have the MH JIT compilation performed earlier; and 4) as much as 32% of MHG index keys are detected as non-unique and eliminated by MHDeS, and the expense of a single detection is trivial.
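To make the deduplication target concrete, the sketch below builds two independently constructed but equivalent method-handle graphs with the standard java.lang.invoke API. The string-based structural key is a deliberately naive stand-in for MHDeS's MHG index keys, which are richer and computed inside the runtime; it is included only to show what "detecting equivalence before materializing the graph" means.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Two method-handle graphs describing the same transformation chain; a
// deduplication scheme like MHDeS would detect the structural equivalence
// and share a single graph instead of keeping both.
public class HandleGraphDemo {

    static int add(int a, int b) { return a + b; }

    // Builds add(x, 10) by binding the second argument of the raw handle.
    static MethodHandle buildAddTen() throws Throwable {
        MethodHandle add = MethodHandles.lookup().findStatic(
                HandleGraphDemo.class, "add",
                MethodType.methodType(int.class, int.class, int.class));
        return MethodHandles.insertArguments(add, 1, 10);
    }

    // Naive structural key: target method plus the applied transformation.
    static String structuralKey(String target, String transformation) {
        return target + "|" + transformation;
    }

    public static void main(String[] args) throws Throwable {
        MethodHandle g1 = buildAddTen();
        MethodHandle g2 = buildAddTen();   // an equivalent, duplicate graph

        System.out.println((int) g1.invokeExact(5));   // 15
        System.out.println((int) g2.invokeExact(5));   // 15

        String k1 = structuralKey("HandleGraphDemo.add(int,int)", "insertArguments(1, 10)");
        String k2 = structuralKey("HandleGraphDemo.add(int,int)", "insertArguments(1, 10)");
        System.out.println("duplicate graph detected: " + k1.equals(k2));   // true
    }
}
```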
Citations: 3
The performance of object encodings in JavaScript
Forrest Alexander, A. Black
We investigate how to represent objects when JavaScript is used as a compilation target. This is an interesting question because JavaScript is the target language of choice for compiler writers who wish to deploy to "the Internet", and because JavaScript offers many ways to say the same thing. We looked at three axes of variability: whether an object's methods are stored in the object itself or in a prototype; whether the object uses JavaScript's closures or builds its own; and whether an object's fields are accessed directly or via accessor methods. The results reveal that certain variations are more than a hundred times faster than others. We conclude that the particular choices we make may be critical.
Citations: 0
Growing an abstract grammar: teaching language engineering
T. D'Hondt
Abstract grammars are neglected resources in language processor implementations. In the most favourable case they are used to format first-class program representations. In the worst case they serve as a temporary interface between compiler phases. But they can enable so much more, certainly in teaching. In this presentation we report on a long-running experiment (>5 years) to develop a language interpreter that is maximally supported by an extensible (abstract) grammar. The context of the experiment is an advanced course on Programming Language Engineering (http://soft.vub.ac.be/PLE). The reference language is a simplified variation on Scheme, so: no objects in this story. In this course abstract grammars serve as the backbone for material ranging from formal language specifications to low-level implementation with an eye for optimisation. In order to do so, we require that an instance of an abstract grammar be first class, and that all of its attributes should be settable and gettable from within any program that is associated with this instance. Depending on the level of detail at which its semantics are captured in the abstract grammar, this regulates the depth at which the program can reflect over its specification. Nothing new here; this is Lisp and s-expressions, only more so. A central idea in this notion of a rich abstract grammar is a unified memory model. At a basic level no distinction is made between stacks, heaps, frames &c. concerning their residence in memory. This of course raises potential performance issues about memory management, but a sufficiently powerful garbage collector and various caching and inlining tactics go a long way in mitigating this concern. We will consider what it takes to explain memory models and garbage collection at a sufficient level of detail to investigate performance issues. We proceed with s-expressions and grow this in successive steps to describe the various structures employed by a language interpreter. Considering that the eval operation should ultimately map the grammar onto itself, the obvious ones are computational values that do not correspond with literals, such as closures and continuations. But with the introduction of lexical addressing, we should also include frames and environments; and if the interpretation strategy is based on a transformation into continuation passing style (as is the case here), structures resulting from lambda-lifting should be considered. However, the most interesting extensions to the abstract grammar are related to optimisations: tail call optimisation, inlining, prevalent function call patterns, &c. This approach proved to be an interesting setting to expose graduate students to the vagaries of low-level language processor implementations. But it has also been suitable as a platform for sophisticated experiments with optimisations for language interpreters.
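The course material is Scheme-flavoured rather than Java, so the sketch below is only a hypothetical transposition of the core idea: grammar instances are first-class data whose attributes are settable and gettable from the programs they represent, which lets a program be evaluated, inspected, and rewritten with the same machinery. The node tags and attribute names are invented for this example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a "first-class abstract grammar": every node is a
// plain data object whose attributes can be read and rewritten by the very
// programs it represents, s-expression style.
public class AbstractGrammarDemo {

    // A node is a tag plus named attributes (which may themselves be nodes).
    static final class Node {
        final String tag;
        final Map<String, Object> attributes = new HashMap<>();
        Node(String tag) { this.tag = tag; }
        Object get(String name) { return attributes.get(name); }
        Node set(String name, Object value) { attributes.put(name, value); return this; }
    }

    // A tiny evaluator over the grammar: literals and binary addition.
    static long eval(Node n) {
        switch (n.tag) {
            case "lit": return (Long) n.get("value");
            case "add": return eval((Node) n.get("left")) + eval((Node) n.get("right"));
            default: throw new IllegalArgumentException("unknown tag " + n.tag);
        }
    }

    public static void main(String[] args) {
        Node program = new Node("add")
                .set("left", new Node("lit").set("value", 1L))
                .set("right", new Node("lit").set("value", 2L));
        System.out.println(eval(program));                    // 3

        // Because the grammar instance is first class, the "program" can be
        // inspected and rewritten like any other data structure.
        ((Node) program.get("right")).set("value", 41L);
        System.out.println(eval(program));                    // 42
    }
}
```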
Citations: 0
Trufflereloader: a low-overhead language-neutral reloader
Tõnis Pool, A. Gregersen, Vesal Vojdani
Updating running programs is a well-researched and increasingly popular feature of programming language implementations. While there are solutions targeting specific languages and platforms, implementing dynamic update for new languages can require significant effort. We have built TruffleReloader, a reusable dynamic updating solution, on top of the Truffle language implementation framework, and adapted the Truffle implementations of Python, Ruby and JavaScript to use this feature. We show how TruffleReloader adds reloading capabilities to these implementations while requiring only limited language-specific modifications. With Truffle's just-in-time compiler enabled, our solution incurs close to zero overhead on steady-state performance. This approach reduces the effort required to add dynamic update support for existing and future languages.
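TruffleReloader's actual mechanism is tied to Truffle AST nodes and Graal's deoptimization machinery; the sketch below only illustrates, with hypothetical names, the generic indirection that dynamic reloaders rely on: callers dispatch through a holder whose target can be swapped at run time, so a new version of a function takes effect on the next call without restarting the program.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;

// Hypothetical illustration of dynamic reloading via an indirection cell:
// installing a new version of a function takes effect on the next call.
public class ReloadDemo {

    static final class ReloadableFunction<T, R> {
        private final AtomicReference<Function<T, R>> current;
        ReloadableFunction(Function<T, R> initial) {
            this.current = new AtomicReference<>(initial);
        }
        R call(T argument) { return current.get().apply(argument); }
        void reload(Function<T, R> newVersion) { current.set(newVersion); }
    }

    public static void main(String[] args) {
        ReloadableFunction<Integer, Integer> f =
                new ReloadableFunction<>(x -> x + 1);        // version 1
        System.out.println(f.call(10));                      // 11

        f.reload(x -> x * 2);                                 // hot-swap to version 2
        System.out.println(f.call(10));                      // 20
    }
}
```

A JIT-compiled system can keep such indirections nearly free by speculating on the current target and invalidating the compiled code when a reload happens, which is consistent with the near-zero steady-state overhead the abstract reports.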
Citations: 1
Beneath the bytecode: observing the JVM at work using bytecode instrumentation
L. Bulej, Y. Zheng, Walter Binder
Many dynamic program analysis (DPA) tools for profiling, debugging, and monitoring programs executing on managed platforms such as the Java Virtual Machine (JVM) rely on bytecode instrumentation (sometimes combined with agents utilizing the JVM Tool Interface and native code libraries) to observe the base program behavior. While this is both the recommended and preferred technique for implementing DPA tools, it has certain noticeable drawbacks [1]. One, the analysis runs in the same process as the base program, and often shares the Java Class Library (JCL) and other resources with the base program. This creates potential for interference that may result in deadlocks, or state corruption in code that does not expect reentrancy. Two, certain parts of the JCL are typically off-limits for instrumentation, because they either play a vital role during the JVM bootstrap, or the JVM implementation makes certain assumptions about properties of specific classes, or both. These two issues are typically solved by reducing the scope of the instrumentation, leading to under-approximation of the program's behavior. And three, bytecode instrumentation only allows observing base program events at the bytecode level. The instrumentation code remains oblivious to optimizations performed by the dynamic compiler, and conversely, the compiler is completely unaware of the presence of the instrumentation code. Because the instrumentation code may significantly inflate the base program code and create additional data dependencies as a result of observing the program's behavior, various optimizations performed by the dynamic compiler (e.g., inlining, partial escape analysis, code motion) will be perturbed by the presence of the instrumentation code. As a result, the dynamic analysis may observe events that would not have happened in the base program had it been left alone, thus over-approximating the actual behavior. In this talk, we will discuss some of the challenges in making the JVM more observable for instrumentation-based DPA tools, with specific focus on getting accurate profiling information in presence of an optimizing dynamic compiler. The core of this talk is based on the work that was originally presented at OOPSLA'15 [4]. In the meantime, the work has been integrated into the Graal project. Additional parts are based on joint work with other authors, originally presented at AOSD'12 [3] and GPCE'13 [2].
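As a minimal illustration of the instrumentation entry point such DPA tools build on, the following Java agent registers a ClassFileTransformer through the standard java.lang.instrument API. It only observes class loads; a real tool would return rewritten class bytes (typically produced with a bytecode engineering library), and it is exactly those rewritten bytes that remain invisible to the dynamic compiler's optimization decisions, as the talk discusses.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Minimal load-time agent: observes every class as it is defined.  A real
// DPA tool would return modified bytecode here; returning null keeps the
// class unchanged.
public class ObservingAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                System.out.println("loading " + className
                        + " (" + classfileBuffer.length + " bytes)");
                return null;   // no modification
            }
        });
    }
}
```

To use it, package the class in a jar whose manifest contains `Premain-Class: ObservingAgent` and start the JVM with `-javaagent:observing-agent.jar`.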
Citations: 0
Efficient profiling of actor-based applications in parallel and distributed systems
Andrea Rosà, L. Chen, Walter Binder
Applications employing the actor model of concurrent computation are becoming popular nowadays. On the one hand, the foundational characteristics of the actor model make it attractive in parallel and distributed settings. On the other hand, effective investigation of poor performance in actor-based applications requires dedicated metrics and profiling methods. Unfortunately, little research has been conducted on this topic to date, and developers are forced to investigate suboptimal performance with general-purpose profilers that fall short in locating scalability bottlenecks and performance inefficiencies. This position paper advocates the need for dedicated profiling techniques and tools for actor-based applications, focusing specifically on inter-actor communication and actor utilization. Our preliminary results support the importance of dedicated actor profiling and motivate further research on this topic.
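The paper argues for actor-specific metrics such as inter-actor communication and actor utilization. As a hypothetical illustration (not the authors' tooling), the Java sketch below counts messages per sender/receiver pair; a real profiler would hook the actor runtime's send and receive paths rather than being called explicitly, and would also track per-actor processing time to derive utilization.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of one actor-specific metric: messages exchanged per
// sender/receiver pair.
public class ActorMessageProfiler {

    private static final Map<String, LongAdder> messageCounts = new ConcurrentHashMap<>();

    // Called whenever one actor sends a message to another.
    static void recordSend(String sender, String receiver) {
        messageCounts
                .computeIfAbsent(sender + " -> " + receiver, k -> new LongAdder())
                .increment();
    }

    static void report() {
        messageCounts.forEach((edge, count) ->
                System.out.println(edge + ": " + count.sum() + " messages"));
    }

    public static void main(String[] args) {
        // Simulated message traffic between three actors.
        recordSend("master", "worker-1");
        recordSend("master", "worker-2");
        recordSend("master", "worker-1");
        recordSend("worker-1", "master");
        report();
        // master -> worker-1: 2 messages
        // master -> worker-2: 1 messages
        // worker-1 -> master: 1 messages
    }
}
```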
Citations: 2