
Proceedings of the ACM on Programming Languages: Latest Publications

Resource-Aware Soundness for Big-Step Semantics
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622843
Riccardo Bianchini, Francesco Dagnino, Paola Giannini, Elena Zucca
We extend the semantics and type system of a lambda calculus equipped with common constructs to be resource-aware. That is, reduction is instrumented to keep track of the usage of resources, and the type system guarantees, besides standard soundness, that for well-typed programs there is a computation where no needed resource gets exhausted. The resource-aware extension is parametric on an arbitrary grade algebra, and does not require ad-hoc changes to the underlying language. To this end, the semantics needs to be formalized in big-step style; as a consequence, expressing and proving (resource-aware) soundness is challenging, and is achieved by applying recent techniques based on coinductive reasoning.
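For intuition, here is a minimal Python sketch of the kind of structure a grade algebra provides: an ordered semiring of usage grades that a resource-aware checker can sum and compare against a budget. The NatGrades instance and the within_budget helper are illustrative assumptions for exposition, not the paper's calculus or type system.

```python
from dataclasses import dataclass

# A minimal sketch of a grade algebra as an ordered semiring. The paper is
# parametric in this structure; natural-number usage counts with `add` for
# combining uses and `mul` for scaling under substitution are just one
# illustrative instance, not the authors' formalization.
@dataclass(frozen=True)
class NatGrades:
    zero: int = 0
    one: int = 1

    def add(self, r, s):   # combine usages of the same resource
        return r + s

    def mul(self, r, s):   # scale usage under substitution
        return r * s

    def leq(self, r, s):   # r can be used where s is available
        return r <= s

def within_budget(alg, usages, budget):
    """Check that the combined usage of a resource stays within its budget."""
    total = alg.zero
    for u in usages:
        total = alg.add(total, u)
    return alg.leq(total, budget)

alg = NatGrades()
print(within_budget(alg, [1, 2, 1], budget=5))  # True: 4 <= 5
print(within_budget(alg, [3, 3], budget=5))     # False: 6 > 5
```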
Citations: 0
Beacons: An End-to-End Compiler Framework for Predicting and Utilizing Dynamic Loop Characteristics
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622803
Girish Mururu, Sharjeel Khan, Bodhisatwa Chatterjee, Chao Chen, Chris Porter, Ada Gavrilovska, Santosh Pande
Efficient management of shared resources is a critical problem in high-performance computing (HPC) environments. Existing workload management systems often promote non-sharing of resources among different co-executing applications to achieve performance isolation. Such schemes lead to poor resource utilization and suboptimal process throughput, adversely affecting user productivity. Tackling this problem in a scalable fashion is extremely challenging, since it requires the workload scheduler to possess an in-depth knowledge about various application resource requirements and runtime phases at fine granularities within individual applications. In this work, we show that applications’ resource requirements and execution phase behaviour can be captured in a scalable and lightweight manner at runtime by estimating important program artifacts termed as “dynamic loop characteristics”. Specifically, we propose a solution to the problem of efficient workload scheduling by designing a compiler and runtime cooperative framework that leverages novel loop-based compiler analysis for resource allocation. We present Beacons Framework, an end-to-end compiler and scheduling framework, that estimates dynamic loop characteristics, encapsulates them in compiler-instrumented beacons in an application, and broadcasts them during application runtime, for proactive workload scheduling. We focus on estimating four important loop characteristics: loop trip-count, loop timing, loop memory footprint, and loop data-reuse behaviour, through a combination of compiler analysis and machine learning. The novelty of the Beacons Framework also lies in its ability to tackle irregular loops that exhibit complex control flow with indeterminate loop bounds involving structure fields, aliased variables and function calls, which are highly prevalent in modern workloads. At the backend, Beacons Framework entails a proactive workload scheduler that leverages the runtime information to orchestrate aggressive process co-locations, for maximizing resource concurrency, without causing cache thrashing. Our results show that Beacons Framework can predict different loop characteristics with an accuracy of 85% to 95% on average, and the proactive scheduler obtains an average throughput improvement of 1.9x (up to 3.2x) over the state-of-the-art schedulers on an Amazon Graviton2 machine on consolidated workloads involving 1000-10000 co-executing processes, across 51 benchmarks.
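As a rough illustration, the sketch below shows the kind of per-loop payload a compiler-inserted beacon might broadcast, plus a toy co-location check over predicted memory footprints. The LoopBeacon fields, the cache_mb threshold, and can_colocate are hypothetical names chosen for exposition, not the Beacons Framework's actual interface.

```python
from dataclasses import dataclass

# A hedged sketch of a beacon payload: predicted trip count, duration,
# footprint, and reuse for an upcoming loop, with a toy admission check that
# a scheduler could run before co-locating a new process.
@dataclass
class LoopBeacon:
    trip_count: int          # predicted iterations
    est_time_ms: float       # predicted loop duration
    footprint_mb: float      # predicted memory touched by the loop
    reuse: str               # e.g. "high" or "low" data reuse

def can_colocate(active: list[LoopBeacon], incoming: LoopBeacon,
                 cache_mb: float = 32.0) -> bool:
    """Admit a new process only if the combined footprints fit the shared cache."""
    used = sum(b.footprint_mb for b in active)
    return used + incoming.footprint_mb <= cache_mb

running = [LoopBeacon(10_000, 120.0, 12.0, "high")]
print(can_colocate(running, LoopBeacon(5_000, 40.0, 8.0, "low")))     # True
print(can_colocate(running, LoopBeacon(50_000, 900.0, 28.0, "low")))  # False
```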
Citations: 0
Interactive Debugging of Datalog Programs
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622824
André Pacak, Sebastian Erdweg
Datalog is used for complex programming tasks nowadays, consisting of numerous inter-dependent predicates. But Datalog lacks interactive debugging techniques that support the stepwise execution and inspection of the execution state. In this paper, we propose interactive debugging of Datalog programs following a top-down evaluation strategy called recursive query/subquery. While the recursive query/subquery approach is well-known in the literature, we are the first to provide a complete programming-language semantics based on it. Specifically, we develop the first small-step operational semantics for top-down Datalog, where subqueries occur as nested intermediate terms. The small-step semantics forms the basis of step-into interactions in the debugger. Moreover, we show how step-over interactions can be realized efficiently based on a hybrid Datalog semantics that adds a bottom-up database to our top-down operational semantics. We implemented a debugger for core Datalog following these semantics and explain how to adopt it for debugging the frontend languages of Soufflé and IncA. Our evaluation shows that our hybrid Datalog semantics can be used to debug real-world Datalog programs with realistic workloads.
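A minimal Python toy of top-down query/subquery evaluation for a reachability program is sketched below; the nested recursive calls are the subqueries a step-into interaction would pause on. The program and the reach helper are illustrative only, not the paper's small-step semantics or its Soufflé/IncA frontends.

```python
# Top-down evaluation of a tiny Datalog reachability program:
#   reach(X,Y) :- edge(X,Y).
#   reach(X,Y) :- edge(X,Z), reach(Z,Y).
# Each recursive call is a nested subquery, printed with indentation to show
# where a step-into debugger could pause.
EDGES = {("a", "b"), ("b", "c"), ("c", "d")}

def reach(x, y, depth=0, seen=None):
    seen = seen if seen is not None else set()
    print("  " * depth + f"query reach({x}, {y})")
    if (x, y) in EDGES:                      # base rule
        return True
    for (u, v) in EDGES:                     # recursive rule
        if u == x and (v, y) not in seen:
            seen.add((v, y))
            if reach(v, y, depth + 1, seen): # nested subquery
                return True
    return False

print(reach("a", "d"))  # True, after a chain of subqueries
```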
Citations: 0
From Capabilities to Regions: Enabling Efficient Compilation of Lexical Effect Handlers
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622831
Marius Müller, Philipp Schuster, Jonathan Lindegaard Starup, Klaus Ostermann, Jonathan Immanuel Brachthäuser
Effect handlers are a high-level abstraction that enables programmers to use effects in a structured way. They have gained a lot of popularity within academia and subsequently also in industry. However, the abstraction often comes with a significant runtime cost and there has been intensive research recently on how to reduce this price. A promising approach in this regard is to implement effect handlers using a CPS translation and to provide sufficient information about the nesting of handlers. With this information the CPS translation can decide how effects have to be lifted through handlers, i.e., which handlers need to be skipped, in order to handle the effect at the correct place. A structured way to make this information available is to use a calculus with a region system and explicit subregion evidence. Such calculi, however, are quite verbose, which makes them impractical to use as a source-level language. We present a method to infer the lifting information for a calculus underlying a source-level language. This calculus uses second-class capabilities for the safe use of effects. To do so, we define a typed translation to a calculus with regions and evidence and we show that this lift-inference translation is typability- and semantics-preserving. On the one hand, this exposes the precise relation between the second-class property and the structure given by regions. On the other hand, it closes a gap in a compiler pipeline enabling efficient compilation of the source-level language. We have implemented lift inference in this compiler pipeline and conducted benchmarks which indicate that the approach is indeed working.
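To give a flavour of the lifting problem, the sketch below models nested handlers as a runtime stack and uses an integer amount of "lift" evidence to decide how many inner handlers an effect skips before being handled. The stack encoding, perform, and with_handler are illustrative assumptions, not the authors' CPS translation or region calculus.

```python
# A toy model of lifting: effects normally go to the innermost handler, but
# lift evidence says how many enclosing handlers to skip so the effect is
# handled at the lexically intended place.
HANDLERS = []  # innermost handler last

def perform(effect, lift=0):
    """Dispatch an effect, skipping `lift` innermost handlers."""
    target = HANDLERS[-(1 + lift)]
    return target(effect)

def with_handler(handler, body):
    HANDLERS.append(handler)
    try:
        return body()
    finally:
        HANDLERS.pop()

outer = lambda eff: f"outer handled {eff}"
inner = lambda eff: f"inner handled {eff}"

print(with_handler(outer, lambda: with_handler(inner,
      lambda: perform("ask"))))          # inner handled ask
print(with_handler(outer, lambda: with_handler(inner,
      lambda: perform("ask", lift=1))))  # outer handled ask (skips inner)
```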
Citations: 0
MemPerf: Profiling Allocator-Induced Performance Slowdowns
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622848
Jin Zhou, Sam Silvestro, Steven (Jiaxun) Tang, Hanmei Yang, Hongyu Liu, Guangming Zeng, Bo Wu, Cong Liu, Tongping Liu
The memory allocator plays a key role in the performance of applications, but none of the existing profilers can pinpoint performance slowdowns caused by a memory allocator. Consequently, programmers may spend time improving application code incorrectly or unnecessarily, achieving low or no performance improvement. This paper designs the first profiler—MemPerf—to identify allocator-induced performance slowdowns without comparing against another allocator. Based on the key observation that an allocator may impact the whole life-cycle of heap objects, including the accesses (or uses) of these objects, MemPerf proposes a life-cycle based detection to identify slowdowns caused by slow memory management operations and slow accesses separately. For the prior one, MemPerf proposes a thread-aware and type-aware performance modeling to identify slow management operations. For slow memory accesses, MemPerf utilizes a top-down approach to identify all possible reasons for slow memory accesses introduced by the allocator, mainly due to cache and TLB misses, and further proposes a unified method to identify them correctly and efficiently. Based on our extensive evaluation, MemPerf reports 98% medium and large allocator-induced slowdowns (larger than 5%) correctly without reporting any false positives. MemPerf also pinpoints multiple known and unknown design issues in widely-used allocators.
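The toy below hints at the life-cycle idea: wrap allocation so each object records how long the management operation itself took, separating slow management operations from slow later accesses. It is only a Python sketch with a hypothetical tracked_alloc wrapper; MemPerf itself profiles native allocators and does not rely on such instrumentation in application code.

```python
import time

# Record a (size, allocation latency) entry per allocation so that slow
# management operations can be ranked afterwards. Purely illustrative.
LIFECYCLE = []  # (size_in_bytes, alloc_ns) records

def tracked_alloc(size):
    start = time.perf_counter_ns()
    obj = bytearray(size)                 # stand-in for a malloc call
    LIFECYCLE.append((size, time.perf_counter_ns() - start))
    return obj

for sz in (64, 4096, 1 << 20):
    tracked_alloc(sz)

slowest = max(LIFECYCLE, key=lambda r: r[1])
print(f"slowest management op: {slowest[0]} bytes in {slowest[1]} ns")
```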
Citations: 0
Structural Subtyping as Parametric Polymorphism
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622836
Wenhao Tang, Daniel Hillerström, James McKinna, Michel Steuwer, Ornela Dardha, Rongxiao Fu, Sam Lindley
Structural subtyping and parametric polymorphism provide similar flexibility and reusability to programmers. For example, both features enable the programmer to provide a wider record as an argument to a function that expects a narrower one. However, the means by which they do so differs substantially, and the precise details of the relationship between them exists, at best, as folklore in literature. In this paper, we systematically study the relative expressive power of structural subtyping and parametric polymorphism. We focus our investigation on establishing the extent to which parametric polymorphism, in the form of row and presence polymorphism, can encode structural subtyping for variant and record types. We base our study on various Church-style λ-calculi extended with records and variants, different forms of structural subtyping, and row and presence polymorphism. We characterise expressiveness by exhibiting compositional translations between calculi. For each translation we prove a type preservation and operational correspondence result. We also prove a number of non-existence results. By imposing restrictions on both source and target types, we reveal further subtleties in the expressiveness landscape, the restrictions enabling otherwise impossible translations to be defined. More specifically, we prove that full subtyping cannot be encoded via polymorphism, but we show that several restricted forms of subtyping can be encoded via particular forms of polymorphism.
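The contrast can be previewed informally in Python: a "subtyping-style" function simply ignores extra record fields, while a "row-style" function abstracts over the rest of the record and threads it through unchanged. The dict-based encoding below is an illustrative toy, not one of the paper's Church-style calculi.

```python
# Two ways to accept a wider record than a function strictly needs.
def greet_sub(record):                 # subtyping style: extra fields ignored
    return f"hello, {record['name']}"

def greet_row(name, rest):             # row style: the "rest" is abstract
    return f"hello, {name}", rest      # and is returned untouched

wide = {"name": "Ada", "age": 36, "city": "London"}
print(greet_sub(wide))

rest = {k: v for k, v in wide.items() if k != "name"}
msg, rest_back = greet_row(wide["name"], rest)
print(msg, rest_back)
```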
Citations: 0
Concrete Type Inference for Code Optimization using Machine Learning with SMT Solving
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622825
Fangke Ye, Jisheng Zhao, Jun Shirako, Vivek Sarkar
Despite the widespread popularity of dynamically typed languages such as Python, it is well known that they pose significant challenges to code optimization due to the lack of concrete type information. To overcome this limitation, many ahead-of-time optimizing compiler approaches for Python rely on programmers to provide optional type information as a prerequisite for extensive code optimization. Since few programmers provide this information, a large majority of Python applications are executed without the benefit of code optimization, thereby contributing collectively to a significant worldwide wastage of compute and energy resources. In this paper, we introduce a new approach to concrete type inference that is shown to be effective in enabling code optimization for dynamically typed languages, without requiring the programmer to provide any type information. We explore three kinds of type inference algorithms in our approach based on: 1) machine learning models including GPT-4, 2) constraint-based inference based on SMT solving, and 3) a combination of 1) and 2). Our approach then uses the output from type inference to generate multi-version code for a bounded number of concrete type options, while also including a catch-all untyped version for the case when no match is found. The typed versions are then amenable to code optimization. Experimental results show that the combined algorithm in 3) delivers far superior precision and performance than the separate algorithms for 1) and 2). The performance improvement due to type inference, in terms of geometric mean speedup across all benchmarks compared to standard Python, when using 3) is 26.4× with Numba as an AOT optimizing back-end and 62.2× with the Intrepydd optimizing compiler as a back-end. These vast performance improvements can have a significant impact on programmers’ productivity, while also reducing their applications’ use of compute and energy resources.
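A hedged sketch of the multi-version idea follows: a specialized variant guarded by a cheap runtime type check, with a catch-all untyped version as fallback. The guard and dispatch code are illustrative only; the paper generates such versions ahead of time and compiles them with back-ends such as Numba or Intrepydd.

```python
# Multi-version dispatch: use a specialized path when the predicted concrete
# types hold, otherwise fall back to the generic untyped version.
def dot_specialized_int(xs, ys):
    acc = 0
    for a, b in zip(xs, ys):   # a compiler could optimize this under int types
        acc += a * b
    return acc

def dot_generic(xs, ys):
    return sum(a * b for a, b in zip(xs, ys))

def dot(xs, ys):
    if all(isinstance(v, int) for v in xs[:4] + ys[:4]):  # cheap guard on a prefix
        return dot_specialized_int(xs, ys)
    return dot_generic(xs, ys)                            # catch-all untyped version

print(dot([1, 2, 3], [4, 5, 6]))       # specialized path
print(dot([1.5, 2.0], [2.0, 4.0]))     # generic fallback
```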
Citations: 0
Adventure of a Lifetime: Extract Method Refactoring for Rust
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622821
Sewen Thy, Andreea Costea, Kiran Gopinathan, Ilya Sergey
We present a design and implementation of the automated "Extract Method" refactoring for Rust programs. Even though Extract Method is one of the most well-studied and widely used in practice automated refactorings, featured in all major IDEs for all popular programming languages, implementing it soundly for Rust is surprisingly non-trivial due to the restrictions of the Rust's ownership and lifetime-based type system. In this work, we provide a systematic decomposition of the Extract Method refactoring for Rust programs into a series of program transformations, each concerned with satisfying a particular aspect of Rust type safety, eventually producing a well-typed Rust program. Our key discovery is the formulation of Extract Method as a composition of naive function hoisting and a series of automated program repair procedures that progressively make the resulting program "more well-typed" by relying on the corresponding repair oracles. Those oracles include a novel static intra-procedural ownership analysis that infers correct sharing annotations for the extracted function's parameters, and the lifetime checker of rustc, Rust's reference compiler. We implemented our approach in a tool called REM---an automated Extract Method refactoring built on top of IntelliJ IDEA plugin for Rust. Our extensive evaluation on a corpus of changes in five popular Rust projects shows that REM (a) can extract a larger class of feature-rich code fragments into semantically correct functions than other existing refactoring tools, (b) can reproduce method extractions performed manually by human developers in the past, and (c) is efficient enough to be used in interactive development.
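The first step the paper describes, naive hoisting of a selected fragment into a new function parameterized by its free variables, can be illustrated in Python as below; the Rust-specific repairs (ownership and sharing annotations, lifetimes) have no Python counterpart and are not modeled here. The example functions are hypothetical and not drawn from the REM evaluation corpus.

```python
# Before extraction: the marked fragment computes a sum inline.
def report_before(items, threshold):
    total = 0
    for it in items:              # <- selected fragment starts here
        if it > threshold:
            total += it           # <- and ends here
    return f"sum above {threshold}: {total}"

# After naive hoisting: the fragment's free variables become parameters.
def sum_above(items, threshold):
    total = 0
    for it in items:
        if it > threshold:
            total += it
    return total

def report_after(items, threshold):
    return f"sum above {threshold}: {sum_above(items, threshold)}"

assert report_before([1, 5, 9], 4) == report_after([1, 5, 9], 4)
print(report_after([1, 5, 9], 4))
```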
Citations: 0
Gradual Typing for Effect Handlers
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622860
Max S. New, Eric Giovannini, Daniel R. Licata
We present a gradually typed language, GrEff, with effects and handlers that supports migration from unchecked to checked effect typing. This serves as a simple model of the integration of an effect typing discipline with an existing effectful typed language that does not track fine-grained effect information. Our language supports a simple module system to model the programming model of gradual migration from unchecked to checked effect typing in the style of Typed Racket. The surface language GrEff is given semantics by elaboration to a core language Core GrEff. We equip Core GrEff with an inequational theory for reasoning about the semantic error ordering and desired program equivalences for programming with effects and handlers. We derive an operational semantics for the language from the equations provable in the theory. We then show that the theory is sound by constructing an operational logical relations model to prove the graduality theorem. This extends prior work on embedding-projection pair models of gradual typing to handle effect typing and subtyping.
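As a loose analogy, the Python toy below enforces a declared effect set dynamically at the boundary of a "checked" function, which is the role casts play when unchecked and checked code interact in a gradual system. The declares decorator and the effect names are illustrative assumptions, not GrEff's actual constructs or semantics.

```python
import functools

PERFORMED = []          # trace of performed effects (stand-in for real effects)

def perform(effect):
    PERFORMED.append(effect)

def declares(*allowed):
    """Boundary check: blame the caller side if undeclared effects occur."""
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args, **kwargs):
            before = len(PERFORMED)
            result = fn(*args, **kwargs)
            undeclared = [e for e in PERFORMED[before:] if e not in allowed]
            if undeclared:
                raise RuntimeError(f"blame boundary: undeclared effects {undeclared}")
            return result
        return checked
    return wrap

@declares("read")
def ok():
    perform("read")

@declares("read")
def bad():
    perform("write")    # not declared

ok()
try:
    bad()
except RuntimeError as e:
    print(e)
```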
Citations: 0
Historia: Refuting Callback Reachability with Message-History Logics
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-10-16 DOI: 10.1145/3622865
Shawn Meier, Sergio Mover, Gowtham Kaki, Bor-Yuh Evan Chang
This paper considers the callback reachability problem --- determining if a callback can be called by an event-driven framework in an unexpected state. Event-driven programming frameworks are pervasive for creating user-interactive applications (apps) on just about every modern platform. Control flow between callbacks is determined by the framework and largely opaque to the programmer. This opacity of the callback control flow not only causes difficulty for the programmer but is also difficult for those developing static analysis. Previous static analysis techniques address this opacity either by assuming an arbitrary framework implementation or attempting to eagerly specify all possible callback control flow, but this is either too coarse to prove properties requiring callback-ordering constraints or too burdensome and tricky to get right. Instead, we present a middle way where the callback control flow can be gradually refined in a targeted manner to prove assertions of interest. The key insight to get this middle way is by reasoning about the history of method invocations at the boundary between app and framework code --- enabling a decoupling of the specification of callback control flow from the analysis of app code. We call the sequence of such boundary-method invocations message histories and develop message-history logics to do this reasoning. In particular, we define the notion of an application-only transition system with boundary transitions, a message-history program logic for programs with such transitions, and a temporal specification logic for capturing callback control flow in a targeted and compositional manner. Then to utilize the logics in a goal-directed verifier, we define a way to combine after-the-fact an assertion about message histories with a specification of callback control flow. We implemented a prototype message history-based verifier called Historia and provide evidence that our approach is uniquely capable of distinguishing between buggy and fixed versions on challenging examples drawn from real-world issues and that our targeted specification approach enables proving the absence of multi-callback bug patterns in real-world open-source Android apps.
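A toy version of reasoning over a message history at the app/framework boundary is sketched below: given a recorded trace of boundary calls, it checks whether a callback can still fire after the call that should have disabled it. The event names and the check are illustrative simplifications in the spirit of the Android scenarios the paper targets, not Historia's message-history logics or verifier.

```python
# Scan a message history for a callback that runs after its disabling call,
# i.e. the kind of ordering assertion a verifier would try to refute.
def callback_after_disable(history, disable_msg, callback_msg):
    disabled_at = None
    for i, msg in enumerate(history):
        if msg == disable_msg:
            disabled_at = i
        if msg == callback_msg and disabled_at is not None and i > disabled_at:
            return True
    return False

trace = ["onCreate", "subscribe", "onDestroy", "unsubscribe", "onClick"]
print(callback_after_disable(trace, "unsubscribe", "onClick"))  # True: buggy ordering
```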
Citations: 0