
Latest publications in Proceedings of the ACM on Programming Languages

API-Driven Program Synthesis for Testing Static Typing Implementations
IF 1.8 · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-01-05 · DOI: 10.1145/3632904
Thodoris Sotiropoulos, Stefanos Chaliasos, Zhendong Su
We introduce a novel approach for testing static typing implementations based on the concept of API-driven program synthesis. The idea is to synthesize type-intensive but small and well-typed programs by leveraging and combining application programming interfaces (APIs) derived from existing software libraries. Our primary insight is backed up by real-world evidence: a significant number of compiler typing bugs are caused by small test cases that employ APIs from the standard library of the language under test. This is attributed to the inherent complexity of the majority of these APIs, which often exercise a wide range of sophisticated type-related features. The main contribution of our approach is the ability to produce small client programs with increased feature coverage, without bearing the burden of generating the corresponding well-formed API definitions from scratch. To validate diverse aspects of static typing procedures (i.e., soundness, precision of type inference), we also enrich our API-driven approach with fault-injection and semantics-preserving modes, along with their corresponding test oracles. We evaluate our implemented tool, Thalia, on testing the static typing implementations of the compilers of three popular languages, namely Scala, Kotlin, and Groovy. Thalia has uncovered 84 typing bugs (77 confirmed and 22 fixed), most of which are triggered by test cases featuring APIs that rely on parametric polymorphism, overloading, and higher-order functions. Our comparison with the state of the art shows that Thalia yields test programs with distinct characteristics, offering additional and complementary benefits.
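The core synthesis idea can be illustrated with a minimal sketch: given a table of API signatures, chain calls so that return types feed parameter types, bottoming out in a literal, and emit the resulting well-typed expression as a test case. The API table, function names, and the `synthesize` routine below are invented for illustration and are not Thalia's actual algorithm or data.

```python
# Hypothetical sketch of API-driven synthesis: chain APIs so that
# return types satisfy parameter types, then print the expression.
from dataclasses import dataclass

@dataclass
class Api:
    name: str          # callable name (illustrative, not from a real library)
    params: list       # parameter types it expects
    ret: str           # type it returns

APIS = [
    Api("listOf",     ["Int"],       "List<Int>"),
    Api("first",      ["List<Int>"], "Int"),
    Api("optionalOf", ["Int"],       "Optional<Int>"),
]

def synthesize(target_type: str, depth: int = 0) -> str:
    """Return an expression of `target_type` by combining APIs."""
    if target_type == "Int" and depth > 1:
        return "42"  # literal seed cuts off the recursion
    for api in APIS:
        if api.ret == target_type:
            args = ", ".join(synthesize(p, depth + 1) for p in api.params)
            return f"{api.name}({args})"
    return "42"  # fallback literal for types with no producer

# Emit a tiny client exercising a generic API, as a test-case string.
print(synthesize("Optional<Int>"))  # optionalOf(first(listOf(42)))
```

A real implementation would additionally instantiate type parameters, pick overloads, and wrap the expression in a compilable program, but the type-directed chaining above is the essential mechanism.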
Citations: 1
The Bounded Pathwidth of Control-Flow Graphs
Q2 (Computer Science, Software Engineering) · Pub Date: 2023-10-16 · DOI: 10.1145/3622807
Giovanna Kobus Conrado, Amir Kafshdar Goharshady, Chun Kit Lam
Pathwidth and treewidth are standard and well-studied graph sparsity parameters which intuitively model the degree to which a given graph resembles a path or a tree, respectively. It is well-known that the control-flow graphs of structured goto-free programs have a tree-like shape and bounded treewidth. This fact has been exploited to design considerably more efficient algorithms for a wide variety of static analysis and compiler optimization problems, such as register allocation, µ-calculus model-checking and parity games, data-flow analysis, cache management, and lifetime-optimal redundancy elimination. However, there is no bound in the literature for the pathwidth of programs, except the general inequality that the pathwidth of a graph is at most O(lg n) times its treewidth, where n is the number of vertices of the graph. In this work, we prove that control-flow graphs of structured programs have bounded pathwidth and provide a linear-time algorithm to obtain a path decomposition of small width. Specifically, we establish a bound of 2·d on the pathwidth of programs with nesting depth d. Since real-world programs have small nesting depth, they also have bounded pathwidth. This is significant for a number of reasons: (i) pathwidth is a strictly stronger parameter than treewidth, i.e., any graph family with bounded pathwidth has bounded treewidth, but the converse does not hold; (ii) any algorithm that is designed with treewidth in mind can be applied to bounded-pathwidth graphs with no change; (iii) there are problems that are fixed-parameter tractable with respect to pathwidth but not treewidth; (iv) verification algorithms that are designed based on treewidth would become significantly faster when using pathwidth as the parameter; and (v) it is easier to design algorithms based on bounded pathwidth, since one does not have to consider the often-challenging case of merge nodes in treewidth-based dynamic programming.
Thus, we invite the static analysis and compiler optimization communities to adopt pathwidth as their parameter of choice instead of, or in addition to, treewidth. Intuitively, control-flow graphs are not only tree-like, but also path-like, and one can obtain simpler and more scalable algorithms by relying on path-likeness instead of tree-likeness. As a motivating example, we provide a simpler and more efficient algorithm for spill-free register allocation using bounded pathwidth instead of treewidth. Our algorithm reduces the runtime from O(n · r^(2·tw·r + 2·r)) to O(n · pw · r^(pw·r + r + 1)), where n is the number of lines of code, r is the number of registers, pw is the pathwidth of the control-flow graph, and tw is its treewidth. We provide extensive experimental results showing that our approach is applicable to a wide variety of real-world embedded benchmarks from SDCC and obtains runtime improvements of 2-3 orders of magnitude. This is because the pathwidth is equal to the treewidth, or one more, in the overwhelming majority of real-world CFGs, so our algorithm yields an exponential improvement in runtime. Thus, the benefits of using pathwidth are not limited to theory and the simplicity of algorithm design, but are also evident in practice.
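The paper's 2·d bound ties pathwidth directly to a quantity that is trivial to compute syntactically. The sketch below estimates the nesting depth of a structured program by scanning its block delimiters (a stand-in for a real parser, used here purely for illustration) and reports the resulting pathwidth bound.

```python
# Sketch of the 2·d pathwidth bound: a structured program with nesting
# depth d has a CFG of pathwidth at most 2·d.
def nesting_depth(src: str) -> int:
    """Maximum brace-nesting depth of a C-like source string."""
    depth = best = 0
    for ch in src:
        if ch == '{':
            depth += 1
            best = max(best, depth)
        elif ch == '}':
            depth -= 1
    return best

def pathwidth_bound(src: str) -> int:
    return 2 * nesting_depth(src)

program = "while (c) { if (x) { y = 1; } else { y = 2; } }"
print(pathwidth_bound(program))  # nesting depth 2, so the bound is 4
```

Since real code rarely nests more than a handful of levels deep, this bound stays small in practice, which is exactly why pathwidth-parameterized algorithms remain tractable on real CFGs.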
Citations: 1
A Container-Usage-Pattern-Based Context Debloating Approach for Object-Sensitive Pointer Analysis
Q2 (Computer Science, Software Engineering) · Pub Date: 2023-10-16 · DOI: 10.1145/3622832
Dongjie He, Yujiang Gui, Wei Li, Yonggang Tao, Changwei Zou, Yulei Sui, Jingling Xue
In this paper, we introduce DebloaterX, a new approach for automatically identifying context-independent objects to debloat contexts in object-sensitive pointer analysis (kobj). Object sensitivity achieves high precision, but its context construction mechanism combines objects with their contexts indiscriminately. This leads to a combinatorial explosion of contexts in large programs, resulting in inefficiency. Previous research has proposed a context-debloating approach that inhibits a pre-selected set of context-independent objects from forming new contexts, improving the efficiency of kobj. However, this earlier context-debloating approach under-approximates the set of context-independent objects identified, limiting performance speedups. We introduce a novel context-debloating pre-analysis approach that identifies objects as context-dependent only when they are potentially precision-critical to kobj, based on three general container-usage patterns. Our research finds that objects containing no fields of "abstract" (i.e., open) types can be analyzed context-insensitively with negligible precision loss in real-world applications. We provide clear rules and efficient algorithms to recognize these patterns, selecting more context-independent objects for better debloating. We have implemented DebloaterX in the Qilin framework and will release it as an open-source tool. Our experimental results on 12 standard Java benchmarks and real-world programs show that DebloaterX selects 92.4% of objects to be context-independent on average, enabling kobj to run significantly faster (an average of 19.3x when k = 2 and 150.2x when k = 3) and scale to 8 more programs when k = 3, with only a negligible loss of precision (less than 0.2%).
Compared to state-of-the-art alternative pre-analyses for accelerating kobj, DebloaterX outperforms Zipper significantly in both precision and efficiency, and outperforms Conch (the earlier context-debloating approach) substantially in efficiency while achieving nearly the same precision.
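The "no abstract-typed fields" observation suggests a very simple pre-analysis shape: mark an allocation site context-dependent only if its class stores a field of an open type, and analyze everything else context-insensitively. The class/field tables and type names below are invented placeholders; DebloaterX's actual rules additionally match three container-usage patterns on how objects flow through containers.

```python
# Hypothetical sketch of a context-debloating pre-analysis: only
# allocation sites whose classes hold "abstract" (open) typed fields
# keep their contexts; the rest are debloated.
ABSTRACT_TYPES = {"Object", "Comparable", "Iterator"}

CLASS_FIELDS = {
    "HashMap": ["Object"],      # container storing open-typed values
    "Point":   ["int", "int"],  # concrete fields only
    "Counter": ["int"],
}

def context_dependent(alloc_class: str) -> bool:
    """True if this class can hold values of an open type."""
    return any(f in ABSTRACT_TYPES for f in CLASS_FIELDS.get(alloc_class, []))

allocs = ["HashMap", "Point", "Counter"]
debloated = [a for a in allocs if not context_dependent(a)]
print(debloated)  # ['Point', 'Counter'] are analyzed without contexts
```

The payoff is that the expensive k-object-sensitive machinery is reserved for the (few) allocation sites where context actually protects precision.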
Citations: 0
How Domain Experts Use an Embedded DSL
Q2 (Computer Science, Software Engineering) · Pub Date: 2023-10-16 · DOI: 10.1145/3622851
Lisa Rennels, Sarah E. Chasins
Programming tools are increasingly integral to research and analysis in myriad domains, including specialized areas with no formal relation to computer science. Embedded domain-specific languages (eDSLs) have the potential to serve these programmers while placing relatively light implementation burdens on language designers. However, barriers to eDSL use reduce their practical value and adoption. In this paper, we aim to deepen our understanding of how programmers use eDSLs and identify user needs to inform future eDSL designs. We performed a contextual inquiry (9 participants) with domain experts using Mimi, an eDSL for climate change economics modeling. A thematic analysis identified five key themes, including: the interaction between the eDSL and the host language has significant and sometimes unexpected impacts on eDSL user experience, and users preferentially engage with domain-specific communities and code templates rather than host language resources. The needs uncovered in our study offer design considerations for future eDSLs and suggest directions for future DSL usability research.
Citations: 0
Rapid: Region-Based Pointer Disambiguation
Q2 (Computer Science, Software Engineering) · Pub Date: 2023-10-16 · DOI: 10.1145/3622859
Khushboo Chitre, Piyus Kedia, Rahul Purandare
Interprocedural alias analyses often sacrifice precision for scalability. Thus, modern compilers such as GCC and LLVM implement more scalable but less precise intraprocedural alias analyses. This compromise makes the compilers miss out on potential optimization opportunities, affecting the performance of the application. Modern compilers implement loop-versioning with dynamic checks for pointer disambiguation to enable the missed optimizations. Polyhedral access range analysis and symbolic range analysis enable O(1) range checks for non-overlapping of memory accesses inside loops. However, these approaches work only for the loops in which the loop bounds are loop invariants. To address this limitation, researchers proposed a technique that requires O(log n) memory accesses for pointer disambiguation. Others improved the performance of dynamic checks to a single memory access by constraining the object size and alignment. However, the former approach incurs noticeable overhead due to its dynamic checks, whereas the latter has a noticeable allocator overhead. Thus, scalability remains a challenge. In this work, we present a tool, Rapid, that further reduces the overheads of the allocator and dynamic checks proposed in the existing approaches. The key idea is to identify objects that need disambiguation checks using a profiler and allocate them in different regions, which are disjoint memory areas. The disambiguation checks simply compare the regions corresponding to the objects. The regions are aligned such that the top 32 bits in the addresses of any two objects allocated in different regions are always different. As a consequence, the dynamic checks do not require any memory access to ensure that the objects belong to different regions, making them efficient. Rapid achieved a maximum performance benefit of around 52.94% for Polybench and 1.88% for CPU SPEC 2017 benchmarks.
The maximum CPU overhead of our allocator is 0.57% with a geometric mean of -0.2% for CPU SPEC 2017 benchmarks. Due to the low overhead of the allocator and dynamic checks, Rapid could improve the performance of 12 out of 16 CPU SPEC 2017 benchmarks. In contrast, a state-of-the-art approach used in the comparison could improve only five CPU SPEC 2017 benchmarks.
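Because regions are aligned so that the top 32 address bits differ across regions, the no-alias check collapses to a shift and a compare on the pointer values themselves. The sketch below models that check with made-up addresses; the shift width follows the abstract's "top 32 bits" alignment, and everything else is illustrative.

```python
# Sketch of Rapid's memory-access-free disambiguation check: two
# pointers can alias only if the top 32 bits of their addresses match,
# i.e. they live in the same region.
REGION_SHIFT = 32

def may_overlap(p: int, q: int) -> bool:
    """True if p and q might point into the same region."""
    return (p >> REGION_SHIFT) == (q >> REGION_SHIFT)

a = 0x0000_0001_0000_1000   # object placed in region 1 (made-up address)
b = 0x0000_0002_0000_1000   # object placed in region 2 (made-up address)
print(may_overlap(a, b))    # False: safe to run the optimized loop version
```

In a compiler, this comparison would be emitted as the runtime guard for the loop-versioning described above: if the check says the regions differ, the aggressively optimized loop body is taken.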
Citations: 0
Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs
Q2 (Computer Science, Software Engineering) · Pub Date: 2023-10-16 · DOI: 10.1145/3622856
Alex Renda, Yi Ding, Michael Carbin
Programmers and researchers are increasingly developing surrogates of programs, models of a subset of the observable behavior of a given program, to solve a variety of software development challenges. Programmers train surrogates from measurements of the behavior of a program on a dataset of input examples. A key challenge of surrogate construction is determining what training data to use to train a surrogate of a given program. We present a methodology for sampling datasets to train neural-network-based surrogates of programs. We first characterize the proportion of data to sample from each region of a program's input space (corresponding to different execution paths of the program) based on the complexity of learning a surrogate of the corresponding execution path. We next provide a program analysis to determine the complexity of different paths in a program. We evaluate these results on a range of real-world programs, demonstrating that complexity-guided sampling results in empirical improvements in accuracy.
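The sampling step can be sketched concretely: given a per-path complexity score from the program analysis, allocate a fixed training budget across paths in proportion to those scores. The path names and scores below are invented placeholders, not values from the paper.

```python
# Sketch of complexity-guided data sampling: split a training budget
# across execution paths in proportion to how hard each path's
# behavior is to learn.
def sample_counts(complexities: dict, budget: int) -> dict:
    """Map each path to its share of the training budget."""
    total = sum(complexities.values())
    return {path: round(budget * c / total)
            for path, c in complexities.items()}

# Three hypothetical paths; the hardest-to-learn one gets most samples.
print(sample_counts({"fast_path": 1.0,
                     "slow_path": 2.0,
                     "branchy_path": 5.0}, 800))
```

Uniform sampling would give each path the same share regardless of difficulty; weighting by learning complexity is what the paper's evaluation credits for the accuracy improvements.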
Citations: 0
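The core sampling step described in the abstract can be sketched as follows — a minimal illustration, not the paper's implementation, assuming the program analysis has already assigned a learning-complexity score to each execution path (the path names and scores below are invented):

```python
def complexity_guided_sample(paths, total_samples):
    """Allocate a sampling budget across execution paths in proportion to
    each path's estimated learning complexity (scores are hypothetical)."""
    total_complexity = sum(c for _, c in paths)
    return {name: round(total_samples * c / total_complexity)
            for name, c in paths}

# Two hypothetical execution paths: a near-linear branch that is easy for
# a neural surrogate to learn, and a nonlinear branch that needs more data.
budget = complexity_guided_sample(
    [("linear_branch", 1.0), ("nonlinear_branch", 3.0)], total_samples=1000)
print(budget)  # the harder path receives three times the samples
```

Uniform sampling would instead give each path 500 examples, over-investing in the branch that is already easy to learn.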
Perception Contracts for Safety of ML-Enabled Systems
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-10-16 DOI: 10.1145/3622875
Angello Astorga, Chiao Hsieh, P. Madhusudan, Sayan Mitra
We introduce a novel notion of perception contracts to reason about the safety of controllers that interact with an environment using neural perception. Perception contracts capture errors in ground-truth estimations that preserve invariants when systems act upon them. We develop a theory of perception contracts and design symbolic learning algorithms for synthesizing them from a finite set of images. We implement our algorithms and evaluate synthesized perception contracts for two realistic vision-based control systems, a lane tracking system for an electric vehicle and an agricultural robot that follows crop rows. Our evaluation shows that our approach is effective in synthesizing perception contracts and generalizes well when evaluated over test images obtained during runtime monitoring of the systems.
Citations: 3
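The idea of a contract bounding perception error so that a downstream controller preserves an invariant can be illustrated with a toy runtime check — a sketch only, with an invented lane-tracking setup and hypothetical constants, not the paper's symbolic synthesis:

```python
def satisfies_contract(true_dist, perceived_dist, tol=0.5):
    """A perception contract (illustrative): the perceived distance to the
    lane center may deviate from ground truth by at most `tol` meters."""
    return abs(true_dist - perceived_dist) <= tol

def step_preserves_invariant(true_dist, perceived_dist, gain=0.5, bound=2.0):
    """One proportional control step acting on the *perceived* state; the
    safety invariant is |distance| <= bound (all constants hypothetical)."""
    next_dist = true_dist - gain * perceived_dist
    return abs(next_dist) <= bound

# A perception error inside the contract keeps the controller safe ...
safe = satisfies_contract(1.0, 1.3) and step_preserves_invariant(1.0, 1.3)
# ... while an estimate outside the contract carries no such guarantee.
violating = not satisfies_contract(1.0, 2.0)
print(safe, violating)
```

The paper's contribution is synthesizing such contracts automatically from images; this sketch only shows the shape of the guarantee they provide.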
Complete First-Order Reasoning for Properties of Functional Programs
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-10-16 DOI: 10.1145/3622835
Adithya Murali, Lucas Peña, Ranjit Jhala, P. Madhusudan
Several practical tools for automatically verifying functional programs (e.g., Liquid Haskell and Leon for Scala programs) rely on a heuristic based on unrolling recursive function definitions followed by quantifier-free reasoning using SMT solvers. We uncover foundational theoretical properties of this heuristic, revealing that it can be generalized and formalized as a technique that is in fact complete for reasoning with combined first-order theories of algebraic datatypes and background theories, where the background theories support decidable quantifier-free reasoning. The theory developed in this paper explains the efficacy of these heuristics when they succeed, why they fail when they do, and the precise role that user help plays in making proofs succeed.
Citations: 0
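The unrolling heuristic the abstract refers to can be illustrated on the recursive definition of list length — a toy sketch of the idea, not how any of the named tools implement it:

```python
def unrolled_len(xs, depth):
    """Illustrates the unrolling heuristic on len([]) = 0; len(h:t) = 1 + len(t).
    The recursive definition is expanded at most `depth` times; past the
    bound the result is left unknown (None), the way a solver leaves an
    un-unrolled call uninterpreted during quantifier-free reasoning."""
    if not xs:
        return 0
    if depth == 0:
        return None  # unrolling bound exhausted: value is undetermined
    rest = unrolled_len(xs[1:], depth - 1)
    return None if rest is None else 1 + rest

# With a sufficient bound, quantifier-free reasoning settles the goal ...
assert unrolled_len([1, 2, 3], depth=4) == 3
# ... while an insufficient bound leaves the goal undetermined.
assert unrolled_len([1, 2, 3], depth=2) is None
print("ok")
```

The paper's result is that, for the theories it identifies, this bounded expansion is not merely a heuristic but a complete proof method.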
Type-Safe Dynamic Placement with First-Class Placed Values
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-10-16 DOI: 10.1145/3622873
George Zakhour, Pascal Weisenburger, Guido Salvaneschi
Several distributed programming language solutions have been proposed to reason about the placement of data, computations, and peer interaction. Such solutions include, among others, multitier programming, choreographic programming, and various approaches based on behavioral types. These methods statically ensure safety properties thanks to complete knowledge about the placement of data and computation at compile time. In distributed systems, however, dynamic placement of computation and data is crucial to enable performance optimizations, e.g., driven by data locality or by constraints such as security and compliance regarding data storage location. Unfortunately, in existing programming languages, dynamic placement conflicts with static reasoning about distributed programs: the flexibility required by dynamic placement hinders statically tracking the location of data and computation. In this paper we present Dyno, a programming language that enables static reasoning about dynamic placement. Dyno features a type system where values are explicitly placed, but in contrast to existing approaches, placed values are also first class, ensuring that they can be passed around and referred to from other locations. Building on top of this mechanism, we provide a novel interpretation of dynamic placement as unions of placement types. We formalize type soundness, placement correctness (as part of type soundness), and architecture conformance. In case studies and benchmarks, our evaluation shows that Dyno enables static reasoning about programs even in the presence of dynamic placement, ensuring type safety and placement correctness at negligible performance cost. We reimplement an Android app of ~7K LOC in Dyno, find a bug in the existing implementation, and show that the app's approach is representative of a common way to implement dynamic placement found in over 100 apps in a large open-source app store.
Citations: 0
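The notion of a first-class placed value can be approximated with a small runtime sketch — Dyno enforces these rules statically in its type system; the dynamic checks, peer names, and API below are all invented for illustration:

```python
class Placed:
    """A first-class placed value (illustrative): the value carries the peer
    it lives on, it can be passed around freely, but local access is checked
    and remote access must be made explicit."""

    def __init__(self, value, peer):
        self.value = value
        self.peer = peer

    def get(self, here):
        """Local access: only legal on the peer the value is placed on."""
        if here != self.peer:
            raise TypeError(
                f"value placed on {self.peer!r}, accessed from {here!r}")
        return self.value

    def fetch(self):
        """Explicit remote access, standing in for a network transfer."""
        return self.value

# Hypothetical two-peer setup: a sensor reading placed on an edge node.
reading = Placed(42, peer="edge")
assert reading.get("edge") == 42       # local access succeeds
try:
    reading.get("cloud")               # misplaced access is rejected
except TypeError:
    value_on_cloud = reading.fetch()   # remote access must be explicit
print(value_on_cloud)
```

What the sketch cannot show is Dyno's key point: such misplaced accesses are ruled out at compile time rather than caught at runtime.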
Towards Better Semantics Exploration for Browser Fuzzing
Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-10-16 DOI: 10.1145/3622819
Chijin Zhou, Quan Zhang, Lihua Guo, Mingzhe Wang, Yu Jiang, Qing Liao, Zhiyong Wu, Shanshan Li, Bin Gu
Web browsers exhibit rich semantics that enable a plethora of web-based functionalities. However, these intricate semantics present significant challenges for the implementation and testing of browsers. For example, fuzzing, a widely adopted testing technique, typically relies on handwritten context-free grammars (CFGs) to automatically generate inputs. However, these CFGs fall short of adequately modeling the complex semantics of browsers, resulting in generated inputs that cover only a portion of the semantics and are prone to semantic errors. In this paper, we present SaGe, an automated method that enhances browser fuzzing through the use of production-context-sensitive grammars (PCSGs) incorporating semantic information. Our approach begins by extracting a rudimentary CFG from W3C standards and iteratively enhancing it to create a PCSG. The resulting PCSG enables our fuzzer to generate inputs that explore a broader range of browser semantics with a higher proportion of semantically correct inputs. To evaluate the efficacy of SaGe, we conducted 24-hour fuzzing campaigns on mainstream browsers, including Chrome, Safari, and Firefox. Our approach demonstrated better performance than existing browser fuzzers, with a 6.03%-277.80% improvement in edge coverage, a 3.56%-161.71% boost in semantic correctness rate, and twice the number of bugs discovered. Moreover, we identified 62 bugs across the three browsers, with 40 confirmed and 10 assigned CVEs.
Citations: 0
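The CFG-versus-PCSG distinction can be shown with a toy generator — a deliberately simplified sketch (the grammar, identifiers, and declare/reference structure are invented, standing in for, e.g., defining a DOM element id before referencing it):

```python
import random

random.seed(0)
IDS = ["a", "b", "c", "d"]

def gen_context_free(n):
    """Plain CFG generation: the rule `ref -> id` may pick any identifier,
    so references to undeclared ids (semantic errors) are possible."""
    decls = [random.choice(IDS) for _ in range(n)]
    refs = [random.choice(IDS) for _ in range(n)]
    return decls, refs

def gen_context_sensitive(n):
    """A production-context-sensitive rule: `ref -> id` is restricted to
    identifiers declared earlier in the same test case, so every
    generated reference is semantically valid by construction."""
    decls = [random.choice(IDS) for _ in range(n)]
    refs = [random.choice(decls) for _ in range(n)]
    return decls, refs

decls, refs = gen_context_sensitive(5)
print(all(r in decls for r in refs))  # always well-formed by construction
```

Context-free output, by contrast, is only probabilistically well-formed, which is why semantically deep browser code paths are rarely reached.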