
Latest Publications: ACM Transactions on Software Engineering and Methodology (TOSEM)

Turnover of Companies in OpenStack: Prevalence and Rationale
Pub Date : 2022-07-12 DOI: 10.1145/3510849
Yuxia Zhang, Hui Liu, Xin Tan, Minghui Zhou, Zhi Jin, Jiaxin Zhu
To achieve commercial goals, companies have made substantial contributions to large open-source software (OSS) ecosystems such as OpenStack and have become the main contributors. However, they often withdraw their employees for a variety of reasons, which may affect the sustainability of OSS projects. While the turnover of individual contributors has been extensively investigated, there is a lack of knowledge about the nature of companies’ withdrawal. To this end, we conduct a mixed-methods empirical study on OpenStack to reveal how common company withdrawals were, to what degree withdrawn companies made contributions, and what the rationale behind withdrawals was. By analyzing the commit data of 18 versions of OpenStack, we find that the number of companies that have left is increasing and even surpasses the number of companies that have joined in later versions. Approximately 12% of the companies in each version have exited by the next version. Compared to the sustaining companies that joined in the same version, the withdrawn companies tend to have a weaker contribution intensity but contribute to a similar scope of repositories in OpenStack. Through conducting a developer survey, we find four aspects of reasons for companies’ withdrawal from OpenStack: company, community, developer, and project. The most common reasons lie in the company aspect, i.e., the company either achieved its goals or failed to do so. By fitting the survival analysis model, we find that commercial goals are associated with the probability of the company’s withdrawal, and that a company’s contribution intensity and scale are positively correlated with its retention. Maintaining good retention is important but challenging for OSS ecosystems, and our results may shed light on potential approaches to improve company retention and reduce the negative impact of company withdrawal.
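The survival-analysis step can be illustrated with a toy Kaplan-Meier estimator. This is a sketch only: the paper fits a survival model to company retention in OpenStack, but the estimator below is generic and the per-company version counts and censoring flags are invented for illustration.

```python
# Toy Kaplan-Meier estimator: estimates the probability that a company is
# still contributing after each release cycle. All data here is hypothetical.

def kaplan_meier(durations, observed):
    """Return [(t, S(t))] for each distinct event time t.

    durations: versions a company stayed before leaving (or before last seen).
    observed:  True if the company actually withdrew; False if still active
               at the end of observation (right-censored).
    """
    at_risk = len(durations)
    survival = 1.0
    curve = []
    for t in sorted(set(durations)):
        events = sum(1 for d, o in zip(durations, observed) if d == t and o)
        if events:
            survival *= 1.0 - events / at_risk
            curve.append((t, survival))
        at_risk -= sum(1 for d in durations if d == t)  # drop everyone leaving the risk set at t
    return curve

# Hypothetical retention data for six companies.
durations = [2, 3, 3, 5, 5, 6]
observed = [True, True, False, True, False, False]
print(kaplan_meier(durations, observed))
```

A real analysis would regress withdrawal hazard on covariates such as contribution intensity and scale (e.g., with a Cox model), which is what lets the paper associate those factors with retention.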
{"title":"Turnover of Companies in OpenStack: Prevalence and Rationale","authors":"Yuxia Zhang, Hui Liu, Xin Tan, Minghui Zhou, Zhi Jin, Jiaxin Zhu","doi":"10.1145/3510849","DOIUrl":"https://doi.org/10.1145/3510849","url":null,"abstract":"To achieve commercial goals, companies have made substantial contributions to large open-source software (OSS) ecosystems such as OpenStack and have become the main contributors. However, they often withdraw their employees for a variety of reasons, which may affect the sustainability of OSS projects. While the turnover of individual contributors has been extensively investigated, there is a lack of knowledge about the nature of companies’ withdrawal. To this end, we conduct a mixed-methods empirical study on OpenStack to reveal how common company withdrawals were, to what degree withdrawn companies made contributions, and what the rationale behind withdrawals was. By analyzing the commit data of 18 versions of OpenStack, we find that the number of companies that have left is increasing and even surpasses the number of companies that have joined in later versions. Approximately 12% of the companies in each version have exited by the next version. Compared to the sustaining companies that joined in the same version, the withdrawn companies tend to have a weaker contribution intensity but contribute to a similar scope of repositories in OpenStack. Through conducting a developer survey, we find four aspects of reasons for companies’ withdrawal from OpenStack: company, community, developer, and project. The most common reasons lie in the company aspect, i.e., the company either achieved its goals or failed to do so. By fitting the survival analysis model, we find that commercial goals are associated with the probability of the company’s withdrawal, and that a company’s contribution intensity and scale are positively correlated with its retention. 
Maintaining good retention is important but challenging for OSS ecosystems, and our results may shed light on potential approaches to improve company retention and reduce the negative impact of company withdrawal.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"14 1","pages":"1 - 24"},"PeriodicalIF":0.0,"publicationDate":"2022-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81720724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Super-optimization of Smart Contracts
Pub Date : 2022-07-12 DOI: 10.1145/3506800
E. Albert, Pablo Gordillo, Alejandro Hernández-Cerezo, A. Rubio, M. A. Schett
Smart contracts are programs deployed on a blockchain. They are executed for a monetary fee paid in gas—a clear optimization target for smart contract compilers. Because smart contracts are a young, fast-moving field without (manually) fine-tuned compilers, they highly benefit from automated and adaptable approaches, especially as smart contracts are effectively immutable, and as such need a high level of assurance. This makes them an ideal domain for applying formal methods. Super-optimization is a technique to find the best translation of a block of instructions by trying all possible sequences of instructions that produce the same result. We present a framework for super-optimizing smart contracts based on Max-SMT with two main ingredients: (1) a stack functional specification extracted from the basic blocks of a smart contract, which is simplified using rules capturing the semantics of arithmetic, bit-wise, and relational operations, and (2) the synthesis of optimized blocks, which finds—by means of an efficient SMT encoding—basic blocks with minimal gas cost whose stack functional specification is equal (modulo commutativity) to the extracted one. We implemented our framework in the tool syrup 2.0. Through large-scale experiments on real-world smart contracts, we analyze performance improvements for different SMT encodings, as well as tradeoffs between quality of optimizations and required optimization time.
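The brute-force core of super-optimization — enumerate every instruction sequence with the same effect and keep the cheapest — can be sketched on a toy stack machine. This is not syrup's Max-SMT encoding; the opcode set and gas costs below are invented for illustration.

```python
# Brute-force superoptimizer sketch for a toy stack machine.
# Each opcode maps a stack to a new stack (or None if it underflows)
# and carries a made-up gas cost.
from itertools import product

OPS = {
    "PUSH1": (lambda s: s + [1], 3),
    "ADD":   (lambda s: s[:-2] + [s[-2] + s[-1]] if len(s) >= 2 else None, 3),
    "MUL":   (lambda s: s[:-2] + [s[-2] * s[-1]] if len(s) >= 2 else None, 5),
    "DUP":   (lambda s: s + [s[-1]] if s else None, 3),
}

def run(code, stack):
    for op in code:
        stack = OPS[op][0](stack)
        if stack is None:
            return None
    return stack

def gas(code):
    return sum(OPS[op][1] for op in code)

def superoptimize(block, stack):
    """Cheapest op sequence (up to len(block)) with the same stack effect."""
    target = run(block, list(stack))
    best = block
    for n in range(len(block) + 1):
        for cand in product(OPS, repeat=n):
            if run(cand, list(stack)) == target and gas(cand) < gas(best):
                best = list(cand)
    return best

# x*2 computed as x*(1+1) is replaced by the cheaper DUP; ADD:
print(superoptimize(["PUSH1", "PUSH1", "ADD", "MUL"], [3]))  # ['DUP', 'ADD']
```

The SMT-based approach in the paper replaces this exponential enumeration with an encoding whose solutions are exactly the equivalent blocks, which is what makes it scale to real contracts.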
{"title":"Super-optimization of Smart Contracts","authors":"E. Albert, Pablo Gordillo, Alejandro Hernández-Cerezo, A. Rubio, M. A. Schett","doi":"10.1145/3506800","DOIUrl":"https://doi.org/10.1145/3506800","url":null,"abstract":"Smart contracts are programs deployed on a blockchain. They are executed for a monetary fee paid in gas—a clear optimization target for smart contract compilers. Because smart contracts are a young, fast-moving field without (manually) fine-tuned compilers, they highly benefit from automated and adaptable approaches, especially as smart contracts are effectively immutable, and as such need a high level of assurance. This makes them an ideal domain for applying formal methods. Super-optimization is a technique to find the best translation of a block of instructions by trying all possible sequences of instructions that produce the same result. We present a framework for super-optimizing smart contracts based on Max-SMT with two main ingredients: (1) a stack functional specification extracted from the basic blocks of a smart contract, which is simplified using rules capturing the semantics of arithmetic, bit-wise, and relational operations, and (2) the synthesis of optimized blocks, which finds—by means of an efficient SMT encoding—basic blocks with minimal gas cost whose stack functional specification is equal (modulo commutativity) to the extracted one. We implemented our framework in the tool syrup 2.0. 
Through large-scale experiments on real-world smart contracts, we analyze performance improvements for different SMT encodings, as well as tradeoffs between quality of optimizations and required optimization time.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"6 1","pages":"1 - 29"},"PeriodicalIF":0.0,"publicationDate":"2022-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84069976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Verification of Programs Sensitive to Heap Layout
Pub Date : 2022-06-27 DOI: 10.1145/3508363
Henrich Lauko, Lukás Korencik, Petr Ročkai
Most C and C++ programs use dynamically allocated memory (often known as a heap) to store and organize their data. In practice, it can be useful to compare addresses of different heap objects, for instance, to store them in a binary search tree or a sorted array. However, comparisons of pointers to distinct objects are inherently ambiguous: The address order of two objects can be reversed in different executions of the same program, due to the nature of the allocation algorithm and other external factors. This poses a significant challenge to program verification, since a sound verifier must consider all possible behaviors of a program, including an arbitrary reordering of the heap. A naive verification of all possibilities, of course, leads to a combinatorial explosion of the state space: For this reason, we propose an under-approximating abstract domain that can be soundly refined to consider all relevant heap orderings. We have implemented the proposed abstract domain and evaluated it against several existing software verification tools on a collection of pointer-manipulating programs. In many cases, existing tools only consider a single fixed heap order, which is a source of unsoundness. We demonstrate that using our abstract domain, this unsoundness can be repaired at only a very modest performance cost. Additionally, we show that, even though many verifiers ignore it, ambiguous behavior is present in a considerable fraction of programs from software verification competition (sv-comp).
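The ambiguity the paper targets can be reproduced in miniature: "run" the same address-order-dependent program under both possible heap layouts. This is a sketch with invented addresses, not the paper's abstract domain, but it shows why fixing a single allocation order is unsound.

```python
# A toy program whose outcome depends on the relative order of two heap
# addresses -- exactly the kind of behavior a sound verifier must explore.
from itertools import permutations

def runs_ok(addr_a, addr_b):
    # Insert two objects into an address-sorted structure, then assume
    # object A comes first: an address-order-dependent claim.
    order = sorted([("A", addr_a), ("B", addr_b)], key=lambda x: x[1])
    return order[0][0] == "A"

# A naive verifier that fixes one allocation order sees no failure:
assert runs_ok(0x1000, 0x2000)

# Exploring all orderings exposes the execution where the assumption breaks:
results = {runs_ok(a, b) for a, b in permutations([0x1000, 0x2000])}
print(results)  # both outcomes are reachable
```

The paper's contribution is doing this exploration soundly without the combinatorial explosion of enumerating every ordering explicitly.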
{"title":"Verification of Programs Sensitive to Heap Layout","authors":"Henrich Lauko, Lukás Korencik, Petr Ročkai","doi":"10.1145/3508363","DOIUrl":"https://doi.org/10.1145/3508363","url":null,"abstract":"Most C and C++ programs use dynamically allocated memory (often known as a heap) to store and organize their data. In practice, it can be useful to compare addresses of different heap objects, for instance, to store them in a binary search tree or a sorted array. However, comparisons of pointers to distinct objects are inherently ambiguous: The address order of two objects can be reversed in different executions of the same program, due to the nature of the allocation algorithm and other external factors. This poses a significant challenge to program verification, since a sound verifier must consider all possible behaviors of a program, including an arbitrary reordering of the heap. A naive verification of all possibilities, of course, leads to a combinatorial explosion of the state space: For this reason, we propose an under-approximating abstract domain that can be soundly refined to consider all relevant heap orderings. We have implemented the proposed abstract domain and evaluated it against several existing software verification tools on a collection of pointer-manipulating programs. In many cases, existing tools only consider a single fixed heap order, which is a source of unsoundness. We demonstrate that using our abstract domain, this unsoundness can be repaired at only a very modest performance cost. 
Additionally, we show that, even though many verifiers ignore it, ambiguous behavior is present in a considerable fraction of programs from software verification competition (sv-comp).","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"4 1","pages":"1 - 27"},"PeriodicalIF":0.0,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84138716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing and Improving an Evaluation Dataset for Detecting Semantic Code Clones via Deep Learning
Pub Date : 2022-06-25 DOI: 10.1145/3502852
Hao Yu, Xing Hu, Ge Li, Ying Li, Qianxiang Wang, Tao Xie
In recent years, applying deep learning to detect semantic code clones has received substantial attention from the research community. Accordingly, various evaluation benchmark datasets, with the most popular one as BigCloneBench, are constructed and selected as benchmarks to assess and compare different deep learning models for detecting semantic clones. However, there is no study to investigate whether an evaluation benchmark dataset such as BigCloneBench is properly used to evaluate models for detecting semantic code clones. In this article, we present an experimental study to show that BigCloneBench typically includes semantic clone pairs that use the same identifier names, which however are not used in non-semantic-clone pairs. Subsequently, we propose an undesirable-by-design Linear-Model that considers only which identifiers appear in a code fragment; this model can achieve high effectiveness for detecting semantic clones when evaluated on BigCloneBench, even comparable to state-of-the-art deep learning models recently proposed for detecting semantic clones. To alleviate these issues, we abstract a subset of the identifier names (including type, variable, and method names) in BigCloneBench to result in AbsBigCloneBench and use AbsBigCloneBench to better assess the effectiveness of deep learning models on the task of detecting semantic clones.
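The abstraction idea behind AbsBigCloneBench can be sketched with a tokenizer that rewrites identifiers to positional placeholders, so a detector can no longer cheat by matching shared names. The keyword list and regex below are simplifications; the benchmark itself abstracts Java type, variable, and method names.

```python
import re

# Rewrite every identifier to a positional placeholder (ID0, ID1, ...),
# keeping language keywords intact. Two fragments that differ only in
# naming then abstract to the same string.
KEYWORDS = {"int", "return", "for", "if", "else", "while", "void", "public", "static"}

def abstract_identifiers(code):
    table = {}  # fresh per fragment: placeholders are positional
    def repl(m):
        tok = m.group(0)
        if tok in KEYWORDS:
            return tok
        if tok not in table:
            table[tok] = f"ID{len(table)}"
        return table[tok]
    return re.sub(r"[A-Za-z_]\w*", repl, code)

frag1 = "int sum = 0; for (int i = 0; i < n; i++) sum += a[i]; return sum;"
frag2 = "int total = 0; for (int k = 0; k < n; k++) total += a[k]; return total;"
print(abstract_identifiers(frag1))
print(abstract_identifiers(frag1) == abstract_identifiers(frag2))  # True
```

After abstraction, a model that only memorized identifier co-occurrence loses its signal, which is the point of the repaired benchmark.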
{"title":"Assessing and Improving an Evaluation Dataset for Detecting Semantic Code Clones via Deep Learning","authors":"Hao Yu, Xing Hu, Ge Li, Ying Li, Qianxiang Wang, Tao Xie","doi":"10.1145/3502852","DOIUrl":"https://doi.org/10.1145/3502852","url":null,"abstract":"In recent years, applying deep learning to detect semantic code clones has received substantial attention from the research community. Accordingly, various evaluation benchmark datasets, with the most popular one as BigCloneBench, are constructed and selected as benchmarks to assess and compare different deep learning models for detecting semantic clones. However, there is no study to investigate whether an evaluation benchmark dataset such as BigCloneBench is properly used to evaluate models for detecting semantic code clones. In this article, we present an experimental study to show that BigCloneBench typically includes semantic clone pairs that use the same identifier names, which however are not used in non-semantic-clone pairs. Subsequently, we propose an undesirable-by-design Linear-Model that considers only which identifiers appear in a code fragment; this model can achieve high effectiveness for detecting semantic clones when evaluated on BigCloneBench, even comparable to state-of-the-art deep learning models recently proposed for detecting semantic clones. 
To alleviate these issues, we abstract a subset of the identifier names (including type, variable, and method names) in BigCloneBench to result in AbsBigCloneBench and use AbsBigCloneBench to better assess the effectiveness of deep learning models on the task of detecting semantic clones.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"14 1","pages":"1 - 25"},"PeriodicalIF":0.0,"publicationDate":"2022-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91063434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Guaranteeing Timed Opacity using Parametric Timed Model Checking
Pub Date : 2022-06-11 DOI: 10.1145/3502851
É. André, D. Lime, Dylan Marinho, Junbo Sun
Information leakage can have dramatic consequences on systems security. Among harmful information leaks, the timing information leakage occurs whenever an attacker successfully deduces confidential internal information. In this work, we consider that the attacker has access (only) to the system execution time. We address the following timed opacity problem: given a timed system, a private location and a final location, synthesize the execution times from the initial location to the final location for which one cannot deduce whether the system went through the private location. We also consider the full timed opacity problem, asking whether the system is opaque for all execution times. We show that these problems are decidable for timed automata (TAs) but become undecidable when one adds parameters, yielding parametric timed automata (PTAs). We identify a subclass with some decidability results. We then devise an algorithm for synthesizing PTAs parameter valuations guaranteeing that the resulting TA is opaque. We finally show that our method can also apply to program analysis.
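For intuition, the timed-opacity question can be phrased over a finite set of runs instead of timed automata (the durations below are invented): an execution time is opaque exactly when it is achievable both with and without a visit to the private location.

```python
# Each run is (total duration, visited-private-location?). An attacker who
# observes only the duration learns nothing precisely for durations that
# are achievable in both worlds.
runs = [
    (3, True),   # duration 3, goes through the private location
    (3, False),  # duration 3, avoids it -- so 3 reveals nothing
    (5, True),
    (7, False),
]

private_times = {d for d, p in runs if p}
public_times = {d for d, p in runs if not p}

opaque_times = private_times & public_times   # attacker cannot deduce
leaky_times = private_times ^ public_times    # duration reveals the secret

print(sorted(opaque_times), sorted(leaky_times))  # [3] [5, 7]
```

The system is fully opaque exactly when `leaky_times` is empty; the paper's contribution is computing (and, for PTAs, synthesizing parameters for) these sets over dense time rather than a finite run list.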
{"title":"Guaranteeing Timed Opacity using Parametric Timed Model Checking","authors":"É. André, D. Lime, Dylan Marinho, Junbo Sun","doi":"10.1145/3502851","DOIUrl":"https://doi.org/10.1145/3502851","url":null,"abstract":"Information leakage can have dramatic consequences on systems security. Among harmful information leaks, the timing information leakage occurs whenever an attacker successfully deduces confidential internal information. In this work, we consider that the attacker has access (only) to the system execution time. We address the following timed opacity problem: given a timed system, a private location and a final location, synthesize the execution times from the initial location to the final location for which one cannot deduce whether the system went through the private location. We also consider the full timed opacity problem, asking whether the system is opaque for all execution times. We show that these problems are decidable for timed automata (TAs) but become undecidable when one adds parameters, yielding parametric timed automata (PTAs). We identify a subclass with some decidability results. We then devise an algorithm for synthesizing PTAs parameter valuations guaranteeing that the resulting TA is opaque. We finally show that our method can also apply to program analysis.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"11 1","pages":"1 - 36"},"PeriodicalIF":0.0,"publicationDate":"2022-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81828412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Verification Witnesses
Pub Date : 2022-05-27 DOI: 10.1145/3477579
D. Beyer, Matthias Dangl, Daniel Dietsch, Matthias Heizmann, T. Lemberger
Over the last years, witness-based validation of verification results has become an established practice in software verification: An independent validator re-establishes verification results of a software verifier using verification witnesses, which are stored in a standardized exchange format. In addition to validation, such exchangable information about proofs and alarms found by a verifier can be shared across verification tools, and users can apply independent third-party tools to visualize and explore witnesses to help them comprehend the causes of bugs or the reasons why a given program is correct. To achieve the goal of making verification results more accessible to engineers, it is necessary to consider witnesses as first-class exchangeable objects, stored independently from the source code and checked independently from the verifier that produced them, respecting the important principle of separation of concerns. We present the conceptual principles of verification witnesses, give a description of how to use them, provide a technical specification of the exchange format for witnesses, and perform an extensive experimental study on the application of witness-based result validation, using the validators CPAchecker, UAutomizer, CPA-witness2test, and FShell-witness2test.
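The replay idea behind witness-based validation can be sketched in miniature. This is a toy only: real witnesses are exchanged as a standardized (GraphML-based) format and checked by validators such as CPAchecker or UAutomizer; the program and witness below are invented.

```python
# A "verifier" emits a witness (input assumptions that should drive the
# program into its error location); an independent "validator" replays the
# witness against the program to re-establish the violation.

def program(x, y):
    # toy program under analysis; reaching "ERROR" is the property violation
    if x > 0:
        if y == x + 1:
            return "ERROR"
    return "OK"

# Witness produced by some verifier for this program (hypothetical values):
witness = {"x": 2, "y": 3}

def validate(witness):
    """Independent re-check: does the witness actually reach the error?"""
    return program(**witness) == "ERROR"

print(validate(witness))  # True: the alarm is confirmed, not taken on trust
```

Separating the producer of the witness from its checker is the "separation of concerns" principle the abstract emphasizes: a wrong or stale witness simply fails validation.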
{"title":"Verification Witnesses","authors":"D. Beyer, Matthias Dangl, Daniel Dietsch, Matthias Heizmann, D. Beyer, Matthias Dangl, Daniel Dietsch, Matthias Heizmann, T. Lemberger","doi":"10.1145/3477579","DOIUrl":"https://doi.org/10.1145/3477579","url":null,"abstract":"Over the last years, witness-based validation of verification results has become an established practice in software verification: An independent validator re-establishes verification results of a software verifier using verification witnesses, which are stored in a standardized exchange format. In addition to validation, such exchangable information about proofs and alarms found by a verifier can be shared across verification tools, and users can apply independent third-party tools to visualize and explore witnesses to help them comprehend the causes of bugs or the reasons why a given program is correct. To achieve the goal of making verification results more accessible to engineers, it is necessary to consider witnesses as first-class exchangeable objects, stored independently from the source code and checked independently from the verifier that produced them, respecting the important principle of separation of concerns. 
We present the conceptual principles of verification witnesses, give a description of how to use them, provide a technical specification of the exchange format for witnesses, and perform an extensive experimental study on the application of witness-based result validation, using the validators CPAchecker, UAutomizer, CPA-witness2test, and FShell-witness2test.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"252 1","pages":"1 - 69"},"PeriodicalIF":0.0,"publicationDate":"2022-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77683661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Correlating Automated and Human Evaluation of Code Documentation Generation Quality
Pub Date : 2022-05-23 DOI: 10.1145/3502853
Xing Hu, Qiuyuan Chen, Haoye Wang, Xin Xia, D. Lo, Thomas Zimmermann
Automatic code documentation generation has been a crucial task in the field of software engineering. It not only relieves developers from writing code documentation but also helps them to understand programs better. Specifically, deep-learning-based techniques that leverage large-scale source code corpora have been widely used in code documentation generation. These works tend to use automatic metrics (such as BLEU, METEOR, ROUGE, CIDEr, and SPICE) to evaluate different models. These metrics compare generated documentation to reference texts by measuring the overlapping words. Unfortunately, there is no evidence demonstrating the correlation between these metrics and human judgment. We conduct experiments on two popular code documentation generation tasks, code comment generation and commit message generation, to investigate the presence or absence of correlations between these metrics and human judgments. For each task, we replicate three state-of-the-art approaches and the generated documentation is evaluated automatically in terms of BLEU, METEOR, ROUGE-L, CIDEr, and SPICE. We also ask 24 participants to rate the generated documentation considering three aspects (i.e., language, content, and effectiveness). Each participant is given Java methods or commit diffs along with the target documentation to be rated. The results show that the ranking of generated documentation from automatic metrics is different from that evaluated by human annotators. Thus, these automatic metrics are not reliable enough to replace human evaluation for code documentation generation tasks. In addition, METEOR shows the strongest correlation (with moderate Pearson correlation r about 0.7) to human evaluation metrics. However, it is still much lower than the correlation observed between different annotators (with a high Pearson correlation r about 0.8) and correlations that are reported in the literature for other tasks (e.g., Neural Machine Translation [39]). Our study points to the need to develop specialized automated evaluation metrics that can correlate more closely to human evaluation metrics for code generation tasks.
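The paper's core measurement — correlating an automatic metric's scores with human ratings over the same generated documentation — reduces to a Pearson coefficient over paired vectors. A self-contained sketch (the metric scores and ratings below are invented):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

metric_scores = [0.21, 0.35, 0.48, 0.52, 0.70]  # hypothetical METEOR scores
human_ratings = [2, 3, 3, 4, 5]                  # hypothetical 1-5 ratings

print(round(pearson(metric_scores, human_ratings), 3))
```

In the study, an r around 0.7 between METEOR and human ratings — well below the roughly 0.8 inter-annotator correlation — is what grounds the conclusion that these metrics cannot yet replace human evaluation.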
{"title":"Correlating Automated and Human Evaluation of Code Documentation Generation Quality","authors":"Xing Hu, Qiuyuan Chen, Haoye Wang, Xin Xia, D. Lo, Thomas Zimmermann","doi":"10.1145/3502853","DOIUrl":"https://doi.org/10.1145/3502853","url":null,"abstract":"Automatic code documentation generation has been a crucial task in the field of software engineering. It not only relieves developers from writing code documentation but also helps them to understand programs better. Specifically, deep-learning-based techniques that leverage large-scale source code corpora have been widely used in code documentation generation. These works tend to use automatic metrics (such as BLEU, METEOR, ROUGE, CIDEr, and SPICE) to evaluate different models. These metrics compare generated documentation to reference texts by measuring the overlapping words. Unfortunately, there is no evidence demonstrating the correlation between these metrics and human judgment. We conduct experiments on two popular code documentation generation tasks, code comment generation and commit message generation, to investigate the presence or absence of correlations between these metrics and human judgments. For each task, we replicate three state-of-the-art approaches and the generated documentation is evaluated automatically in terms of BLEU, METEOR, ROUGE-L, CIDEr, and SPICE. We also ask 24 participants to rate the generated documentation considering three aspects (i.e., language, content, and effectiveness). Each participant is given Java methods or commit diffs along with the target documentation to be rated. The results show that the ranking of generated documentation from automatic metrics is different from that evaluated by human annotators. Thus, these automatic metrics are not reliable enough to replace human evaluation for code documentation generation tasks. In addition, METEOR shows the strongest correlation (with moderate Pearson correlation r about 0.7) to human evaluation metrics. 
However, it is still much lower than the correlation observed between different annotators (with a high Pearson correlation r about 0.8) and correlations that are reported in the literature for other tasks (e.g., Neural Machine Translation [39]). Our study points to the need to develop specialized automated evaluation metrics that can correlate more closely to human evaluation metrics for code generation tasks.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"119 1","pages":"1 - 28"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91195337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Context-Aware Code Change Embedding for Better Patch Correctness Assessment
Pub Date : 2022-05-18 DOI: 10.1145/3505247
Bo Lin, Shangwen Wang, Ming Wen, Xiaoguang Mao
Despite their capability to successfully fix more and more real-world bugs, existing Automated Program Repair (APR) techniques are still challenged by the long-standing overfitting problem (i.e., a generated patch that passes all tests is actually incorrect). Plenty of approaches have been proposed for automated patch correctness assessment (APCA). Nonetheless, dynamic ones (i.e., those that need to execute tests) are time-consuming, while static ones (i.e., those built on top of static code features) are less precise. Therefore, embedding techniques have been proposed recently, which assess patch correctness by embedding token sequences extracted from the changed code of a generated patch. However, existing techniques rarely consider the context information and program structures of a generated patch, which are crucial for patch correctness assessment as revealed by existing studies. In this study, we explore the idea of context-aware code change embedding that considers program structures for patch correctness assessment. Specifically, given a patch, we not only focus on the changed code but also take the correlated unchanged part into consideration, through which the context information can be extracted and leveraged. We then utilize the AST path technique for representation, where the structure information from AST nodes can be captured. Finally, based on several pre-defined heuristics, we build a deep-learning-based classifier to predict the correctness of the patch. We implemented this idea as Cache and performed extensive experiments to assess its effectiveness. Our results demonstrate that Cache can (1) perform better than previous representation-learning-based techniques (e.g., Cache relatively outperforms existing techniques by approximately 6%, 3%, and 16%, respectively, under three diverse experiment settings), and (2) achieve overall higher performance than existing APCA techniques while even being more precise than certain dynamic ones, including PATCH-SIM (92.9% vs. 83.0%). Further results reveal that the context information and program structures leveraged by Cache contributed significantly to its outstanding performance.
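The core idea of Cache — embedding the changed code together with its correlated unchanged context — can be illustrated with a toy extractor (a simplified sketch over raw lines; the actual approach works over AST paths):

```python
def change_with_context(lines, changed_idx, k=2):
    """Return changed lines plus up to k unchanged lines on each side,
    mirroring the idea of feeding correlated unchanged code to the embedder."""
    keep = set()
    for i in changed_idx:
        keep.update(range(max(0, i - k), min(len(lines), i + k + 1)))
    return [lines[j] for j in sorted(keep)]

# Hypothetical patched file: line 2 was changed by a generated patch.
patched = ["def f(x):", "    y = x + 1", "    return y * 2", "", "def g():", "    pass"]
snippet = change_with_context(patched, changed_idx={2}, k=1)
```

The extracted snippet keeps the changed `return` line together with its surrounding context, rather than embedding the changed token sequence in isolation.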
{"title":"Context-Aware Code Change Embedding for Better Patch Correctness Assessment","authors":"Bo Lin, Shangwen Wang, Ming Wen, Xiaoguang Mao","doi":"10.1145/3505247","DOIUrl":"https://doi.org/10.1145/3505247","url":null,"abstract":"Despite the capability in successfully fixing more and more real-world bugs, existing Automated Program Repair (APR) techniques are still challenged by the long-standing overfitting problem (i.e., a generated patch that passes all tests is actually incorrect). Plenty of approaches have been proposed for automated patch correctness assessment (APCA). Nonetheless, dynamic ones (i.e., those that needed to execute tests) are time-consuming while static ones (i.e., those built on top of static code features) are less precise. Therefore, embedding techniques have been proposed recently, which assess patch correctness via embedding token sequences extracted from the changed code of a generated patch. However, existing techniques rarely considered the context information and program structures of a generated patch, which are crucial for patch correctness assessment as revealed by existing studies. In this study, we explore the idea of context-aware code change embedding considering program structures for patch correctness assessment. Specifically, given a patch, we not only focus on the changed code but also take the correlated unchanged part into consideration, through which the context information can be extracted and leveraged. We then utilize the AST path technique for representation where the structure information from AST node can be captured. Finally, based on several pre-defined heuristics, we build a deep learning based classifier to predict the correctness of the patch. We implemented this idea as Cache and performed extensive experiments to assess its effectiveness. 
Our results demonstrate that Cache can (1) perform better than previous representation learning based techniques (e.g., Cache relatively outperforms existing techniques by ( approx ) 6%, ( approx ) 3%, and ( approx ) 16%, respectively under three diverse experiment settings), and (2) achieve overall higher performance than existing APCA techniques while even being more precise than certain dynamic ones including PATCH-SIM (92.9% vs. 83.0%). Further results reveal that the context information and program structures leveraged by Cache contributed significantly to its outstanding performance.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"1 1","pages":"1 - 29"},"PeriodicalIF":0.0,"publicationDate":"2022-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90438363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
An Empirical Study on Data Distribution-Aware Test Selection for Deep Learning Enhancement
Pub Date : 2022-04-19 DOI: 10.1145/3511598
Qiang Hu, Yuejun Guo, Maxime Cordy, Xiaofei Xie, L. Ma, Mike Papadakis
Similar to traditional software that is constantly under evolution, deep neural networks need to evolve upon the rapid growth of test data for continuous enhancement (e.g., adapting to distribution shift in a new environment for deployment). However, it is labor-intensive to manually label all of the collected test data. Test selection solves this problem by strategically choosing a small set to label. Via retraining with the selected set, deep neural networks will achieve competitive accuracy. Unfortunately, existing selection metrics involve three main limitations: (1) using different retraining processes, (2) ignoring data distribution shifts, and (3) being insufficiently evaluated. To fill this gap, we first conduct a systematic empirical study to reveal the impact of the retraining process and data distribution on model enhancement. Then, based on our findings, we propose DAT, a novel distribution-aware test selection metric. Experimental results reveal that retraining using both the training and selected data outperforms using only the selected data. None of the selection metrics perform best under various data distributions. By contrast, DAT effectively alleviates the impact of distribution shifts and outperforms the compared metrics by up to five times and 30.09% accuracy improvement for model enhancement on simulated and in-the-wild distribution shift scenarios, respectively.
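A simplified sketch in the spirit of distribution-aware selection (not the paper's actual DAT metric): given a model's predicted labels on the unlabeled pool, pick a labeling budget whose label mix mirrors the pool's distribution.

```python
from collections import Counter

def distribution_aware_select(pred_labels, budget):
    """Pick `budget` indices whose predicted-label mix mirrors the whole
    pool -- a toy stand-in for distribution-aware test selection."""
    total = len(pred_labels)
    pool = Counter(pred_labels)
    # Quota per label, proportional to that label's share of the pool.
    quota = {lbl: max(1, round(budget * cnt / total)) for lbl, cnt in pool.items()}
    picked, used = [], Counter()
    for i, lbl in enumerate(pred_labels):
        if used[lbl] < quota[lbl] and len(picked) < budget:
            picked.append(i)
            used[lbl] += 1
    return picked

# Hypothetical pool: 8 inputs predicted class 0, 2 predicted class 1.
chosen = distribution_aware_select([0] * 8 + [1] * 2, budget=5)
```

A purely uncertainty-based selector could spend the whole budget on one region of the input space; matching the pool's distribution hedges against such skew under shift.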
{"title":"An Empirical Study on Data Distribution-Aware Test Selection for Deep Learning Enhancement","authors":"Huangping Qiang, Yuejun Guo, Maxime Cordy, Xiaofei Xie, Qiang Hu, Yuejun Guo, Maxime Cordy, Xiaofei Xie, L. Ma, Mike Papadakis","doi":"10.1145/3511598","DOIUrl":"https://doi.org/10.1145/3511598","url":null,"abstract":"Similar to traditional software that is constantly under evolution, deep neural networks need to evolve upon the rapid growth of test data for continuous enhancement (e.g., adapting to distribution shift in a new environment for deployment). However, it is labor intensive to manually label all of the collected test data. Test selection solves this problem by strategically choosing a small set to label. Via retraining with the selected set, deep neural networks will achieve competitive accuracy. Unfortunately, existing selection metrics involve three main limitations: (1) using different retraining processes, (2) ignoring data distribution shifts, and (3) being insufficiently evaluated. To fill this gap, we first conduct a systemically empirical study to reveal the impact of the retraining process and data distribution on model enhancement. Then based on our findings, we propose DAT, a novel distribution-aware test selection metric. Experimental results reveal that retraining using both the training and selected data outperforms using only the selected data. None of the selection metrics perform the best under various data distributions. 
By contrast, DAT effectively alleviates the impact of distribution shifts and outperforms the compared metrics by up to five times and 30.09% accuracy improvement for model enhancement on simulated and in-the-wild distribution shift scenarios, respectively.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"37 23","pages":"1 - 30"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91406493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Just-In-Time Defect Prediction on JavaScript Projects: A Replication Study
Pub Date : 2022-04-19 DOI: 10.1145/3508479
Chao Ni, Xin Xia, D. Lo, Xiaohu Yang, A. Hassan
Change-level defect prediction is widely referred to as just-in-time (JIT) defect prediction since it identifies a defect-inducing change at check-in time, and researchers have proposed many approaches based on language-independent change-level features. These approaches can be divided into two types, supervised and unsupervised, and their effectiveness has been verified on Java or C++ projects. However, whether language-independent change-level features can effectively identify the defects of JavaScript projects is still unknown. Additionally, many studies have confirmed that supervised approaches outperform unsupervised approaches on Java or C++ projects when considering inspection effort. However, whether supervised JIT defect prediction approaches still perform best on JavaScript projects is unknown. Lastly, previously proposed change-level features are programming language–independent; whether programming language–specific change-level features can further improve the performance of JIT approaches in identifying defect-prone changes is also unknown. To address the aforementioned gap in knowledge, in this article, we collect and label the top-20 most-starred JavaScript projects on GitHub. JavaScript is an extremely popular and widely used programming language in the industry. We propose five JavaScript-specific change-level features and conduct a large-scale empirical study (involving a total of 176,902 changes) and find that (1) supervised JIT defect prediction approaches (i.e., CBS+) still statistically significantly outperform unsupervised approaches on JavaScript projects when considering inspection effort; (2) JavaScript-specific change-level features can further improve the performance of approaches built with language-independent features in identifying defect-prone changes; (3) the change-level features in the dimensions of size (i.e., LT), diffusion (i.e., NF), and JavaScript-specific (i.e., SO and TC) are the most important features for indicating the defect-proneness of a change on JavaScript projects; and (4) project-related features (i.e., Stars, Branches, Def Ratio, Changes, Files, Defective, and Forks) have a high association with the probability of a change being defect-prone on JavaScript projects.
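Effort-aware evaluation ranks changes so that the most defect-dense ones are inspected first. A minimal sketch of such a ranking (hypothetical fields `prob` and `loc`; the CBS+ approach itself is more involved):

```python
def effort_aware_rank(changes):
    """Order candidate changes by predicted defect probability per changed
    line, so the densest-risk changes are inspected first (effort-aware)."""
    return sorted(changes, key=lambda c: c["prob"] / max(c["loc"], 1), reverse=True)

# Hypothetical model outputs: probability of being defect-inducing, plus size.
changes = [
    {"id": "c1", "prob": 0.9, "loc": 300},  # risky but expensive to inspect
    {"id": "c2", "prob": 0.6, "loc": 10},   # highest risk density
    {"id": "c3", "prob": 0.2, "loc": 5},
]
ranked = [c["id"] for c in effort_aware_rank(changes)]
```

Ranking by raw probability would put `c1` first despite its 300-line inspection cost; dividing by size is what makes the comparison effort-aware.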
{"title":"Just-In-Time Defect Prediction on JavaScript Projects: A Replication Study","authors":"Chao Ni, Xin Xia, D. Lo, Xiaohu Yang, A. Hassan","doi":"10.1145/3508479","DOIUrl":"https://doi.org/10.1145/3508479","url":null,"abstract":"Change-level defect prediction is widely referred to as just-in-time (JIT) defect prediction since it identifies a defect-inducing change at the check-in time, and researchers have proposed many approaches based on the language-independent change-level features. These approaches can be divided into two types: supervised approaches and unsupervised approaches, and their effectiveness has been verified on Java or C++ projects. However, whether the language-independent change-level features can effectively identify the defects of JavaScript projects is still unknown. Additionally, many researches have confirmed that supervised approaches outperform unsupervised approaches on Java or C++ projects when considering inspection effort. However, whether supervised JIT defect prediction approaches can still perform best on JavaScript projects is still unknown. Lastly, prior proposed change-level features are programming language–independent, whether programming language–specific change-level features can further improve the performance of JIT approaches on identifying defect-prone changes is also unknown. To address the aforementioned gap in knowledge, in this article, we collect and label the top-20 most starred JavaScript projects on GitHub. JavaScript is an extremely popular and widely used programming language in the industry. 
We propose five JavaScript-specific change-level features and conduct a large-scale empirical study (i.e., involving a total of 176,902 changes) and find that (1) supervised JIT defect prediction approaches (i.e., CBS+) still statistically significantly outperform unsupervised approaches on JavaScript projects when considering inspection effort; (2) JavaScript-specific change-level features can further improve the performance of approach built with language-independent features on identifying defect-prone changes; (3) the change-level features in the dimension of size (i.e., LT), diffusion (i.e., NF), and JavaScript-specific (i.e., SO and TC) are the most important features for indicating the defect-proneness of a change on JavaScript projects; and (4) project-related features (i.e., Stars, Branches, Def Ratio, Changes, Files, Defective, and Forks) have a high association with the probability of a change to be a defect-prone one on JavaScript projects.","PeriodicalId":7398,"journal":{"name":"ACM Transactions on Software Engineering and Methodology (TOSEM)","volume":"83 1","pages":"1 - 38"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82294145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8