
Latest publications from the 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)

The Product Backlog
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00036
Todd Sedano, P. Ralph, Cécile Péraire
Context: One of the most common artifacts in contemporary software projects is a product backlog comprising user stories, bugs, chores, or other work items. However, little research has investigated how the backlog is generated or the precise role it plays in a project. Objective: The purpose of this paper is to determine what a product backlog is, what role it plays, and how it emerges. Method: Following Constructivist Grounded Theory, we conducted a two-year, five-month participant-observation study of eight software development projects at Pivotal, a large, international software company. We interviewed 56 software engineers, product designers, and product managers. We conducted a survey of 27 product designers. We alternated between analysis and theoretical sampling until achieving theoretical saturation. Results: We observed 13 practices and 6 obstacles related to product backlog generation. Limitations: Grounded Theory does not support statistical generalization. While the proposed theory of product backlogs appears widely applicable, organizations with different software development cultures may use different practices. Conclusion: The product backlog is simultaneously a model of work to be done and a boundary object that helps bridge the gap between the processes of generating user stories and realizing them in working code. It emerges from sensemaking (the team making sense of the project context) and coevolution (a cognitive process in which the team simultaneously refines its understanding of the problematic context and fledgling solution concepts).
Citations: 6
Recovering Variable Names for Minified Code with Usage Contexts
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00119
H. Tran, Ngoc M. Tran, S. Nguyen, H. Nguyen, T. Nguyen
To avoid exposing original source code in a Web application, the variable names in JS code deployed in the wild are often replaced by short, meaningless names, making the code extremely difficult to understand and analyze manually. This paper presents JSNeat, an information retrieval (IR)-based approach to recover the variable names in minified JS code. JSNeat follows a data-driven approach to recover names by searching for them in a large corpus of open-source JS code. We use three types of contexts to match a variable in given minified code against the corpus: the context of the properties and roles of the variable, the context of that variable and its relations with other variables under recovery, and the context of the task of the function to which the variable contributes. We performed several empirical experiments to evaluate JSNeat on a dataset of more than 322K JS files with 1M functions and 3.5M variables with 176K unique variable names. We found that JSNeat achieves a high accuracy of 69.1%, a relative improvement of 66.1% and 43% over the two state-of-the-art approaches JSNice and JSNaughty, respectively. JSNeat recovers names for a file or a variable twice as fast as JSNice and four times as fast as JSNaughty.
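The core idea of matching a minified variable's usage context against a corpus can be sketched as a simple IR-style ranking. This is a minimal illustration only: the corpus entries, feature names, and similarity metric below are invented for demonstration and are far simpler than JSNeat's actual context model.

```python
# Illustrative sketch of IR-based name recovery via context matching.
# Corpus contents and feature labels are hypothetical, not from JSNeat.

def jaccard(a, b):
    """Similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical corpus: known variable name -> usage-context features
# (accessed properties, roles, enclosing-function task tokens).
CORPUS = {
    "request":  {"prop:url", "prop:method", "task:fetch", "rel:response"},
    "response": {"prop:status", "prop:body", "task:fetch", "rel:request"},
    "index":    {"role:loop_counter", "task:iterate"},
}

def recover_name(minified_context, corpus=CORPUS, top_k=3):
    """Rank corpus names by context similarity to a minified variable."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: jaccard(minified_context, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A minified variable `t` whose context looks like HTTP-request usage.
candidates = recover_name({"prop:url", "prop:method", "task:fetch"})
```

Here `candidates[0]` is the corpus name whose recorded context overlaps most with the minified variable's context; the real system ranks over a corpus of hundreds of thousands of files rather than three toy entries.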
Citations: 14
Message from the Artifact Evaluation Chairs of ICSE 2019
Pub Date : 2019-05-01 DOI: 10.1109/icse.2019.00009
P. Grünbacher, Baishakhi Ray
Authors of papers accepted to the technical track were invited to submit their artifacts for evaluation. We received 51 submissions. Each artifact was reviewed by at least two members of our evaluation committee and further assessed in the online discussion phase. Artifacts satisfying the criteria were awarded one or more badges, which are shown on the front page of their paper and on the conference website.
Citations: 0
Analyzing and Supporting Adaptation of Online Code Examples
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00046
Tianyi Zhang, Di Yang, C. Lopes, Miryung Kim
Developers often resort to online Q&A forums such as Stack Overflow (SO) for filling their programming needs. Although code examples on those forums are good starting points, they are often incomplete and inadequate for developers' local program contexts; adaptation of those examples is necessary to integrate them to production code. As a consequence, the process of adapting online code examples is done over and over again, by multiple developers independently. Our work extensively studies these adaptations and variations, serving as the basis for a tool that helps integrate these online code examples in a target context in an interactive manner. We perform a large-scale empirical study about the nature and extent of adaptations and variations of SO snippets. We construct a comprehensive dataset linking SO posts to GitHub counterparts based on clone detection, time stamp analysis, and explicit URL references. We then qualitatively inspect 400 SO examples and their GitHub counterparts and develop a taxonomy of 24 adaptation types. Using this taxonomy, we build an automated adaptation analysis technique on top of GumTree to classify the entire dataset into these types. We build a Chrome extension called ExampleStack that automatically lifts an adaptation-aware template from each SO example and its GitHub counterparts to identify hot spots where most changes happen. A user study with sixteen programmers shows that seeing the commonalities and variations in similar GitHub counterparts increases their confidence about the given SO example, and helps them grasp a more comprehensive view about how to reuse the example differently and avoid common pitfalls.
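The "hot spot" idea — the lines of an SO example that most GitHub adaptations change — can be sketched with ordinary line diffing. This is a toy stand-in, not ExampleStack's implementation (which works on GumTree ASTs): the snippets are invented, and `difflib` replaces clone detection and template lifting.

```python
import difflib
from collections import Counter

# Hypothetical snippets: a Stack Overflow example and adapted GitHub copies.
SO_EXAMPLE = [
    "conn = open_connection('localhost')",
    "data = conn.read()",
    "print(data)",
]
GITHUB_VARIANTS = [
    ["conn = open_connection(host)", "data = conn.read()", "log.info(data)"],
    ["conn = open_connection(cfg.host)", "data = conn.read()", "return data"],
]

def hot_spots(example, variants):
    """Count, for each line of the example, how many variants changed it."""
    changed = Counter()
    for variant in variants:
        matcher = difflib.SequenceMatcher(None, example, variant)
        for op, i1, i2, _, _ in matcher.get_opcodes():
            if op != "equal":
                for i in range(i1, i2):
                    changed[example[i]] += 1
    return changed

spots = hot_spots(SO_EXAMPLE, GITHUB_VARIANTS)
# Lines changed by every variant are the likeliest adaptation points.
```

In this sketch the hard-coded host and the output call are flagged by both variants, while the unchanged read call is not — mirroring the paper's observation that adaptations cluster on a few hot spots.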
Citations: 24
Statistical Algorithmic Profiling for Randomized Approximate Programs
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00071
Keyur Joshi, V. Fernando, Sasa Misailovic
Many modern applications require low-latency processing of large data sets, often by using approximate algorithms that trade accuracy of the results for faster execution or reduced memory consumption. Although the algorithms provide probabilistic accuracy and performance guarantees, a software developer who implements these algorithms has little support from existing tools. Standard profilers do not consider accuracy of the computation and do not check whether the outputs of these programs satisfy their accuracy specifications. We present AXPROF, an algorithmic profiling framework for analyzing randomized approximate programs. The developer provides the accuracy specification as a formula in a mathematical notation, using probability or expected value predicates. AXPROF automatically generates statistical reasoning code. It first constructs the empirical models of accuracy, time, and memory consumption. It then selects and runs appropriate statistical tests that can, with high confidence, determine if the implementation satisfies the specification. We used AXPROF to profile 15 approximate applications from three domains - data analytics, numerical linear algebra, and approximate computing. AXPROF was effective in finding bugs and identifying various performance optimizations. In particular, we discovered five previously unknown bugs in the implementations of the algorithms and created fixes, guided by AXPROF.
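The workflow of checking a probabilistic accuracy specification against repeated runs can be sketched as follows. This is a loose illustration under invented assumptions: the approximate program is a trivial sampling-based mean, the specification bounds are made up, and the normal-approximation margin stands in for AXPROF's generated statistical tests.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

def approx_mean(data, sample_size=100):
    """Approximate the mean by sampling (trades accuracy for speed)."""
    return statistics.fmean(random.choices(data, k=sample_size))

# Invented accuracy specification, in the spirit of the paper's formulas:
#   P(|approx_mean(data) - true_mean| <= 0.1) >= 0.9
data = [random.random() for _ in range(10_000)]
true_mean = statistics.fmean(data)

trials = 500
hits = sum(abs(approx_mean(data) - true_mean) <= 0.1 for _ in range(trials))
observed = hits / trials

# One-sided check with a simple normal-approximation margin: accept the
# spec unless the observed success rate is significantly below 0.9.
p0 = 0.9
margin = 2.0 * (p0 * (1 - p0) / trials) ** 0.5  # ~95% one-sided bound
spec_holds = observed >= p0 - margin
```

The real framework builds empirical models of accuracy, time, and memory and selects an appropriate test automatically; the fixed threshold-and-margin check here only conveys the shape of the reasoning.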
Citations: 14
NL2Type: Inferring JavaScript Function Types from Natural Language Information
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00045
Rabee Sohail Malik, Jibesh Patra, Michael Pradel
JavaScript is dynamically typed and hence lacks the type safety of statically typed languages, leading to suboptimal IDE support, difficult-to-understand APIs, and unexpected runtime behavior. Several gradual type systems have been proposed, e.g., Flow and TypeScript, but they rely on developers to annotate code with types. This paper presents NL2Type, a learning-based approach for predicting likely type signatures of JavaScript functions. The key idea is to exploit natural language information in source code, such as comments, function names, and parameter names, a rich source of knowledge that is typically ignored by type inference algorithms. We formulate the problem of predicting types as a classification problem and train a recurrent, LSTM-based neural model that, after learning from an annotated code base, predicts function types for unannotated code. We evaluate the approach with a corpus of 162,673 JavaScript files from real-world projects. NL2Type predicts types with a precision of 84.1% and a recall of 78.9% when considering only the top-most suggestion, and with a precision of 95.5% and a recall of 89.6% when considering the top-5 suggestions. The approach outperforms both JSNice, a state-of-the-art approach that analyzes implementations of functions instead of natural language information, and DeepTyper, a recent type prediction approach that is also based on deep learning. Beyond predicting types, NL2Type serves as a consistency checker for existing type annotations. We show that it discovers 39 inconsistencies that deserve developer attention (from a manual analysis of 50 warnings), most of which are due to incorrect type annotations.
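The framing of "natural language tokens in, type label out" can be illustrated with a deliberately tiny classifier. Note the simplification: the paper trains an LSTM over learned embeddings, whereas this sketch just counts token/type co-occurrences, and the training pairs are invented.

```python
from collections import Counter, defaultdict

# Invented training set: tokens from comments/names -> annotated type.
TRAINING = [
    (["count", "number", "of", "items"], "number"),
    (["sum", "total", "length"], "number"),
    (["name", "label", "text"], "string"),
    (["format", "message", "string"], "string"),
    (["is", "enabled", "flag"], "boolean"),
    (["has", "check", "valid"], "boolean"),
]

# Count token->type co-occurrences (NL2Type trains an LSTM instead).
token_type = defaultdict(Counter)
for tokens, ty in TRAINING:
    for tok in tokens:
        token_type[tok][ty] += 1

def predict_type(tokens):
    """Vote by summing per-token type counts; ties break alphabetically."""
    votes = Counter()
    for tok in tokens:
        votes.update(token_type[tok])
    if not votes:
        return "unknown"
    return min(votes.items(), key=lambda kv: (-kv[1], kv[0]))[0]
```

For a function commented "check if the flag is valid", the tokens vote for `boolean` — the same signal (predicates like `is`/`has` in names and comments) that the neural model picks up at scale.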
Citations: 105
Exposing Library API Misuses Via Mutation Analysis
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00093
Ming Wen, Yepang Liu, Rongxin Wu, Xuan Xie, S. Cheung, Z. Su
Misuses of library APIs are pervasive and often lead to software crashes and vulnerability issues. Various static analysis tools have been proposed to detect library API misuses. They often involve mining frequent patterns from a large number of correct API usage examples, which can be hard to obtain in practice. They also suffer from low precision due to an over-simplified assumption that a deviation from frequent usage patterns indicates a misuse. We make two observations on the discovery of API misuse patterns. First, API misuses can be represented as mutants of the corresponding correct usages. Second, whether a mutant will introduce a misuse can be validated via executing it against a test suite and analyzing the execution information. Based on these observations, we propose MutApi, the first approach to discovering API misuse patterns via mutation analysis. To effectively mimic API misuses based on correct usages, we first design eight effective mutation operators inspired by the common characteristics of API misuses. MutApi generates mutants by applying these mutation operators on a set of client projects and collects mutant-killing tests as well as the associated stack traces. Misuse patterns are discovered from the killed mutants, which are prioritized according to their likelihood of causing API misuses based on the collected information. We applied MutApi to 16 client projects with respect to 73 popular Java APIs. The results show that MutApi is able to discover substantial API misuse patterns with a high precision of 0.78. It also achieves a recall of 0.49 on the MuBench benchmark, which outperforms the state-of-the-art techniques.
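The two observations above — misuses as mutants of correct usages, validated by running tests — can be sketched on a toy API. Everything here is invented for illustration: the `Resource` "API", the two mutation operators (delete a call, swap call order), and the one-function test suite; MutApi's eight operators target real Java APIs.

```python
# Toy mutation analysis: mutate a correct API usage, then see whether
# the test suite kills the mutant.

class Resource:
    """Invented stand-in for a library API with an open/write/close protocol."""
    def __init__(self):
        self.opened = False
        self.log = []
    def open(self):
        self.opened = True
    def write(self, data):
        if not self.opened:
            raise RuntimeError("write before open")
        self.log.append(data)
    def close(self):
        self.opened = False

def correct_usage(resource):
    resource.open()
    resource.write("data")
    resource.close()

def mutant_missing_close(resource):   # operator: delete an API call
    resource.open()
    resource.write("data")

def mutant_swapped_order(resource):   # operator: swap call order
    resource.write("data")
    resource.open()
    resource.close()

def killed(usage):
    """A mutant is killed if the test suite fails on it."""
    r = Resource()
    try:
        usage(r)
        assert not r.opened, "resource left open"
        assert r.log == ["data"]
    except Exception:
        return True
    return False

results = {m.__name__: killed(m)
           for m in (correct_usage, mutant_missing_close, mutant_swapped_order)}
```

Both mutants are killed (one by a failing assertion, one by a runtime error), while the correct usage survives; the killed mutants, with their stack traces, are what MutApi mines for misuse patterns.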
Citations: 34
Graph Embedding Based Familial Analysis of Android Malware using Unsupervised Learning
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00085
Ming Fan, Xiapu Luo, Jun Liu, Meng Wang, Chunyin Nong, Q. Zheng, Ting Liu
The rapid growth of Android malware has posed severe security threats to smartphone users. On the basis of the familial trait of Android malware observed by previous work, familial analysis is a promising way to help analysts better focus on the commonalities of malware samples within the same families, thus reducing the analytical workload and accelerating malware analysis. The majority of existing approaches rely on supervised learning and face three main challenges, i.e., low accuracy, low efficiency, and the lack of labeled datasets. To address these challenges, we first construct a fine-grained behavior model by abstracting the program semantics into a set of subgraphs. Then, we propose SRA, a novel feature that depicts the similarity relationships between the structural roles of sensitive API call nodes in subgraphs. An SRA is obtained based on graph embedding techniques and represented as a vector, so we can effectively reduce the high complexity of graph matching. After that, instead of training a classifier with labeled samples, we construct a malware link network based on SRAs and apply community detection algorithms on it to group the unlabeled samples. We implement these ideas in a system called GefDroid that performs graph embedding based familial analysis of Android malware using unsupervised learning. Moreover, we conduct extensive experiments to evaluate GefDroid on three datasets with ground truth. The results show that GefDroid can achieve high agreement (0.707-0.883 in terms of NMI) between the clustering results and the ground truth. Furthermore, GefDroid requires only linear run-time overhead and takes around 8.6s to analyze a sample on average, which is considerably faster than the previous work.
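The unsupervised pipeline — embed each sample as a vector, link similar samples, then cluster the link network — can be sketched in a few lines. The feature vectors below are invented stand-ins for SRA embeddings, and connected components of a thresholded similarity graph replace the paper's community detection algorithms.

```python
import math

# Hypothetical per-sample vectors standing in for SRA embeddings.
SAMPLES = {
    "malA1": [1.0, 0.9, 0.1],
    "malA2": [0.9, 1.0, 0.0],
    "malB1": [0.1, 0.0, 1.0],
    "malB2": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def familial_clusters(samples, threshold=0.8):
    """Link samples whose similarity exceeds the threshold; take connected
    components as families (a simple stand-in for community detection)."""
    names = list(samples)
    parent = {n: n for n in names}     # union-find over sample names
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(samples[a], samples[b]) >= threshold:
                parent[find(a)] = find(b)
    families = {}
    for n in names:
        families.setdefault(find(n), set()).add(n)
    return sorted(families.values(), key=lambda s: min(s))

clusters = familial_clusters(SAMPLES)
```

With these toy vectors the four unlabeled samples fall into two families without any labels, which is the point of the unsupervised design: new families emerge from the link structure rather than from a trained classifier.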
Android恶意软件的快速增长给智能手机用户带来了严重的安全威胁。基于以往工作观察到的Android恶意软件的家族特征,家族分析是一种很有前途的方法,可以帮助分析人员更好地关注同一家族内恶意软件样本的共性,从而减少分析工作量,加快恶意软件分析速度。现有的大多数方法依赖于监督学习,面临三个主要挑战,即低准确率、低效率和缺乏标记数据集。为了应对这些挑战,我们首先通过将程序语义抽象为一组子图来构建一个细粒度的行为模型。然后,我们提出了一种描述子图中敏感API调用节点的结构角色之间相似关系的新特征SRA。基于图嵌入技术得到了一个SRA,并将其表示为一个向量,从而有效地降低了图匹配的高复杂度。之后,我们不再使用标记样本训练分类器,而是基于sra构建恶意链接网络,并在其上应用社区检测算法对未标记的样本进行分组。我们在一个名为GefDroid的系统中实现了这些想法,该系统使用无监督学习对AnDroid恶意软件进行基于图嵌入的家族分析。此外,我们进行了大量的实验来评估GefDroid在三个数据集与地面真实。结果表明,GefDroid聚类结果与地面真实度的一致性较高(NMI为0.707-0.883)。此外,GefDroid只需要线性运行时开销,分析一个样本的平均时间约为8.6秒,这比以前的工作快得多。
Graph Embedding Based Familial Analysis of Android Malware using Unsupervised Learning
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00085
Ming Fan, Xiapu Luo, Jun Liu, Meng Wang, Chunyin Nong, Q. Zheng, Ting Liu
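The 0.707-0.883 agreement scores reported in the abstract are Normalized Mutual Information values between the predicted clustering and the ground-truth families. A minimal NMI implementation (using the common geometric-mean normalization; the paper may use a different variant) looks like this:

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized Mutual Information between two labelings of the same samples,
    normalized by the geometric mean of the two cluster entropies."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab / ((ca[a] / n) * (cb[b] / n)))
    ha = -sum((c / n) * math.log(c / n) for c in ca.values())
    hb = -sum((c / n) * math.log(c / n) for c in cb.values())
    denom = math.sqrt(ha * hb)
    return mi / denom if denom else 1.0
```

A perfect match (up to relabeling of clusters) scores 1.0 and independent clusterings score near 0, so 0.707-0.883 indicates substantial agreement with the ground-truth families.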
Citations: 53
Program Board of ICSE 2019 ICSE 2019项目委员会
Pub Date : 2019-05-01 DOI: 10.1109/icse.2019.00013
Citations: 0
Graph-Based Mining of In-the-Wild, Fine-Grained, Semantic Code Change Patterns 基于图的野外、细粒度、语义代码更改模式挖掘
Pub Date : 2019-05-01 DOI: 10.1109/ICSE.2019.00089
H. Nguyen, T. Nguyen, Danny Dig, S. Nguyen, H. Tran, Michael C Hilton
Prior research exploited the repetitiveness of code changes to enable tasks such as code completion, bug-fix recommendation, and library adaptation. These and other novel applications require accurate detection of semantic changes, but the state-of-the-art methods are limited to algorithms that detect specific kinds of changes at the syntactic level. Existing algorithms relying on syntactic similarity have lower accuracy and cannot effectively detect semantic change patterns. We introduce a novel graph-based mining approach, CPatMiner, to detect previously unknown repetitive changes in the wild by mining fine-grained semantic code change patterns from a large number of repositories. To overcome unique challenges such as detecting meaningful change patterns and scaling to large repositories, we rely on fine-grained change graphs to capture program dependencies. We evaluate CPatMiner by mining change patterns in a diverse corpus of 5,000+ open-source projects from GitHub across a population of 170,000+ developers, using three complementary methods. First, we sent the mined patterns to 108 open-source developers: 70% of respondents recognized those patterns as their meaningful frequent changes, 79% even named the patterns, and 44% wanted future IDEs to automate such repetitive changes. The mined change patterns belong to various development activities: adaptive (9%), perfective (20%), corrective (35%), and preventive (36%, including refactorings). Second, we compared our tool with the state-of-the-art AST-based technique and found that it detects 2.1x more meaningful patterns. Third, we used CPatMiner to search for patterns in a corpus of 88 GitHub projects with longer histories comprising 164M SLOCs. It constructed 322K fine-grained change graphs containing 3M nodes and detected 17K instances of change patterns, from which we provide unique insights into the practice of change patterns among individuals and teams. We found that a large percentage (75%) of the change patterns from individual developers are commonly shared with others, and this holds true for teams. Moreover, the patterns are not intermittent but spread widely over time. We therefore call for a community-based change pattern database to provide important resources for novel applications.
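At its core, mining repeated change patterns means reducing each fine-grained change graph to a canonical form and counting recurrences across commits. The sketch below is a deliberately simplified illustration, not CPatMiner's algorithm: real mining must handle (sub)graph isomorphism, whereas this stand-in only matches change graphs whose labeled edge sets coincide exactly; the `min_support` parameter and the edge-triple encoding are assumptions.

```python
from collections import Counter

def canonical(change_graph):
    """A crude canonical form: the sorted tuple of labeled edges.
    Graphs that are isomorphic but labeled differently will NOT match here;
    a real miner would use a proper graph canonicalization."""
    return tuple(sorted(change_graph))

def frequent_patterns(change_graphs, min_support=2):
    """Count per-change graphs and keep patterns seen at least min_support times."""
    counts = Counter(canonical(g) for g in change_graphs)
    return {pattern: c for pattern, c in counts.items() if c >= min_support}
```

Each change graph is given as a list of `(source_node, dependency_label, target_node)` edge triples; two commits that make the same labeled change contribute to the same pattern bucket.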
Citations: 41
Journal
2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)