
Latest publications: 2016 IEEE/ACM 13th Working Conference on Mining Software Repositories (MSR)

How does a typical tutorial for mobile development look like?
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597106
Rebecca Tiarks, W. Maalej
We report on an exploratory study, which aims at understanding how development tutorials are structured, what types of tutorials exist, and how official tutorials differ from tutorials written by development communities. We analyzed over 1,200 tutorials for mobile application development provided by six different sources for the three major platforms: Android, Apple iOS, and Windows Phone. We found that a typical tutorial contains around 2700 words distributed over 4 pages and includes a list of instructions with 18 items. Overall, 70% of the tutorials contain source code examples and a similar fraction contain images. On average, one tutorial has 6 images. When analyzing the images, we found that the studied iOS community posted the largest number of images, 14 images per tutorial on average, of which 74% are plain images, i.e., mainly screenshots without stencils, diagrams, or highlights. In contrast, 36% of the images included in the official tutorials by Apple were diagrams or images with stencils. Community sites seem to follow a similar structure to the official sites but include items and images which are rather underrepresented in the official tutorials. From the analysis of the tutorials' content by means of natural language processing combined with manual content analysis, we derived four categories for mobile development tutorials: infrastructure and design, application and services, distribution and maintenance, and development platform. Our categorization can help tutorial writers to better organize and evaluate the content of their tutorials and identify missing tutorials.
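The per-tutorial measurements the study reports (word counts, images per tutorial, share of tutorials with code, share of plain screenshots) reduce to simple descriptive statistics. A minimal sketch, using invented toy records rather than the paper's actual data:

```python
from statistics import mean

# Hypothetical tutorial records (illustrative values only, not the study's data).
tutorials = [
    {"words": 2650, "images": ["plain", "plain", "diagram"], "code": True},
    {"words": 2800, "images": ["plain"], "code": True},
    {"words": 2500, "images": [], "code": False},
]

def summarize(tutorials):
    """Compute the kinds of descriptive statistics reported in the study."""
    n = len(tutorials)
    all_images = [kind for t in tutorials for kind in t["images"]]
    plain_share = all_images.count("plain") / len(all_images) if all_images else 0.0
    return {
        "avg_words": mean(t["words"] for t in tutorials),
        "share_with_code": sum(t["code"] for t in tutorials) / n,
        "avg_images": mean(len(t["images"]) for t in tutorials),
        "share_plain_images": plain_share,
    }

stats = summarize(tutorials)
```

With a real corpus, each record would be extracted from a crawled tutorial page before aggregation.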
Citations: 18
Incremental origin analysis of source code files
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597111
Daniela Steidl, B. Hummel, Elmar Jürgens
The history of software systems tracked by version control systems is often incomplete because many file movements are not recorded. However, static code analyses that mine the file history, such as change frequency or code churn, produce precise results only if the complete history of a source code file is available. In this paper, we show that up to 38.9% of the files in open source systems have an incomplete history, and we propose an incremental, commit-based approach to reconstruct the history based on clone information and name similarity. With this approach, the history of a file can be reconstructed across repository boundaries and thus provides accurate information for any source code analysis. We evaluate the approach in terms of correctness, completeness, performance, and relevance with a case study among seven open source systems and a developer survey.
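One ingredient of the approach above is name similarity between files deleted and added in the same commit. A minimal sketch, assuming a greedy best-match pairing and a threshold we chose for illustration (a real origin analysis would combine this with clone detection on file content, as the paper does):

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Similarity of two file paths, comparing file names and ignoring directories."""
    return SequenceMatcher(None, a.split("/")[-1], b.split("/")[-1]).ratio()

def match_moves(deleted, added, threshold=0.8):
    """Greedily pair each deleted file with the most similar added file.
    Pairs below the similarity threshold are left unmatched."""
    moves = {}
    candidates = set(added)
    for d in deleted:
        best = max(candidates, key=lambda a: name_similarity(d, a), default=None)
        if best is not None and name_similarity(d, best) >= threshold:
            moves[d] = best
            candidates.remove(best)
    return moves

moves = match_moves(
    deleted=["src/util/StringHelper.java"],
    added=["core/helpers/StringHelper.java", "core/helpers/IntHelper.java"],
)
```

Here the renamed-and-moved `StringHelper.java` is recovered even though its directory changed, which is exactly the history that plain version-control logs lose.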
Citations: 23
Process mining multiple repositories for software defect resolution from control and organizational perspective
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597081
Monika Gupta, A. Sureka, S. Padmanabhuni
Issue reporting and resolution is a software engineering process supported by tools such as an Issue Tracking System (ITS), a Peer Code Review (PCR) system and a Version Control System (VCS). Several open source software projects such as Google Chromium and Android follow a process in which a defect or feature enhancement request is reported to an issue tracker, followed by source-code change or patch review and patch commit using a version control system. We present an application of process mining three software repositories (ITS, PCR and VCS) from a control-flow and organizational perspective for effective process management. ITS, PCR and VCS are not explicitly linked, so we implement regular-expression-based heuristics to integrate data from the three repositories for the Google Chromium project. We define activities such as bug reporting, bug fixing, bug verification, patch submission, patch review, and source code commit, and create an event log of the bug resolution process. The extracted event log contains audit trail data such as caseID, timestamp, activity name and performer. We discover a runtime process model for the bug resolution process spanning the three repositories using the process mining tool Disco, and conduct process performance and efficiency analysis. We identify bottlenecks, and define and detect basic and composite anti-patterns. In addition to control flow analysis, we mine the event log to perform organizational analysis and discover metrics such as handover of work, subcontracting, joint cases and joint activities.
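The linking step above relies on regular-expression heuristics that connect commits to issue-tracker entries. A minimal sketch; the patterns below (a Chromium-style `BUG=` trailer and a `fixes #N` form) are our illustrative guesses, not the authors' exact expressions:

```python
import re

# Heuristic patterns for extracting issue IDs from commit messages.
ISSUE_PATTERNS = [
    re.compile(r"BUG\s*=\s*(\d+)", re.IGNORECASE),        # Chromium-style trailer
    re.compile(r"(?:issue|fixes)\s*#?(\d+)", re.IGNORECASE),
]

def linked_issues(commit_message):
    """Return the sorted set of issue IDs mentioned in a commit message."""
    ids = set()
    for pattern in ISSUE_PATTERNS:
        ids.update(int(m) for m in pattern.findall(commit_message))
    return sorted(ids)

events = linked_issues("Fix crash on resume.\n\nBUG=348211\nAlso fixes #99.")
```

Each linked (issue, commit) pair then becomes one row of the event log, with the commit timestamp and author as the audit-trail attributes.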
Citations: 38
An empirical study of dormant bugs
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597108
T. Chen, M. Nagappan, Emad Shihab, A. Hassan
Over the past decade, several research efforts have studied the quality of software systems by looking at post-release bugs. However, these studies do not account for bugs that remain dormant (i.e., introduced in a version of the software system, but not found until much later) for years and across many versions. Such dormant bugs skew our understanding of software quality. In this paper we study dormant bugs against non-dormant bugs using data from 20 different open-source Apache foundation software systems. We find that 33% of the bugs introduced in a version are not reported till much later (i.e., they are reported in future versions as dormant bugs). Moreover, we find that 18.9% of the reported bugs in a version are not even introduced in that version (i.e., they are dormant bugs from prior versions). In short, the use of reported bugs to judge the quality of a specific version might be misleading. Exploring the fix process for dormant bugs, we find that they are fixed faster (median fix time of 5 days) than non-dormant bugs (median fix time of 8 days), and are fixed by more experienced developers (the median commit count of developers who fix dormant bugs is 169% higher). Our results highlight that dormant bugs are different from non-dormant bugs in many perspectives and that future research in software quality should carefully study and consider dormant bugs.
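The core classification above (dormant vs. non-dormant) and the median fix-time comparison can be sketched in a few lines. The records below are invented toy data, not the paper's Apache dataset:

```python
from statistics import median

# Hypothetical bug records: version introduced, version first reported, days to fix.
bugs = [
    {"introduced": "1.0", "reported": "1.0", "fix_days": 8},
    {"introduced": "1.0", "reported": "1.2", "fix_days": 4},
    {"introduced": "1.1", "reported": "1.3", "fix_days": 6},
    {"introduced": "1.2", "reported": "1.2", "fix_days": 9},
]

def is_dormant(bug):
    """A bug is dormant if it is first reported in a later version
    than the one that introduced it."""
    return bug["reported"] != bug["introduced"]

dormant = [b for b in bugs if is_dormant(b)]
non_dormant = [b for b in bugs if not is_dormant(b)]
median_dormant = median(b["fix_days"] for b in dormant)
median_non_dormant = median(b["fix_days"] for b in non_dormant)
```

In the real study, the introducing version comes from tracing the bug-fix change back through the version history, not from the issue report alone.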
Citations: 68
Classifying unstructured data into natural language text and technical information
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597112
T. Merten, Bastian Mager, Simone Bürsner, B. Paech
Software repository data, for example in issue tracking systems, include natural language text and technical information, which covers anything from log files via code snippets to stack traces. However, data mining is often only interested in one of the two types, e.g., in natural language text when doing text mining. Regardless of which type is being investigated, any techniques used have to deal with noise caused by fragments of the other type, i.e., methods interested in natural language have to deal with technical fragments and vice versa. This paper proposes an approach to classify unstructured data, e.g., development documents, into natural language text and technical information using a mixture of text heuristics and agglomerative hierarchical clustering. The approach was evaluated using 225 manually annotated text passages from developer emails and issue tracker data. Using white space tokenization as a basis, the overall precision of the approach is 0.84 and the recall is 0.85.
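The text-heuristics half of the approach can be illustrated with token-level features. The features and threshold below are our own illustrative choices (the paper additionally applies agglomerative hierarchical clustering, which this sketch omits):

```python
import re

def technical_score(passage):
    """Share of tokens that look technical: camelCase identifiers,
    snake_case names, hex literals, or brace/semicolon-heavy fragments."""
    tokens = passage.split()  # white space tokenization, as in the paper
    if not tokens:
        return 0.0
    technical = re.compile(
        r"([a-z]+[A-Z]\w*)|(\w+_\w+)|(0x[0-9a-fA-F]+)|([{};()=<>]+)"
    )
    return sum(bool(technical.search(t)) for t in tokens) / len(tokens)

def classify(passage, threshold=0.3):
    """Label a passage by the share of technical-looking tokens."""
    return "technical" if technical_score(passage) >= threshold else "natural language"

label = classify("java.lang.NullPointerException at com.example.Foo.bar(Foo.java:42)")
```

A stack-trace line scores high on these features, while ordinary prose scores near zero, which is the separation the classifier exploits.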
Citations: 6
Revisiting Android reuse studies in the context of code obfuscation and library usages
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597109
M. Vásquez, Andrew Holtzhauer, Carlos Bernal-Cárdenas, D. Poshyvanyk
In recent years, studies of design and programming practices in mobile development have been gaining more attention from researchers. Several such empirical studies used Android applications (paid, free, and open source) to analyze factors such as size, quality, dependencies, reuse, and cloning. Most of the studies use executable files of the apps (APK files) instead of source code because of availability issues (most free apps available at the Android official market are not open-source, but can still be downloaded and analyzed in APK format). However, using only APK files in empirical studies comes with some threats to the validity of the results. In this paper, we analyze some of these pertinent threats. In particular, we analyzed the impact of third-party libraries and code obfuscation practices on estimating the amount of reuse by class cloning in Android apps. When including and excluding third-party libraries from the analysis, we found statistically significant differences in the amount of class cloning in 24,379 free Android apps. Also, we found some evidence that obfuscation is responsible for increasing the number of false positives when detecting class clones. Finally, based on our findings, we provide a list of actionable guidelines for mining and analyzing large repositories of Android applications and minimizing these threats to validity.
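The library-exclusion step the paper argues for can be sketched as fingerprint-based clone grouping with a package filter. The prefix list and normalization below are illustrative stand-ins (the paper relies on a much larger catalogue of common Android libraries and real clone detection on bytecode):

```python
import hashlib

# A tiny illustrative list of third-party package prefixes to exclude.
LIBRARY_PREFIXES = ("com/google/ads/", "com/flurry/", "org/apache/")

def class_fingerprint(body):
    """Fingerprint a class by hashing its whitespace-normalized body."""
    normalized = " ".join(body.split())
    return hashlib.sha1(normalized.encode()).hexdigest()

def clone_groups(classes, exclude_libraries=True):
    """Group classes by fingerprint; groups of identical fingerprints
    are clone candidates. Optionally skip known library packages."""
    groups = {}
    for path, body in classes.items():
        if exclude_libraries and path.startswith(LIBRARY_PREFIXES):
            continue
        groups.setdefault(class_fingerprint(body), []).append(path)
    return [paths for paths in groups.values() if len(paths) > 1]

classes = {
    "app1/com/foo/Util.smali": "method a { return 1 }",
    "app2/com/bar/Util.smali": "method a {  return 1 }",
    "com/google/ads/Ad.smali": "method b { }",
}
clones = clone_groups(classes)
```

With the filter off, bundled library classes would be counted as "reuse" between apps, which is exactly the overestimate the study quantifies.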
Citations: 78
Improving the effectiveness of test suite through mining historical data
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597084
Jeff Anderson, Saeed Salem, Hyunsook Do
Software regression testing is an integral part of most major software projects. As projects grow larger and the number of tests increases, performing regression testing becomes more costly. If software engineers can identify and run tests that are more likely to detect failures during regression testing, they may be able to better manage their regression testing activities. In this paper, to help identify such test cases, we developed techniques that utilize various types of information in software repositories. To assess our techniques, we conducted an empirical study using an industrial software product, Microsoft Dynamics AX, which contains real faults. Our results show that the proposed techniques can be effective in identifying test cases that are likely to detect failures.
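A common baseline for this kind of history-based test prioritization is to rank tests by past failures, weighting recent runs more. This recency-weighted heuristic is our own simple sketch; the paper mines richer repository signals than this:

```python
def prioritize(history, decay=0.5):
    """Rank tests by recency-weighted failure counts.
    history maps test name -> list of run outcomes (True = failed),
    ordered oldest to newest."""
    scores = {}
    for test, outcomes in history.items():
        score, weight = 0.0, 1.0
        for failed in reversed(outcomes):  # newest run first
            score += weight * failed
            weight *= decay
        scores[test] = score
    return sorted(scores, key=scores.get, reverse=True)

order = prioritize({
    "test_login":   [False, False, True],   # failed in the latest run
    "test_payment": [True, True, False],    # failed only in older runs
    "test_search":  [False, False, False],  # never failed
})
```

Running the highest-scoring tests first front-loads the tests most likely to expose a regression, which is the effectiveness gain the study measures.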
Citations: 47
Understanding software evolution: the maisqual ant data set
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597136
B. Baldassari, P. Preux
Software engineering is a maturing discipline which has seen many drastic advances in recent years. However, some studies still point to the lack of rigorous and mathematically grounded methods to raise the field to a new emerging science, with proper and reproducible foundations to build upon. Indeed, mathematicians and statisticians do not necessarily have software engineering knowledge, while software engineers and practitioners do not necessarily have a mathematical background. The Maisqual research project intends to fill the gap between both fields by proposing a controlled and peer-reviewed data set series ready to use and study. These data sets feature metrics from different repositories, from source code to mail activity and configuration management meta data. Metrics are described and commented, and all the steps followed for their extraction and treatment are described with contextual information about the data and its meaning. This article introduces the Apache Ant weekly data set, featuring 636 extracts of the project over 12 years at different levels of artefacts: application, files, functions. By associating community and process related information to code extracts, this data set unveils interesting perspectives on the evolution of one of the great success stories of open source.
Citations: 6
Towards building a universal defect prediction model
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597078
Feng Zhang, A. Mockus, I. Keivanloo, Ying Zou
To predict files with defects, a suitable prediction model must be built for a software project from either itself (within-project) or other projects (cross-project). A universal defect prediction model that is built from the entire set of diverse projects would relieve the need for building models for an individual project. A universal model could also be interpreted as a basic relationship between software metrics and defects. However, the variations in the distribution of predictors pose a formidable obstacle to building a universal model. Such variations exist among projects with different context factors (e.g., size and programming language). To overcome this challenge, we propose context-aware rank transformations for predictors. We cluster projects based on the similarity of the distribution of 26 predictors, and derive the rank transformations using quantiles of predictors for a cluster. We then fit the universal model on the transformed data of 1,398 open source projects hosted on SourceForge and GoogleCode. Adding context factors to the universal model improves the predictive power. The universal model obtains prediction performance comparable to the within-project models and yields similar results when applied to five external projects (one Apache and four Eclipse projects). These results suggest that a universal defect prediction model may be an achievable goal.
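The quantile-based rank transformation above can be sketched with deciles standing in for the authors' exact transformation. A minimal illustration (in the paper, the quantiles come from the project's cluster, not from a single value list):

```python
from statistics import quantiles

def rank_transform(values, n_levels=10):
    """Map raw predictor values to ranks 1..n_levels using the
    quantile cut points of the values' own distribution."""
    cuts = quantiles(values, n=n_levels)  # n_levels - 1 cut points
    def rank(v):
        return 1 + sum(v > c for c in cuts)
    return [rank(v) for v in values]

# A skewed predictor (e.g., file size): the outlier 100 lands in the
# top rank while the ordering of the rest is preserved.
ranks = rank_transform([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])
```

Because every project's predictors end up on the same 1..10 scale, a single model can be fit across projects whose raw metric distributions differ wildly.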
Pages: 182-191
Citations: 146
An empirical study of just-in-time defect prediction using cross-project models
Pub Date : 2014-05-31 DOI: 10.1145/2597073.2597075
Takafumi Fukushima, Yasutaka Kamei, Shane McIntosh, Kazuhiro Yamashita, Naoyasu Ubayashi
Prior research suggests that predicting defect-inducing changes, i.e., Just-In-Time (JIT) defect prediction, is a more practical alternative to traditional defect prediction techniques, providing immediate feedback while design decisions are still fresh in the minds of developers. Unfortunately, similar to traditional defect prediction models, JIT models require a large amount of training data, which is not available when projects are in initial development phases. To address this flaw in traditional defect prediction, prior work has proposed cross-project models, i.e., models learned from older projects with sufficient history. However, cross-project models have not yet been explored in the context of JIT prediction. Therefore, in this study, we empirically evaluate the performance of JIT cross-project models. Through a case study on 11 open source projects, we find that in a JIT cross-project context: (1) high-performance within-project models rarely perform well; (2) models trained on projects that have similar correlations between predictor and dependent variables often perform well; and (3) ensemble learning techniques that leverage historical data from several other projects (e.g., voting experts) often perform well. Our findings empirically confirm that JIT cross-project models learned using other projects are a viable solution for projects with little historical data. However, JIT cross-project models perform best when the data used to learn them is carefully selected.
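The "voting experts" ensemble in finding (3) can be sketched as below: one model is trained per historical project, and the ensemble predicts whatever the majority of per-project models vote for. This is a minimal sketch on synthetic data; the least-squares scorers, thresholds, and data generator are our stand-ins for the real defect classifiers and change metrics used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_project(n=200):
    """Synthetic change-level data: two metrics -> defect-inducing flag."""
    X = rng.normal(size=(n, 2))
    y = (X @ np.array([1.5, -1.0]) + rng.normal(scale=0.5, size=n) > 0)
    return X, y.astype(float)

def fit_expert(X, y):
    """Least-squares linear scorer (a simple stand-in for a classifier)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

# One "expert" model per historical project.
experts = [fit_expert(*make_project()) for _ in range(5)]

def vote(X_new):
    """Each expert votes defect-inducing when its score exceeds 0.5;
    the ensemble returns the majority decision."""
    Xb = np.column_stack([X_new, np.ones(len(X_new))])
    votes = np.stack([(Xb @ w) > 0.5 for w in experts])
    return votes.mean(axis=0) >= 0.5

# Apply the ensemble to an unseen "target" project.
X_target, y_target = make_project()
accuracy = (vote(X_target) == y_target.astype(bool)).mean()
```

Because the vote aggregates several independently trained models, a target project with no history of its own can still be scored, which is the appeal of the cross-project setting.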
Pages: 172-181
Citations: 163
Journal
2016 IEEE/ACM 13th Working Conference on Mining Software Repositories (MSR)