
Latest publications from the 2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)

On the Developers' Attitude Towards CRAN Checks
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3528389
Pranjay Kumar, Davin Ie, M. Vidoni
R is a package-based, multi-paradigm programming language for scientific software. It provides an easy way to install third-party code, datasets, tests, documentation and examples through CRAN (Comprehensive R Archive Network). Prior works indicated that developers tend to code workarounds to bypass CRAN's automated checks (performed when submitting a package) instead of fixing the code; doing so reduces packages' quality. It may become a threat to those analyses written in R that rely on mis-checked code. This preliminary study card-sorted source code comments and analysed StackOverflow (SO) conversations discussing CRAN checks to understand developers' attitudes. We determined that about a quarter of SO posts aim to bypass a check with a workaround; the most affected areas are code-related problems, package dependencies, installation and feasibility. We analyse these checks and outline future steps to improve similar automated analyses.
{"title":"On the Developers' Attitude Towards CRAN Checks","authors":"Pranjay Kumar, Davin Ie, M. Vidoni","doi":"10.1145/3524610.3528389","DOIUrl":"https://doi.org/10.1145/3524610.3528389","url":null,"abstract":"R is a package-based, multi-paradigm programming language for scientific software. It provides an easy way to install third-party code, datasets, tests, documentation and examples through CRAN (Comprehensive R Archive Network). Prior works indicated developers tend to code workarounds to bypass CRAN's automated checks (performed when submitting a package) instead of fixing the code-doing so reduces packages' quality. It may become a threat to those analyses written in R that rely on miss-checked code. This preliminary study card-sorted source code comments and analysed StackOverflow (SO) conversations discussing CRAN checks to understand developers' attitudes. We determined that about a quarter of SO posts aim to bypass a check with a workaround; the most affected are code-related problems, package dependencies, installation and feasibility. We analyse these checks and outline future steps to improve similar automated analyses.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123457526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
How do I model my system? A Qualitative Study on the Challenges that Modelers Experience
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3529160
Christopher Vendome, E. J. Rapos, Nick DiGennaro
Model-Driven Software Engineering relies on both domain expertise and software engineering expertise to fully grasp its representative power in modeling complex systems. As is typical in the development of any system, modelers face challenges similar to those of classic software developers, whether with general modeling concepts or with specific features of existing tools such as the Eclipse Modeling Framework. In this work, we aim to understand the issues that modelers face by analyzing discussions from Eclipse's modeling tool forums, MATLAB Central, and Stack Overflow. By performing a qualitative study using an open-coding process, we created a taxonomy of common issues faced by modelers. We considered both the difficulties experienced when modeling a system and the issues faced when using existing modeling tools; these form the basis of our two research questions. Based on the taxonomy, we propose nine suggestions and enhancements, in three overarching groups, to improve the experience of modelers at all levels.
{"title":"How do I model my system? A Qualitative Study on the Challenges that Modelers Experience","authors":"Christopher Vendome, E. J. Rapos, Nick DiGennaro","doi":"10.1145/3524610.3529160","DOIUrl":"https://doi.org/10.1145/3524610.3529160","url":null,"abstract":"Model-Driven Software Engineering relies both on domain-expertise as well as software engineering expertise to fully grasp its representative power in modeling complex systems. As is typical in the development of any system, modelers face similar challenges to classic software developers, whether with general modeling concepts or specific features of existing tools such as the Eclipse Modeling Framework. In this work, we aim to understand the issues that modelers face by analyzing discussions from Eclipse's modeling tool forums, MATLAB Central, and Stack Overflow. By performing a qualitative study using an open-coding process, we created a taxonomy of common issues faced by modelers. We considered both difficulty experienced when modeling a system and issues faced using existing modeling tools; these form the basis of our two research questions. Based on the taxonomy, we propose nine suggestions and enhancements, in three overarching groups, to improve the experience of modelers, at all levels of experience.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126105460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
The Ineffectiveness of Domain-Specific Word Embedding Models for GUI Test Reuse
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3527873
F. Khalili, Ali Mohebbi, Valerio Terragni, M. Pezzè, L. Mariani, A. Heydarnoori
Reusing test cases across similar applications can significantly reduce testing effort. Some recent test reuse approaches successfully exploit word embedding models to semantically match GUI events across Android apps. It is a common understanding that word embedding models trained on domain-specific corpora perform better on specialized tasks. Our recent study confirms this understanding in the context of Android test reuse. It shows that word embedding models trained with a corpus of the English descriptions of apps in the Google Play Store lead to a better semantic matching of Android GUI events. Motivated by this result, we hypothesize that we can further increase the effectiveness of semantic matching by partitioning the corpus of app descriptions into domain-specific corpora. Our experiments do not confirm our hypothesis. This paper sheds light on this unexpected negative result that contradicts the common understanding.
{"title":"The Ineffectiveness of Domain-Specific Word Embedding Models for GUI Test Reuse","authors":"F. Khalili, Ali Mohebbi, Valerio Terragni, M. Pezzè, L. Mariani, A. Heydarnoori","doi":"10.1145/3524610.3527873","DOIUrl":"https://doi.org/10.1145/3524610.3527873","url":null,"abstract":"Reusing test cases across similar applications can significantly reduce testing effort. Some recent test reuse approaches successfully exploit word embedding models to semantically match GUI events across Android apps. It is a common understanding that word embedding models trained on domain-specific corpora perform better on specialized tasks. Our recent study confirms this understanding in the context of Android test reuse. It shows that word embedding models trained with a corpus of the English descriptions of apps in the Google Play Store lead to a better semantic matching of Android GUI events. Motivated by this result, we hypothesize that we can further increase the effectiveness of semantic matching by partitioning the corpus of app descriptions into domain-specific corpora. Our experiments do not confirm our hypothesis. This paper sheds light on this unexpected negative result that contradicts the common understanding.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126658513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
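
To make the matching step concrete, here is a minimal sketch (not the authors' tool) of semantic GUI-event matching with word embeddings: a gensim Word2Vec model is trained on a tiny, made-up stand-in for the app-description corpus, event labels are embedded by averaging their token vectors, and candidate target events are ranked by cosine similarity.

```python
# Illustrative sketch only (not the authors' tool): match GUI event labels
# across apps by cosine similarity of averaged word vectors.
import numpy as np
from gensim.models import Word2Vec

# Hypothetical stand-in for a corpus of tokenized Google Play app descriptions.
corpus = [
    ["shopping", "cart", "checkout", "add", "item", "to", "cart"],
    ["sign", "in", "login", "enter", "password", "account"],
]
model = Word2Vec(sentences=corpus, vector_size=50, min_count=1)

def embed(tokens):
    """Average the word vectors of an event label's tokens (skipping OOV words)."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.wv.vector_size)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

source_event = ["add", "to", "cart"]                        # event from the donor test
candidates = [["checkout", "item"], ["enter", "password"]]  # events in the target app
scores = [(c, cosine(embed(source_event), embed(c))) for c in candidates]
print(max(scores, key=lambda s: s[1]))                      # best-matching target event
```
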
Unified Abstract Syntax Tree Representation Learning for Cross-Language Program Classification
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3527915
Kesu Wang, Meng Yan, He Zhang, Haibo Hu
Program classification can be regarded as a high-level abstraction of code, laying a foundation for various tasks related to source code comprehension, and has a very wide range of applications in the field of software engineering, such as code clone detection, code smell classification, defect classification, etc. Cross-language program classification can support code transfer between different programming languages and promote cross-language code reuse, thereby helping developers write code quickly and reducing the development time of code transfer. Most existing studies focus on the semantic learning of the code, whilst few studies are devoted to cross-language tasks. The main challenge of cross-language program classification is how to extract semantic features of different programming languages. In order to cope with this difficulty, we propose a Unified Abstract Syntax Tree (UAST) neural network. The core idea of UAST consists of two unified mechanisms. First, UAST learns an AST representation by unifying the AST traversal sequence and the graph-like AST structure to capture semantic code features. Second, we construct a mechanism called a unified vocabulary, which reduces the feature gap between different programming languages and thus enables cross-language program classification. In addition, we collect a dataset containing 20,000 files in five programming languages, which can be used as a benchmark dataset for the cross-language program classification task. We have done experiments on two datasets, and the results show that our proposed approach outperforms the state-of-the-art baselines in terms of four evaluation metrics (Precision, Recall, F1-score, and Accuracy).
{"title":"Unified Abstract Syntax Tree Representation Learning for Cross-Language Program Classification","authors":"Kesu Wang, Meng Yan, He Zhang, Haibo Hu","doi":"10.1145/3524610.3527915","DOIUrl":"https://doi.org/10.1145/3524610.3527915","url":null,"abstract":"Program classification can be regarded as a high-level abstraction of code, laying a foundation for various tasks related to source code comprehension, and has a very wide range of applications in the field of software engineering, such as code clone detection, code smell classification, defects classification, etc. The cross-language program classification can realize code transfer in different programming languages, and can also promote cross-language code reuse, thereby helping developers to write code quickly and reduce the development time of code transfer. Most of the existing studies focus on the semantic learning of the code, whilst few studies are devoted to cross-language tasks. The main challenge of cross-language program classification is how to extract semantic features of different programming languages. In order to cope with this difficulty, we propose a Unified Abstract Syntax Tree (namely UAST in this paper) neural network. In detail, the core idea of UAST consists of two unified mechanisms. First, UAST learns an AST representation by unifying the AST traversal sequence and graph-like AST structure for capturing semantic code features. Second, we construct a mechanism called unified vocabulary, which can reduce the feature gap between different programming languages, so it can achieve the role of cross-language program classification. Besides, we collect a dataset containing 20,000 files of five programming languages, which can be used as a benchmark dataset for the cross-language program classification task. We have done experiments on two datasets, and the results show that our proposed approach out-performs the state-of-the-art baselines in terms of four evaluation metrics (Precision, Recall, F1-score, and Accuracy).","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126916725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
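
As a rough illustration of the sequence-plus-structure view such a unified AST combines (not the paper's UAST implementation), the sketch below uses Python's ast module to extract a pre-order traversal sequence and parent-child structure edges from a small snippet; a real cross-language setup would do the same per language over a shared, unified vocabulary.

```python
# Illustrative sketch: derive the two views a unified AST representation combines,
# a traversal sequence and graph-like structure edges, using Python's ast module.
import ast

source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

node_ids, sequence, edges = {}, [], []

def visit(node, parent=None):
    nid = node_ids.setdefault(id(node), len(node_ids))
    sequence.append(type(node).__name__)           # pre-order traversal sequence
    if parent is not None:
        edges.append((node_ids[id(parent)], nid))  # parent -> child structure edge
    for child in ast.iter_child_nodes(node):
        visit(child, node)

visit(tree)
print(sequence)  # e.g. ['Module', 'FunctionDef', 'arguments', ...]
print(edges)     # edge list a graph neural encoder could consume
```
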
An Empirical Investigation on the Trade-off between Smart Contract Readability and Gas Consumption
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3529157
Anna Vacca, Michele Fredella, Andrea Di Sorbo, C. A. Visaggio, G. Canfora
Blockchain technology is becoming increasingly popular, and smart contracts (i.e., programs that run on top of the blockchain) represent a crucial element of this technology. In particular, smart contracts running on Ethereum (one of the most popular blockchain platforms) are often developed with Solidity, and their deployment and execution consume gas (a fee compensating for the computing resources required). Smart contract development frequently involves code reuse, but poorly readable smart contracts could hinder their reuse. However, writing readable smart contracts is challenging, since practices for improving readability can conflict with optimization strategies for reducing gas consumption. This paper aims at better understanding (i) the readability aspects in which traditional software and smart contracts differ, and (ii) the specific smart contract readability features exhibiting significant relationships with gas consumption. We leverage a set of metrics that previous research has shown to correlate with code readability. In particular, we first compare the values of these metrics obtained for both Solidity smart contracts and traditional software systems (written in Java). Then, we investigate the correlations between these metrics and gas consumption, and between each pair of metrics. The results of our study highlight that smart contracts usually exhibit lower readability than traditional software in terms of the number of parentheses, inline comments, and blank lines used. In addition, we found some readability metrics (such as the average length of identifiers and the average number of keywords) that significantly correlate with gas consumption.
{"title":"An Empirical Investigation on the Trade-off between Smart Contract Readability and Gas Consumption","authors":"Anna Vacca, Michele Fredella, Andrea Di Sorbo, C. A. Visaggio, G. Canfora","doi":"10.1145/3524610.3529157","DOIUrl":"https://doi.org/10.1145/3524610.3529157","url":null,"abstract":"Blockchain technology is becoming increasingly popular, and smart contracts (i.e., programs that run on top of the blockchain) represent a crucial element of this technology. In particular, smart contracts running on Ethereum (i.e., one of the most popular blockchain platforms) are often developed with Solidity, and their deployment and execution consume gas (i.e., a fee compensating the computing resources required). Smart contract development frequently involves code reuse, but poor readable smart contracts could hinder their reuse. However, writing readable smart contracts is challenging, since practices for improving the readability could also be in contrast with optimization strategies for reducing gas consumption. This paper aims at better understanding (i) the readability aspects for which traditional software and smart contracts differ, and (ii) the specific smart contract readability features exhibiting significant relationships with gas consumption. We leverage a set of metrics that previous research has proven correlated with code readability. In particular, we first compare the values of these metrics obtained for both Solidity smart contracts and traditional software systems (written in Java). Then, we investigate the correlations occurring between these metrics and gas consumption and between each pair of metrics. The results of our study highlight that smart contracts usually exhibit lower readability than traditional software for what concerns the number of parentheses, inline comments, and blank lines used. In addition, we found some readability metrics (such as the average length of identifiers and the average number of keywords) that significantly correlate with gas consumption.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"214 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130768285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
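
A hedged sketch of the measurement idea: compute a few of the surface readability metrics named above (parentheses, inline comments, blank lines, average identifier length) over Solidity source text and correlate them with gas figures using Spearman's rho. The metric definitions, contracts, and gas values below are illustrative stand-ins, not the paper's instrumentation.

```python
# Illustrative sketch: simple surface readability metrics for Solidity sources,
# correlated with hypothetical deployment gas costs via Spearman's rho.
import re
from scipy.stats import spearmanr

def readability_metrics(source: str) -> dict:
    identifiers = re.findall(r"[A-Za-z_]\w*", source)
    lines = source.splitlines()
    return {
        "parentheses": source.count("(") + source.count(")"),
        "inline_comments": sum("//" in ln for ln in lines),
        "blank_lines": sum(not ln.strip() for ln in lines),
        "avg_identifier_len": sum(map(len, identifiers)) / max(len(identifiers), 1),
    }

# Hypothetical corpus of (Solidity source, measured gas) pairs.
contracts = [
    ("contract A { uint x; function f() public { x = 1; } }", 120_000),
    ("contract B {\n  // counter storage\n  uint c;\n\n  function inc() public { c += 1; }\n}", 150_000),
    ("contract C { function g(uint a, uint b) public pure returns (uint) { return ((a) + (b)); } }", 90_000),
]
gas = [g for _, g in contracts]
for name in ("parentheses", "avg_identifier_len"):
    values = [readability_metrics(src)[name] for src, _ in contracts]
    rho, p_value = spearmanr(values, gas)
    print(f"{name}: rho={rho:.2f}, p={p_value:.2f}")
```
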
Exploring GNN Based Program Embedding Technologies for Binary Related Tasks
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3527900
Yixin Guo, Pengcheng Li, Yingwei Luo, Xiaolin Wang, Zhenlin Wang
With the rapid growth of program scale, program analysis, maintenance and optimization become increasingly diverse and complex. Applying learning-assisted methodologies to program analysis has attracted ever-increasing attention. However, a large number of program factors, including syntax structures, semantics, running platforms and compilation configurations, block the effective realization of these methods. To overcome these obstacles, existing works prefer to operate on source code or abstract syntax trees, but are unfortunately sub-optimal for binary-oriented analysis tasks closely related to the compilation process. To this end, we propose a new program analysis approach that aims at solving program-level and procedure-level tasks with one model, by taking advantage of graph neural networks at the level of binary code. By fusing the semantics of control flow graphs, data flow graphs and call graphs into one model, and embedding instructions and values simultaneously, our method can effectively work around emerging compilation-related problems. Testing the proposed method on two tasks, binary similarity detection and dead store prediction, shows that it achieves accuracies as high as 83.25% and 82.77%, respectively.
{"title":"Exploring GNN Based Program Embedding Technologies for Binary Related Tasks","authors":"Yixin Guo, Pengcheng Li, Yingwei Luo, Xiaolin Wang, Zhenlin Wang","doi":"10.1145/3524610.3527900","DOIUrl":"https://doi.org/10.1145/3524610.3527900","url":null,"abstract":"With the rapid growth of program scale, program analysis, mainte-nance and optimization become increasingly diverse and complex. Applying learning-assisted methodologies onto program analysis has attracted ever-increasing attention. However, a large number of program factors including syntax structures, semantics, running platforms and compilation configurations block the effective re-alization of these methods. To overcome these obstacles, existing works prefer to be on a basis of source code or abstract syntax tree, but unfortunately are sub-optimal for binary-oriented analysis tasks closely related to the compilation process. To this end, we propose a new program analysis approach that aims at solving program-level and procedure-level tasks with one model, by taking advantage of the great power of graph neural networks from the level of binary code. By fusing the semantics of control flow graphs, data flow graphs and call graphs into one model, and embedding instructions and values simultaneously, our method can effectively work around emerging compilation-related problems. By testing the proposed method on two tasks, binary similarity detection and dead store prediction, the results show that our method is able to achieve as high accuracy as 83.25%, and 82.77%.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121350641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
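
The fusion of control-flow, data-flow, and call information can be pictured as one multi-relational graph over instructions, with per-relation edge lists fed to a relational GNN. The sketch below builds such a graph with networkx over made-up instructions; it shows only the graph construction, not the authors' embedding model.

```python
# Illustrative sketch: fuse control-flow, data-flow, and call edges into one
# typed graph over hypothetical binary instructions; a relational GNN would
# consume the node texts plus the per-relation edge lists built below.
import networkx as nx

g = nx.MultiDiGraph()
instructions = {
    0: "mov eax, [rbp-4]",
    1: "add eax, 1",
    2: "call 0x401000",
    3: "mov [rbp-4], eax",
    4: "push rbp",            # made-up entry of the callee at 0x401000
}
for node_id, text in instructions.items():
    g.add_node(node_id, text=text)

g.add_edge(0, 1, relation="control_flow")
g.add_edge(1, 2, relation="control_flow")
g.add_edge(2, 3, relation="control_flow")
g.add_edge(0, 1, relation="data_flow")   # eax defined at 0, used at 1
g.add_edge(1, 3, relation="data_flow")   # eax defined at 1, stored at 3
g.add_edge(2, 4, relation="call")        # call site -> callee entry

# Group edges by relation, the usual input shape for a relational GNN layer.
by_relation = {}
for u, v, data in g.edges(data=True):
    by_relation.setdefault(data["relation"], []).append((u, v))
print(by_relation)
```
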
Do Visual Issue Reports Help Developers Fix Bugs? A Preliminary Study of Using Videos and Images to Report Issues on GitHub
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3527882
Hiroki Kuramoto, Masanari Kondo, Yutaro Kashiwa, Yuta Ishimoto, Kaze Shindo, Yasutaka Kamei, Naoyasu Ubayashi
Issue reports are a pivotal interface between developers and users for receiving information about bugs in their products. In practice, issue reports often have incorrect information or insufficient information to enable bugs to be reproduced, and this has the effect of delaying the entire bug-fixing process. To facilitate their bug-reproduction work, GitHub has provided a new feature that allows users to share videos (e.g., mp4 files). Using such videos, reports can be made to developers about the details of bugs by recording the symptoms, reproduction steps, and other important aspects of bug information. While such visual issue reports have the potential to significantly improve the bug-fixing process, no studies have empirically examined this impact. In this paper, we conduct a preliminary study to identify the characteristics of visual issue reports by comparing them with non-visual issue reports. We collect 1,230 videos and 18,760 images from 226,286 issues on 4,173 publicly available repositories. Our preliminary analysis shows that issue reports with images are described in fewer words than non-visual issue reports. In addition, we observe that most discussions in visual issue reports are concerned with either conditions for reproduction (e.g., when) or GUI (e.g., pageviewcontroller).
{"title":"Do Visual Issue Reports Help Developers Fix Bugs?: - A Preliminary Study of Using Videos and Images to Report Issues on GitHub -","authors":"Hiroki Kuramoto, Masanari Kondo, Yutaro Kashiwa, Yuta Ishimoto, Kaze Shindo, Yasutaka Kamei, Naoyasu Ubayashi","doi":"10.1145/3524610.3527882","DOIUrl":"https://doi.org/10.1145/3524610.3527882","url":null,"abstract":"Issue reports are a pivotal interface between developers and users for receiving information about bugs in their products. In practice, issue reports often have incorrect information or insufficient information to enable bugs to be reproduced, and this has the effect of delaying the entire bug-fixing process. To facilitate their bug-reproduction work, GitHub has provided a new feature that allows users to share videos (e.g., mp4 files.) Using such videos, reports can be made to developers about the details of bugs by recording the symptoms, reproduction steps, and other important aspects of bug information. While such visual issue reports have the potential to significantly improve the bug-fixing process, no studies have empirically exam-ined this impact. In this paper, we conduct a preliminary study to identify the characteristics of visual issue reports by comparing them with non-visual issue reports. We collect 1,230 videos and 18,760 images from 226,286 issues on 4,173 publicly available repositories. Our preliminary analysis shows that issue reports with images are described in fewer words than non-visual issue reports. In addition, we observe that most dis-cussions in visual issue reports are concerned with either conditions for reproduction (e.g., when) or GUI (e.g., pageviewcontroller.)","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123120096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
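
A minimal sketch of the kind of classification and comparison described: flag issue bodies that reference image or video attachments and compare description lengths in words between visual and non-visual reports. The regular expression and issue texts are illustrative assumptions, not the study's exact heuristics.

```python
# Illustrative sketch: classify issue bodies as visual (image/video attachment)
# or non-visual, then compare description lengths. Texts and regex are made up.
import re
from statistics import mean

VISUAL_ATTACHMENT = re.compile(r"\.(mp4|mov|gif|png|jpe?g)\b", re.IGNORECASE)

issues = [  # hypothetical issue bodies
    "Crash on login. Steps: open the app, tap login, enter a password, app closes.",
    "Button misaligned, see video: https://user-images.example.com/clip.mp4",
    "Screenshot attached: screen.png, the page view controller shows a blank page.",
]

visual, non_visual = [], []
for body in issues:
    bucket = visual if VISUAL_ATTACHMENT.search(body) else non_visual
    bucket.append(len(body.split()))

print("mean words, visual reports:", mean(visual))
print("mean words, non-visual reports:", mean(non_visual))
```
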
Predicting Change Propagation between Code Clone Instances by Graph-based Deep Learning
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3527912
Bin Hu, Yijian Wu, Xin Peng, Chaofeng Sha, Xiaochen Wang, Baiqiang Fu, Wenyun Zhao
Code clones widely exist in open-source and industrial software projects and are still recognized as a threat to software maintenance, due to the additional effort required to simultaneously maintain multiple clone instances and the potential defects caused by inconsistent changes to clone instances. To alleviate the threat, it is essential to accurately and efficiently make decisions about change propagation between clone instances. Based on an exploratory study of clone change propagation with five famous open-source projects, we find that a clone class can have both propagation-required changes and propagation-free changes, and thus fine-grained change propagation decisions are required. Based on these findings, we propose a graph-based deep learning approach to predict the change propagation requirements of clone instances. We develop a graph representation, named Fused Clone Program Dependency Graph (FC-PDG), to capture the textual and structural code contexts of a pair of clone instances along with the changes to one of them. Based on this representation, we design a deep learning model that uses a Relational Graph Convolutional Network (R-GCN) to predict the change propagation requirement. We evaluate the approach on a dataset constructed from 51 open-source Java projects, which includes 24,672 pairs of matched changes and 38,041 non-matched changes. The results show that the approach achieves high precision (83.1%), recall (81.2%), and F1-score (82.1%). Our further evaluation on three other open-source projects confirms the generality of the trained clone change propagation prediction model.
{"title":"Predicting Change Propagation between Code Clone Instances by Graph-based Deep Learning","authors":"Bin Hu, Yijian Wu, Xin Peng, Chaofeng Sha, Xiaochen Wang, Baiqiang Fu, Wenyun Zhao","doi":"10.1145/3524610.3527912","DOIUrl":"https://doi.org/10.1145/3524610.3527912","url":null,"abstract":"Code clones widely exist in open-source and industrial software projects and are still recognized as a threat to software main-tenance due to the additional effort required for the simultaneous maintenance of multiple clone instances and potential defects caused by inconsistent changes in clone instances. To alleviate the threat, it is essential to accurately and efficiently make the decisions of change propagation between clone instances. Based on an exploratory study on clone change propagation with five famous open-source projects, we find that a clone class can have both propagation-required changes and propagation-free changes and thus fine-grained change propagation decision is required. Based on the findings, we propose a graph-based deep learning approach to predict the change propagation requirements of clone instances. We develop a graph representation, named Fused Clone Program Dependency Graph (FC-PDG), to capture the textual and structural code contexts of a pair of clone instances along with the changes on one of them. Based on the representation, we design a deep learning model that uses a Relational Graph Convolutional Network (R-GCN) to predict the change propagation requirement. We evaluate the approach with a dataset constructed based on 51 open-source Java projects, which includes 24,672 pairs of matched changes and 38,041 non-matched changes. The results show that the approach achieves high precision (83.1%), recall (81.2%), and F1-score (82.1%). Our further evaluation with three other open-source projects confirms the generality of the trained clone change propagation prediction model.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"47 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120876795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
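
To make the relational graph convolution concrete, the sketch below implements a generic R-GCN-style layer in PyTorch: node features are transformed with a separate weight per edge relation (here, hypothetically, a data dependency relation and a clone mapping relation) and aggregated alongside a self-loop transform. It is a textbook layer over toy adjacency matrices, not the paper's FC-PDG model.

```python
# Minimal R-GCN-style layer (illustrative, not the paper's FC-PDG model):
# one weight matrix per edge relation, degree-normalized aggregation, self-loop.
import torch
import torch.nn as nn

class RelationalGraphLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)
        )
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_per_relation):
        # x: [num_nodes, in_dim]; adj_per_relation: one [N, N] matrix per relation.
        out = self.self_loop(x)
        for adj, linear in zip(adj_per_relation, self.rel_weights):
            degree = adj.sum(dim=1, keepdim=True).clamp(min=1)  # avoid division by zero
            out = out + (adj @ linear(x)) / degree              # normalized neighbour sum
        return torch.relu(out)

# Toy FC-PDG-like input: 4 statement nodes, 2 relations (data dep., clone mapping).
x = torch.randn(4, 8)
data_dep = torch.tensor([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=torch.float)
clone_map = torch.tensor([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]], dtype=torch.float)
layer = RelationalGraphLayer(in_dim=8, out_dim=16, num_relations=2)
print(layer(x, [data_dep, clone_map]).shape)  # torch.Size([4, 16])
```
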
Context-based Cluster Fault Localization
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3527891
Ju-Yeol Yu, Yan Lei, Huan Xie, Lingfeng Fu, Chunyan Liu
Automated fault localization techniques collect runtime information as input data to identify suspicious statements potentially responsible for program failures. To discover statistical coincidences between test results (i.e., failing or passing) and the executions of the different statements of a program (i.e., executed or not executed), researchers have developed suspiciousness methodologies (e.g., spectrum-based formulas and deep neural network models). However, occurrences of coincidental correctness (CC), where faulty statements are executed but the output of the program is still correct, affect the effectiveness of fault localization. Many researchers seek to identify CC tests using cluster analysis. However, high-dimensional data containing too much noise reduces the effectiveness of cluster analysis. To overcome this obstacle, we propose CBCFL: a context-based cluster fault localization approach, which incorporates a failure context, showing how a failure is produced, into cluster analysis. Specifically, CBCFL uses the failure context, containing the statements whose execution affects the output of a failing test, as input data for cluster analysis to improve the effectiveness of identifying CC tests. Since CC tests execute the faulty statement, we change the labels of CC tests to failing tests. We take the context and the corresponding changed labels as the input data for fault localization techniques. To evaluate the effectiveness of CBCFL, we conduct large-scale experiments on six large-sized programs using five state-of-the-art fault localization approaches. The experimental results show that CBCFL is more effective than the baselines; e.g., our approach can improve the MLP-FL method using cluster analysis by up to 200%, 250%, and 320% in Top-1, Top-5, and Top-10 accuracy, respectively.
{"title":"Context-based Cluster Fault Localization","authors":"Ju-Yeol Yu, Yan Lei, Huan Xie, Lingfeng Fu, Chunyan Liu","doi":"10.1145/3524610.3527891","DOIUrl":"https://doi.org/10.1145/3524610.3527891","url":null,"abstract":"Automated fault localization techniques collect runtime information as input data to identify suspicious statement potentially respon-sible for program failures. To discover the statistical coincidences between test results (i.e., failing or passing) and the executions of the different statements of a program (i.e., executed or not exe-cuted), researchers developed a suspiciousness methodology (e.g., spectrum-based formulas and deep neural network models). How-ever, the occurrences of coincidental correctness (CC) which means the faulty statements were executed but the output of the program was right affect the effectiveness of fault localization. Many re-searchers seek to identify CC tests using cluster analysis. However, the high-dimensional data containing too much noise reduce the effectiveness of cluster analysis. To overcome the obstacle, we propose CBCFL: a context-based cluster fault localization approach, which incorporates a failure context showing how a failure is produced into cluster analysis. Specifically, CBCFL uses the failure context containing the state-ments whose execution affects the output of a failing test as input data for cluster analysis to improve the effectiveness of identifying CC tests. Since CC tests execute the faulty statement, we change the labels of CC tests into failing tests. We take the context and the corresponding changed labels as the input data for fault local-ization techniques. To evaluate the effectiveness of CBCFL, we conduct large-scale experiments on six large-sized programs using five state-of-the-art fault localization approaches. The experimen-tal results show that CBCFL is more effective than the baselines, e.g., our approach can improve the MLP-FL method using cluster analysis by at most 200%, 250%, and 320% under the Top-1, Top-5, and Top-10 accuracies.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":" 35","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113952895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
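
The relabelling idea pairs naturally with a standard spectrum-based formula. The sketch below computes Ochiai suspiciousness, ef / sqrt((ef + nf) * (ef + ep)), over a made-up coverage matrix, then recomputes it after flipping tests assumed to be coincidentally correct to failing; in CBCFL that CC set would come from context-based clustering rather than being supplied by hand.

```python
# Illustrative sketch: Ochiai suspiciousness before and after relabelling
# coincidentally-correct (CC) tests as failing. Coverage and labels are made up.
from math import sqrt

coverage = [          # rows = tests, columns = statements s0..s3
    [1, 1, 0, 1],     # t0
    [1, 0, 1, 1],     # t1
    [0, 1, 1, 0],     # t2
    [1, 1, 1, 0],     # t3: covers the suspect statement s2 yet passes (CC candidate)
]
labels = ["fail", "pass", "pass", "pass"]

def ochiai(coverage, labels):
    """Ochiai: ef / sqrt((ef + nf) * (ef + ep)), where ef + nf = total failing tests."""
    total_fail = labels.count("fail")
    scores = []
    for stmt in range(len(coverage[0])):
        ef = sum(1 for cov, lab in zip(coverage, labels) if cov[stmt] and lab == "fail")
        ep = sum(1 for cov, lab in zip(coverage, labels) if cov[stmt] and lab == "pass")
        denom = sqrt(total_fail * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores

print("original:  ", [round(s, 2) for s in ochiai(coverage, labels)])
cc_tests = {3}  # identified by hand here; CBCFL derives this from context clustering
relabelled = ["fail" if i in cc_tests else lab for i, lab in enumerate(labels)]
print("relabelled:", [round(s, 2) for s in ochiai(coverage, relabelled)])
```
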
Performance Anomaly Detection through Sequence Alignment of System-Level Traces
Pub Date : 2022-05-01 DOI: 10.1145/3524610.3527898
Madeline Janecek, Naser Ezzati-Jivan, A. Hamou-Lhadj
Identifying and diagnosing performance anomalies is essential for maintaining software quality, yet it can be a complex and time-consuming task. Low-level kernel events are an excellent data source for monitoring performance, but raw trace data is often too large to easily conduct effective analyses. To address this shortcoming, in this paper, we propose a framework for uncovering performance problems using execution critical path data. A critical path is the longest execution sequence without wait delays, and it can provide valuable insight into a program's internal and external dependencies. Upon extracting this data, coarse-grained anomaly detection techniques are employed to determine whether a finer-grained analysis is required. If this is the case, the critical paths of individual executions are grouped together with machine learning clustering to identify different execution types, and outlying anomalies are identified using performance indicators. Finally, multiple sequence alignment is used to pinpoint specific abnormalities in the identified anomalous executions, allowing for improved application performance diagnosis and overall program comprehension.
{"title":"Performance Anomaly Detection through Sequence Alignment of System-Level Traces","authors":"Madeline Janecek, Naser Ezzati-Jivan, A. Hamou-Lhadj","doi":"10.1145/3524610.3527898","DOIUrl":"https://doi.org/10.1145/3524610.3527898","url":null,"abstract":"Identifying and diagnosing performance anomalies is essential for maintaining software quality, yet it can be a complex and time-consuming task. Low level kernel events have been used as an excellent data source to monitor performance, but raw trace data is often too large to easily conduct effective analyses. To address this shortcoming, in this paper, we propose a framework for uncovering performance problems using execution critical path data. A critical path is the longest execution sequence without wait delays, and it can provide valuable insight into a program's internal and external dependencies. Upon extracting this data, course grained anomaly detection techniques are employed to determine if a finer grained analysis is required. If this is the case, the critical paths of individual executions are grouped together with machine learning clustering to identify different execution types, and outlying anomalies are identified using performance indicators. Finally, multiple sequence alignment is used to pinpoint specific abnormalities in the identified anomalous executions, allowing for improved application performance diagnosis and overall program comprehension.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133158496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
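
The alignment step can be illustrated with classic Needleman-Wunsch global alignment over two event sequences standing in for critical paths; gaps and mismatches mark where the anomalous execution diverges. This is a generic pairwise alignment sketch, whereas the framework described above applies multiple sequence alignment across clustered executions.

```python
# Illustrative sketch: Needleman-Wunsch global alignment of two kernel-event
# sequences (stand-ins for critical paths) to spot where an execution diverges.
def align(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    i, j, pairs = n, m, []                       # traceback to recover aligned pairs
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            pairs.append((a[i - 1], b[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            pairs.append((a[i - 1], "-")); i -= 1
        else:
            pairs.append(("-", b[j - 1])); j -= 1
    return list(reversed(pairs))

normal    = ["open", "read", "compute", "write", "close"]
anomalous = ["open", "read", "lock_wait", "compute", "write", "close"]
for left, right in align(normal, anomalous):
    print(f"{left:10} {right:10}" + ("" if left == right else " <-- divergence"))
```
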