
Latest publications from the 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR)

Predicting Co-Changes between Functionality Specifications and Source Code in Behavior Driven Development
Aidan Z. H. Yang, D. A. D. Costa, Ying Zou
Behavior Driven Development (BDD) is an agile approach that uses .feature files to describe the functionalities of a software system using natural language constructs (English-like phrases). Because of the English-like structure of .feature files, BDD specifications become an evolving documentation that helps all (even non-technical) stakeholders to understand and contribute to a software project. After specifying a .feature file, developers can use a BDD tool (e.g., Cucumber) to automatically generate test cases and implement the code of the specified functionality. However, maintaining traceability between .feature files and source code requires human effort. Therefore, .feature files can become out-of-date, reducing the advantages of using BDD. Furthermore, existing research does not attempt to improve the traceability between .feature files and source code files. In this paper, we study the co-changes between .feature files and source code files to improve the traceability between them. Due to the English-like syntax of .feature files, we use natural language processing to identify co-changes, with an accuracy of 79%. We study the characteristics of BDD co-changes and build random forest models to predict when a .feature file should be modified before committing a code change. The random forest model obtains an AUC of 0.77. The model can assist developers in identifying when a .feature file should be modified in code commits. Once the traceability is up-to-date, BDD developers can write test code more efficiently and keep the software documentation up-to-date.
Citations: 12
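The prediction setup described in the abstract can be sketched with scikit-learn. The feature names below (change size, source files touched, test files touched) are hypothetical stand-ins for the commit-level metrics the paper actually mines, and the data here is synthetic:

```python
# Sketch: predict whether a commit should also modify a .feature file.
# Features and labels are synthetic; the real study mines them from repositories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical commit features: lines changed, source files touched, test files touched
X = rng.normal(size=(n, 3))
# Synthetic label: larger changes co-change the .feature file more often
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")  # the paper reports an AUC of 0.77 on its real data
```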
Cleaning StackOverflow for Machine Translation
Musfiqur Rahman, Peter C. Rigby, Dharani Palani, T. Nguyen
Generating source code API sequences from an English query using Machine Translation (MT) has gained much interest in recent years. For any kind of MT, the model needs to be trained on a parallel corpus. In this paper we clean StackOverflow, one of the most popular online discussion forums for programmers, to generate a parallel English-Code corpus from Android posts. We contrast three data cleaning approaches: standard NLP, title only, and software task extraction. We evaluate the quality of each corpus for MT. To provide indicators of how useful each corpus will be for machine translation, we provide researchers with measurements of the corpus size, percentage of unique tokens, and per-word maximum likelihood alignment entropy. We have used these corpus cleaning approaches to translate between English and Code [22, 23], to compare existing SMT approaches from word mapping to neural networks [24], and to re-examine the "natural software" hypothesis [29]. After cleaning and aligning the data, we create a simple maximum likelihood MT model to show that English words in the corpus map to a small number of specific code elements. This model provides a basis for the success of using StackOverflow for search and other tasks in the software engineering literature and paves the way for MT. Our scripts and corpora are publicly available on GitHub [1] as well as at https://search.datacite.org/works/10.5281/zenodo.2558551.
Citations: 4
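The per-word maximum likelihood alignment entropy mentioned in the abstract can be illustrated on a toy corpus: lower entropy means an English word maps consistently to few code tokens. The English-to-code alignments below are invented for illustration:

```python
import math
from collections import Counter, defaultdict

# Toy aligned (English word, code token) pairs; invented for illustration.
alignments = [
    ("read", "FileReader"), ("read", "FileReader"), ("read", "Scanner"),
    ("list", "ArrayList"), ("list", "ArrayList"), ("list", "ArrayList"),
]

by_word = defaultdict(Counter)
for eng, code in alignments:
    by_word[eng][code] += 1

def alignment_entropy(counts: Counter) -> float:
    """Shannon entropy of the maximum likelihood alignment distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

for word, counts in by_word.items():
    print(word, round(alignment_entropy(counts), 3))
# "list" always maps to ArrayList (entropy 0); "read" is split 2:1 (entropy ~0.918)
```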
A Dataset of Non-Functional Bugs
Aida Radu, Sarah Nadi
While several researchers have published bug data sets in the past, there has been less focus on bugs related to non-functional requirements. Non-functional requirements describe the quality attributes of a program. In this work, we introduce NFBugs, a data set of 133 non-functional bug fixes collected from 65 open-source projects written in Java and Python. NFBugs can be used to support code recommender systems focusing on non-functional properties.
Citations: 13
Striking Gold in Software Repositories? An Econometric Study of Cryptocurrencies on GitHub
Asher Trockman, R. V. Tonder, Bogdan Vasilescu
Cryptocurrencies have a significant open source development presence on GitHub. This presents a unique opportunity to observe their related developer effort and software growth. Individual cryptocurrency prices are partly driven by attractiveness, and we hypothesize that high-quality, actively developed software is one such influence. Thus, we report on a study of a panel data set containing nearly a year of daily observations of development activity, popularity, and market capitalization for over two hundred open source cryptocurrencies. We find that open source project popularity is associated with higher market capitalization, though development activity and quality assurance practices are insignificant variables in our models. Using Granger causality tests, we find no compelling evidence for a dynamic relation between market capitalization and metrics such as daily stars, forks, watchers, commits, contributors, and lines of code changed.
Citations: 12
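A Granger test of the kind reported above asks whether lagged values of one series (say, daily stars) improve a forecast of another (market cap) beyond the latter's own lags. A minimal one-lag version in plain NumPy, on synthetic data where the relation is built in, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
stars = rng.normal(size=T)
# Synthetic market cap that genuinely depends on lagged stars
cap = np.zeros(T)
for t in range(1, T):
    cap[t] = 0.5 * cap[t - 1] + 0.8 * stars[t - 1] + rng.normal()

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

y = cap[1:]
ones = np.ones_like(y)
restricted = np.column_stack([ones, cap[:-1]])                # cap's own lag only
unrestricted = np.column_stack([ones, cap[:-1], stars[:-1]])  # plus lagged stars

rss_r, rss_u = rss(restricted, y), rss(unrestricted, y)
# F-statistic for adding the single lagged-stars regressor
f_stat = (rss_r - rss_u) / (rss_u / (len(y) - unrestricted.shape[1]))
print(f"F = {f_stat:.1f}")  # large F: lagged stars Granger-cause cap in this toy data
```

The paper's finding is the opposite case: on real data the F-statistics were not significant, i.e., lagged GitHub metrics did not improve forecasts of market cap.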
Beyond GumTree: A Hybrid Approach to Generate Edit Scripts
Junnosuke Matsumoto, Yoshiki Higo, S. Kusumoto
In development using a version control system, understanding the differences between versions of source code is important. Edit scripts (ESs) represent the differences between two versions of source code. One tool that generates ESs is GumTree. GumTree takes two versions of source code as input and generates an ES consisting of insert, delete, update, and move actions on nodes of an abstract syntax tree (AST). However, the accuracy of the move and update actions generated by GumTree is insufficient, which makes ESs more difficult to understand. One reason the accuracy is insufficient is that GumTree generates ESs from AST information only. Thus, in this research, we propose to generate easier-to-understand ESs by using not only the structure of the AST but also information from line differences. To evaluate our methodology, we applied it to some open source software and confirmed that the ESs generated by our methodology are more helpful for understanding differences in source code than those generated by GumTree.
Citations: 11
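The hybrid idea above (AST structure combined with line differences) can be sketched with Python's standard library: use difflib to find the changed line ranges, then keep only the AST nodes that overlap them. This is a simplification of what a real ES generator does, for illustration only:

```python
import ast
import difflib

old_src = "def add(a, b):\n    return a + b\n"
new_src = "def add(a, b):\n    return a + b + 1\n"

# Step 1: line-level differences between the two versions
matcher = difflib.SequenceMatcher(None, old_src.splitlines(), new_src.splitlines())
changed_lines = {
    line
    for tag, i1, i2, j1, j2 in matcher.get_opcodes()
    if tag != "equal"
    for line in range(j1 + 1, j2 + 1)  # 1-based line numbers in the new version
}

# Step 2: restrict attention to AST nodes that start on a changed line
changed_nodes = [
    type(node).__name__
    for node in ast.walk(ast.parse(new_src))
    if getattr(node, "lineno", None) in changed_lines
]
print(changed_lines, changed_nodes)
```

Here only line 2 changed, so the unchanged `FunctionDef` on line 1 is excluded and the edit is localized to the `Return` expression's subtree.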
Keynote Abstract
Citations: 0
Title Page iii
Citations: 0
Investigating Next Steps in Static API-Misuse Detection
Sven Amann, H. Nguyen, Sarah Nadi, T. Nguyen, M. Mezini
Application Programming Interfaces (APIs) often impose constraints such as call order or preconditions. API misuses, i.e., usages violating these constraints, may cause software crashes, data loss, and vulnerabilities. Researchers have developed several approaches to detect API misuses, which typically still result in low recall and precision. In this work, we investigate ways to improve API-misuse detection. We design MUDetect, an API-misuse detector that builds on the strengths of existing detectors and tries to mitigate their weaknesses. MUDetect uses a new graph representation of API usages that captures different types of API misuses and a systematically designed ranking strategy that effectively improves precision. Evaluation shows that MUDetect identifies real-world API misuses with twice the recall of previous detectors and 2.5x higher precision. It even achieves almost 4x higher precision and recall when mining patterns across projects, rather than from only the target project.
Citations: 44
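A much-simplified version of the detection idea above: represent each usage as an ordered sequence of API calls, mine the call-order constraints shared by all corpus usages, and flag a usage that starts a pattern but never completes it. The API names and pattern are invented for illustration; MUDetect's actual graph representation is considerably richer:

```python
# Correct usages from a hypothetical mined corpus, plus one suspicious usage.
usages = [
    ["open", "read", "close"],
    ["open", "read", "close"],
    ["open", "write", "close"],
]
suspect = ["open", "read"]  # never calls close()

def ordered_pairs(seq):
    """All (a, b) pairs where a is called before b in the sequence."""
    return {(a, b) for i, a in enumerate(seq) for b in seq[i + 1:]}

# Keep only the ordering constraints present in every corpus usage
pattern = set.intersection(*(ordered_pairs(u) for u in usages))

# A misuse: a required pair (a, b) where the suspect calls a but never b
violations = {(a, b) for a, b in pattern if a in suspect and b not in suspect}
print(violations)  # → {('open', 'close')}
```

Only `("open", "close")` survives the intersection, since the third usage calls `write` instead of `read`, and that surviving constraint is exactly what the suspect violates.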
GreenHub Farmer: Real-World Data for Android Energy Mining
Hugo Matalonga, Bruno Cabral, F. C. Filho, Marco Couto, Rui Pereira, S. Sousa, J. Fernandes
As mobile devices are supporting more and more of our daily activities, it is vital to extend their battery up-time as much as possible. In fact, according to the Wall Street Journal, 9 out of 10 users suffer from low battery anxiety. The goal of our work is to understand how Android usage, apps, operating systems, hardware and user habits influence battery lifespan. Our strategy is to collect anonymous raw data from devices all over the world, through a mobile app, and to build and analyze a large-scale dataset containing real-world, day-to-day data representative of user practices. So far, the dataset we collected includes 12 million+ (anonymous) data samples, across 900+ device brands and 5,000+ models. And it keeps growing. The data we collect, which is publicly available through different channels, is sufficiently heterogeneous to support studies with a wide range of focuses and research goals, thus opening the opportunity to inform and reshape user habits, and even influence the development of both hardware and software for mobile devices.
Citations: 12
A Large-Scale Study About Quality and Reproducibility of Jupyter Notebooks
J. F. Pimentel, Leonardo Gresta Paulino Murta, V. Braganholo, J. Freire
Jupyter Notebooks have been widely adopted by many different communities, both in science and industry. They support the creation of literate programming documents that combine code, text, and execution results with visualizations and all sorts of rich media. The self-documenting aspects and the ability to reproduce results have been touted as significant benefits of notebooks. At the same time, there has been growing criticism that the way notebooks are being used leads to unexpected behavior, encourages poor coding practices, and produces results that can be hard to reproduce. To understand good and bad practices used in the development of real notebooks, we studied 1.4 million notebooks from GitHub. We present a detailed analysis of their characteristics that impact reproducibility. We also propose a set of best practices that can improve the rate of reproducibility and discuss open challenges that require further research and development.
Citations: 148
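One reproducibility signal a study like the one above can check is whether code cells were executed in linear order: a notebook's JSON records each cell's `execution_count`, so a skip or out-of-order run is visible statically. The minimal notebook document below is constructed inline for illustration:

```python
import json

# Minimal notebook JSON, constructed inline for illustration.
nb = json.loads("""{
  "cells": [
    {"cell_type": "code", "execution_count": 1},
    {"cell_type": "code", "execution_count": 3},
    {"cell_type": "code", "execution_count": 2}
  ]
}""")

counts = [
    c["execution_count"]
    for c in nb["cells"]
    if c["cell_type"] == "code" and c["execution_count"] is not None
]
out_of_order = counts != sorted(counts)
print(out_of_order)  # True: the cells were run in a non-linear order
```

Running this notebook top to bottom may therefore produce different results than its author saw, which is one of the hidden-state problems the study documents.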