
Latest Publications: 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)

Search-Based Energy Testing of Android
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00115
Reyhaneh Jabbarvand, Jun-Wei Lin, S. Malek
The utility of a smartphone is limited by its battery capacity and the ability of its hardware and software to efficiently use the device's battery. To properly characterize the energy consumption of an app and identify energy defects, it is critical that apps are properly tested, i.e., analyzed dynamically to assess the app's energy properties. However, currently there is a lack of testing tools for evaluating the energy properties of apps. We present COBWEB, a search-based energy testing technique for Android. By leveraging a set of novel models, representing both the functional behavior of an app as well as the contextual conditions affecting the app's energy behavior, COBWEB generates a test suite that can effectively find energy defects. Our experimental results using real-world apps demonstrate not only its ability to effectively and efficiently test energy behavior of apps, but also its superiority over prior techniques by finding a wider and more diverse set of energy defects.
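To make the search-based generation loop concrete, here is a minimal, self-contained sketch of a genetic algorithm over Android event sequences. The event vocabulary, the energy-sensitive event pairs, and the fitness function are invented stand-ins for illustration only; COBWEB's actual functional and contextual models are far richer.

```python
# Sketch of search-based energy-test generation, assuming a toy event
# vocabulary and a hypothetical set of energy-sensitive event pairs.
import random

EVENTS = ["tap_button", "scroll_list", "rotate_screen", "enable_gps",
          "start_download", "lock_screen", "resume_app"]
# Hypothetical contextual conditions that tend to expose energy defects,
# e.g. acquiring a resource and then locking the screen without releasing it.
ENERGY_SENSITIVE = {("enable_gps", "lock_screen"), ("start_download", "lock_screen")}

def fitness(test):
    """Reward tests that exercise energy-sensitive consecutive event pairs."""
    pairs = set(zip(test, test[1:]))
    return len(pairs & ENERGY_SENSITIVE)

def mutate(test):
    mutant = list(test)
    mutant[random.randrange(len(mutant))] = random.choice(EVENTS)
    return mutant

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, test_len=6, generations=50):
    population = [[random.choice(EVENTS) for _ in range(test_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 2]
        offspring = [mutate(crossover(random.choice(elite), random.choice(elite)))
                     for _ in range(pop_size - len(elite))]
        population = elite + offspring
    return max(population, key=fitness)

print(evolve())  # an event sequence covering energy-sensitive pairs
```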
Citations: 29
Why Do Episodic Volunteers Stay in FLOSS Communities?
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00100
A. Barcomb, Klaas-Jan Stol, D. Riehle, Brian Fitzgerald
Successful Free/Libre and Open Source Software (FLOSS) projects incorporate both habitual and infrequent, or episodic, contributors. Using the concept of episodic volunteering (EV) from the general volunteering literature, we derive a model consisting of five key constructs that we hypothesize affect episodic volunteers' retention in FLOSS communities. To evaluate the model we conducted a survey with over 100 FLOSS episodic volunteers. We observe that three of our model constructs (social norms, satisfaction and community commitment) are all positively associated with volunteers' intention to remain, while the two other constructs (psychological sense of community and contributor benefit motivations) are not. Furthermore, exploratory clustering on unobserved heterogeneity suggests that there are four distinct categories of volunteers: satisfied, classic, social and obligated. Based on our findings, we offer suggestions for projects to incorporate and manage episodic volunteers, so as to better leverage this type of contributors and potentially improve projects' sustainability.
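As a hedged illustration of the exploratory clustering step, the sketch below runs k-means over synthetic construct scores. Both k-means and the fake Likert-style data are stand-ins; the paper clusters real survey responses on unobserved heterogeneity.

```python
# Illustrative only: cluster synthetic respondents on the five model
# constructs and inspect the four recovered profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: social norms, satisfaction, community commitment,
# psychological sense of community, contributor benefit motivations.
scores = rng.uniform(1, 7, size=(120, 5))  # fake 1-7 Likert means

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
for k in range(4):
    centroid = scores[labels == k].mean(axis=0).round(2)
    print(f"cluster {k}: n={np.sum(labels == k)}, centroid={centroid}")
```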
Citations: 19
Distilling Neural Representations of Data Structure Manipulation using fMRI and fNIRS
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00053
Yu Huang, Xinyu Liu, R. Krueger, Tyler Santander, Xiaosu Hu, Kevin Leach, Westley Weimer
Data structures permeate many aspects of software engineering, but their associated human cognitive processes are not thoroughly understood. We leverage medical imaging and insights from the psychological notion of spatial ability to decode the neural representations of several fundamental data structures and their manipulations. In a human study involving 76 participants, we examine list, array, tree, and mental rotation tasks using both functional near-infrared spectroscopy (fNIRS) and functional magnetic resonance imaging (fMRI). We find a nuanced relationship: data structure and spatial operations use the same focal regions of the brain but to different degrees. They are related but distinct neural tasks. In addition, more difficult computer science problems induce higher cognitive load than do problems of pure spatial reasoning. Finally, while fNIRS is less expensive and more permissive, there are some computing-relevant brain regions that only fMRI can reach.
Citations: 28
Learning to Spot and Refactor Inconsistent Method Names
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00019
Kui Liu, Dongsun Kim, Tegawendé F. Bissyandé, Tae-young Kim, Kisub Kim, Anil Koyuncu, Suntae Kim, Yves Le Traon
To ensure code readability and facilitate software maintenance, program methods must be named properly. In particular, method names must be consistent with the corresponding method implementations. Debugging method names remains an important topic in the literature, where various approaches analyze commonalities among method names in a large dataset to detect inconsistent method names and suggest better ones. We note that the state-of-the-art does not analyze the implemented code itself to assess consistency. We thus propose a novel automated approach to debugging method names based on the analysis of consistency between method names and method code. The approach leverages deep feature representation techniques adapted to the nature of each artifact. Experimental results on over 2.1 million Java methods show that we can achieve an improvement of up to 15 percentage points over the state-of-the-art, establishing a record performance of 67.9% F1-measure in identifying inconsistent method names. We further demonstrate that our approach yields up to 25% accuracy in suggesting full names, while the state-of-the-art lags far behind at 1.1% accuracy. Finally, we report on our success in fixing 66 inconsistent method names in a live study on projects in the wild.
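The sketch below is a deliberately crude lexical proxy for name/implementation consistency: it splits identifiers into sub-tokens and measures the overlap between a method name and its body. The paper instead learns deep feature representations of both artifacts; this only makes the underlying intuition concrete.

```python
# Hypothetical baseline: flag methods whose name shares few sub-tokens
# with the identifiers used in the method body.
import re

def subtokens(identifier):
    """Split camelCase/snake_case identifiers into lowercase sub-tokens."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", identifier)
    return {p.lower() for p in parts}

def consistency(method_name, body):
    name_tokens = subtokens(method_name)
    body_tokens = set()
    for ident in re.findall(r"[A-Za-z_]\w*", body):
        body_tokens |= subtokens(ident)
    return len(name_tokens & body_tokens) / max(len(name_tokens), 1)

body = "File f = new File(path); return f.delete();"
print(consistency("deleteFile", body))  # 1.0 -> likely consistent
print(consistency("loadConfig", body))  # 0.0 -> flag for review
```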
Citations: 99
Pattern-Based Mining of Opinions in Q&A Websites
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00066
B. Lin, Fiorella Zampetti, G. Bavota, M. D. Penta, Michele Lanza
Informal documentation contained in resources such as Q&A websites (e.g., Stack Overflow) is a precious resource for developers, who can find examples there of how to use certain APIs, as well as opinions about the pros and cons of such APIs. Automatically identifying and classifying such opinions can alleviate developers' burden in performing manual searches, and can be used to recommend APIs that are good from some points of view (e.g., performance), or to highlight those less ideal from other perspectives (e.g., compatibility). We propose POME (Pattern-based Opinion MinEr), an approach that leverages natural language parsing and pattern-matching to classify Stack Overflow sentences referring to APIs according to seven aspects (e.g., performance, usability), and to determine their polarity (positive vs negative). The patterns have been inferred by manually analyzing 4,346 sentences from Stack Overflow linked to a total of 30 APIs. We evaluated POME by (i) comparing the pattern-matching approach with machine learners leveraging the patterns themselves as well as n-grams extracted from Stack Overflow posts; (ii) assessing the ability of POME to detect the polarity of sentences, as compared to sentiment-analysis tools; (iii) comparing POME with the state-of-the-art Stack Overflow opinion mining approach, Opiner, through a study involving 24 human evaluators. Our study shows that POME exhibits a higher precision than a state-of-the-art technique (Opiner), in terms of both opinion aspect identification and polarity assessment.
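A minimal sketch of the pattern-matching core follows, with a few invented aspect/polarity patterns; POME's real patterns were inferred from the 4,346 manually analyzed sentences mentioned above.

```python
# Illustrative patterns only; real POME patterns come from manual analysis.
import re

PATTERNS = [
    (re.compile(r"\b(fast|efficient|lightweight)\b", re.I), "performance", "positive"),
    (re.compile(r"\b(slow|sluggish|bloated)\b", re.I), "performance", "negative"),
    (re.compile(r"\b(easy to use|intuitive|well documented)\b", re.I), "usability", "positive"),
    (re.compile(r"\b(confusing|hard to use|poorly documented)\b", re.I), "usability", "negative"),
]

def classify(sentence):
    for pattern, aspect, polarity in PATTERNS:
        if pattern.search(sentence):
            return aspect, polarity
    return None  # no opinion pattern matched

print(classify("Jackson is fast and lightweight for JSON parsing."))
print(classify("The API is confusing and poorly documented."))
```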
Citations: 54
Probabilistic Disassembly
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00121
Kenneth A. Miller, Yonghwi Kwon, Yi Sun, Zhuo Zhang, X. Zhang, Zhiqiang Lin
Disassembling stripped binaries is a prominent challenge for binary analysis, due to the interleaving of code segments and data, and the difficulties of resolving control transfer targets of indirect calls and jumps. As a result, most existing disassemblers have both false positives (FP) and false negatives (FN). We observe that uncertainty is inevitable in disassembly due to the information loss during compilation and code generation. Therefore, we propose to model such uncertainty using probabilities and propose a novel disassembly technique, which computes a probability for each address in the code space, indicating its likelihood of being a true positive instruction. The probability is computed from a set of features that are reachable to an address, including control flow and data flow features. Our experiments with more than two thousand binaries show that our technique does not have any FN and has only 3.7% FP. In comparison, a state-of-the-art superset disassembly technique has 85% FP. A rewriter built on our disassembly can generate binaries that are only half of the size of those by superset disassembly and run 3% faster. While many widely-used disassemblers such as IDA and BAP suffer from missing function entries, our experiment also shows that even without any function entry information, our disassembler can still achieve 0 FN and 6.8% FP.
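The probabilistic idea can be shown with a toy odds computation in which each address accumulates evidence from the features that reach it. The features and likelihood ratios below are invented; the paper derives its probabilities from concrete control-flow and data-flow features over a superset disassembly.

```python
# Toy model: combine independent feature likelihood ratios into a
# probability that an address holds a true positive instruction.
def instruction_probability(likelihood_ratios, prior=0.5):
    """Naive-Bayes-style odds update from a prior and per-feature evidence."""
    odds = prior / (1 - prior)
    for ratio in likelihood_ratios:
        odds *= ratio
    return odds / (1 + odds)

# Hypothetical evidence for one address: decodes to a valid opcode (weak),
# is the target of a direct call (strong), has a consistent register
# def-use chain (moderate).
print(instruction_probability([1.2, 9.0, 3.0]))  # ~0.97, likely code
print(instruction_probability([1.2, 0.1]))       # ~0.11, likely data
```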
Citations: 43
Reasonably-Most-General Clients for JavaScript Library Analysis
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00026
E. Kristensen, Anders Møller
A well-known approach to statically analyze libraries without having access to their client code is to model all possible clients abstractly using a most-general client. In dynamic languages, however, a most-general client would be too general: it may interact with the library in ways that are not intended by the library developer and are not realistic in actual clients, resulting in useless analysis results. In this work, we explore the concept of a reasonably-most-general client, in the context of a new static analysis tool REAGENT that aims to detect errors in TypeScript declaration files for JavaScript libraries. By incorporating different variations of reasonably-most-general clients into an existing static analyzer for JavaScript, we use REAGENT to study how different assumptions of client behavior affect the analysis results. We also show how REAGENT is able to find type errors in real-world TypeScript declaration files, and, once the errors have been corrected, to guarantee that no remaining errors exist relative to the selected assumptions.
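As a rough dynamic transposition of the most-general-client idea (REAGENT itself analyzes TypeScript declarations statically), the sketch below acts as an arbitrary client of a library: it calls every public function with a few sample values and compares the results against the declared return types. The library and all names here are hypothetical.

```python
# Dynamic stand-in for declaration checking: exercise a namespace the way
# an arbitrary (but reasonable) client might, and report type mismatches.
import inspect

def check_declarations(namespace, samples=(0, "x", None)):
    for name, fn in inspect.getmembers(namespace, inspect.isfunction):
        declared = fn.__annotations__.get("return")
        if declared is None:
            continue
        for arg in samples:
            try:
                result = fn(arg)
            except Exception:
                continue  # an unreasonable client interaction; ignore it
            if not isinstance(result, declared):
                print(f"{name}: declared {declared.__name__}, "
                      f"got {type(result).__name__} for input {arg!r}")

class lib:  # hypothetical library with a wrong declaration
    @staticmethod
    def double(x) -> int:
        return str(x * 2)  # implementation actually returns str

check_declarations(lib)
```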
Citations: 11
Mining Historical Test Logs to Predict Bugs and Localize Faults in the Test Logs
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00031
Anunay Amar, Peter C. Rigby
Software testing is an integral part of modern software development. However, test runs can produce thousands of lines of logged output that make it difficult to find the cause of a fault in the logs. This problem is exacerbated by environmental failures that distract from product faults. In this paper we present techniques with the goal of capturing the maximum number of product faults, while flagging the minimum number of log lines for inspection. We observe that the location of a fault in a log should be contained in the lines of a failing test log. In contrast, a passing test log should not contain the lines related to a failure. Lines that occur in both a passing and a failing log introduce noise when attempting to find the fault in a failing log. We introduce an approach where we remove the lines that occur in the passing log from the failing log. After removing these lines, we use information retrieval techniques to flag the most probable lines for investigation. We modify TF-IDF to identify the most relevant log lines related to past product failures. We then vectorize the logs and develop an exclusive version of KNN to identify which logs are likely to lead to product faults and which lines are the most probable indication of the failure. Our best approach, LogFaultFlagger, finds 89% of the total faults and flags less than 1% of the total failed log lines for inspection. LogFaultFlagger drastically outperforms the previous work CAM. We implemented LogFaultFlagger as a tool at Ericsson, where it presents fault prediction summaries to base station testers.
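A minimal sketch of the log-diff plus information-retrieval step follows, assuming one passing and one failing log per test. Plain TF-IDF stands in for the paper's modified variant, and the KNN stage is omitted.

```python
# Remove lines seen in the passing log, then rank the remaining failing-log
# lines by TF-IDF against a (fake) history of fault-linked logs.
import math
from collections import Counter

def suspicious_lines(failing_log, passing_log, history):
    passing = set(passing_log)
    candidates = {line for line in failing_log if line not in passing}
    doc_freq = Counter(line for log in history for line in set(log))
    n_docs = len(history)

    def tf_idf(line):
        tf = failing_log.count(line)
        idf = math.log((n_docs + 1) / (doc_freq[line] + 1)) + 1
        return tf * idf

    return sorted(candidates, key=tf_idf, reverse=True)

failing = ["boot ok", "load cfg", "ERROR: channel 3 timeout", "retry", "retry"]
passing = ["boot ok", "load cfg", "teardown"]
history = [["ERROR: channel 3 timeout"], ["disk full"], ["boot ok"]]
print(suspicious_lines(failing, passing, history)[:2])
```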
Citations: 32
Gigahorse: Thorough, Declarative Decompilation of Smart Contracts
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00120
Neville Grech, Lexi Brent, Bernhard Scholz, Y. Smaragdakis
The rise of smart contracts–autonomous applications running on blockchains–has led to a growing number of threats, necessitating sophisticated program analysis. However, smart contracts, which transact valuable tokens and cryptocurrencies, are compiled to very low-level bytecode. This bytecode is the ultimate semantics and means of enforcement of the contract. We present the Gigahorse toolchain. At its core is a reverse compiler (i.e., a decompiler) that decompiles smart contracts from Ethereum Virtual Machine (EVM) bytecode into a high-level 3-address code representation. The new intermediate representation of smart contracts makes implicit data- and control-flow dependencies of the EVM bytecode explicit. Decompilation obviates the need for a contract's source and allows the analysis of both new and deployed contracts. Gigahorse advances the state of the art on several fronts. It gives the highest analysis precision and completeness among decompilers for Ethereum smart contracts–e.g., Gigahorse can decompile over 99.98% of deployed contracts, compared to 88% for the recently-published Vandal decompiler and under 50% for the state-of-the-practice Porosity decompiler. Importantly, Gigahorse offers a full-featured toolchain for further analyses (and a "batteries included" approach, with multiple clients already implemented), together with the highest performance and scalability. Key to these improvements is Gigahorse's use of a declarative, logic-based specification, which allows high-level insights to inform low-level decompilation.
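To give a feel for what lifting stack-based bytecode to 3-address code involves, here is a toy lifter over an invented mini instruction set. Real EVM decompilation must additionally resolve dynamic jump targets and recover control flow, which is where Gigahorse's declarative, logic-based analysis does the heavy lifting.

```python
# Toy symbolic-stack lifter: execute stack effects symbolically and emit
# one 3-address statement per value-producing operation.
def lift(bytecode):
    stack, tac, fresh = [], [], iter(range(10**6))
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(str(args[0]))
        elif op in ("ADD", "MUL"):
            b, a = stack.pop(), stack.pop()
            v = f"v{next(fresh)}"
            tac.append(f"{v} = {a} {'+' if op == 'ADD' else '*'} {b}")
            stack.append(v)
        elif op == "MSTORE":  # EVM-style: offset on top of stack, then value
            addr, val = stack.pop(), stack.pop()
            tac.append(f"mem[{addr}] = {val}")
    return tac

program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",),
           ("PUSH", 0), ("MSTORE",)]
print("\n".join(lift(program)))
# v0 = 2 + 3
# v1 = v0 * 4
# mem[0] = v1
```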
Citations: 72
Class Imbalance Evolution and Verification Latency in Just-in-Time Software Defect Prediction
Pub Date: 2019-05-25 DOI: 10.1109/ICSE.2019.00076
George G. Cabral, Leandro L. Minku, Emad Shihab, Suhaib Mujahid
Just-in-Time Software Defect Prediction (JIT-SDP) is an SDP approach that makes defect predictions at the software change level. Most existing JIT-SDP work assumes that the characteristics of the problem remain the same over time. However, JIT-SDP may suffer from class imbalance evolution. Specifically, the imbalance status of the problem (i.e., how much underrepresented the defect-inducing changes are) may be intensified or reduced over time. If occurring, this could render existing JIT-SDP approaches unsuitable, including those that re-build classifiers over time using only recent data. This work thus provides the first investigation of whether class imbalance evolution poses a threat to JIT-SDP. This investigation is performed in a realistic scenario by taking into account verification latency -- the often overlooked fact that labeled training examples arrive with a delay. Based on 10 GitHub projects, we show that JIT-SDP suffers from class imbalance evolution, significantly hindering the predictive performance of existing JIT-SDP approaches. Compared to state-of-the-art class imbalance evolution learning approaches, the predictive performance of JIT-SDP approaches was up to 97.2% lower in terms of g-mean. Hence, it is essential to tackle class imbalance evolution in JIT-SDP. We then propose a novel class imbalance evolution approach for the specific context of JIT-SDP. While maintaining top ranked g-means, this approach managed to produce up to 63.59% more balanced recalls on the defect-inducing and clean classes than state-of-the-art class imbalance evolution approaches. We thus recommend it to avoid overemphasizing one class over the other in JIT-SDP.
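The sketch below illustrates the two problem ingredients together, verification latency and a drifting class ratio, using a simplified waiting period and resampling rule. It is a stand-in to make the setting concrete, not a reimplementation of the paper's approach.

```python
# Online JIT-SDP data handling: labels arrive only after a waiting period,
# and the minority (defect-inducing) class is oversampled more aggressively
# as it becomes rarer.
import random
from collections import deque

WAIT = 90  # days until a change's clean/defect-inducing label is trusted
random.seed(1)

window = deque(maxlen=500)  # recent labels, to track the imbalance status
pending = []                # (arrival_day, features, label)
training_set = []

def train_step(day, incoming):
    pending.extend((day, x, y) for x, y in incoming)
    ready = [(x, y) for (d, x, y) in pending if day - d >= WAIT]
    pending[:] = [(d, x, y) for (d, x, y) in pending if day - d < WAIT]
    for x, y in ready:
        window.append(y)
        defect_rate = sum(window) / len(window)
        copies = (max(1, round((1 - defect_rate) / max(defect_rate, 0.05)))
                  if y else 1)
        training_set.extend([(x, y)] * copies)

# Simulated stream whose defect rate decays over time (imbalance evolution).
for day in range(365):
    rate = 0.4 - 0.3 * day / 365
    batch = [([random.random()], int(random.random() < rate)) for _ in range(5)]
    train_step(day, batch)
print(f"training examples after resampling: {len(training_set)}")
```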
Citations: 56