
Automated Software Engineering: Latest Publications

WalletRadar: towards automating the detection of vulnerabilities in browser-based cryptocurrency wallets
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-31 | DOI: 10.1007/s10515-024-00430-3
Pengcheng Xia, Yanhui Guo, Zhaowen Lin, Jun Wu, Pengbo Duan, Ningyu He, Kailong Wang, Tianming Liu, Yinliang Yue, Guoai Xu, Haoyu Wang

Cryptocurrency wallets, acting as fundamental infrastructure to the blockchain ecosystem, have seen significant user growth, particularly among browser-based wallets (i.e., browser extensions). However, this expansion accompanies security challenges, making these wallets prime targets for malicious activities. Despite a substantial user base, there is not only a significant gap in comprehensive security analysis but also a pressing need for specialized tools that can aid developers in reducing vulnerabilities during the development process. To fill the void, we present a comprehensive security analysis of browser-based wallets in this paper, along with the development of an automated tool designed for this purpose. We first compile a taxonomy of security vulnerabilities resident in cryptocurrency wallets by harvesting historical security reports. Based on this, we design WalletRadar, an automated detection framework that can accurately identify security issues based on static and dynamic analysis. Evaluation of 96 popular browser-based wallets shows WalletRadar’s effectiveness, by successfully automating the detection process in 90% of these wallets with high precision. This evaluation has led to the discovery of 116 security vulnerabilities corresponding to 70 wallets. By the time of this paper, we have received confirmations of 10 vulnerabilities from 8 wallet developers, with over $2,000 bug bounties. Further, we observed that 12 wallet developers have silently fixed 16 vulnerabilities after our disclosure. WalletRadar can effectively automate the identification of security risks in cryptocurrency wallets, thereby enhancing software development quality and safety in the blockchain ecosystem.
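To make the static-analysis side of such a pipeline concrete, the following minimal sketch shows the kind of check a framework like this might run over an unpacked extension: flagging code that appears to write mnemonic or private-key material to localStorage in plaintext. The rule, directory layout, and function names are assumptions for illustration only, not WalletRadar's actual detectors.

```python
# Hypothetical illustration only: a minimal static check in the spirit of the
# static analysis described above, flagging extension code that appears to
# write mnemonic or private-key material to localStorage in plaintext.
# The pattern, paths, and names below are assumptions, not the tool's rules.
import pathlib
import re

SUSPICIOUS_STORE = re.compile(
    r"localStorage\.setItem\([^)]*(mnemonic|seed|privateKey|private_key)[^)]*\)",
    re.IGNORECASE,
)

def scan_extension(unpacked_dir: str) -> list[tuple[str, int, str]]:
    """Scan an unpacked browser-extension directory for plaintext key storage."""
    findings = []
    for js_file in pathlib.Path(unpacked_dir).rglob("*.js"):
        for lineno, line in enumerate(js_file.read_text(errors="ignore").splitlines(), 1):
            if SUSPICIOUS_STORE.search(line):
                findings.append((str(js_file), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, snippet in scan_extension("./unpacked_wallet_extension"):
        print(f"{path}:{lineno}: possible plaintext key storage -> {snippet}")
```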

Citations: 0
6G secure quantum communication: a success probability prediction model
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-29 | DOI: 10.1007/s10515-024-00427-y
Muhammad Azeem Akbar, Arif Ali Khan, Sami Hyrynsalmi, Javed Ali Khan

The emergence of 6G networks initiates significant transformations in the communication technology landscape. Yet, while the melding of quantum computing (QC) with 6G networks promises an array of benefits, particularly in secure communication, adapting QC into 6G requires a rigorous focus on numerous critical variables. This study aims to identify key variables in secure quantum communication (SQC) in 6G and develop a model for predicting the success probability of 6G-SQC projects. We identified key 6G-SQC variables from existing literature to achieve these objectives and collected training data by conducting a questionnaire survey. We then analyzed these variables using an optimization model, i.e., Genetic Algorithm (GA), with two different prediction methods: the Naïve Bayes Classifier (NBC) and Logistic Regression (LR). The results of the success probability prediction models indicate that as the 6G-SQC matures, project success probability significantly increases, and costs are notably reduced. Furthermore, the best fitness rankings for each 6G-SQC project variable determined using NBC and LR indicated a strong positive correlation (rs = 0.895). The t-test results (t = 0.752, p = 0.502 > 0.05) show no significant differences between the rankings calculated using both prediction models (NBC and LR). The results reveal that the developed success probability prediction model, based on 15 identified 6G-SQC project variables, highlights the areas where practitioners need to focus more to facilitate the cost-effective and successful implementation of 6G-SQC projects.
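As a concrete illustration of the reported comparison between the NBC- and LR-derived rankings, the sketch below computes a Spearman correlation and a paired t-test with SciPy over hypothetical fitness scores for 15 variables; the numbers are placeholders, not the study's data.

```python
# Illustration of the ranking comparison reported in the abstract
# (Spearman correlation and a paired t-test between NBC- and LR-based results).
from scipy.stats import spearmanr, ttest_rel

# Hypothetical best-fitness scores for the 15 project variables under each
# prediction method (placeholders, not the study's data).
nbc_fitness = [0.91, 0.88, 0.86, 0.84, 0.83, 0.80, 0.79, 0.77,
               0.75, 0.74, 0.71, 0.69, 0.66, 0.64, 0.61]
lr_fitness  = [0.90, 0.87, 0.88, 0.82, 0.81, 0.82, 0.78, 0.75,
               0.76, 0.72, 0.70, 0.70, 0.65, 0.66, 0.60]

rs, rs_p = spearmanr(nbc_fitness, lr_fitness)   # agreement between the two rankings
t, t_p = ttest_rel(nbc_fitness, lr_fitness)     # paired t-test over the same variables

print(f"Spearman rs = {rs:.3f} (p = {rs_p:.4f})")
print(f"t-test: t = {t:.3f}, p = {t_p:.3f} (p > 0.05 -> no significant difference)")
```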

Citations: 0
Bash comment generation via data augmentation and semantic-aware CodeBERT
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-26 | DOI: 10.1007/s10515-024-00431-2
Yiheng Shen, Xiaolin Ju, Xiang Chen, Guang Yang

Understanding Bash code is challenging for developers due to its syntax flexibility and unique features. Bash lacks sufficient training data compared to comment generation tasks in popular programming languages. Furthermore, collecting more real Bash code and corresponding comments is time-consuming and labor-intensive. In this study, we propose a two-module method named Bash2Com for Bash code comment generation. The first module, NP-GD, is a gradient-based automatic data augmentation component that enhances normalization stability when generating adversarial examples. The second module, MASA, leverages CodeBERT to learn the rich semantics of Bash code. Specifically, MASA considers the representations learned at each layer of CodeBERT as a set of semantic information that captures recursive relationships within the code. To generate comments for different Bash snippets, MASA employs LSTM and attention mechanisms to dynamically concentrate on relevant representational information. Then, we utilize the Transformer decoder and beam search algorithm to generate code comments. To evaluate the effectiveness of Bash2Com, we consider a corpus of 10,592 Bash code snippets and corresponding comments. Compared with the state-of-the-art baselines, our experimental results show that Bash2Com can outperform all baselines by at least 10.19%, 11.81%, 2.61%, and 6.13% in terms of the performance measures BLEU-3/4, METEOR, and ROUGE-L. Moreover, the rationality of NP-GD and MASA in Bash2Com is verified by ablation studies. Finally, we conduct a human evaluation to illustrate the effectiveness of Bash2Com from practitioners’ perspectives.
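The layer-aggregation idea behind MASA can be sketched as follows: obtain the hidden states from every CodeBERT layer and fuse them with learned attention weights before decoding. This is a minimal sketch assuming a simple softmax weighting per layer; it is not the authors' implementation.

```python
# Sketch (not the authors' implementation) of the layer-aggregation idea:
# take the hidden states from every CodeBERT layer and combine them with
# learned attention weights before decoding. The weighting scheme is assumed.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base", output_hidden_states=True)

class LayerAttentionPool(torch.nn.Module):
    """Attention-weighted combination of the 13 layer outputs (embeddings + 12 blocks)."""
    def __init__(self, num_layers: int = 13):
        super().__init__()
        self.layer_logits = torch.nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states):
        stacked = torch.stack(hidden_states, dim=0)        # (layers, batch, seq, dim)
        weights = torch.softmax(self.layer_logits, dim=0)  # one weight per layer
        return (weights[:, None, None, None] * stacked).sum(dim=0)

pool = LayerAttentionPool()
inputs = tokenizer('find . -name "*.log" -delete', return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).hidden_states               # tuple of 13 tensors
fused = pool(hidden)                                        # (batch, seq, dim), fed to a decoder
print(fused.shape)
```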

Citations: 0
Automated detection of class diagram smells using self-supervised learning
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-24 | DOI: 10.1007/s10515-024-00429-w
Amal Alazba, Hamoud Aljamaan, Mohammad Alshayeb

Design smells are symptoms of poorly designed solutions that may result in several maintenance issues. While various approaches, including traditional machine learning methods, have been proposed and shown to be effective in detecting design smells, they require extensive manually labeled data, which is expensive and challenging to scale. To leverage the vast amount of data that is now accessible, unsupervised semantic feature learning, or learning without requiring manual annotation labor, is essential. The goal of this paper is to propose a design smell detection method that is based on self-supervised learning. We propose Model Representation with Transformers (MoRT) to learn the UML class diagram features by training Transformers to recognize masked keywords. We empirically show how effective the defined proxy task is at learning semantic and structural properties. We thoroughly assess MoRT using four model smells: the Blob, Functional Decomposition, Spaghetti Code, and Swiss Army Knife. Furthermore, we compare our findings with supervised learning and feature-based methods. Finally, we ran a cross-project experiment to assess the generalizability of our approach. Results show that MoRT is highly effective in detecting design smells.
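A minimal sketch of the masked-keyword proxy task is shown below, assuming a straightforward serialization of a class diagram into a token sequence; the actual diagram encoding, keyword set, and masking rate used by MoRT are not given in the abstract and are invented here.

```python
# Minimal sketch of a masked-keyword proxy task over a serialized class diagram.
# The serialization, keyword set, and masking rate are assumptions for illustration.
import random

# A toy class diagram serialized as a token sequence of UML keywords and names.
diagram_tokens = [
    "class", "Order", "attribute", "total", "operation", "addItem",
    "association", "Order", "Customer", "class", "Customer",
    "attribute", "name", "operation", "register",
]
UML_KEYWORDS = {"class", "attribute", "operation", "association"}

def mask_keywords(tokens, mask_rate=0.3, seed=0):
    """Replace a fraction of the UML keywords with [MASK]; return inputs and labels."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if tok in UML_KEYWORDS and rng.random() < mask_rate:
            inputs.append("[MASK]")
            labels.append(tok)        # the Transformer is trained to recover this keyword
        else:
            inputs.append(tok)
            labels.append("-")        # ignored position
    return inputs, labels

masked, targets = mask_keywords(diagram_tokens)
print(masked)
print(targets)
```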

Citations: 0
Comparing apples and oranges? Investigating the consistency of CPU and memory profiler results across multiple Java versions
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-22 | DOI: 10.1007/s10515-024-00423-2
Myles Watkinson, Alexander E. I. Brownlee

Profiling is an important tool in the software developer’s toolbox, used to identify hot methods where most computational resources are used, to focus efforts at improving efficiency. Profilers are also important in the context of Genetic Improvement (GI) of software. GI applies search-based optimisation to existing software with many examples of success in a variety of contexts. GI generates variants of the original program, testing each for functionality and properties such as run time or memory footprint, and profiling can be used to target the code variations to increase the search efficiency. We report on an experimental study comparing two profilers included with different versions of the Java Development Kit (JDK), HPROF (JDK 8) and Java Flight Recorder (JFR) (JDK 8, 9, and 17), within the GI toolbox Gin on six open-source applications, for both run time and memory use. We find that a core set of methods are labelled hot in most runs, with a long tail appearing rarely. We suggest five repeats are enough to overcome this noise. Perhaps unsurprisingly, changing the profiler and JDK dramatically changes the hot methods identified, so profiling must be rerun for new JDKs. We also show that using profiling for test case subset selection is unwise, often missing relevant members of the test suite. Similar general patterns are seen for memory profiling as for run time, but the identified hot methods are often quite different.
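The stability analysis described here can be illustrated with a short sketch: given the sets of methods reported as hot in repeated profiling runs, compute the core set that is hot in most runs and the pairwise overlap between runs. The method names and the 80% threshold are placeholders, not the study's data.

```python
# Sketch of the stability analysis: find the "core" methods labelled hot in
# most repeated runs and the pairwise Jaccard overlap. Data are placeholders.
from collections import Counter
from itertools import combinations

runs = [
    {"App.parse", "App.render", "Util.hash", "IO.read"},
    {"App.parse", "App.render", "Util.hash", "Cache.get"},
    {"App.parse", "App.render", "Util.hash", "IO.read"},
    {"App.parse", "App.render", "IO.read"},
    {"App.parse", "App.render", "Util.hash", "GC.sweep"},
]

counts = Counter(m for run in runs for m in run)
core = {m for m, c in counts.items() if c >= len(runs) * 0.8}   # hot in >= 80% of runs
print("core hot methods:", sorted(core))

jaccards = [len(a & b) / len(a | b) for a, b in combinations(runs, 2)]
print(f"mean pairwise Jaccard overlap: {sum(jaccards) / len(jaccards):.2f}")
```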

Citations: 0
Prompt enhance API recommendation: visualize the user’s real intention behind this query
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-11 | DOI: 10.1007/s10515-024-00425-0
Yong Wang, Linjun Chen, Cuiyun Gao, Yingtao Fang, Yong Li

Developers frequently rely on APIs in their daily programming tasks, as APIs have become an indispensable tool for program development. However, with a vast number of open-source libraries available, selecting the appropriate API quickly can be a common challenge for programmers. Previous research on API recommendation primarily focused on designing better approaches to interpret user input. However, in practical applications, it is often difficult for users, especially novice programmers, to express their real intentions due to the limitations of language expression and programming capabilities. To address this issue, this paper introduces PTAPI, an approach that visualizes the user’s real intentions based on their query to enhance recommendation performance. Firstly, PTAPI identifies the prompt template from Stack Overflow (SO) posts based on the user’s input. Secondly, the obtained prompt template is combined with the user’s input to generate a new question. Finally, the newly generated question leverages dual information sources from SO posts and API official documentation to provide recommendations. To evaluate the effectiveness of PTAPI, we conducted experiments at both the class-level and method-level. The experimental results demonstrate the effectiveness of the proposed approach, with a significant improvement in the success rate.
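A rough sketch of the query-expansion step follows: retrieve the Stack Overflow title most similar to the user's query and merge it into a richer question. TF-IDF cosine similarity is used here only as a stand-in for PTAPI's template matching, and the example titles are invented.

```python
# Sketch of the query-expansion step: retrieve the most similar Stack Overflow
# title to the user's query and merge it into a new, richer question.
# TF-IDF retrieval is a stand-in, and the titles below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

so_titles = [
    "How to read a file line by line into a list in Java",
    "How to convert a String to an int in Java",
    "How to sort a HashMap by its values in Java",
]

user_query = "read text file lines"

vectorizer = TfidfVectorizer().fit(so_titles + [user_query])
title_vecs = vectorizer.transform(so_titles)
query_vec = vectorizer.transform([user_query])

best = cosine_similarity(query_vec, title_vecs).argmax()
expanded_query = f"{user_query} (related: {so_titles[best]})"
print(expanded_query)   # the expanded question is what gets matched against API docs
```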

Citations: 0
Future of software development with generative AI
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-11 | DOI: 10.1007/s10515-024-00426-z
Jaakko Sauvola, Sasu Tarkoma, Mika Klemettinen, Jukka Riekki, David Doermann

Generative AI is regarded as a major disruption to software development. Platforms, repositories, clouds, and the automation of tools and processes have been proven to improve productivity, cost, and quality. Generative AI, with its rapidly expanding capabilities, is a major step forward in this field. As a new key enabling technology, it can be used for many purposes, from creative dimensions to replacing repetitive and manual tasks. The number of opportunities increases with the capabilities of large-language models (LLMs). This has raised concerns about ethics, education, regulation, intellectual property, and even criminal activities. We analyzed the potential of generative AI and LLM technologies for future software development paths. We propose four primary scenarios, model trajectories for transitions between them, and reflect against relevant software development operations. The motivation for this research is clear: the software development industry needs new tools to understand the potential, limitations, and risks of generative AI, as well as guidelines for using it.

Citations: 0
Test-suite-guided discovery of least privilege for cloud infrastructure as code
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-05 | DOI: 10.1007/s10515-024-00420-5
Ryo Shimizu, Yuna Nunomura, Hideyuki Kanuka

Infrastructure as code (IaC) for the cloud, which automatically configures a system’s cloud environment from source code, is an important practice thanks to its efficient, reproducible provisioning. On a cloud IaC definition (template), developers must carefully manage permission settings to minimize the risk of cyber-attacks. To this end, least privilege on IaC templates, i.e., the assignment of a necessary and sufficient set of permissions, is widely regarded as a best practice. However, the discovery of least privilege can be an error-prone, burdensome task for developers. This is partially because the execution of an action on the cloud sometimes implicitly requires permissions of other services, and since these are difficult to recognize without actual execution, developers are forced to manually iterate the execution of an action and the modification of permissions. In this work, we present an approach to automatically discover least privilege. Our approach utilizes a test suite, which represents what a system should achieve on the cloud, as an indicator of least privilege, and it iterates testing on the cloud and (re)configuration of permissions on the basis of the test results. We also propose a stepwise filtering technique that utilizes the co-occurrences of cloud services/actions and clustering-based pruning to efficiently rule out unnecessary permissions. Our experiments demonstrate that this filtering reduces the number of iterations compared to naive approaches, which directly affects the time and cost to discover least privilege. Moreover, three case studies show that our approach can identify least privilege on Amazon Web Services within a practical time.
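The core loop can be sketched as below: run the test suite against the deployed stack, parse the access-denied errors, grant exactly the missing permissions, and repeat until the tests pass. The test runner and error format are mocked here; a real run would execute the cloud test suite and parse the provider's error messages.

```python
# Sketch of the test-driven permission loop described above. The test runner
# and the access-denied error format are mocked assumptions for illustration.
import re

DENIED = re.compile(r"not authorized to perform: (\S+)")

def run_test_suite(granted: set[str]) -> list[str]:
    """Mock: return access-denied messages for actions the tests need but lack."""
    required = {"s3:GetObject", "s3:PutObject", "dynamodb:Query"}
    return [f"User is not authorized to perform: {a}" for a in required - granted]

def discover_least_privilege(initial: set[str], max_rounds: int = 10) -> set[str]:
    granted = set(initial)
    for _ in range(max_rounds):
        errors = run_test_suite(granted)
        if not errors:                       # all tests pass: current set is sufficient
            return granted
        for message in errors:               # grant exactly what the failures ask for
            match = DENIED.search(message)
            if match:
                granted.add(match.group(1))
    raise RuntimeError("did not converge within the round limit")

print(sorted(discover_least_privilege(set())))
```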

Citations: 0
Distilled GPT for source code summarization
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-01 | DOI: 10.1007/s10515-024-00421-4
Chia-Yi Su, Collin McMillan

A code summary is a brief natural language description of source code. Summaries are usually only a single sentence long, and yet form the backbone of developer documentation. A short description such as “changes all visible polygons to the color blue” can give a programmer a high-level idea of what code does without the effort of reading the code itself. Recently, products based on Large Language Models such as ChatGPT have demonstrated a strong ability to write these descriptions automatically. However, to use these tools, programmers must send their code to untrusted third parties for processing (e.g., via an API call). This loss of custody is not acceptable to many organizations. In this paper, we present an alternative: we train an open source model using sample output generated by GPT-3.5 in a process related to knowledge distillation. Our model is small enough (350M parameters) to be run on a single 16 GB GPU, yet we show in our evaluation that it is large enough to mimic GPT-3.5 on this task.
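A minimal sketch of this kind of distillation setup is shown below: fine-tune a small open seq2seq model on (code, teacher-summary) pairs where the summaries come from GPT-3.5. The checkpoint (Salesforce/codet5-small), hyperparameters, and example pairs are stand-ins, not the authors' 350M-parameter model or training data.

```python
# Sketch of a distillation setup: fine-tune a small open seq2seq model on
# (code, teacher-summary) pairs whose summaries were produced by GPT-3.5.
# The checkpoint, hyperparameters, and example pairs are assumed stand-ins.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")
student = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-small")

pairs = [  # (source code, summary written by the teacher model)
    ("def add(a, b):\n    return a + b", "Adds two numbers and returns the result."),
    ("def is_even(n):\n    return n % 2 == 0", "Checks whether a number is even."),
]

optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)
student.train()
for code, teacher_summary in pairs:
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    labels = tokenizer(teacher_summary, return_tensors="pt", truncation=True).input_ids
    loss = student(**inputs, labels=labels).loss   # cross-entropy against the teacher's output
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"loss: {loss.item():.3f}")
```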

Citations: 0
GenerativeGI: creating generative art with genetic improvement
IF 2.0 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-03-01 | DOI: 10.1007/s10515-024-00414-3
Erik M. Fredericks, Jared M. Moore, Abigail C. Diller

Generative art is a domain in which artistic output is created via a procedure or heuristic that may result in digital and/or physical results. A generative artist will typically act as a domain expert by specifying the algorithms that will form the basis of the piece as well as defining and refining parameters that can impact the results; however, such efforts can require a significant amount of time to generate the final output. This article presents and extends GenerativeGI, an evolutionary computation-based technique for creating generative art by automatically searching through combinations of artistic techniques and their accompanying parameters to produce outputs desired by the designer. Generative art techniques and their respective parameters are encoded within a grammar that is then the target for genetic improvement. This grammar-based approach, combined with a many-objective evolutionary algorithm, enables the designer to efficiently search through a massive number of possible outputs that reflect their aesthetic preferences. We included a total of 15 generative art techniques and performed three separate empirical evaluations, each of which targets different aesthetic preferences and varying aspects of the search heuristic. Experimental results suggest that GenerativeGI can produce outputs that are significantly more novel than those generated by random or single-objective search. Furthermore, GenerativeGI produces individuals with a larger number of relevant techniques used to generate their overall composition.
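The grammar-based encoding can be sketched as follows: an individual is a sequence of technique applications drawn from a small grammar, and mutation re-samples one slot of one gene. The technique names and parameters are placeholders; the paper's 15 techniques and its many-objective evaluation are not reproduced here.

```python
# Sketch of a grammar-based encoding for generative art: each gene picks one
# option per grammar slot, and mutation re-samples a single slot.
# Technique names and parameter ranges are placeholders, not the paper's grammar.
import random

GRAMMAR = {
    "technique": ["flow_field", "stippling", "circle_packing"],
    "palette": ["mono", "pastel", "neon"],
    "density": [0.2, 0.5, 0.8],
}

def random_gene(rng):
    return {slot: rng.choice(options) for slot, options in GRAMMAR.items()}

def random_genome(length, rng):
    """A genome is an ordered list of technique applications composited in order."""
    return [random_gene(rng) for _ in range(length)]

def mutate(genome, rng):
    child = [dict(g) for g in genome]
    i = rng.randrange(len(child))
    slot = rng.choice(list(GRAMMAR))
    child[i][slot] = rng.choice(GRAMMAR[slot])   # re-sample one slot of one gene
    return child

rng = random.Random(42)
parent = random_genome(3, rng)
print("parent:", parent)
print("child: ", mutate(parent, rng))
```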

Citations: 0