
ACM Transactions on Software Engineering and Methodology (TOSEM): Latest Publications

Examining Penetration Tester Behavior in the Collegiate Penetration Testing Competition
Pub Date : 2022-04-09 DOI: 10.1145/3514040
Benjamin S. Meyers, Sultan Fahad Almassari, Brandon N. Keller, Andrew Meneely
Penetration testing is a key practice toward engineering secure software. Malicious actors have many tactics at their disposal, and software engineers need to know what tactics attackers will prioritize in the first few hours of an attack. Projects like MITRE ATT&CK™ provide knowledge, but how do people actually deploy this knowledge in real situations? A penetration testing competition provides a realistic, controlled environment with which to measure and compare the efficacy of attackers. In this work, we examine the details of vulnerability discovery and attacker behavior with the goal of improving existing vulnerability assessment processes using data from the 2019 Collegiate Penetration Testing Competition (CPTC). We constructed 98 timelines of vulnerability discovery and exploits for 37 unique vulnerabilities discovered by 10 teams of penetration testers. We grouped related vulnerabilities together by mapping to Common Weakness Enumerations and MITRE ATT&CK™. We found that (1) vulnerabilities related to improper resource control (e.g., session fixation) are discovered faster and more often, as well as exploited faster, than vulnerabilities related to improper access control (e.g., weak password requirements), and (2) penetration testers follow a clear process from discovery/collection to lateral movement/pre-attack. Our methodology facilitates quicker analysis of vulnerabilities in future CPTC events.
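The timeline construction and CWE grouping described above can be illustrated with a small sketch. The event schema below (the VulnEvent fields and the way the competition start time is approximated) is a hypothetical assumption for illustration, not the CPTC data format or the authors' tooling.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional

# Hypothetical event record; field names are illustrative, not the CPTC data schema.
@dataclass
class VulnEvent:
    team: str
    vuln_id: str
    cwe: str                          # e.g., "CWE-384" (session fixation)
    discovered_at: datetime
    exploited_at: Optional[datetime]  # None if never exploited

def summarize_by_cwe(events: list[VulnEvent]) -> dict[str, dict[str, float]]:
    """Group per-vulnerability timelines by CWE and summarize discovery/exploit delays in hours."""
    start = min(e.discovered_at for e in events)  # proxy for the competition start time
    buckets: dict[str, list[VulnEvent]] = defaultdict(list)
    for ev in events:
        buckets[ev.cwe].append(ev)
    summary = {}
    for cwe, evs in buckets.items():
        to_discovery = [(e.discovered_at - start).total_seconds() / 3600 for e in evs]
        to_exploit = [(e.exploited_at - e.discovered_at).total_seconds() / 3600
                      for e in evs if e.exploited_at is not None]
        summary[cwe] = {
            "count": len(evs),
            "median_hours_to_discovery": median(to_discovery),
            "median_hours_discovery_to_exploit": median(to_exploit) if to_exploit else float("nan"),
        }
    return summary
```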
Citations: 6
XCode: Towards Cross-Language Code Representation with Large-Scale Pre-Training
Pub Date : 2022-04-09 DOI: 10.1145/3506696
Zehao Lin, Guodun Li, Jingfeng Zhang, Yue Deng, Xiangji Zeng, Yin Zhang, Yao Wan
Source code representation learning is the basis of applying artificial intelligence to many software engineering tasks such as code clone detection, algorithm classification, and code summarization. Recently, many works have tried to improve the performance of source code representation from various perspectives, e.g., introducing the structural information of programs into latent representation. However, when dealing with rapidly expanded unlabeled cross-language source code datasets from the Internet, there are still two issues. Firstly, deep learning models for many code-specific tasks still suffer from the lack of high-quality labels. Secondly, the structural differences among programming languages make it more difficult to process multiple languages in a single neural architecture. To address these issues, in this article, we propose a novel Cross-language Code representation with a large-scale pre-training (XCode) method. Concretely, we propose to use several abstract syntax trees and ELMo-enhanced variational autoencoders to obtain multiple pre-trained source code language models trained on about 1.5 million code snippets. To fully utilize the knowledge across programming languages, we further propose a Shared Encoder-Decoder (SED) architecture which uses the multi-teacher single-student method to transfer knowledge from the aforementioned pre-trained models to the distilled SED. The pre-trained models and SED will cooperate to better represent the source code. For evaluation, we examine our approach on three typical downstream cross-language tasks, i.e., source code translation, code clone detection, and code-to-code search, on a real-world dataset composed of programming exercises with multiple solutions. Experimental results demonstrate the effectiveness of our proposed approach on cross-language code representations. Meanwhile, our approach performs significantly better than several code representation baselines on different downstream tasks in terms of multiple automatic evaluation metrics.
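As a rough illustration of the multi-teacher, single-student transfer mentioned above, the sketch below distills the averaged teacher distributions into a student with a temperature-scaled KL loss. It assumes the per-language pre-trained models and the shared encoder-decoder are available as PyTorch modules producing logits; the paper's actual SED training objective may differ.

```python
import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits: torch.Tensor,
                                    teacher_logits_list: list[torch.Tensor],
                                    temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between the student distribution and the average of the teacher distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2
```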
Citations: 6
A Study on Blockchain Architecture Design Decisions and Their Security Attacks and Threats
Pub Date : 2022-04-01 DOI: 10.1145/3502740
Sabreen Ahmadjee, C. Mera-Gómez, R. Bahsoon, R. Kazman
Blockchain is a disruptive technology intended to implement secure decentralised distributed systems, in which transactional data can be shared, stored, and verified by participants of the system without needing a central authentication/verification authority. Blockchain-based systems have several architectural components and variants, which architects can leverage to build secure software systems. However, there is a lack of studies to assist architects in making architecture design and configuration decisions for blockchain-based systems. This knowledge gap may increase the chance of making unsuitable design decisions and producing configurations prone to potential security risks. To address this limitation, we report our comprehensive systematic literature review to derive a taxonomy of commonly used architecture design decisions in blockchain-based systems. We map each of these decisions to potential security attacks and their posed threats. MITRE’s attack tactic categories and Microsoft STRIDE threat modeling are used to systematically classify threats and their associated attacks to identify potential attacks and threats in blockchain-based systems. Our mapping approach aims to guide architects to make justifiable design decisions that will result in more secure implementations.
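The mapping from architecture design decisions to threats and attacks lends itself to a simple machine-readable encoding. The sketch below is illustrative only: the two entries are well-known examples chosen for demonstration, not the taxonomy derived in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DesignDecision:
    name: str                      # e.g., choice of consensus protocol
    options: list[str]
    stride_threats: list[str]      # Microsoft STRIDE categories
    example_attacks: list[str] = field(default_factory=list)

# Example entries for demonstration only.
TAXONOMY_SKETCH = [
    DesignDecision(
        name="Consensus protocol",
        options=["Proof of Work", "Proof of Stake"],
        stride_threats=["Denial of Service", "Tampering"],
        example_attacks=["51% attack"],
    ),
    DesignDecision(
        name="Permission model",
        options=["Public", "Consortium", "Private"],
        stride_threats=["Spoofing", "Elevation of Privilege"],
        example_attacks=["Sybil attack"],
    ),
]

def threats_for(decision_name: str) -> list[str]:
    """Look up the STRIDE threats recorded for a given design decision."""
    for decision in TAXONOMY_SKETCH:
        if decision.name == decision_name:
            return decision.stride_threats
    return []
```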
Citations: 5
Testing the Plasticity of Reinforcement Learning-based Systems
Pub Date : 2022-03-28 DOI: 10.1145/3511701
Matteo Biagiola, P. Tonella
The dataset available for pre-release training of a machine-learning-based system is often not representative of all possible execution contexts that the system will encounter in the field. Reinforcement Learning (RL) is a prominent approach among those that support continual learning, i.e., learning continually in the field, in the post-release phase. No study has so far investigated any method to test the plasticity of RL-based systems, i.e., their capability to adapt to an execution context that may deviate from the training one. We propose an approach to test the plasticity of RL-based systems. The output of our approach is a quantification of the adaptation and anti-regression capabilities of the system, obtained by computing the adaptation frontier of the system in a changed environment. We visualize such a frontier as an adaptation/anti-regression heatmap in two dimensions, or as a clustered projection when more than two dimensions are involved. In this way, we provide developers with information on the amount of change that can be accommodated by the continual learning component of the system, which is key to deciding whether online, in-the-field learning can be safely enabled.
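A minimal sketch of probing an adaptation frontier over a two-dimensional grid of environment parameters is shown below. The callables for continual training and evaluation, the parameter grid, and the success threshold are assumptions supplied by the caller; this is not the authors' tool.

```python
import itertools
from typing import Callable, Dict, Tuple

def adaptation_heatmap(
    param1_values: list[float],
    param2_values: list[float],
    continue_training: Callable[[float, float], object],    # continual learning in the changed env, returns an adapted agent
    success_rate: Callable[[object, float, float], float],  # evaluates an agent in an env with the given parameters
    nominal_env: Tuple[float, float],
    threshold: float = 0.8,
) -> Dict[Tuple[float, float], str]:
    """Label each changed environment as adapted, adapted-but-regressed, or not-adapted."""
    heatmap = {}
    for p1, p2 in itertools.product(param1_values, param2_values):
        agent = continue_training(p1, p2)
        adapted = success_rate(agent, p1, p2) >= threshold
        kept_nominal = success_rate(agent, *nominal_env) >= threshold  # anti-regression check
        if adapted and kept_nominal:
            heatmap[(p1, p2)] = "adapted"
        elif adapted:
            heatmap[(p1, p2)] = "adapted-but-regressed"
        else:
            heatmap[(p1, p2)] = "not-adapted"
    return heatmap
```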
Citations: 8
Uncertainty-aware Prediction Validator in Deep Learning Models for Cyber-physical System Data
Pub Date : 2022-03-28 DOI: 10.1145/3527451
Ferhat Ozgur Catak, T. Yue, Sajid Ali
The use of Deep learning in Cyber-Physical Systems (CPSs) is gaining popularity due to its ability to bring intelligence to CPS behaviors. However, both CPSs and deep learning have inherent uncertainty. Such uncertainty, if not handled adequately, can lead to unsafe CPS behavior. The first step toward addressing such uncertainty in deep learning is to quantify uncertainty. Hence, we propose a novel method called NIRVANA (uNcertaInty pRediction ValidAtor iN Ai) for prediction validation based on uncertainty metrics. To this end, we first employ prediction-time Dropout-based Neural Networks to quantify uncertainty in deep learning models applied to CPS data. Second, such quantified uncertainty is taken as the input to predict wrong labels using a support vector machine, with the aim of building a highly discriminating prediction validator model with uncertainty values. In addition, we investigated the relationship between uncertainty quantification and prediction performance and conducted experiments to obtain optimal dropout ratios. We conducted all the experiments with four real-world CPS datasets. Results show that uncertainty quantification is negatively correlated to prediction performance of a deep learning model of CPS data. Also, our dropout ratio adjustment approach is effective in reducing uncertainty of correct predictions while increasing uncertainty of wrong predictions.
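The two-stage idea described above (quantify prediction-time uncertainty with Monte Carlo dropout, then train an SVM validator on those uncertainty values) can be sketched as follows. The choice of predictive entropy as the uncertainty metric, the number of stochastic passes, and the SVM kernel are illustrative assumptions, not the NIRVANA configuration.

```python
import numpy as np
import torch
from sklearn.svm import SVC

def mc_dropout_entropy(model: torch.nn.Module, x: torch.Tensor, passes: int = 30) -> np.ndarray:
    """Predictive entropy over several stochastic forward passes with dropout kept active."""
    model.train()  # keeps dropout layers active at prediction time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)
    return entropy.cpu().numpy()

def train_prediction_validator(uncertainty: np.ndarray, prediction_is_wrong: np.ndarray) -> SVC:
    """Fit an SVM that flags likely-wrong predictions from their uncertainty values."""
    validator = SVC(kernel="rbf")
    validator.fit(uncertainty.reshape(-1, 1), prediction_is_wrong)
    return validator
```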
Citations: 13
Continuous and Proactive Software Architecture Evaluation: An IoT Case
Pub Date : 2022-03-15 DOI: 10.1145/3492762
Dalia Sobhy, Leandro L. Minku, R. Bahsoon, R. Kazman
Design-time evaluation is essential to build the initial software architecture to be deployed. However, experts’ assumptions made at design-time are unlikely to remain true indefinitely in systems that are characterized by scale, hyperconnectivity, dynamism, and uncertainty in operations (e.g. IoT). Therefore, experts’ design-time decisions can be challenged at run-time. A continuous architecture evaluation that systematically assesses and intertwines design-time and run-time decisions is thus necessary. This paper proposes the first proactive approach to continuous architecture evaluation of the system leveraging the support of simulation. The approach evaluates software architectures by not only tracking their performance over time, but also forecasting their likely future performance through machine learning of simulated instances of the architecture. This enables architects to make cost-effective informed decisions on potential changes to the architecture. We perform an IoT case study to show how machine learning on simulated instances of architecture can fundamentally guide the continuous evaluation process and influence the outcome of architecture decisions. A series of experiments is conducted to demonstrate the applicability and effectiveness of the approach. We also provide the architect with recommendations on how to best benefit from the approach through choice of learners and input parameters, grounded on experimentation and evidence.
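As one possible reading of the forecasting ingredient above, the sketch below keeps an incremental regressor updated with metrics observed from simulated instances of a candidate architecture and queries it for future values. The feature and target choices (for example, request rate and device count against latency) and the learner are illustrative assumptions, not the learners or parameters recommended in the paper.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

class ArchitecturePerformanceForecaster:
    """Incrementally learns from simulated observations of one candidate architecture."""

    def __init__(self) -> None:
        self.model = SGDRegressor()
        self._fitted = False

    def update(self, features: np.ndarray, observed_latency: np.ndarray) -> None:
        # features: 2-D array, e.g., [[request_rate, device_count], ...]; target: observed latency.
        self.model.partial_fit(features, observed_latency)
        self._fitted = True

    def forecast(self, future_features: np.ndarray) -> np.ndarray:
        if not self._fitted:
            raise RuntimeError("forecaster has not seen any simulated data yet")
        return self.model.predict(future_features)
```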
Citations: 4
All in One: Design, Verification, and Implementation of SNOW-optimal Read Atomic Transactions
Pub Date : 2022-03-07 DOI: 10.1145/3494517
Si Liu
Distributed read atomic transactions are important building blocks of modern cloud databases that magnificently bridge the gap between data availability and strong data consistency. The performance of their transactional reads is particularly critical to the overall system performance, as many real-world database workloads are dominated by reads. Following the SNOW design principle for optimal reads, we develop LORA, a novel SNOW-optimal algorithm for distributed read atomic transactions. LORA completes its reads in exactly one round trip, even in the presence of conflicting writes, without imposing additional overhead to the communication, and it outperforms the state-of-the-art read atomic algorithms. To guide LORA’s development, we present a rewriting-logic-based framework and toolkit for design, verification, implementation, and evaluation of distributed databases. Within the framework, we formalize LORA and mathematically prove its data consistency guarantees. We also apply automatic model checking and statistical verification to validate our proofs and to estimate LORA’s performance. We additionally generate from the formal model a correct-by-construction distributed implementation for testing and performance evaluation under realistic deployments. Our design-level and implementation-based experimental results are consistent, which together demonstrate LORA’s promising data consistency and performance achievement.
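To make the read-atomicity guarantee concrete, the toy checker below flags a fractured multi-key read, i.e., a snapshot that observes some but not all writes of another transaction. The per-version metadata it assumes (writer timestamp and the writer's full key set) is an illustration of RAMP-style bookkeeping, not LORA's actual algorithm or wire format.

```python
from dataclasses import dataclass

@dataclass
class VersionedValue:
    value: object
    writer_ts: int          # commit timestamp of the writing transaction
    writer_keys: frozenset  # full write set of the writing transaction

def is_fractured_read(results: dict[str, VersionedValue]) -> bool:
    """Return True if the snapshot misses some writes of a transaction it partially observes."""
    for key, version in results.items():
        for other_key in version.writer_keys:
            if other_key == key or other_key not in results:
                continue
            # We observed a write of the transaction with timestamp `version.writer_ts` on `key`,
            # so any other key it wrote must reflect that transaction or a newer one.
            if results[other_key].writer_ts < version.writer_ts:
                return True
    return False
```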
Citations: 2
On the Faults Found in REST APIs by Automated Test Generation
Pub Date : 2022-03-07 DOI: 10.1145/3491038
Bogdan Marculescu, Man Zhang, Andrea Arcuri
RESTful web services are often used for building a wide variety of enterprise applications. The diversity and increased number of applications using RESTful APIs mean that increasing amounts of resources are spent developing and testing these systems. Automation in test data generation provides a useful way of generating test data in a fast and efficient manner. However, automated test generation often results in large test suites that are hard to evaluate and investigate manually. This article proposes a taxonomy of the faults we have found using search-based software testing techniques applied to RESTful APIs. The taxonomy is a first step in understanding, analyzing, and ultimately fixing software faults in web services and enterprise applications. We propose to apply a density-based clustering algorithm to the test cases evolved during the search to allow a better separation between different groups of faults. This is needed to enable engineers to highlight and focus on the most serious faults. Tests were automatically generated for a set of eight case studies, seven open-source and one industrial. The test cases generated during the search are clustered based on the reported last executed line and, when available, on the returned error messages. The tests were manually evaluated to determine their root causes and to obtain additional information. The article presents a taxonomy of the faults found based on the manual analysis of 415 faults in the eight case studies and proposes a method to support the classification using clustering of the resulting test cases.
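A minimal sketch of the density-based clustering step, here applied to the returned error messages, is shown below. The TF-IDF vectorization, the cosine metric, and the DBSCAN parameters are illustrative choices, not the configuration used in the paper.

```python
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_by_error_message(error_messages: list[str], eps: float = 0.4) -> list[int]:
    """Return a cluster label per test case; -1 marks noise (unclustered) cases."""
    tfidf = TfidfVectorizer().fit_transform(error_messages)
    labels = DBSCAN(eps=eps, min_samples=2, metric="cosine").fit_predict(tfidf)
    return labels.tolist()

# Example usage with toy messages:
# cluster_by_error_message(["500: null pointer in OrderService",
#                           "500: null pointer in OrderService",
#                           "400: invalid date format"])
```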
Citations: 14
Context- and Fairness-Aware In-Process Crowdworker Recommendation
Pub Date : 2022-03-07 DOI: 10.1145/3487571
Junjie Wang, Ye Yang, Song Wang, Jun Hu, Qing Wang
Identifying and optimizing open participation is essential to the success of open software development. Existing studies have highlighted the importance of worker recommendation for crowdtesting tasks in order to improve bug detection efficiency, i.e., detect more bugs with fewer workers. However, there are a couple of limitations in existing work. First, these studies mainly focus on one-time recommendations based on expertise matching at the beginning of a new task. Second, the recommendation results suffer from severe popularity bias, i.e., highly experienced workers are recommended in almost all the tasks, while less experienced workers rarely get recommended. This article argues the need for context- and fairness-aware in-process crowdworker recommendation in order to address these limitations. We motivate this study through a pilot study, revealing the prevalence of long non-yielding windows, i.e., stretches of consecutive test reports that reveal no new bugs during a crowdtesting task. This indicates a potential opportunity for accelerating crowdtesting by recommending appropriate workers in a dynamic manner, so that the non-yielding windows can be shortened. In addition, motivated by the popularity bias in existing crowdworker recommendation approaches, this study also aims at alleviating the unfairness in recommendations. Driven by these observations, this article proposes a context- and fairness-aware in-process crowdworker recommendation approach, iRec2.0, to detect more bugs earlier, shorten the non-yielding windows, and alleviate the unfairness in recommendations. It consists of three main components: (1) the modeling of dynamic testing context, (2) a learning-based ranking component, and (3) a multi-objective optimization-based re-ranking component. The evaluation is conducted on 636 crowdtesting tasks from one of the largest crowdtesting platforms, and results show the potential of iRec2.0 to improve the cost-effectiveness of crowdtesting by saving cost, shortening the testing process, and alleviating the unfairness among workers. In detail, iRec2.0 could shorten the non-yielding window by a median of 50%–66% in different application scenarios and consequently has the potential to save testing cost by a median of 8%–12%. Meanwhile, the recommendation frequency of crowdworkers drops from 34%–60% to 5%–26% under different scenarios, indicating its potential to alleviate the unfairness among crowdworkers.
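The fairness-aware re-ranking component can be illustrated with a small sketch: candidates arrive with a relevance score from a learned ranker, and the re-ranker discounts workers who have already received much exposure. The scoring formula and the fairness weight are illustrative assumptions, not iRec2.0's multi-objective optimization.

```python
def rerank(candidates: dict[str, float],
           past_recommendations: dict[str, int],
           top_k: int = 5,
           fairness_weight: float = 0.3) -> list[str]:
    """Trade off the ranker's relevance score against each worker's past exposure."""
    total_recs = sum(past_recommendations.values()) or 1

    def adjusted(worker: str) -> float:
        exposure = past_recommendations.get(worker, 0) / total_recs
        return candidates[worker] - fairness_weight * exposure

    return sorted(candidates, key=adjusted, reverse=True)[:top_k]
```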
Citations: 8
ReCDroid+: Automated End-to-End Crash Reproduction from Bug Reports for Android Apps
Pub Date : 2022-03-07 DOI: 10.1145/3488244
Yu Zhao, Ting Su, Y. Liu, Wei Zheng, Xiaoxue Wu, Ramakanth Kavuluru, William G. J. Halfond, Tingting Yu
The large demand for mobile devices creates significant concerns about the quality of mobile applications (apps). Developers heavily rely on bug reports in issue tracking systems to reproduce failures (e.g., crashes). However, the process of crash reproduction is often done manually by developers, making the resolution of bugs inefficient, especially given that bug reports are often written in natural language. To improve the productivity of developers in resolving bug reports, in this paper, we introduce a novel approach, called ReCDroid+, that can automatically reproduce crashes from bug reports for Android apps. ReCDroid+ uses a combination of natural language processing (NLP), deep learning, and dynamic GUI exploration to synthesize event sequences with the goal of reproducing the reported crash. We have evaluated ReCDroid+ on 66 original bug reports from 37 Android apps. The results show that ReCDroid+ successfully reproduced 42 crashes (a 63.6% success rate) directly from the textual descriptions of the manually reproduced bug reports. A user study involving 12 participants demonstrates that ReCDroid+ can improve the productivity of developers when resolving crash bug reports.
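A very small sketch of the first stage, turning bug-report sentences into candidate GUI events, is given below. The verb-to-action table, the regular expressions, and the event tuple format are illustrative assumptions; they stand in for, and greatly simplify, ReCDroid+'s NLP and deep-learning components.

```python
import re

# Illustrative verb-to-action table; not ReCDroid+'s grammar.
ACTION_VERBS = {
    "click": "CLICK", "tap": "CLICK", "press": "CLICK",
    "type": "INPUT", "enter": "INPUT",
    "rotate": "ROTATE", "scroll": "SCROLL",
}

def extract_candidate_events(report_text: str) -> list[tuple[str, str]]:
    """Return (action, target-phrase) pairs guessed from a bug report's sentences."""
    events = []
    for sentence in re.split(r"[.\n]", report_text.lower()):
        for verb, action in ACTION_VERBS.items():
            match = re.search(rf"\b{verb}\b(?: on| the| into)?\s+(.{{1,40}})", sentence)
            if match:
                events.append((action, match.group(1).strip()))
                break
    return events

# Example: extract_candidate_events("Tap the settings icon. Type 'abc' into the name field.")
# -> [("CLICK", "settings icon"), ("INPUT", "'abc' into the name field")]
```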
Citations: 11