
Software Testing Verification & Reliability: Latest Publications

High‐coverage metamorphic testing of concurrency support in C compilers

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-06-01, DOI: 10.1002/stvr.1812
Matt Windsor, A. Donaldson, John Wickerson
We present a technique and automated toolbox for randomized testing of C compilers. Unlike prior compiler‐testing approaches, we generate concurrent test cases in which threads communicate using fine‐grained atomic operations, and we study actual compiler implementations rather than abstract mappings. Our approach is (1) to generate test cases with precise oracles directly from an axiomatization of the C concurrency model; (2) to apply metamorphic fuzzing to each test case, aiming to amplify the coverage they are likely to achieve on compiler codebases; and (3) to execute each fuzzed test case extensively on a range of real machines. Our tool, C4, benefits compiler developers in two ways. First, test cases generated by C4 can achieve line coverage of parts of the LLVM C compiler that are reached by neither the LLVM test suite nor an existing (sequential) C fuzzer. This information can be used to guide further development of the LLVM test suite and can also shed light on where and how concurrency‐related compiler optimizations are implemented. Second, C4 can be used to gain confidence that a compiler implements concurrency correctly. As evidence of this, we show that C4 achieves high strong mutation coverage with respect to a set of concurrency‐related mutants derived from a recent version of LLVM and that it can find historic concurrency‐related bugs in GCC. As a by‐product of concurrency‐focused testing, C4 also revealed two previously unknown sequential compiler bugs in recent versions of GCC and the IBM XL compiler.
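As a rough illustration of the oracle such an approach relies on (this is not C4's implementation; the outcome sets and variable names below are hypothetical), each concurrent test case carries the set of final states permitted by the C11 memory model, and any outcome observed outside that set signals a potential miscompilation. Because metamorphic fuzzing preserves the allowed set, the same check applies to every fuzzed variant:

```python
# Illustrative oracle check for a concurrent litmus test (hypothetical data).
# "allowed" would be derived from an axiomatisation of the C11 concurrency
# model; "observed" would come from running the compiled test on hardware.

def forbidden_outcomes(allowed: set, observed: set) -> set:
    """Outcomes seen at run time that the memory model forbids."""
    return observed - allowed

# Final values of two shared variables (x, y) in a message-passing test.
allowed = {(0, 0), (0, 1), (1, 1)}   # (1, 0) is ruled out by release/acquire
observed = {(0, 0), (1, 1), (1, 0)}  # suppose the compiled binary produced these

bad = forbidden_outcomes(allowed, observed)
if bad:
    print("potential miscompilation; forbidden outcomes observed:", bad)
```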
Citations: 3
Farewell after an 11‐year journey as joint editor‐in‐chief

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-05-10, DOI: 10.1002/stvr.1816
R. Hierons
{"title":"Farewell after an 11‐year journey as joint editor‐in‐chief","authors":"R. Hierons","doi":"10.1002/stvr.1816","DOIUrl":"https://doi.org/10.1002/stvr.1816","url":null,"abstract":"","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2022-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79940594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integration testing and metamorphic testing

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-05-09, DOI: 10.1002/stvr.1817
Yves Le Traon, Tao Xie
The first paper, ‘Towards using coupling measures to guide black-box integration testing in component-based systems’, concerns integration testing in component-based systems. The authors investigate the correlation between the component and interface coupling measures found in the literature and the number of observed failures at two architectural levels: the component level and the software interface level. The finding serves as a first step towards an approach for systematically selecting test cases during integration testing of a distributed component-based software system with black-box components. For example, the number of coupled elements may be an indicator of failure-proneness and can be used to guide test case prioritisation during system integration testing; data-flow-based coupling measurements may not capture the nature of an automotive software system and thus are inapplicable; and having a grey-box model may improve system integration testing. Overall, prioritising the testing of highly coupled components/interfaces can be a valid approach for systematic integration testing. The second paper, ‘High-coverage metamorphic testing of concurrency support in C compilers’, presents an approach and automated toolbox for randomised testing of C compilers, checking whether C compilers implement concurrency in accordance with the expected C11 semantics. The experimental results show that the generated test cases exercise interesting concurrency-related compiler code, including the handling of fences.
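A minimal sketch of the prioritisation idea discussed above (the component names and coupling counts are invented, and this is not the paper's procedure): order interfaces by the number of coupled elements so that the most highly coupled ones are integrated and tested first.

```python
# Hypothetical coupling counts for component interfaces; higher coupling
# suggests higher failure-proneness, so those interfaces are tested first.
interface_coupling = {
    "BrakeControl<->SensorFusion": 14,
    "Infotainment<->Diagnostics": 3,
    "SensorFusion<->Gateway": 9,
}

test_order = sorted(interface_coupling, key=interface_coupling.get, reverse=True)
for rank, interface in enumerate(test_order, start=1):
    print(rank, interface, "coupled elements:", interface_coupling[interface])
```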
Citations: 0
MuFBDTester: A mutation‐based test sequence generator for FBD programs implementing nuclear power plant software

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-05-03, DOI: 10.1002/stvr.1815
Lingjun Liu, Eunkyoung Jee, Doo-Hwan Bae
Function block diagram (FBD) is a standard programming language for programmable logic controllers (PLCs). PLCs have been widely used to develop safety‐critical systems such as nuclear reactor protection systems. It is crucial to test FBD programs for such systems effectively. This paper presents an automated test sequence generation approach using mutation testing techniques for FBD programs and the developed tool, MuFBDTester. Given an FBD program, MuFBDTester analyses the program and generates mutated programs based on mutation operators. MuFBDTester translates the given program and mutants into the input language of a satisfiability modulo theories (SMT) solver to derive a set of test sequences. The primary objective is to find the test data that can distinguish between the results of the given program and mutants. We conducted experiments with several examples including real industrial cases to evaluate the effectiveness and efficiency of our approach. With the control of test size, the results indicated that the mutation‐based test suites were statistically more effective at revealing artificial faults than structural coverage‐based test suites. Furthermore, the mutation‐based test suites detected more reproduced faults, found in industrial programs, than structural coverage‐based test suites. Compared to structural coverage‐based test generation time, the time required by MuFBDTester to generate one test sequence from industrial programs is approximately 1.3 times longer; however, it is considered to be worth paying the price for high effectiveness. Using MuFBDTester, the manual effort of creating test suites was significantly reduced from days to minutes due to automated test generation. MuFBDTester can provide highly effective test suites for FBD engineers.
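A toy sketch of the SMT-based step, using the z3 Python bindings as a stand-in solver (the logic below is a made-up combinational block, not an FBD translation produced by MuFBDTester): encode the original program and a mutant symbolically, then ask the solver for an input on which their outputs differ; any returned model is test data that kills the mutant.

```python
# pip install z3-solver
from z3 import Bools, Solver, And, Or, Xor, sat

a, b, c = Bools("a b c")

original = And(a, Or(b, c))   # original logic of a (made-up) function block
mutant = And(a, And(b, c))    # mutant: OR operator replaced by AND

s = Solver()
s.add(Xor(original, mutant))  # look for an input where the two outputs differ

if s.check() == sat:
    m = s.model()
    print("killing test input:", {str(v): m[v] for v in (a, b, c)})
else:
    print("mutant is equivalent; no distinguishing test exists")
```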
Citations: 3
Metamorphic testing and test automation

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-04-03, DOI: 10.1002/stvr.1814
R. Hierons, Tao Xie
This issue contains two papers. The first paper focuses on metamorphic testing and the second one focuses on test automation. The first paper, ‘Metamorphic relation prioritization for effective regression testing’ by Madhusudan Srinivasan and Upulee Kanewala, concerns metamorphic testing. Metamorphic testing (MT) is an approach devised to support the testing of software that is untestable in the sense that it is not feasible to determine, in advance, the expected output for a given test input. The basic idea behind MT is that it is sometimes possible to provide a property (metamorphic relation) over multiple test runs that use inputs that are related in some way. A classic example is that we may not know what the cosine of x should be for some arbitrary x, but we do know that cos(x) should be the same as cos(−x). Previous work has proposed the use of multiple metamorphic relations (MRs), but the authors explore how one might prioritize (order) such MRs. Prioritization is based on information regarding a previous version of the software under test. The authors propose two approaches: prioritize on coverage or on fault detection. Optimization is achieved using a greedy algorithm that is sometimes called Additional Greedy. (Recommended by Dan Hao.) The second paper, ‘Improving test automation maturity: A multivocal literature review’ by Yuqing Wang, Mika V. Mäntylä, Zihao Liu, Jouni Markkula and Päivi Raulamo-Jurvanen, presents a multivocal literature review to survey and synthesize the guidelines given in the literature for improving test automation maturity. The authors select and review 81 primary studies (26 academic literature sources and 55 grey literature sources). From these primary studies, the authors extract 26 test automation best practices along with advice on how to conduct these best practices, in the form of implementation/improvement approaches, actions, technical techniques, concepts and experience-based opinions. In particular, the literature review results contribute test automation best practices that suggest steps for improving test automation maturity, narrow the gap between practice and research in terms of the industry’s need to improve test automation maturity, provide a centralized knowledge base of existing guidelines for test automation maturity improvement and identify related research challenges and opportunities.
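The cosine example above translates directly into a tiny metamorphic test: without knowing the expected value of cos(x) for an arbitrary x, the relation cos(x) = cos(−x) is still checkable (a sketch; the sample size and tolerances are arbitrary choices):

```python
import math
import random

# Metamorphic relation: cos(x) == cos(-x), checked without knowing cos(x) itself.
for _ in range(1000):
    x = random.uniform(-1e6, 1e6)
    assert math.isclose(math.cos(x), math.cos(-x), rel_tol=1e-9, abs_tol=1e-12), x
print("metamorphic relation held on all sampled inputs")
```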
Citations: 0
RVprio: A tool for prioritizing runtime verification violations

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-03-07, DOI: 10.1002/stvr.1813
Lucas Cabral, Breno Miranda, Igor Lima, Marcelo d’Amorim
Runtime verification (RV) helps to find software bugs by monitoring formally specified properties during testing. A key problem in using RV during testing is how to reduce the manual inspection effort for checking whether property violations are true bugs. To date, there has been no automated approach for determining the likelihood that property violations are true bugs, which would reduce tedious and time‐consuming manual inspection. We present RVprio, the first automated approach for prioritizing RV violations in order of likelihood of being true bugs. RVprio uses machine learning classifiers to prioritize violations. For training, we used a labelled dataset of 1170 violations from 110 projects. On that dataset, (1) RVprio reached 90% of the effectiveness of a theoretically optimal prioritizer that ranks all true bugs at the top of the ranked list, and (2) 88.1% of true bugs were in the top 25% of RVprio‐ranked violations; 32.7% of true bugs were in the top 10%. RVprio was also effective when we applied it to new unlabelled violations, from which we found previously unknown bugs: 54 bugs in 8 open‐source projects. Our dataset is publicly available online.
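A condensed sketch of classifier-based prioritisation in the spirit of RVprio (the feature matrix, labels and model choice are placeholders, not the paper's actual feature set or classifier): train on labelled violations, then rank new violations by the predicted probability of being a true bug.

```python
# pip install scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per violation (e.g. property kind, stack depth, hit count)
# and labels from manual inspection: 1 = true bug, 0 = false alarm.
X_train = np.array([[0, 3, 12], [1, 1, 2], [0, 5, 40], [1, 2, 7]])
y_train = np.array([1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

X_new = np.array([[0, 4, 25], [1, 1, 3]])   # new, unlabelled violations
scores = clf.predict_proba(X_new)[:, 1]     # P(true bug) for each violation
ranking = np.argsort(scores)[::-1]          # inspect the highest scores first
print("inspection order:", ranking, "scores:", scores[ranking])
```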
Citations: 0
Towards using coupling measures to guide black‐box integration testing in component‐based systems

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-03-07, DOI: 10.1002/stvr.1811
Dominik Hellhake, J. Bogner, Tobias Schmid, S. Wagner
In component‐based software development, integration testing is a crucial step in verifying the composite behaviour of a system. However, very few formally or empirically validated approaches are available for systematically testing whether components have been successfully integrated. In practice, integration testing of component‐based systems is usually performed in a time‐ and resource‐limited context, which further increases the demand for effective test selection strategies. In this work, we therefore analyse the relationship between different component and interface coupling measures found in the literature and the distribution of failures found during integration testing of an automotive system. By investigating the correlation for each measure at two architectural levels, we discuss its usefulness for guiding integration testing at the software component level as well as at the hardware component level, where coupling is measured among multiple electronic control units (ECUs) of a vehicle. Our results indicate that there is a positive correlation between coupling measures and failure‐proneness at both architectural levels for all tested measures. However, at the hardware component level, all measures achieved a significantly higher correlation when compared to the software‐level correlation. Consequently, we conclude that prioritizing testing of highly coupled components and interfaces is a valid approach for systematic integration testing, as coupling proved to be a valid indicator of failure‐proneness.
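The core statistical step of such a study can be sketched in a few lines (the numbers are hypothetical and do not reproduce the paper's data): compute a rank correlation between a coupling measure and the failure counts observed per component or interface.

```python
# pip install scipy
from scipy.stats import spearmanr

# Hypothetical per-component data: coupling measure vs. failures found
# during integration testing.
coupling = [14, 3, 9, 21, 6, 11]
failures = [5, 0, 2, 9, 1, 4]

rho, p_value = spearmanr(coupling, failures)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A positive, significant rho supports using coupling to prioritise testing.
```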
Citations: 1
Improving test automation maturity: A multivocal literature review

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-02-15, DOI: 10.1002/stvr.1804
Yuqing Wang, M. Mäntylä, Zihao Liu, Jouni Markkula, Päivi Raulamo-Jurvanen
Mature test automation is key for achieving software quality at speed. In this paper, we present a multivocal literature review with the objective of surveying and synthesizing the guidelines given in the literature for improving test automation maturity. We selected and reviewed 81 primary studies, consisting of 26 academic literature sources and 55 grey literature sources. From these primary studies, we extracted 26 test automation best practices (e.g., Define an effective test automation strategy, Set up good test environments, and Develop high‐quality test scripts) and collected many pieces of advice (e.g., in the form of implementation/improvement approaches, technical techniques, concepts, and experience‐based heuristics) on how to conduct these best practices. We made the following main observations: (1) there are only six best practices whose positive effect on maturity improvement has been evaluated by academic studies using formal empirical methods; (2) several technically oriented best practices in this MLR were not presented in test maturity models; (3) some best practices can be linked to success factors and maturity impediments proposed by other scholars; (4) most pieces of advice on how to conduct the proposed best practices were identified from experience studies, and their effectiveness needs to be further evaluated with cross‐site empirical evidence using formal empirical methods; (5) in the literature, some advice on how to conduct certain best practices is conflicting, and some advice still needs further qualitative analysis.
Citations: 4
Combinatorial testing and model‐based testing

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-02-11, DOI: 10.1002/stvr.1810
R. Hierons, Tao Xie
This issue contains two papers. The first paper focuses on combinatorial testing and the second one focuses on model-based testing. The first paper, ‘Combinatorial methods for testing Internet of Things smart home systems’ by Bernhard Garn, Dominik-Philip Schreiber, Dimitris E. Simos, Rick Kuhn, Jeff Voas, and Raghu Kacker, presents an approach for applying combinatorial testing (CT) to the internal configuration and functionality of Internet of Things (IoT) home automation hub systems. The authors first create an input parameter model of an IoT home automation hub system for use with test generation strategies of combinatorial testing and then propose an automated test execution framework and two test oracles for evaluation purposes. The proposed approach makes use of the appropriately formulated model of the hub and generates test sets derived from this model satisfying certain combinatorial coverage conditions. The authors conduct an evaluation of the proposed approach on a real-world IoT system. The evaluation results show that the proposed approach reveals multiple errors in the devices under test, and all approaches under comparison perform nearly equally well (recommended by W. K. Chan). The second paper, ‘Effective grey-box testing with partial FSM models’ by Robert Sachtleben and Jan Peleska, explores the problem of testing from a finite state machine (FSM) and considers the scenario in which an input can be enabled in some states and disabled in other states. There is already a body of work on testing from FSMs in which inputs are not always defined (partial FSMs), but such work typically allows the system under test (SUT) to be such that some inputs are defined in a state of the SUT but are not defined in the corresponding state of the specification FSM (the SUT can be ‘more’ defined). The paper introduces a conformance relation, called strong reduction, that requires that exactly the same inputs are defined in the specification and the SUT. A new test generation technique is given for strong reduction, with this returning test suites that are complete: a test suite is guaranteed to fail if the SUT is faulty and also satisfies certain conditions that place an upper bound on the number of states of the SUT. The overall approach also requires that the tester can determine which inputs are enabled in the current state of the SUT and so testing is grey-box (recommended by Helene Waeselynck).
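For the combinatorial-testing side, the central notion of pairwise (2-way) coverage can be illustrated compactly (the parameters and values describe a made-up smart-home hub configuration, not the paper's input model): a test set achieves pairwise coverage when every pair of values from any two parameters appears in at least one test.

```python
from itertools import combinations, product

# Made-up configuration parameters of a smart-home hub.
parameters = {
    "protocol": ["zigbee", "zwave", "wifi"],
    "mode": ["home", "away"],
    "logging": ["on", "off"],
}

def uncovered_pairs(tests):
    """Return all 2-way parameter-value pairs not exercised by the test set."""
    missing = set()
    for (p1, vals1), (p2, vals2) in combinations(parameters.items(), 2):
        for v1, v2 in product(vals1, vals2):
            if not any(t[p1] == v1 and t[p2] == v2 for t in tests):
                missing.add(((p1, v1), (p2, v2)))
    return missing

# The exhaustive test set (3 * 2 * 2 = 12 tests) trivially achieves 2-way
# coverage; a covering array would reach the same coverage with fewer tests.
tests = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]
print("uncovered pairs:", uncovered_pairs(tests))
```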
Citations: 0
Automated black‐box testing of nominal and error scenarios in RESTful APIs

IF 1.5, CAS Tier 4 (Computer Science), Q2 (Engineering), Pub Date: 2022-01-23, DOI: 10.1002/stvr.1808
Davide Corradini, Amedeo Zampieri, Michele Pasqua, Emanuele Viglianisi, Michael Dallago, M. Ceccato
RESTful APIs (or REST APIs for short) represent a mainstream approach to designing and developing web APIs using the REpresentational State Transfer architectural style. Black‐box testing, which assumes only access to the system under test through a specific interface, is the only viable option when white‐box testing is impracticable. This is the case for REST APIs: their source code is usually not (or only partially) available, or a white‐box analysis across many dynamically allocated distributed components (typical of a micro‐services architecture) is computationally challenging. This paper presents RestTestGen, a novel black‐box approach to automatically generate test cases for REST APIs, based on their interface definition (an OpenAPI specification). Input values and requests are generated for each operation of the API under test with the twofold objective of testing nominal execution scenarios and error scenarios. Two distinct oracles are deployed to detect when test cases reveal implementation defects. While this approach mainly targets the research community, it is also of interest to developers because, as a black‐box approach, it is universally applicable across different programming languages, or in cases where external (compiled‐only) libraries are used in a REST API. The validation of our approach has been performed on more than 100 real‐world REST APIs, highlighting the effectiveness of the approach in revealing actual faults in already deployed services.
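A stripped-down sketch of the nominal/error testing idea (the endpoint, operation and payloads are hypothetical, and this is not RestTestGen's generation algorithm): for an operation described in an OpenAPI specification, send one request with valid values and expect a 2xx status, and one with deliberately invalid values and expect a 4xx status.

```python
# pip install requests
import requests

BASE_URL = "http://localhost:8080"   # hypothetical service under test

def check_operation(method, path, valid_body, invalid_body):
    """Nominal oracle: valid input -> 2xx. Error oracle: invalid input -> 4xx."""
    nominal = requests.request(method, BASE_URL + path, json=valid_body)
    error = requests.request(method, BASE_URL + path, json=invalid_body)
    assert 200 <= nominal.status_code < 300, f"nominal scenario failed: {nominal.status_code}"
    assert 400 <= error.status_code < 500, f"error scenario not rejected: {error.status_code}"

# Hypothetical operation taken from an OpenAPI spec: create a user.
check_operation(
    "POST", "/users",
    valid_body={"name": "alice", "age": 30},
    invalid_body={"name": "", "age": -1},   # violates the declared constraints
)
```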
Citations: 22