Adaptive Random Testing for XSS Vulnerability
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00018
Chengcheng Lv, Long Zhang, Fanping Zeng, Jian Zhang
XSS is a common vulnerability in web applications. Many black-box testing tools collect a large number of payloads and traverse them to find one that can be successfully injected, but this process is not very efficient, and previous research has paid little attention to improving the efficiency of black-box XSS detection. To improve testing efficiency, we develop an XSS testing tool that collects 6128 payloads and uses a headless browser to detect XSS vulnerabilities. The tool discovers XSS vulnerabilities quickly using the ART (Adaptive Random Testing) method. We conduct an experiment on 3 widely adopted open-source vulnerable benchmarks and 2 real websites to evaluate the ART method. The experimental results indicate that, compared with the baseline fuzzing method, the ART method reduces the number of attempts needed before a successful injection by more than 27.1%.
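The core of ART is to keep successive test inputs far apart from those already tried. As a rough illustration (not necessarily the authors' implementation), the sketch below applies fixed-size-candidate-set ART to string payloads, using a Jaccard distance over character trigrams as an assumed similarity measure; inject_and_check stands in for submitting a payload and observing script execution in a headless browser.

```python
# A minimal sketch of fixed-size-candidate-set ART over string payloads.
# The distance metric (Jaccard over character trigrams) and the candidate-set
# size k are illustrative assumptions, not the paper's exact choices.
import random

def trigrams(s):
    return {s[i:i + 3] for i in range(max(len(s) - 2, 1))}

def distance(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return 1.0 - len(ta & tb) / len(ta | tb)

def next_payload(untried, tried, k=10):
    """Pick the untried payload farthest from everything already tried."""
    candidates = random.sample(untried, min(k, len(untried)))
    if not tried:
        return candidates[0]
    return max(candidates,
               key=lambda c: min(distance(c, t) for t in tried))

def art_fuzz(payloads, inject_and_check):
    untried, tried = list(payloads), []
    while untried:
        p = next_payload(untried, tried)
        untried.remove(p)
        tried.append(p)
        if inject_and_check(p):   # e.g. headless browser observes script execution
            return p, len(tried)  # successful payload and number of attempts
    return None, len(tried)
```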
{"title":"Adaptive Random Testing for XSS Vulnerability","authors":"Chengcheng Lv, Long Zhang, Fanping Zeng, Jian Zhang","doi":"10.1109/APSEC48747.2019.00018","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00018","url":null,"abstract":"XSS is one of the common vulnerabilities in web applications. Many black-box testing tools may collect a large number of payloads and traverse them to find a payload that can be successfully injected, but they are not very efficient. And previous research has paid less attention to how to improve the efficiency of black-box testing to detect XSS vulnerability. To improve the efficiency of testing, we develop an XSS testing tool. It collects 6128 payloads and uses a headless browser to detect XSS vulnerability. The tool can discover XSS vulnerability quickly with the ART(Adaptive Random Testing) method. We conduct an experiment using 3 extensively adopted open source vulnerable benchmarks and 2 actual websites to evaluate the ART method. The experimental results indicate that the ART method can effectively improve the fuzzing method by more than 27.1% in reducing the number of attempts before accomplishing a successful injection.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117067113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
APSEC 2019 Steering Committee and Emeritus Members
Pub Date: 2019-12-01 | DOI: 10.1109/apsec48747.2019.00008
W. C. Chu
Steering Committee Chair: Sooyong Park, Sogang University, Korea
Muhammad Ali Babar, University of Adelaide, Australia
Sundeok (Steve) Char, Korea University, Korea
William C. Chu, Tung Hai University, Taiwan
Jin Song Dong, National University of Singapore, Singapore
Jun Han, Swinburne University of Technology, Australia
Jackey Keung, City University of Hong Kong, Hong Kong
Karl R. P. H. Leung, Hong Kong Institute of Vocational Education, Hong Kong
Deron Liang, National Central University, Taiwan
Katsuhisa Maruyama, Ritsumeikan University, Japan
Pornsiri Muenchaisri, Chulalongkorn University, Thailand
Danny Poo, National University of Singapore, Singapore
Steve Reeves, The University of Waikato, New Zealand
Shamsul Sahibuddin, Universiti Teknologi Malaysia, Malaysia
Ashish Sureka, Indraprastha Institute of Information Technology Delhi (IIITD), India
Hironori Washizaki, Waseda University, Japan
He (Jason) Zhang, Nanjing University, China
{"title":"APSEC 2019 Steering Committee and Emeritus Members","authors":"W. C. Chu","doi":"10.1109/apsec48747.2019.00008","DOIUrl":"https://doi.org/10.1109/apsec48747.2019.00008","url":null,"abstract":"Steering Committee Chair: Sooyong Park, Sogang University, Korea Muhammad Ali Babar, University of Adelaide, Australia Sundeok (Steve) Char, Korea University, Korea William C. Chu, Tung Hai University, Taiwan Jin Song Dong, National University of Singapore, Singapore Jun Han, Swinburne University of Technology, Australia Jackey Keung, City University of Hong Kong, Hong Kong Karl R. P. H. Leung, Hong Kong Institute of Vocational Education, Hong Kong Deron Liang, National Central University, Taiwan Katsuhisa Maruyama, Ritsumeikan University, Japan Pornsiri Muenchaisri, Chulalonghorn University, Thailand Danny Poo, National University of Singapore, Singapore Steve Reeves, The University of Waikato, New Zealand Shamsul Sahibuddin, Universiti Teknologi Malaysia, Malaysia Ashish Sureka, Indraprastha Institute of Information Technology Delhi IIITD, India Hironori Washizaki, Waseda University, Japan He (Jason) Zhang, Nanjing University, China","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122038374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Witness: Detecting Vulnerabilities in Android Apps Extensively and Verifiably
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00065
Hongliang Liang, Tianqi Yang, Lin Jiang, Yixiu Chen, Zhuosi Xie
Existing studies on detecting vulnerabilities in apps have two main disadvantages: one is that some studies are limited to detecting a specific vulnerability and lack comprehensive analysis; the other is the lack of valid evidence for vulnerability verification, which leads to a high false-alarm rate and requires massive manual effort. We propose the concept of a vulnerability pattern to abstract the characteristics of different attacks, e.g., their prerequisites and attack paths, so as to support detecting multiple kinds of vulnerabilities. We also present a zero-false-alarm framework that finds vulnerability instances precisely and generates test cases and triggers to validate the findings, by combining static analysis and dynamic binary instrumentation techniques. We implement our method in a tool named Witness, which currently detects 8 different types of vulnerabilities and is extensible to support more. Evaluated on 3211 popular apps, Witness successfully detected 243 vulnerability instances, with better precision and more proofs than four existing tools.
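The abstract does not spell out how a vulnerability pattern is encoded; the following is a hypothetical sketch of the idea, with a pattern represented as prerequisites plus an ordered attack path and matched against facts and data flows extracted by static analysis. All field, class, and API names here are illustrative, not Witness's actual schema.

```python
# A hypothetical sketch of a vulnerability pattern (prerequisites plus attack
# path) matched against per-app facts produced by static analysis; field names
# and the example pattern are illustrative, not Witness's internal format.
from dataclasses import dataclass

@dataclass
class VulnerabilityPattern:
    name: str
    prerequisites: set   # e.g. {"exported_component", "no_permission_check"}
    attack_path: list    # ordered (source_api, sink_api) steps

    def matches(self, app_facts: set, data_flows: set) -> bool:
        # All prerequisites must hold, and every step of the attack path
        # must correspond to a data flow found by static analysis.
        return (self.prerequisites <= app_facts and
                all(step in data_flows for step in self.attack_path))

intent_redirection = VulnerabilityPattern(
    name="Intent redirection",
    prerequisites={"exported_component"},
    attack_path=[("getIntent", "startActivity")],
)

app_facts = {"exported_component", "uses_webview"}
data_flows = {("getIntent", "startActivity")}
# A match yields a candidate instance, which would then be confirmed
# dynamically, e.g. by generating a test case and observing the behaviour
# under binary instrumentation.
print(intent_redirection.matches(app_facts, data_flows))   # True
```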
{"title":"Witness: Detecting Vulnerabilities in Android Apps Extensively and Verifiably","authors":"Hongliang Liang, Tianqi Yang, Lin Jiang, Yixiu Chen, Zhuosi Xie","doi":"10.1109/APSEC48747.2019.00065","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00065","url":null,"abstract":"Existing studies on detecting vulnerabilities in apps have two main disadvantages: one is that some studies are limited to detecting a certain vulnerability and lack comprehensive analysis; the other is the lack of valid evidence for vulnerability verification, which leads to high false alarms rate and requires massive manual efforts. We propose the concept of vulnerability pattern to abstract the characteristics of different attacks, e.g., their prerequisites and attack paths, so as to support detecting multiple kinds of vulnerabilities. Also, we present a zero false alarms framework which can find vulnerability instances precisely and generate test cases and triggers to validate the findings, by combing static analysis and dynamic binary instrumentation techniques. We implement our method in a tool named Witness, which currently can detect 8 different types of vulnerabilities and is extensible to support more. Evaluated on 3211 popular apps, Witness successfully detected 243 vulnerability instances, with better precision and more proofs than four existing tools.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130414784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Necessity and Capability of Flow, Context, Field and Quasi Path Sensitive Points-to Analysis
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00044
Yuexing Wang, Min Zhou, M. Gu, Jiaguang Sun
Precise pointer analysis is desirable since many program analyses benefit from it in both precision and performance. There are several dimensions of pointer analysis precision: flow sensitivity, context sensitivity, field sensitivity and path sensitivity. The more dimensions a pointer analysis considers, the more accurate its results will be. However, considering all dimensions is difficult because the trade-off between precision and efficiency must be balanced. This paper presents a flow-, context-, field- and quasi-path-sensitive pointer analysis algorithm for C programs. Our algorithm runs on a control flow automaton, a key structure that makes our analysis flow sensitive. During the analysis, we use function summaries to obtain context information. Elements of aggregate structures are handled to improve precision. We collect path conditions to filter unreachable paths and make all points-to relations gated. For efficiency, we propose a multi-entry mechanism. The algorithm is implemented in TsmartGP, an extension of CPAchecker. We compare our algorithm with several state-of-the-art algorithms, and compare TsmartGP with cppcheck and the Clang Static Analyzer by detecting uninitialized-pointer errors in 13 real-world applications. The experimental results show that our algorithm is more accurate and that TsmartGP finds more errors than the other tools.
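To illustrate what flow sensitivity buys, here is a minimal points-to sketch over a toy three-address IR in which later assignments kill earlier facts. It demonstrates only the flow-sensitive dimension and omits the context, field, and path sensitivity (and the CPAchecker/control-flow-automaton machinery) that the paper's algorithm adds.

```python
# A minimal, flow-sensitive points-to sketch over a toy three-address IR;
# it only illustrates why statement order matters, not the TsmartGP algorithm.
def points_to(stmts):
    pts = {}  # variable -> set of abstract locations it may point to
    for op, lhs, rhs in stmts:
        if op == "addr":          # lhs = &rhs
            pts[lhs] = {rhs}      # strong update: flow sensitivity
        elif op == "copy":        # lhs = rhs
            pts[lhs] = set(pts.get(rhs, set()))
        elif op == "store":       # *lhs = rhs
            for loc in pts.get(lhs, set()):
                pts.setdefault(loc, set()).update(pts.get(rhs, set()))
        elif op == "load":        # lhs = *rhs
            result = set()
            for loc in pts.get(rhs, set()):
                result |= pts.get(loc, set())
            pts[lhs] = result
    return pts

prog = [("addr", "p", "a"),   # p = &a
        ("addr", "p", "b"),   # p = &b   (kills the previous fact)
        ("addr", "q", "c"),   # q = &c
        ("store", "p", "q")]  # *p = q, so b may point to c
print(points_to(prog))        # flow-sensitive result: p -> {b}, not {a, b}
```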
{"title":"Necessity and Capability of Flow, Context, Field and Quasi Path Sensitive Points-to Analysis","authors":"Yuexing Wang, Min Zhou, M. Gu, Jiaguang Sun","doi":"10.1109/APSEC48747.2019.00044","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00044","url":null,"abstract":"Precise pointer analysis is desired since many program analyses benefit from it both in precision and performance. There are several dimensions of pointer analysis precision, flow sensitivity, context sensitivity, field sensitivity and path sensitivity. The more dimensions a pointer analysis considers, the more accurate its results will be. However, considering all dimensions is difficult because the trade-off between precision and efficiency should be balanced. This paper presents a flow, context, field and quasi path sensitive pointer analysis algorithm for C programs. Our algorithm runs on a control flow automaton, a key structure for our analysis to be flow sensitive. During the analysis process, we use function summaries to get context information. Elements of aggregate structures are handled to improve precision. We collect path conditions to filter unreachable paths and make all points-to relations gated. For efficiency, we propose a multi-entry mechanism. The algorithm is implemented in TsmartGP, which is an extension of CPAchecker. Our algorithm is compared with some state-of-the-art algorithms and TsmartGP is compared with cppcheck and Clang Static Analyzer by detecting uninitialized pointer errors in 13 real-world applications. The experimental results show that our algorithm is more accurate and TsmartGP can find more errors than other tools.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131455162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISION: Evaluating Scenario Suitableness for DNN Models by Mirror Synthesis
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00020
Ziqi Chen, Huiyan Wang, Chang Xu, Xiaoxing Ma, Chun Cao
Software systems assisted by deep neural networks (DNNs) are gaining increasing popularity. However, one outstanding problem is to judge whether a given application scenario suits a DNN model, the answer to which strongly affects the performance of the system concerned. Existing work has addressed this problem indirectly by seeking higher test coverage or generating adversarial inputs. One pioneering work is SynEva, which addressed this problem directly by synthesizing mirror programs for scenario suitableness evaluation of general machine learning programs, but fell short in supporting DNN models. In this paper, we propose VISION (eValuatIng Scenario suItableness fOr DNN models), specially catered to DNN characteristics. We conducted experiments on Udacity, a real-world self-driving dataset, and the results show that VISION was effective in evaluating scenario suitableness for DNN models with an accuracy of 75.6–89.0%, compared to SynEva's 50.0–81.8%. We also explored different meta-models in VISION and found that the decision tree logic learner meta-model could be the best choice for balancing VISION's effectiveness and efficiency.
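As a rough illustration of the mirror idea (not the actual VISION pipeline), the sketch below trains an interpretable meta-model, a decision tree, to reproduce a DNN's outputs on scenarios the DNN is known to handle, and then uses the agreement rate on a new scenario as a suitableness signal. It assumes a classification-style DNN exposed as a hypothetical dnn_predict function and uses scikit-learn's DecisionTreeClassifier.

```python
# A rough sketch of the mirror idea: fit a simple, interpretable meta-model on
# the DNN's own behaviour over known scenarios, then use its agreement with the
# DNN on a new scenario as a suitableness signal.  Illustration only, not VISION.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def suitableness(dnn_predict, known_inputs, new_inputs):
    # Mirror: a decision tree trained to reproduce the DNN's outputs
    # on scenarios the DNN is known to handle well.
    mirror = DecisionTreeClassifier(max_depth=5)
    mirror.fit(known_inputs, dnn_predict(known_inputs))
    # Agreement rate on the new scenario approximates how "familiar" it is.
    return np.mean(mirror.predict(new_inputs) == dnn_predict(new_inputs))
```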
{"title":"VISION: Evaluating Scenario Suitableness for DNN Models by Mirror Synthesis","authors":"Ziqi Chen, Huiyan Wang, Chang Xu, Xiaoxing Ma, Chun Cao","doi":"10.1109/APSEC48747.2019.00020","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00020","url":null,"abstract":"Software systems assisted with deep neural networks (DNNs) are gaining increasing popularities. However, one outstanding problem is to judge whether a given application scenario suits a DNN model, whose answer highly affects its concerned system's performance. Existing work indirectly addressed this problem by seeking for higher test coverage or generating adversarial inputs. One pioneering work is SynEva, which exactly addressed this problem by synthesizing mirror programs for scenario suitableness evaluation of general machine learning programs, but fell short in supporting DNN models. In this paper, we propose VISION to eValuatIng Scenario suItableness fOr DNN models, specially catered for DNN characteristics. We conducted experiments on a real-world self-driving dataset Udacity, and the results show that VISION was effective in evaluating scenario suitableness for DNN models with an accuracy of 75.6–89.0% as compared to that of SynEva, 50.0–81.8%. We also explored different meta-models in VISION, and found out that the decision tree logic learner meta-model could be the best one for balancing VISION's effectiveness and efficiency.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"288 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134358044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dam: A Practical Scheme to Mitigate Data-Oriented Attacks with Tagged Memory Based on Hardware
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00036
Mengyu Ma, Liwei Chen, Gang Shi
The widespread deployment of unsafe programming languages such as C and C++ leaves many programs vulnerable to memory corruption attacks. With the continuous improvement of control-flow hijacking defenses, recent work on data-oriented attacks, including Data-oriented Exploits (DOE), Data-oriented Programming (DOP), and Block-oriented Programming (BOP), has shown that these attacks pose a significant threat even in the presence of control-flow defense mechanisms. DFI (Data Flow Integrity) is a software-only approach for mitigating data-oriented attacks, but it incurs a 104% performance overhead. There are as yet no suitable defense methods for such attacks. In this paper, we propose Dam, a practical scheme to mitigate data-oriented attacks with hardware-based tagged memory. Dam is a novel approach that uses tagged memory to break the data-flow stitching and gadget dispatching required to construct data-oriented attacks, rather than enforcing complete DFI. By enforcing security checks on memory accesses, Dam eliminates two requirements for constructing a valid data-oriented attack. We have implemented Dam by extending lowRISC, a RISC-V-based SoC (System on Chip) that implements tagged memory. Our evaluation results show that the scheme has an average performance cost of 6.48% while providing source compatibility and strong security.
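As a toy software model of the underlying idea, the sketch below attaches a tag to each memory word and traps stores that would overwrite a protected word outside a checked path. The tag policy and the privileged flag are illustrative assumptions for exposition only; they are not Dam's actual hardware policy or lowRISC's interface.

```python
# A toy software model of tag-checked memory accesses: each word carries a tag,
# and a store that violates the tag policy traps.  The policy shown here is an
# illustrative stand-in, not Dam's hardware design.
TAG_NONE, TAG_PROTECTED = 0, 1

class TaggedMemory:
    def __init__(self, size):
        self.data = [0] * size
        self.tags = [TAG_NONE] * size

    def protect(self, addr):
        self.tags[addr] = TAG_PROTECTED   # e.g. a security-critical variable

    def store(self, addr, value, privileged=False):
        if self.tags[addr] == TAG_PROTECTED and not privileged:
            raise MemoryError(f"tag violation: write to protected word {addr}")
        self.data[addr] = value

    def load(self, addr):
        return self.data[addr]

mem = TaggedMemory(16)
mem.protect(4)
mem.store(3, 42)                    # ordinary write, allowed
mem.store(4, 7, privileged=True)    # legitimate update via a checked path
try:
    mem.store(4, 0xdead)            # data-oriented overwrite is caught
except MemoryError as e:
    print(e)
```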
{"title":"Dam: A Practical Scheme to Mitigate Data-Oriented Attacks with Tagged Memory Based on Hardware","authors":"Mengyu Ma, Liwei Chen, Gang Shi","doi":"10.1109/APSEC48747.2019.00036","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00036","url":null,"abstract":"The widespread deployment of unsafe programming languages such as C and C++, leaves many programs vulnerable to memory corruption attacks. With the continuous improvement of control-flow hijacking defense methods, recent works on data-oriented attacks including Data-oriented Exploits (DOE), Data-oriented Programming (DOP), and Block-oriented Programming (BOP) have been showed that these attacks can cause significant threat even in the presence of control-flow defense mechanism. Moreover, DFI (Date Flow Integrity) is a software-only approach for mitigating data-oriented attacks, while it incurs a 104% performance overhead. There are no suitable defense methods for such attacks as yet. In this paper, we propose Dam, a practical scheme to mitigate data-oriented attacks with tagged memory based on hardware. Dam is a novel approach using the idea of tagged memory to break data-flow stitching and gadgets dispatcher of generating data-oriented attacks rather than complete DFI. By enforcing security checking on memory access, Dam eliminates two requirements in constructing a valid data-oriented attack. We have implemented Dam by extending lowRISC, a RISC-V based SoC (System of a Chip) that implements tagged memory. And our evaluation results show that our scheme has an average performance cost of 6.48%, while Dam provides source compatibility and strong security.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115226158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Objective Configuration Sampling for Performance Ranking in Configurable Systems
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00029
Y. Gu, Yuntianyi Chen, Xiangyang Jia, J. Xuan
The problem of performance ranking in configurable systems is to find the optimal (or near-optimal) configurations with the best performance. This problem is challenging due to the large search space of potential configurations and the cost of manually examining configurations. Existing methods, such as the rank-based method, use a progressive strategy to sample configurations in order to reduce the cost of examining them. This sampling strategy is guided by frequent and random trials and may fail to balance the number of samples and the ranking difference (i.e., the minimum of the actual ranks in the predicted ranking). In this paper, we propose a sampling method, namely MoConfig, which uses multi-objective optimization to minimize both the number of samples and the ranking difference. Each solution in MoConfig is a sampling set of configurations and can be used directly as the input of existing performance-ranking methods. We conducted experiments on 20 datasets from real-world configurable systems. The experimental results demonstrate that MoConfig samples fewer configurations and ranks better than the existing rank-based method. We also compared the results of four multi-objective optimization algorithms and found that NSGA-II performs well. Our proposed method can be used to improve the ranking difference and reduce the number of samples when building predictive models for performance ranking.
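The two objectives can be read directly off this description: the size of the sampling set, and the ranking difference of the model learned from it. The sketch below shows one hedged way to score a candidate sampling set on both objectives; build_ranker, the top-10 cutoff, and the dictionary of true performance values are illustrative stand-ins rather than MoConfig's actual interface, and a multi-objective search such as NSGA-II (e.g. from a library like pymoo) would then explore sampling sets using these scores.

```python
# A hedged sketch of scoring one candidate sampling set on the two objectives;
# the rank-based learner is stubbed out via build_ranker, and the helper names
# are illustrative rather than MoConfig's real interface.
def objectives(sample_set, all_configs, true_perf, build_ranker):
    # Objective 1: measurement cost, i.e. how many configurations are sampled.
    n_samples = len(sample_set)

    # Objective 2: ranking difference -- the best (minimum) actual rank among
    # the configurations the learned model places at the top.
    ranker = build_ranker({c: true_perf[c] for c in sample_set})
    predicted_order = sorted(all_configs, key=ranker)
    actual_rank = {c: r for r, c in
                   enumerate(sorted(all_configs, key=lambda c: true_perf[c]), 1)}
    rank_diff = min(actual_rank[c] for c in predicted_order[:10])

    return n_samples, rank_diff   # both to be minimized by the optimizer
```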
{"title":"Multi-Objective Configuration Sampling for Performance Ranking in Configurable Systems","authors":"Y. Gu, Yuntianyi Chen, Xiangyang Jia, J. Xuan","doi":"10.1109/APSEC48747.2019.00029","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00029","url":null,"abstract":"The problem of performance ranking in configurable systems is to find the optimal (near-optimal) configurations with the best performance. This problem is challenging due to the large search space of potential configurations and the cost of manually examining configurations. Existing methods, such as the rank-based method, use a progressive strategy to sample configurations to reduce the cost of examining configurations. This sampling strategy is guided by frequent and random trials and may fail in balancing the number of samples and the ranking difference (i.e., the minimum of actual ranks in the predicted ranking). In this paper, we proposed a sampling method, namely MoConfig, which uses multi-objective optimization to minimize the number of samples and the ranking difference. Each solution in MoConfig is a sampling set of configurations and can be directly used as the input of existing methods of performance ranking. We conducted experiments on 20 datasets from real-world configurable systems. Experimental results demonstrate that MoConfig can sample fewer configurations and rank better than the existing rank-based method. We also compared the results by four algorithms of multi-objective optimization and found that NSGA-II performs well. Our proposed method can be used to improve the ranking difference and reduce the number of samples in building predictive models of performance ranking.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117188063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Message from the APSEC 2019 General Chair
Pub Date: 2019-12-01 | DOI: 10.1109/apsec48747.2019.00005
{"title":"Message from the APSEC 2019 General Chair","authors":"","doi":"10.1109/apsec48747.2019.00005","DOIUrl":"https://doi.org/10.1109/apsec48747.2019.00005","url":null,"abstract":"","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126004909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Prioritization Method for SPL Pairwise Testing Based on User Profiles
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00025
Hirofumi Akimoto, Yuto Isogami, Takashi Kitamura, N. Noda, T. Kishi
In Software Product Line (SPL) development, one of the promising techniques for core asset testing is to test a subset of the SPL as representative products. SPL pairwise testing is such a technique, in which each product corresponds to a possible feature configuration in the feature model (FM) and representative products are selected so that all possible feature pairs are included. It is also important to prioritize representative products, because this can improve the effectiveness of core asset testing, especially when testing resources are limited. In this paper, we propose a prioritization method for SPL pairwise testing based on user profiles. A user profile is a set of user groups and their occurrence probabilities, such as the percentages of user groups in a market that use specific devices, applications or services. These profiles are used as the probabilities of feature choices at decision points such as optional features and alternative features in an FM. Based on these, we calculate the probability of obtaining a feature pair (PFP for short) and generate representative products with priorities. Most research related to probabilities over FMs handles the probability of obtaining a single feature (PSF for short). PFP can be estimated from PSF, but this estimation is not appropriate for prioritization, especially when conditional probabilities appear in user profiles. In our method, we directly calculate PFP and determine the priorities. We evaluate the method to show the advantages of prioritization using PFP over prioritization using PSF, and also analyze the characteristics of the method.
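The key point is that multiplying single-feature probabilities (PSF) assumes feature choices are independent, while a user profile can encode correlations across user groups; summing over groups gives the pair probability (PFP) directly. The sketch below illustrates the difference on a made-up two-group profile and then orders products greedily by the probability mass of the feature pairs they newly cover. The profile format, the within-group independence assumption, and the greedy ordering are illustrative, not the paper's exact procedure.

```python
# A sketch of the PSF-versus-PFP contrast under an invented user profile, plus
# a simple greedy prioritization of products by newly covered pair probability.
from itertools import combinations

# user profile: occurrence probability of each group and, per group, the
# probability that each optional/alternative feature is chosen
profile = {
    "mobile":  {"p": 0.6, "features": {"A": 0.9, "B": 0.1}},
    "desktop": {"p": 0.4, "features": {"A": 0.2, "B": 0.8}},
}

def psf(f):
    return sum(g["p"] * g["features"].get(f, 0.0) for g in profile.values())

def pfp(f1, f2):
    # assumes the two choices are independent only *within* each user group
    return sum(g["p"] * g["features"].get(f1, 0.0) * g["features"].get(f2, 0.0)
               for g in profile.values())

print(psf("A") * psf("B"))   # independence estimate from PSF: 0.62 * 0.38
print(pfp("A", "B"))         # profile-aware PFP: 0.6*0.09 + 0.4*0.16

def prioritize(products):
    """Greedy ordering: each step picks the product whose still-uncovered
    feature pairs carry the most probability mass."""
    covered, ordered = set(), []
    remaining = list(products)
    while remaining:
        best = max(remaining, key=lambda fs: sum(
            pfp(a, b) for a, b in combinations(sorted(fs), 2)
            if (a, b) not in covered))
        covered |= set(combinations(sorted(best), 2))
        ordered.append(best)
        remaining.remove(best)
    return ordered
```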
{"title":"A Prioritization Method for SPL Pairwise Testing Based on User Profiles","authors":"Hirofumi Akimoto, Yuto Isogami, Takashi Kitamura, N. Noda, T. Kishi","doi":"10.1109/APSEC48747.2019.00025","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00025","url":null,"abstract":"In Software Product Line (SPL) development, one of promising techniques for core asset testing is to test a subset of SPL as representative products. SPL pairwise testing is a such technique in which each product corresponds to a possible feature configuration in the feature model (FM) and representative products are selected so as to all possible feature pairs are included. It is also important to prioritize representative products, because it could improve the effectiveness of core asset testing especially when the testing resource is limited. In this paper, we propose a prioritization method for SPL pairwise testing based on user profiles. A user profile is a set of user groups and their occurrence probabilities such as the percentages of user groups in a market that use specific devices, applications or services. These profiles are used as the probabilities of feature choices at decision points such as optional features and alternative features in a FM. Based on that, we calculate the probability for obtaining a feature pairs (PFP for short), and generate representative products with priority. Most researches relate to the probabilities about FM handle the probability for obtaining a single feature (PSF for short). Based on PSF, we could estimate PFP. However, this estimation is not appropriate for the prioritization especially when conditional probabilities appear in user profiles. In our method, we directly calculate PFP and determine the priorities. We evaluate the method to show advantages of prioritizations using PFP over those using PSF, and also analyze the characteristics of the method.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128198925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formalizing Architectural Rules with Ontologies - An Industrial Evaluation
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00017
Sandra Schröder, Georg Buchgeher
Architecture conformance checking is an important means of quality control for assessing whether the system implementation adheres to its defined software architecture. Ideally, this process is automated to support continuous quality control. Many different approaches exist for automated conformance checking. However, these approaches are often limited in terms of the supported concepts for describing and analyzing software architectures. We have developed an ontology-based approach that seeks to overcome the limited expressiveness of existing approaches. As a frontend to the formalism, we provide a Controlled Natural Language. In this paper, we present an industrial validation of the approach. For this, we collected architectural rules from three industrial projects. In total, we discovered 56 architectural rules in the projects and successfully formalized 80% of them. Additionally, we discussed the formalization with the corresponding software architect of each project and found that the original intention of each architectural rule is properly reflected in the formalization. The results of the study show that projects could greatly benefit from applying an ontology-based approach, since it helps to precisely define and preserve concepts throughout the development process.
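To make the kind of rule involved concrete, here is a small, self-contained illustration of automated conformance checking: a single "only X may use Y" rule evaluated against extracted code dependencies. The plain-Python encoding is a stand-in for the paper's ontology and Controlled Natural Language formalism; the module names and layers are invented for the example.

```python
# A small illustration of automated conformance checking: one architectural
# rule ("only <layer> may use <layer>") checked against extracted dependencies.
# The encoding is a stand-in for the paper's ontology/CNL formalism.
layer_of = {"web.Login": "presentation",
            "svc.Orders": "service",
            "db.OrderDao": "database"}

dependencies = [("web.Login", "svc.Orders"),
                ("web.Login", "db.OrderDao"),   # violates the rule below
                ("svc.Orders", "db.OrderDao")]

def only_may_use(allowed_layer, target_layer):
    """'Only <allowed_layer> may use <target_layer>' as a checkable predicate."""
    return [(src, dst) for src, dst in dependencies
            if layer_of[dst] == target_layer and layer_of[src] != allowed_layer]

violations = only_may_use("service", "database")
print(violations)   # [('web.Login', 'db.OrderDao')]
```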
{"title":"Formalizing Architectural Rules with Ontologies - An Industrial Evaluation","authors":"Sandra Schröder, Georg Buchgeher","doi":"10.1109/APSEC48747.2019.00017","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00017","url":null,"abstract":"Architecture conformance checking is an important means for quality control to assess that the system implementation adheres to its defined software architecture. Ideally, this process is automated to support continuous quality control. Many different approaches exist for automated conformance checking. However, these approaches are often limited in terms of supported concepts for describing and analyzing software architectures. We have developed an ontology-based approach that seeks to overcome the limited expressiveness of existing approaches. As a frontend of the formalism, we provide a Controlled Natural Language. In this paper, we present an industrial validation of the approach. For this, we collected architectural rules from three industrial projects. In total, we discovered 56 architectural rules in the projects. We successfully formalized 80% of those architectural rules. Additionally, we discussed the formalization with the corresponding software architect of each project. We found that the original intention of each architectural rule is properly reflected in the formalization. The results of the study show that projects could greatly benefit from applying an ontology-based approach, since it helps to precisely define and preserve concepts throughout the development process.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125686360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}