Software Reliability Assessment: Modeling and Algorithms
Pub Date: 2018-10-01 | DOI: 10.1109/ISSREW.2018.000-4
V. Nagaraju
Non-homogeneous Poisson process (NHPP) software reliability growth models (SRGM) enable quantitative assessment of the software testing process. Software reliability models ranging from simple to complex have been proposed to characterize failure data resulting from a variety of testing factors as well as non-uniform expenditure of testing effort. To predict software reliability accurately, it is important to apply models that both characterize the observed failure data well and make accurate predictions of future failures. Efficient and robust algorithms that quickly estimate the model parameters despite inaccurate initial estimates are also highly desirable. Ultimately, emphasis should be placed on predictive accuracy over complexity to best serve users of the research. This work presents the preliminary contributions of the proposal, including: (i) a heterogeneous single-changepoint framework considering different models before and after the changepoint, and (ii) a comparison of testing-effort models with a simple model, as well as a testing-effort model fit with an ECM algorithm, to emphasize the importance of model predictive accuracy over increased model complexity. These preliminary findings will serve as the basis of the overall contributions of the dissertation.
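As an illustration of the kind of model the abstract refers to, the following is a minimal sketch (not the author's framework, and using a generic optimizer rather than the ECM algorithm) of fitting the classic Goel-Okumoto NHPP mean value function m(t) = a(1 - exp(-b*t)) to hypothetical failure times by maximum likelihood:

# Minimal sketch: Goel-Okumoto NHPP SRGM fit by maximum likelihood.
# The failure times below are made up for illustration only.
import numpy as np
from scipy.optimize import minimize

failure_times = np.array([3., 8., 15., 27., 40., 60., 93., 140., 210., 300.])
T = failure_times.max()  # end of the observation window

def neg_log_likelihood(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    intensity = a * b * np.exp(-b * failure_times)   # lambda(t_i)
    expected = a * (1.0 - np.exp(-b * T))            # m(T)
    return -(np.sum(np.log(intensity)) - expected)

result = minimize(neg_log_likelihood, x0=[len(failure_times) * 1.5, 0.01],
                  method="Nelder-Mead")
a_hat, b_hat = result.x
print(f"a = {a_hat:.1f} (expected total faults), b = {b_hat:.4f} (detection rate)")

In practice the fitted mean value function would then be extrapolated beyond T and compared against held-out failures to judge predictive accuracy, which is the criterion the abstract emphasizes.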
{"title":"Software Reliability Assessment: Modeling and Algorithms","authors":"V. Nagaraju","doi":"10.1109/ISSREW.2018.000-4","DOIUrl":"https://doi.org/10.1109/ISSREW.2018.000-4","url":null,"abstract":"Non-homogeneous Poisson process (NHPP) software reliability growth models (SRGM) enable quantitative assessment of the software testing process. Software reliability models ranging from simple to complex have been proposed to characterize failure data that results from a variety of testing factors as well as non-uniform expenditure of testing effort. In order to predict the reliability of software accurately, it is important to apply models that both characterize the observed failure data well and make accurate predictions of the future. Efficient and robust algorithms to quickly estimate the model parameters despite inaccuracy in the initial estimates are also highly desirable. Ultimately, emphasis should be placed on predictive accuracy over complexity to best serve users of the research. This work presents the results of the preliminary contributions of the proposal including: (i) a heterogeneous single changepoint framework considering different models before and after the changepoint and (ii) comparison of testing effort models with a simple model as well as a testing effort model fit with an ECM algorithm to emphasize the importance of model predictive accuracy over increased model complexity. The preliminary findings will be used to serve as the basis of the overall contributions of the dissertation.","PeriodicalId":321448,"journal":{"name":"2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115467333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Message from the STEP 2018 Workshop Chairs
Pub Date: 2018-10-01 | DOI: 10.1109/issrew.2018.00-49
{"title":"Message from the STEP 2018 Workshop Chairs","authors":"","doi":"10.1109/issrew.2018.00-49","DOIUrl":"https://doi.org/10.1109/issrew.2018.00-49","url":null,"abstract":"","PeriodicalId":321448,"journal":{"name":"2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116151453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reduce Before You Localize: Delta-Debugging and Spectrum-Based Fault Localization
Pub Date: 2018-10-01 | DOI: 10.1109/ISSREW.2018.00005
Arpit Christi, Matthew Lyle Olson, Mohammad Amin Alipour, Alex Groce
Spectrum-based fault localization (SBFL) is one of the most popular and widely studied methods for automated debugging. Many formulas have been proposed to improve the accuracy of SBFL scores, but most of these improvements are either marginal or context-dependent. This paper proposes that, independent of the scoring method used, the effectiveness of spectrum-based localization can usually be dramatically improved by, when possible, delta-debugging failing test cases and basing localization only on the reduced test cases. We show that, for programs and faults taken from the standard localization literature, for a large case study of Mozilla's JavaScript engine using 10 real faults, and for mutants of various open-source projects, localizing only after reduction often produces much better fault rankings than localization without reduction, regardless of the localization formula used; the improvement is often even greater than that obtained by switching from the worst to the best localization formula for a subject.
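For readers unfamiliar with SBFL scoring, a minimal sketch follows; it uses the standard Ochiai formula on assumed coverage spectra and is not the paper's tooling. Localization is simply run over whichever failing tests are kept, for example the delta-debugged (reduced) ones:

# Minimal sketch: Ochiai suspiciousness from per-test coverage spectra.
# Test names, statement ids, and outcomes are hypothetical.
import math

coverage = {"t1": {1, 2, 3}, "t2": {1, 2, 4}, "t3": {1, 3, 4}, "t4_reduced": {3, 4}}
outcome  = {"t1": "pass",    "t2": "pass",    "t3": "pass",    "t4_reduced": "fail"}

def ochiai_ranking(coverage, outcome):
    total_failed = sum(1 for t in outcome if outcome[t] == "fail")
    statements = set().union(*coverage.values())
    scores = {}
    for s in statements:
        ef = sum(1 for t in coverage if s in coverage[t] and outcome[t] == "fail")
        ep = sum(1 for t in coverage if s in coverage[t] and outcome[t] == "pass")
        denom = math.sqrt(total_failed * (ef + ep))
        scores[s] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(ochiai_ranking(coverage, outcome))  # statements ranked most suspicious first

The paper's point is orthogonal to the formula itself: reducing the failing tests before collecting their spectra tends to improve the resulting ranking no matter which scoring formula is plugged in.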
{"title":"Reduce Before You Localize: Delta-Debugging and Spectrum-Based Fault Localization","authors":"Arpit Christi, Matthew Lyle Olson, Mohammad Amin Alipour, Alex Groce","doi":"10.1109/ISSREW.2018.00005","DOIUrl":"https://doi.org/10.1109/ISSREW.2018.00005","url":null,"abstract":"Spectrum-based fault localization (SBFL) is one of the most popular and studied methods for automated debugging. Many formulas have been proposed to improve the accuracy of SBFL scores. Many of these improvements are either marginal or context-dependent. This paper proposes that, independent of the scoring method used, the effectiveness of spectrum-based localization can usually be dramatically improved by, when possible, delta-debugging failing test cases and basing localization only on the reduced test cases. We show that for programs and faults taken from the standard localization literature, a large case study of Mozilla's JavaScript engine using 10 real faults, and mutants of various open-source projects, localizing only after reduction often produces much better rankings for faults than localization without reduction, independent of the localization formula used, and the improvement is often even greater than that provided by changing from the worst to the best localization formula for a subject.","PeriodicalId":321448,"journal":{"name":"2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"6 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134628618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Ontologies for Test Suites Generation for Automated and Autonomous Driving Functions
Pub Date: 2018-10-01 | DOI: 10.1109/ISSREW.2018.00-20
Florian Klück, Yihao Li, M. Nica, Jianbo Tao, F. Wotawa
In this paper, we outline a general automated testing approach for the verification and validation of automated and autonomous driving functions. The approach makes use of ontologies of the environment with which the system under test interacts. The ontologies are automatically converted into input models for combinatorial testing, which are used to generate test cases. The resulting abstract test cases are used to generate concrete test scenarios that provide the basis for simulations used to verify the functionality of the system under test. We discuss the general approach, including its potential for automation, in the automotive domain, where there is a growing need for sophisticated simulation-based verification of automated and autonomous vehicles.
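A minimal sketch of the ontology-to-combinatorial-model idea, with an entirely hypothetical environment ontology fragment and exhaustive combination standing in for a real combinatorial testing tool:

# Minimal sketch: environment concepts and their instances become parameters of
# a combinatorial input model; each combination is one abstract test scenario.
from itertools import product

# Parameters derived from an (assumed) driving-environment ontology
input_model = {
    "road_type": ["highway", "urban", "rural"],
    "weather":   ["clear", "rain", "fog"],
    "obstacle":  ["none", "pedestrian", "stopped_vehicle"],
}

# Exhaustive combination for illustration; a combinatorial testing tool would
# typically select a pairwise-covering subset instead.
abstract_tests = [dict(zip(input_model, values)) for values in product(*input_model.values())]
print(len(abstract_tests), "abstract test scenarios, e.g.", abstract_tests[0])

Each abstract scenario would then be concretized (road geometry, trajectories, sensor configuration) and fed to the simulation environment.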
{"title":"Using Ontologies for Test Suites Generation for Automated and Autonomous Driving Functions","authors":"Florian Klück, Yihao Li, M. Nica, Jianbo Tao, F. Wotawa","doi":"10.1109/ISSREW.2018.00-20","DOIUrl":"https://doi.org/10.1109/ISSREW.2018.00-20","url":null,"abstract":"In this paper, we outline a general automated testing approach to be applied for verification and validation of automated and autonomous driving functions. The approach makes use of ontologies of environment the system under test is interacting with. Ontologies are automatically converted into input models for combinatorial testing, which are used to generate test cases. The obtained abstract test cases are used to generate concrete test scenarios that provide the basis for simulation used to verify the functionality of the system under test. We discuss the general approach including its potential for automation in the automotive domain where there is growing need for sophisticated verification based on simulation in case of automated and autonomous vehicles.","PeriodicalId":321448,"journal":{"name":"2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132407776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Challenges and Directions in Security Information and Event Management (SIEM)
Pub Date: 2018-10-01 | DOI: 10.1109/ISSREW.2018.00-24
M. Cinque, Domenico Cotroneo, A. Pecchia
Security Information and Event Management (SIEM) is the state of the practice in handling heterogeneous data sources for security analysis. This paper presents challenges and directions in SIEM in the context of a real-life mission-critical system by a leading company in the Air Traffic Control domain. The system emits massive volumes of highly unstructured text logs. We present the challenges in addressing such logs, ongoing work on the integration of an open-source SIEM, and directions in modeling system behavioral baselines for inferring compromise indicators. Our exploratory analysis paves the way for data discovery approaches that aim to complement current SIEM practice.
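One simple way such behavioral baselines could be modeled, shown purely as a sketch with made-up log lines (not the ATC system's format or the authors' SIEM integration):

# Minimal sketch: build a frequency baseline of event templates from
# unstructured logs and flag windows containing previously unseen templates,
# one crude source of compromise indicators.
import re
from collections import Counter

def template(line):
    # Collapse numbers and hex-like tokens so similar messages share a template.
    return re.sub(r"\b(0x[0-9a-fA-F]+|\d+)\b", "<*>", line.strip())

baseline_logs = ["user 42 logged in", "user 57 logged in", "session 9 closed"]
window_logs   = ["user 42 logged in", "privilege escalation for uid 0",
                 "privilege escalation for uid 0"]

baseline = Counter(template(l) for l in baseline_logs)
window   = Counter(template(l) for l in window_logs)

for tpl, count in window.items():
    if tpl not in baseline:
        print("possible indicator (unseen template):", tpl, "x", count)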
{"title":"Challenges and Directions in Security Information and Event Management (SIEM)","authors":"M. Cinque, Domenico Cotroneo, A. Pecchia","doi":"10.1109/ISSREW.2018.00-24","DOIUrl":"https://doi.org/10.1109/ISSREW.2018.00-24","url":null,"abstract":"Security Information and Event Management (SIEM) is the state-of-the-practice in handling heterogeneous data sources for security analysis. This paper presents challenges and directions in SIEM in the context of a real-life mission critical system by a top leading company in the Air Traffic Control domain. The system emits massive volumes of highly-unstructured text logs. We present the challenges in addressing such logs, ongoing work on the integration of an open source SIEM, and directions in modeling system behavioral baselines for inferring compromise indicators. Our explorative analysis paves the way for data discovery approaches aiming to complement the current SIEM practice.","PeriodicalId":321448,"journal":{"name":"2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130171932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing Extract-Transform-Load Process in Data Warehouse Systems
Pub Date: 2018-10-01 | DOI: 10.1109/ISSREW.2018.000-6
Hajar Homayouni
Enterprises use data warehouses to accumulate data from multiple sources for analysis and research. A data warehouse is populated using the Extract, Transform, and Load (ETL) process, which (1) extracts data from various sources, (2) integrates, cleans, and transforms it into a common form, and (3) loads it into the data warehouse. Faults in the ETL implementation and execution can lead to incorrect data in the data warehouse, which renders it useless irrespective of the quality of the applications accessing it and the quality of the source data. Thus, ETL processes must be thoroughly tested to validate the correctness of the ETL implementation. This project develops and evaluates two types of functional testing approaches, namely data quality tests and balancing tests. Data quality tests validate the data in the target data warehouse in isolation, while balancing tests check for discrepancies between the source and target data. This paper describes the proposed approach, the work accomplished to date, and the expected contributions of this research.
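A minimal sketch of a balancing test in the sense described above, using toy in-memory tables rather than the project's actual framework:

# Minimal sketch: a balancing test that checks source-vs-target record counts
# and a numeric aggregate after an ETL load. Table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL);
    CREATE TABLE dw_orders  (id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
    INSERT INTO dw_orders  VALUES (1, 10.0), (2, 25.5), (3, 7.25);
""")

def balancing_check(conn, source, target, measure):
    src_count, src_sum = conn.execute(
        f"SELECT COUNT(*), SUM({measure}) FROM {source}").fetchone()
    tgt_count, tgt_sum = conn.execute(
        f"SELECT COUNT(*), SUM({measure}) FROM {target}").fetchone()
    assert src_count == tgt_count, f"record count mismatch: {src_count} vs {tgt_count}"
    assert abs(src_sum - tgt_sum) < 1e-9, f"aggregate mismatch: {src_sum} vs {tgt_sum}"

balancing_check(conn, "src_orders", "dw_orders", "amount")
print("balancing test passed")

A data quality test, by contrast, would assert properties of the target table alone, for example that amount is non-negative and id is unique.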
{"title":"Testing Extract-Transform-Load Process in Data Warehouse Systems","authors":"Hajar Homayouni","doi":"10.1109/ISSREW.2018.000-6","DOIUrl":"https://doi.org/10.1109/ISSREW.2018.000-6","url":null,"abstract":"Enterprises use data warehouses to accumulate data from multiple sources for analysis and research. A data warehouse is populated using the Extract, Transform, and Load (ETL) process that (1) extracts data from various sources, (2) integrates, cleans, and transforms it into a common form, and (3) loads it into the data warehouse. Faults in the ETL implementation and execution can lead to incorrect data in the data warehouse, which renders it useless irrespective of the quality of the applications accessing it and the quality of the source data. Thus, ETL processes must be thoroughly tested to validate the correctness of the ETL implementation. This project develops and evaluates two types of functional testing approaches, namely data quality, and balancing tests. Data quality tests validate the data in the target data warehouse in isolation and balancing tests check for discrepancies between the source and target data. This paper describes the proposed approach, the work accomplished to date, and the expected contributions of this research.","PeriodicalId":321448,"journal":{"name":"2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132838854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
KEREP: Experience in Extracting Knowledge on Distributed System Behavior through Request Execution Path
Pub Date: 2018-10-01 | DOI: 10.1109/ISSREW.2018.00-35
Jing Gu, Long Wang, Yong Yang, Ying Li
Expertise about distributed systems is critical for system maintenance and improvement. However, it is challenging to keep such knowledge up to date because of the complexity of these systems and their continuous updates. Hence, computing platform providers study how to extract knowledge directly from system behavior. In this paper, we propose a methodology called KEREP to automatically extract knowledge on distributed system behavior through request execution paths. Techniques are devised to construct component structures, depict in-depth dynamic behavior, and identify the heartbeat mechanisms of target distributed systems. Experiments on two real-world distributed systems show that the KEREP methodology extracts accurate knowledge of request processing and discovers undocumented features with good execution performance.
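A minimal sketch of one piece of such knowledge extraction, deriving a component interaction graph from hypothetical request execution paths (illustrative only, not KEREP itself):

# Minimal sketch: each request execution path is the ordered list of components
# it traversed; counting caller->callee transitions yields an interaction graph.
from collections import defaultdict

request_paths = [
    ["client", "gateway", "scheduler", "worker-1"],
    ["client", "gateway", "scheduler", "worker-2"],
    ["monitor", "scheduler"],   # a periodic path that may reveal a heartbeat
]

edges = defaultdict(int)
for path in request_paths:
    for caller, callee in zip(path, path[1:]):
        edges[(caller, callee)] += 1

for (caller, callee), count in sorted(edges.items()):
    print(f"{caller} -> {callee}: observed {count} time(s)")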
{"title":"KEREP: Experience in Extracting Knowledge on Distributed System Behavior through Request Execution Path","authors":"Jing Gu, Long Wang, Yong Yang, Ying Li","doi":"10.1109/ISSREW.2018.00-35","DOIUrl":"https://doi.org/10.1109/ISSREW.2018.00-35","url":null,"abstract":"Expertise on distributed systems is critical for system maintenance and improvement. However, it is challenging to keep the up-to-date knowledge from distributed systems due to the complexity and continuous updates. Hence, computing platform providers study on how to extract knowledge directly from system behavior. In this paper, we propose a methodology called KEREP to automatically extract knowledge on distributed system behavior through request execution path. Technologies are devised to construct component structures, to depict the in-depth dynamic behavior and to identify the heartbeat mechanisms of target distributed systems. Experiments on two real-world distributed systems show the KEREP methodology extracts accurate knowledge of request processing and discovers undocumented features with good execution performance.","PeriodicalId":321448,"journal":{"name":"2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124685682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}