Modeling of Failure detection and recovery in SysML
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688879
M. Hecht, J. Tamaki, Derek Lo
• Question — How can Failure Modes and Effects Analyses (FMEAs) be generated from SysML models?
• Motivation:
  — Technical: the growing ubiquity, complexity, and safety criticality of systems containing software
  — Programmatic: reduce the cost and schedule burden of FMEAs to levels tolerated by developers and their management
  — Cultural: the growing use of SysML
• Method:
  — Define a success criterion and ensure the model includes it
  — Create structural models (primarily the system connections in internal block diagrams) that can be used to assess the success criterion
  — Create behavioral models for both normal flows and flows in the presence of simulated failures and cyber-attacks
  — Run simulations and log results
  — Analyze the logs and develop assessment artifacts (a minimal sketch of the simulate-log-analyze loop follows below)
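As a rough illustration of that last loop, the sketch below injects one failure mode at a time into a stand-in simulation, evaluates an assumed success criterion against the logged output, and emits one FMEA row per mode. All names here (`FailureMode`, `simulate`, `SUCCESS_THRESHOLD`) are hypothetical stand-ins, not the paper's tooling.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed success criterion for illustration: at least 95% of messages delivered.
SUCCESS_THRESHOLD = 0.95

@dataclass
class FailureMode:
    component: str
    mode: str

def simulate(failure: Optional[FailureMode]) -> dict:
    """Stand-in for a SysML behavioral simulation run; returns a log summary."""
    # A real run would execute the model with the failure injected and log events.
    delivered = 1.0 if failure is None else 0.4  # toy numbers for illustration
    return {"delivered_fraction": delivered}

def fmea_rows(failure_modes: list) -> list:
    """One FMEA row per failure mode, judged against the success criterion."""
    rows = []
    for fm in failure_modes:
        log = simulate(fm)
        ok = log["delivered_fraction"] >= SUCCESS_THRESHOLD
        rows.append({
            "component": fm.component,
            "failure_mode": fm.mode,
            "system_effect": "criterion met" if ok else "loss of function",
        })
    return rows

print(fmea_rows([FailureMode("router", "omission"), FailureMode("sensor", "stuck-at")]))
```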
{"title":"Modeling of Failure detection and recovery in SysML","authors":"M. Hecht, J. Tamaki, Derek Lo","doi":"10.1109/ISSREW.2013.6688879","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688879","url":null,"abstract":"• Question — How can Failure Modes and Effects Analyses be generated from SysML models? • Motivation — Technical: Growing ubiquity, complexity, and safety criticality of systems containing software — Programmatic: Reduce cost and schedule burden of FMEAs to levels tolerated by developers and their management — Cultural: Growing use of SysML and • Method — Define success criterion and ensure model includes it — Create Structural models (primarily the system connections in internal block diagrams) that can be used to assess the success criterion — Create behavioral models for both normal flows and flows in the presence of simulated failures and cyber-attacks — Run simulations and log results — Analyze the logs and develop assessment artifacts.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130586764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reference architecture for high dependability on-board computers
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688925
N. Silva, A. Esper, R. Barbosa, Johan Zandin, C. Monteleone
The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g. computer equipment unit, set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed, and verified. In the context of the ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers, with a focus on well-defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting, and assessing the dependability of on-board computer hardware and software throughout their life cycle. The study also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete lifecycle, including an assessment of the feasibility aspects of the dependability assurance process and of how the use of a computer-aided environment can contribute to on-board computer dependability assurance.
{"title":"Reference architecture for high dependability on-board computers","authors":"N. Silva, A. Esper, R. Barbosa, Johan Zandin, C. Monteleone","doi":"10.1109/ISSREW.2013.6688925","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688925","url":null,"abstract":"The industrial process in the area of on-board computers is characterized by small production series of onboard computers (hardware and software) configuration items with little recurrence at unit or set level (e.g. computer equipment unit, set of interconnected redundant units). These small production series result into a reduced amount of statistical data related to dependability, which influence on the way on-board computers are specified, designed and verified. In the context of ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers with a focus on well defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting and assessing the dependability of onboard computers hardware and software throughout their life cycle. It also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computers hardware and software throughout the complete lifecycle, including an assessment of feasibility aspects of the dependability assurance process and how the use of computer-aided environment can contribute to the on-board computer dependability assurance.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133935655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An initial evaluation of model-based testing
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688853
Vignir Gudmundsson, Christoph Schulze, D. Ganesan, M. Lindvall, Robert E. Wiegand
We are in the process of evaluating the feasibility of using model-based testing (MBT) to test systems. In this paper we discuss the feasibility of testing the software bus of NASA's Goddard Mission Service Evolution Center (GMSEC) using MBT. GMSEC has a flexible architecture, which makes testing a difficult task. The idea is to use one model to test GMSEC for behavioral consistency across multiple programming language APIs and multiple middleware wrappers. Since a new testing approach must be evaluated in light of the effort it takes to become productive, we measure and discuss costs and benefits. The study demonstrates that it is feasible to use MBT for a system like GMSEC: the tester was able to use MBT to detect new issues in GMSEC, an already-tested system.
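To make the "one model, many bindings" idea concrete, here is a minimal sketch under our own assumptions (a toy publish/subscribe lifecycle model and two stand-in adapters, not the GMSEC API): the state machine generates bounded operation sequences, which are replayed against both bindings, and any divergence in observed behavior flags a consistency bug.

```python
# state -> {operation: next state}; a toy publish/subscribe lifecycle model
MODEL = {
    "disconnected": {"connect": "connected"},
    "connected": {"subscribe": "subscribed", "disconnect": "disconnected"},
    "subscribed": {"publish": "subscribed", "disconnect": "disconnected"},
}

def sequences(max_len=4):
    """Enumerate all operation sequences the model allows, up to max_len steps."""
    frontier = [("disconnected", [])]
    for _ in range(max_len):
        next_frontier = []
        for state, ops in frontier:
            for op, successor in MODEL[state].items():
                yield ops + [op]
                next_frontier.append((successor, ops + [op]))
        frontier = next_frontier

class BindingA:  # stand-ins for two language bindings of the same bus
    def run(self, ops): return ["ok:" + op for op in ops]

class BindingB:
    def run(self, ops): return ["ok:" + op for op in ops]

for seq in sequences():
    a, b = BindingA().run(seq), BindingB().run(seq)
    assert a == b, f"bindings diverge on {seq}: {a} vs {b}"
print("all generated sequences behave consistently across bindings")
```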
{"title":"An initial evaluation of model-based testing","authors":"Vignir Gudmundsson, Christoph Schulze, D. Ganesan, M. Lindvall, Robert E. Wiegand","doi":"10.1109/ISSREW.2013.6688853","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688853","url":null,"abstract":"We are in the process of evaluating the feasibility of using model-based testing (MBT) to test systems. In this paper we discuss the feasibility of testing the software bus of NASA's Goddard Mission Service Evolution Center (GMSEC) using MBT. GMSEC has a flexible architecture making testing a difficult task. The idea is to use one model to test GMSEC for behavioral consistency among multiple programming language APIs and multiple middleware wrappers. Since a new testing approach must be evaluated in the light of the effort it takes to become productive, we measure and discuss costs and benefits. The study demonstrates that it is feasible to use MBT for a system like GMSEC based on the fact that the tester was able to use MBT to detect new issues in GMSEC, which is an already tested system.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132875616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conditional software specification & assurance: A practical assessment of contract-based approaches
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688860
Marc Förster
The integration of software components into an operational system that reliably complies with requirements is one of the crucial problems in the development and maintenance of automotive embedded software. Conventionally, development considers closed systems: the composition of a system and its environment presupposes a fixed environment, which leads to limited reusability. Accordingly, there is a need for specification and analysis techniques for systems that are “open” (at design time, and perhaps also, but not necessarily, at runtime). The problem is that the environment provided for a reusable component is unknown, or only partly known, beforehand. In a broader view, the integration challenge occurs not just during development but also at runtime: with updates and patches of integrated components, during the integration of new components (after-sale upgrades), or with the activation/deactivation of components due to energy management or load balancing. A number of approaches aim at the objective described above: assume/guarantee, rely/guarantee, assumption-commitment reasoning, design by contract, rich components, contract-based development, etc. At present, virtually all of them are research in progress. In particular, none of the approaches mentioned has yet been consistently applied in practice in the area of automotive software or embedded systems. Our project intends to give an overview and to facilitate the understanding of such techniques of what we call “conditional” specification and assurance, and of their application to automotive software development, improving the methodological support for the integration and reuse of software components. This aim has been achieved by a survey of existing approaches, a statement of relevant integration scenarios, and the prototypical application of a selected approach in a case study with a realistic system. This submission reports some of our findings.
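As a minimal sketch of the assume/guarantee flavor of such techniques (the predicates and ranges below are our own illustration, not the paper's notation): each component carries an assumption about its environment and a guarantee about its output, and an integration check, here only on sampled values, verifies that the producer's guarantee discharges the consumer's assumption.

```python
from typing import Callable

class Contract:
    """A component interface: an assumption on inputs and a guarantee on outputs."""
    def __init__(self,
                 assumes: Callable[[float], bool],
                 guarantees: Callable[[float], bool]):
        self.assumes = assumes
        self.guarantees = guarantees

# Illustrative components: a sensor feeding a controller (ranges are made up).
sensor = Contract(assumes=lambda v: True,                    # works in any environment
                  guarantees=lambda out: 0.0 <= out <= 5.0)  # output stays in 0..5 V
controller = Contract(assumes=lambda inp: 0.0 <= inp <= 10.0,  # tolerated input range
                      guarantees=lambda out: abs(out) <= 1.0)

def compatible(producer: Contract, consumer: Contract, samples) -> bool:
    """Sampled integration check: every value the producer may guarantee must
    satisfy the consumer's assumption (a proof would need symbolic reasoning)."""
    return all(consumer.assumes(v) for v in samples if producer.guarantees(v))

print(compatible(sensor, controller, samples=[x / 10 for x in range(-20, 120)]))  # True
```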
{"title":"Conditional software specification & assurance: A practical assessment of contract-based approaches","authors":"Marc Förster","doi":"10.1109/ISSREW.2013.6688860","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688860","url":null,"abstract":"The integration of software components towards an operational system that reliably complies with requirements is one of the crucial problems in the development and maintenance of automotive embedded software. Conventionally, development considers closed systems, in that the composition of a system and its environment presupposes a fixed environment, which leads to limited reusability. Accordingly, there is a need for specification and analysis techniques for systems that are “open” (at design time, and perhaps also, but not necessarily, at runtime). The problem is that the environment provided for a reusable component is unknown or just partly known beforehand.In a broader view, the integration challenge occurs not just during development but also during runtime: with updates and patches of integrated components, during the integration of new components (after-sale upgrade) or the activation/deactivation of components due to energy management or load balancing. There exist a number of approaches aiming at the objective described above: assume/guarantee, rely/guarantee, assumption-commitment reasoning, Design by contract, Rich components, contract-based development etc. At present virtually all of them are research in progress. In particular, none of the approaches mentioned has as yet been consistently applied in practice in the area of automotive software or embedded systems. Our project intends to give an overview and to facilitate the understanding of such techniques of, as we call them, “conditional” specification and assurance and their application to automotive software development, improving the methodological support for the integration and reuse of software components. The aim has been achieved by a survey of existing approaches, a statement of relevant integration scenarios and the prototypical application of a selected approach in a case study with a realistic system. This submission reports some of our findings.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131252125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the effectiveness of Mann-Kendall test for detection of software aging
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688905
F. Machida, A. Andrzejak, Rivalino Matias, Elder V. P. Sobrinho
Software aging (i.e., the progressive performance degradation of long-running software systems) is difficult to detect due to the long latency until it manifests during program execution. Fast and accurate detection of aging is important for eliminating the underlying defects during software development and testing. In a deployment scenario, aging detection is also needed to plan mitigation methods like software rejuvenation. The goal of this paper is to evaluate whether the Mann-Kendall test is an effective approach for detecting software aging from traces of computer system metrics. This technique tests for the existence of monotonic trends in time series, and studies of software aging often take the existence of trends in certain metrics as an indication of software aging. Through an experimental study we show that the Mann-Kendall test is highly prone to false positives in the context of aging detection. By increasing the amount of data considered in the test, the false positive rate can be reduced; however, the time to detect aging then increases considerably. Our findings indicate that aging detection using the Mann-Kendall test alone is in general unreliable, or may require long measurement times.
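For reference, the Mann-Kendall statistic for a series x_1, ..., x_n is S = sum over i &lt; j of sign(x_j - x_i), normalized to a Z-score. The sketch below implements the plain test (no tie correction) and runs it on a random walk, a series with strong autocorrelation but no aging, to illustrate the kind of false positive the paper reports; the toy data and the 0.05 threshold are our own choices, not the paper's experimental setup.

```python
import math
import random

def mann_kendall(x):
    """Plain Mann-Kendall trend test (no tie correction): returns (Z, p-value)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity correction, then two-sided p-value from the normal CDF.
    z = (s - math.copysign(1, s)) / math.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

random.seed(1)
walk, value = [], 0.0
for _ in range(200):          # random walk: autocorrelated, but no aging trend built in
    value += random.gauss(0, 1)
    walk.append(value)

z, p = mann_kendall(walk)
print(f"Z={z:.2f}, p={p:.4f}  (p < 0.05 here would be a false positive)")
```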
{"title":"On the effectiveness of Mann-Kendall test for detection of software aging","authors":"F. Machida, A. Andrzejak, Rivalino Matias, Elder V. P. Sobrinho","doi":"10.1109/ISSREW.2013.6688905","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688905","url":null,"abstract":"Software aging (i.e. progressive performance degradation of long-running software systems) is difficult to detect due to the long latency until it manifests during program execution. Fast and accurate detection of aging is important for eliminating the underlying defects already during software development and testing. Also in a deployment scenario, aging detection is needed to plan mitigation methods like software rejuvenation. The goal of this paper is to evaluate whether the Mann-Kendall test is an effective approach for detecting software aging from traces of computer system metrics. This technique tests for existence of monotonic trends in time series, and studies of software aging often consider existence of trends in certain metrics as indication of software aging. Through an experimental study we show that the Mann-Kendall test is highly vulnerable to creating false positives in context of aging detection. By increasing the amount of data considered in the test, the false positive rate can be reduced; however, time to detect aging increases considerably. Our findings indicate that aging detection using the Mann-Kendall test alone is in general unreliable, or may require long measurement times.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132348188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel mechanism to continuously scan field logs and gain real-time feedback
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688866
K. Vinod, M. Ramachandra, Prashanth Pai, S. Yalawar
Reliability is a characteristic of a system that takes shape during the concept development phase of the product realization process and is continuously or iteratively improved until end-of-life. Reliability data, along with availability and serviceability (RAS) [1], can commonly be retrieved from system logs through various data mining techniques. The logs for a typical healthcare modality, such as the Philips Magnetic Resonance (MR) scanner, are on the order of hundreds of megabytes per day per installed system. Given this humongous size, the clustering techniques used in big data processing algorithms [2] grind through the data to produce correct results in a timely and efficient fashion. This post-processing step introduces a temporal shift: the data is analyzed long after the events have occurred. For conditions that affect reliability and serviceability, it is important that the state of deployed systems is reported to actors who can resolve such issues within the shrinking timelines demanded by service level agreements. This requires the log information to be processed directly at the deployment site without causing a system performance regression. This paper describes such a technique, implemented within the system itself, to improve lead time and thus increase the efficiency of the feedback into the research and development (R&D) department.
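A minimal sketch of the in-situ scanning idea, under our own assumptions (the patterns and the notification hook are illustrative, not Philips MR internals): reliability-relevant patterns are matched as log lines stream by, so feedback is generated on the system itself rather than after bulk post-processing.

```python
import re

# Hypothetical reliability-relevant patterns (illustrative, not real MR log formats).
PATTERNS = {
    "coil_overtemperature": re.compile(r"TEMP\s+WARN"),
    "reconstruction_retry": re.compile(r"RECON\s+RETRY"),
}

def notify(event: str, line: str) -> None:
    """Stand-in for the real feedback channel back to service/R&D."""
    print(f"[service alert] {event}: {line.strip()}")

def scan_stream(lines) -> None:
    """Single lightweight pass over the log stream; cheap enough to run
    on the deployed system itself, avoiding the post-processing delay."""
    for line in lines:
        for event, pattern in PATTERNS.items():
            if pattern.search(line):
                notify(event, line)

scan_stream(["2013-11-01 10:02 TEMP WARN coil=G1",
             "2013-11-01 10:03 SCAN OK"])
```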
{"title":"A novel mechanism to continuously scan field logs and gain real-time feedback","authors":"K. Vinod, M. Ramachandra, Prashanth Pai, S. Yalawar","doi":"10.1109/ISSREW.2013.6688866","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688866","url":null,"abstract":"Reliability is characteristic of the system which begins during the concept development phase of a product realization process and continuously or iteratively improved, until its end-of-life. Reliability data along with availability and serviceability (RAS) [1] can commonly be retrieved using the system logs through various data mining techniques. The size of the logs for a typical healthcare modality like the Philips Magnetic Resonance (MR) would be of the order of 3-digit megabyte number per day per installed base. Given the humongous size, various clustering techniques as used in big data processing algorithms [2], grind the data to seek the correct results in a timely and efficient fashion. This post-processing step introduces a temporal shift in analyzing the data much after the events have occurred. For the state of affairs that affects reliability and serviceability, it is important that the condition of the deployed systems is notified to actors who can resolve such issues, meeting shrinking timelines demanded by the service level agreements. This would require the log information to be processed directly at the deployment without causing a system performance regression. This paper talks about such a technique that is implemented within the system purview to improve the lead time and thus increase efficiency of the feedback into the research and development (R & D) department.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131792063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To branch or not to branch that is the question
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688869
Brendan Murphy, L. Williams
One of the most contentious areas in software development is branching. Proponents of agile development methodologies are often against the concept of branching as a matter of principle, while other large software development projects, such as Windows, are heavy users of branches to control the software generated by thousands of engineers. Microsoft is migrating its development processes to simultaneously produce software both as stand-alone products and as software as a service (SaaS) (e.g., Windows 8 and Azure), requiring a re-architecture of these processes. To fully understand the impact of any changes to their development processes, the product groups addressed the question of whether and how to use branching within their development process. Based on this assessment, this talk attempts to go back to first principles in regard to software development and shows that there are many more similarities than differences between agile and non-agile software development methods. The talk will also discuss the pros and cons of branching, identifying where it will positively and negatively impact software development.
{"title":"To branch or not to branch that is the question","authors":"Brendan Murphy, L. Williams","doi":"10.1109/ISSREW.2013.6688869","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688869","url":null,"abstract":"One of the most contentious areas in software development is branching. Proponents of agile development methodologies are often against the concept of branching as a matter of principle, other large software development projects, such as Windows, are heavy users of branches to control the software generated by thousands of its engineers. Microsoft is migrating its development processes to be able to simultaneously produce software as both stand-alone products and as a SAAS (e.g. Windows 8 and Azure), requiring a re-architecture of these processes. To fully understand the impact of any changes to their development processes the product groups addressed the question of whether and how to use branching within its development process. Bases on this assessment this talk attempts to go back to first principles in regard to software development and shows that there are a lot more similarities than differences between agile and non-agile software development methods. The talk will also discuss the pros and cons of branching identifying where it will positively and negatively impact software development.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125132465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taking a page from the law books: Considering evidence weight in evaluating assurance case confidence
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688926
Silviya Grigorova, T. Maibaum
This brief report is a contribution to discussions of the notion of confidence in the context of assurance cases. In this work, we draw a parallel between the concepts of assurance case confidence and evidence weight in the legal domain, and explore the practical ramifications of this idea. We first establish what factors influence assurance case confidence, and propose a definition. Then, through a comparison with the legal domain (following the discussions of Jonathan Cohen, Keynes and Nance) we submit that confidence can be seen as composed of two distinct aspects, and we proceed to contend that it is beneficial to consider these aspects separately when performing an evaluation. One of the greatest advantages of doing so would be providing a separate measure for assurance case “ripeness” for review (to be used by assurance case developers, as well as regulators).
{"title":"Taking a page from the law books: Considering evidence weight in evaluating assurance case confidence","authors":"Silviya Grigorova, T. Maibaum","doi":"10.1109/ISSREW.2013.6688926","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688926","url":null,"abstract":"This brief report is a contribution to discussions of the notion of confidence in the context of assurance cases. In this work, we draw a parallel between the concepts of assurance case confidence and evidence weight in the legal domain, and explore the practical ramifications of this idea. We first establish what factors influence assurance case confidence, and propose a definition. Then, through a comparison with the legal domain (following the discussions of Jonathan Cohen, Keynes and Nance) we submit that confidence can be seen as composed of two distinct aspects, and we proceed to contend that it is beneficial to consider these aspects separately when performing an evaluation. One of the greatest advantages of doing so would be providing a separate measure for assurance case “ripeness” for review (to be used by assurance case developers, as well as regulators).","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127325266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the accuracy of static analysis based on state partition
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688896
Dalin Zhang, Hailong Zhang, Dahai Jin, Yunzhan Gong
To avoid the path explosion problem of fully path-sensitive detection, path-sensitive defect detection often merges defect states at merging nodes of the control flow graph, but this rough merging strategy may lead to accuracy loss and false positives. In this paper, state partition is proposed to handle the implicit variable relationships on the respective paths and to improve detection accuracy. We also propose a path merging strategy with state partition that avoids the accuracy loss caused by untimely merging of data flow information; it has been implemented in our static analysis tool, Defect Testing System (DTS). Experiments on a large number of open-source C projects show the substantial improvement this strategy achieves.
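A toy illustration of the precision issue (our own example, not DTS internals): after a branch, two per-path states correlate a flag with whether a buffer was allocated. A naive join that merges possible values per variable forgets that correlation and reports a spurious null dereference, while keeping the partitioned states does not.

```python
# Per-path abstract states after a branch: either the buffer was allocated
# and the flag set, or neither happened. The defect pattern under analysis
# is a dereference of `buf` guarded by `if flag:`.
path_states = [
    {"flag": True,  "buf": "allocated"},
    {"flag": False, "buf": "null"},
]

def naive_merge(states):
    """Join that keeps, per variable, the set of possible values (loses correlations)."""
    merged = {}
    for state in states:
        for var, val in state.items():
            merged.setdefault(var, set()).add(val)
    return merged

def reports_null_deref(merged_or_partitioned):
    if isinstance(merged_or_partitioned, dict):            # merged state
        return "null" in merged_or_partitioned["buf"]      # cannot rule out null
    return any(st["flag"] and st["buf"] == "null"          # partitioned states
               for st in merged_or_partitioned)

print("merged      ->", reports_null_deref(naive_merge(path_states)))  # True: false positive
print("partitioned ->", reports_null_deref(path_states))               # False: precise
```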
{"title":"Improving the accuracy of static analysis based on state partition","authors":"Dalin Zhang, Hailong Zhang, Dahai Jin, Yunzhan Gong","doi":"10.1109/ISSREW.2013.6688896","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688896","url":null,"abstract":"In order to avoid the path explosion problem in full path-sensitive detection during the process of path-sensitive defect detection, defect states are often merged at merging nodes on control flow graph, but this rough merging strategy may lead to accuracy loss and false positives. In this paper, state partition is proposed to handle the implicit variable relationships on respective paths and to improve the accuracy of detection. We also propose a path merging strategy with state partition to avoid accuracy loss caused by untimely merging of data flow information, and it has been implemented in our static analysis tool, Defect Testing System (DTS). Experiment on a large number of C open source projects shows the great improvement this strategy makes.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"86 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123178584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing distortion estimations in Retinal Prostheses
Pub Date: 2013-11-01 | DOI: 10.1109/ISSREW.2013.6688902
M. Subramaniam, P. Chundi, A. Muthuraj, E. Margalit
A Retinal Prosthesis device has been approved by the FDA for the treatment of vision impairment caused by retinitis pigmentosa (RP). Validating the visual distortion estimation algorithms used in prostheses is crucial for their safe use. An approach based on metamorphic testing is described to validate a prosthesis distortion estimation algorithm. Four metamorphic relations, including two necessary conditions for the correct functioning of the estimation algorithm, were identified. Violations of two metamorphic relations were detected, showing different estimation behavior for prosthetic vs. regular images and for images with high distortion.
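To illustrate the metamorphic-testing idea on a distortion estimator (the estimator below is a trivial stand-in, not the paper's algorithm), relations of this kind can be checked without knowing the true distortion: comparing an image with itself must yield zero distortion (a necessary condition), and applying the same horizontal flip to both images must leave the estimate unchanged.

```python
def estimate_distortion(reference, perceived):
    """Toy estimator: mean absolute pixel difference between two images."""
    flat_ref = [p for row in reference for p in row]
    flat_per = [p for row in perceived for p in row]
    return sum(abs(a - b) for a, b in zip(flat_ref, flat_per)) / len(flat_ref)

def hflip(img):
    """Flip an image horizontally (reverse each row)."""
    return [list(reversed(row)) for row in img]

ref = [[0, 50, 100], [10, 60, 110]]
per = [[5, 40, 120], [10, 55, 100]]

# MR1 (necessary condition): an image compared with itself has zero distortion.
assert estimate_distortion(ref, ref) == 0
# MR2: the estimate is invariant under the same flip applied to both images.
assert estimate_distortion(ref, per) == estimate_distortion(hflip(ref), hflip(per))
print("metamorphic relations hold for the toy estimator")
```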
{"title":"Testing distortion estimations in Retinal Prostheses","authors":"M. Subramaniam, P. Chundi, A. Muthuraj, E. Margalit","doi":"10.1109/ISSREW.2013.6688902","DOIUrl":"https://doi.org/10.1109/ISSREW.2013.6688902","url":null,"abstract":"Retinal Prosthesis device has been approved by FDA for treatment of vision impairment caused by RP. Validating the visual distortion estimation algorithms used in prosthesis is crucial for the safe use of prosthesis. An approach based on metamorphic testing was described to validate a prosthesis distortion estimation algorithm. Four metamorphic relations including two necessary conditions for the correct functioning of the estimation algorithm were identified. Violations in two metamorphic relations were detected showing different estimation behavior of prosthetic vs. regular images and those having high distortions.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"131 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130802271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}