Retrenching the Purse: Finite Exception Logs, and Validating the Small
R. Banach, M. Poppleton, S. Stepney
DOI: 10.1109/SEW.2006.28

The Mondex electronic purse is an outstanding example of industrial-scale formal refinement, and was the first verification to achieve ITSEC level E6 certification. A formal abstract model and a formal concrete model were developed, and a formal refinement was hand-proved between them. Nevertheless, certain requirements issues were placed beyond the scope of the formal development, or were handled in an unnatural manner. The retrenchment tower pattern is used to address one such issue in detail: the finiteness of the purse log (which records unsuccessful transactions). A retrenchment is constructed from the lowest-level model of the purse system to a model in which logs are finite, and is then lifted to create two refinement developments of the purse, working at different levels of detail and connected via retrenchments, forming the tower. The tower development is appropriately validated, vindicating the design used.
Unified Software Method: An Engineering Approach to Software Engineering
Stéphane Mercier, M. Lavoie, R. Champagne
DOI: 10.1109/SEW.2006.38

In this paper, we propose an engineering approach to software engineering called the "Unified Software Method" (USM). The goal of this work in progress is to offer complete traceability between software requirements and the resulting software application, for any kind of application, and thereby to establish accountability for a practitioner's work. This paper presents an introduction to USM and focuses mainly on how USM applies traceability to maintain synchronization between a software application and the related artifacts: requirements, architecture, design, code, tests, and the executable.
Pseudo-Exhaustive Testing for Software
Rick Kuhn, Vadim Okun
DOI: 10.1109/SEW.2006.26

Pseudo-exhaustive testing exploits the empirical observation that, for broad classes of software, a fault is likely to be triggered by the interaction of only a few variables. The method takes advantage of two relatively recent advances in software engineering: algorithms for efficiently generating covering arrays to represent software interaction test suites, and automated generation of test oracles using model checking. An experiment with a module of the traffic collision avoidance system (TCAS) illustrates the approach, testing pairwise through 6-way interactions. We also outline current and future work applying the test methodology to a large real-world application, the personal identity verification (PIV) smart card.
Acquisition of "quantitative" models of sufficient accuracy to enable effective analysis of requirements tradeoffs is hampered by the slowness and difficulty of obtaining sufficient data. "Qualitative" models, based on expert opinion, can be built quickly and therefore used earlier. Such qualitative models are nondeterminate which makes them hard to use for making categorical policy decisions over the model. The nondeterminacy of qualitative models can be tamed using "stochastic sampling" and "treatment learning". These tools can quickly find and set the "master variables" that restrain qualitative simulations. Once tamed, qualitative modeling can be used in requirements engineering to assess more options, earlier in the life cycle
{"title":"Qualitative Modeling for Requirements Engineering","authors":"T. Menzies, Julian Richardson","doi":"10.1109/SEW.2006.27","DOIUrl":"https://doi.org/10.1109/SEW.2006.27","url":null,"abstract":"Acquisition of \"quantitative\" models of sufficient accuracy to enable effective analysis of requirements tradeoffs is hampered by the slowness and difficulty of obtaining sufficient data. \"Qualitative\" models, based on expert opinion, can be built quickly and therefore used earlier. Such qualitative models are nondeterminate which makes them hard to use for making categorical policy decisions over the model. The nondeterminacy of qualitative models can be tamed using \"stochastic sampling\" and \"treatment learning\". These tools can quickly find and set the \"master variables\" that restrain qualitative simulations. Once tamed, qualitative modeling can be used in requirements engineering to assess more options, earlier in the life cycle","PeriodicalId":127158,"journal":{"name":"2006 30th Annual IEEE/NASA Software Engineering Workshop","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124554424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Deadlock, Double-Free and Other Abuses in a Million Lines of Linux Kernel Source
Peter T. Breuer, S. Pickin, Maria Mercedes Larrondo-Petrie
DOI: 10.1109/SEW.2006.15

The formal analysis described here detects two previously undetected real deadlock situations per thousand C source files (roughly a million lines of code) in the open source Linux operating system kernel, plus three undetected accesses to freed memory, at a cost of a few seconds per file. That is notable because the code has been continuously under scrutiny from thousands of developers' pairs of eyes. In contrast to model-checking techniques, which also use symbolic logic, the analysis uses a "3-phase" compositional Hoare-style programming logic combined with abstract interpretation. The result is a customisable post-hoc semantic analysis of C code that is capable of several different analyses at once.
SPDW: A Software Development Process Performance Data Warehousing Environment
K. Becker, D. Ruiz, V. S. Cunha, Taisa C. Novello, Franco Vieira e Souza
DOI: 10.1109/SEW.2006.31

Metrics are essential in assessing the quality of software development processes (SDPs). However, the adoption of a metrics program requires an information system for collecting, analyzing, and disseminating measures of software processes, products, and services. This paper describes SPDW, an SDP performance data warehousing environment developed in the context of the metrics program of a leading software operation in Latin America, currently assessed at CMM level 3. The SPDW architecture encompasses: 1) automatic project data capturing, considering the different types of heterogeneity present in the software development environment; 2) the representation of project metrics according to a standard organizational view; and 3) analytical functionality that supports process analysis. The paper also describes current implementations, and reports experiences on the use of SPDW by the organization.
Formal Verification of Abstract System and Protocol Specifications
Axel Schneider, T. Bluhm, Tobias Renner, U. Heinkel, Joachim Knäblein, Reynaldo Zavala
DOI: 10.1109/SEW.2006.19

Formal methods such as automated model checking are used commercially for digital circuit design verification. These techniques find errors early in the product cycle, which reduces development time and cost. By contrast, the current practice in complex system design is to specify system functions and protocol details in natural language. Some errors are found early by manual inspections, but others are only revealed during implementation testing or by costly field failures. We describe our application of formal specification and model checking to real telecom product development, and enumerate the classes of specification errors that these formal techniques revealed at an early stage of the development cycle.
Model Checking of Software Components: Combining Java PathFinder and Behavior Protocol Model Checker
P. Parízek, F. Plášil, J. Kofroň
DOI: 10.1109/SEW.2006.23

Several software model checkers exist that check code against properties specified, e.g., via temporal logic and assertions, or that verify only low-level properties (such as unhandled exceptions); however, none of them supports checking software components against a high-level behavior specification. We present our approach to model checking software components implemented in Java against a high-level specification of their behavior defined via behavior protocols; the approach employs the Java PathFinder model checker and a protocol checker. The property checked by the Java PathFinder (JPF) tool (correctness of particular method call sequences) is validated via its cooperation with the protocol checker. We show that the publisher/listener pattern, claimed to be the key flexibility support of JPF (and indeed very useful for our purpose), was not by itself enough to achieve this kind of checking.
An Operational Semantics of an Event-Driven System-Level Simulator
Xiaoqing Peng, Huibiao Zhu, Jifeng He, Naiyong Jin
DOI: 10.1109/SEW.2006.10

As a system-level modelling language, SystemC possesses some new and interesting features such as delayed notifications, notification cancelling, notification overriding and delta-cycles. It is challenging to formalise SystemC. In this paper, we first select a kernel subset of SystemC and study its operational semantics. Based on the operational semantics we define a bisimulation relation, from which program equivalence is explored. Finally, we present a set of algebraic laws for the subset language, which can be proved based on the operational semantics model via bisimulation.
A Reverse-Engineering Approach to Understanding B Specifications with UML Diagrams
Akram Idani, Y. Ledru, Didier Bert
DOI: 10.1109/SEW.2006.6

Formal methods are nowadays the most rigorous way to produce software. However, existing formal notations are not easy for most people to use and understand. Our approach circumvents this shortcoming by producing complementary graphical views of formal developments. This paper addresses the graphical representation of formal B specifications using UML diagrams. A reverse-engineering approach is proposed to generate several class diagrams showing the static aspects of the B developments. These diagrams can help stakeholders who are not familiar with the B method, such as customers or certification authorities, understand the specification. A concept formation technique based on weighted link matrices is proposed to improve automation.