Software testing: from theory to practice
A. Offutt
Pub Date: 1998-03-01 | DOI: 10.1109/CMPASS.1997.613216
This paper is about the disparity between what is known and being learned in academia, and what is being used in industry. The author interprets the issue as "why aren't the ideas that researchers have developed being used in industry?". The paper presents a shopping list of reasons why industry does not use the highly advanced and in some cases highly developed software testing techniques that are available. For ease of digestion, the problems are divided into three broad categories: problems in industry, problems in academic research and education, and problems in the interface between the two. Because the most difficult problems are usually those in the interface, these are presented first.
{"title":"Software testing: from theory to practice","authors":"A. Offutt","doi":"10.1109/CMPASS.1997.613216","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613216","url":null,"abstract":"This paper is about the disparity between what is known and being learned in academia, and what is being used in industry. The author interprets the issue as \"why aren't the ideas that researchers have developed being used in industry?\". The paper presents a shopping list of reasons why industry does not use the highly advanced and in some cases highly developed software testing techniques that are available. For ease of digestion, the problems are divided into three broad categories: problems in industry, problems in academic research and education, and problems in the interface between the two. Because the most difficult problems are usually those in the interface, these are presented first.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"95 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123364034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing for security during development: why we should scrap penetrate-and-patch
Gary McGraw
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613270
In the commercial sector, security analysis has traditionally been applied at the network system level, after release, using tiger team approaches. After a successful tiger team penetration, the specific system vulnerability is patched. I make a case for applying software engineering analysis techniques that have proven successful in the software safety arena to security-critical software code. This work is based on the generally held belief that a large proportion of security violations result from errors introduced during software development.
{"title":"Testing for security during development: why we should scrap penetrate-and-patch","authors":"Gary McGraw","doi":"10.1109/CMPASS.1997.613270","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613270","url":null,"abstract":"In the commercial sector security analysis has traditionally been applied at the network system level, after release, using tiger team approaches. After a successful tiger team penetration, specific system vulnerability is patched. I make a case for applying software engineering analysis techniques that have proven successful in the software safety arena to security-critical software code. This work is based on the generally held belief that a large proportion of security violations result from errors introduced during software development.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"491 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123891026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why testing technology is not transferred to industry: academics don't get it, vendors don't know it, practitioners don't care
J. Payne
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613222
There are several reasons why testing technology is not successfully transferred to industry. These reasons can be traced to fundamental flaws in the way academics perform research, tool vendors market technology, and practitioners build software. Until these flaws are corrected, advanced testing technology will continue to languish inside universities and commercial research labs. This paper discusses these flaws and provides some suggestions on how these problems can be addressed.
{"title":"Why testing technology is not transferred to industry: academics don't get it, vendors don't know it, practitioners don't care","authors":"J. Payne","doi":"10.1109/CMPASS.1997.613222","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613222","url":null,"abstract":"There are several reasons why testing technology is not successfully transferred to industry. These reasons can be traced to fundamental flaws with the way academics perform research, tool vendors market technology, and practitioners build software. Until these flaws are corrected, advanced testing technology will continue to languish inside universities and commercial research labs. This paper discusses these flaws and provides some suggestions on how these problems can be addressed.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127726244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving directions in formal methods
D. Kuhn
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613289
Formal methods have demonstrated their effectiveness in a number of application areas, but are still not widely used in the computing industry. Advances in theorem proving tools, particularly those that combine model checking with traditional interactive proof techniques, are reducing the cost of formal techniques. Although traditionally used for analyzing the correctness of specifications against requirements (and, to a lesser extent, the correctness of source code), formal methods can also help reduce the cost of test generation, making them more cost effective.
{"title":"Evolving directions in formal methods","authors":"D. Kuhn","doi":"10.1109/CMPASS.1997.613289","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613289","url":null,"abstract":"Formal methods have demonstrated their effectiveness in a number of application areas, but are still not widely used in the computing industry. Advances in theorem proving tools, particularly those combining model checking with traditional interactive proof techniques are reducing the cost of formal techniques. Although traditionally used for analyzing the correctness of specifications against requirements (and to a lesser extent the correctness of source code), formal methods can help reduce the cost of test generation, making formal methods more cost effective.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123169536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the formal verification of delegation in SESAME
M. M. Ayadi, D. Bolignano
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613201
The objective of this paper is to present the verification of delegation in the SESAME protocol, a compatible extension of Kerberos. For this we use the formal approach presented in Bolignano (1997), which is based on state-based, general-purpose formal methods and makes a clear separation between the modeling of reliable agents and that of intruders. The SESAME protocol allows a principal in the system to delegate his rights to another principal or a group of principals. The formalization is transposed quite systematically into the Coq prover's formalism, and the complete formal proof is performed. The proof relies on the fact that confidentiality of the keys shared by the multiple authorities involved in the protocol is guaranteed.
{"title":"On the formal verification of delegation in SESAME","authors":"M. M. Ayadi, D. Bolignano","doi":"10.1109/CMPASS.1997.613201","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613201","url":null,"abstract":"The objective of this paper is to present the verification of delegation in the SESAME protocol, a compatible extension version of Kerberos. For this we use the formal approach presented in Bolignano (1997). This approach is based on the use of state-based general purpose formal methods. It makes a clear separation between modeling of reliable agents and that of intruders. The SESAME protocol allows a principal in the system to delegate his rights to another principal or a group of principals. The formalization is transposed in a quite systematic manner into the Coq prover's formalism, and the complete formal proof is performed. The proof relies on the fact that confidentiality of keys shared by the multiple authorities involved in the protocol is guaranteed.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128052297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assured VLSI design with formal verification
J. Kim, Shiu-Kai Chin
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613200
Design and verification using formal logic extends existing VLSI design methods and tools. Such an extension provides rigorous support for design and verification at various levels of abstraction. Our design methodology combines design verification by mechanized theorem proving with conventional CAD tools. The theorem proving environment allows us to relate low-level boolean implementations to high-level arithmetic and instruction set specifications. We use the Higher-Order Logic theorem prover (HOL) to verify correctness relations between implementations and specifications. We use existing CAD tools to synthesize physical layouts and validate low-level electrical and timing properties. Our CAD systems are Mentor Graphics GDT and MAGIC. To validate our design methodology, we fabricated a serial pipelined multiplier that was formally verified. Bit-serial circuits are widely used in signal processing. The multiplier chip was fabricated through MOSIS and worked correctly.
{"title":"Assured VLSI design with formal verification","authors":"J. Kim, Shiu-Kai Chin","doi":"10.1109/CMPASS.1997.613200","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613200","url":null,"abstract":"Design and verification using formal logic extends existing VLSI design methods and tools. Such an extension provides rigorous support for design and verification at various levels of abstraction. Our design methodology combines design verification by mechanized theorem proving with conventional CAD tools. The theorem proving environment allows as to relate low level boolean implementations and high level arithmetic and instruction set specifications. We use the Higher-Order Logic theorem prover (HOL) to verify correctness relations between implementations and specifications. We use existing CAD tools to synthesize physical layouts and validate low level electrical and timing properties. Our CAD systems are Mentor Graphics GDT and MAGIC. To verify our design methodology, we fabricated a serial pipelined multiplier that is formally verified. Bit-serial circuits are widely used in signal processing. The multiplier chip was fabricated through MOSIS and worked correctly.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127597464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the analysis of software rejuvenation policies
S. Garg, A. Puliafito, M. Telek, Kishor S. Trivedi
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613248
Software rejuvenation is a technique for software fault tolerance which involves occasionally stopping the executing software, "cleaning" the "internal state", and restarting. This cleanup is done at desirable times during execution, on a preventive basis, so that unplanned failures, which result in higher costs than planned stopping, are avoided. Since during rejuvenation the software is typically unavailable or in a degraded mode of operation, the operation involves a cost. In this paper, we present an analytical model of a software system which serves transactions. Due to "aging", not only does the service rate of the software decrease with time, but the software itself also experiences occasional crash/hang failures. We propose and compare two rejuvenation policies. The policies are evaluated for the resulting steady-state availability as well as the probability that a transaction is denied service. We also numerically illustrate the use of our model to compute the optimal rejuvenation interval, which minimizes the loss probability (equivalently, maximizes the steady-state availability).
{"title":"On the analysis of software rejuvenation policies","authors":"S. Garg, A. Puliafito, M. Telek, Kishor S. Trivedi","doi":"10.1109/CMPASS.1997.613248","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613248","url":null,"abstract":"Software rejuvenation is a technique for software fault tolerance which involves occasionally stopping the executing software, \"cleaning\" the \"internal state\" and restarting. This cleanup is done at desirable times during execution on a preventive basis set that unplanned failures, which result in higher costs compared to planned stopping, are avoided. Since during rejuvenation, the software is typically unavailable or in a degraded mode of operation, the operation involves a cost. In this paper, we present an analytical model of a software system which serves transactions. Due to \"aging\", not only the service rate of the software decreases with time hut the software itself experiences occasional crash/hang failures. We propose and compare two rejuvenation policies. The policies are evaluated for the resulting steady state availability as well the probability that a transaction is denied service. We also numerically illustrate the use of our model to compute the optimal rejuvenation interval which minimizes (maximizes) the loss probability (steady state availability).","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131047112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reusing testing of reusable software components
C. Michael
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613254
A software component that is reused in diverse settings can experience diverse operational environments. Unfortunately, a change in the operating environment can also invalidate past experience about the component's quality of performance. Indeed, most statistical methods for estimating software quality assume that the operating environment remains the same. Specifically, the probability density governing the selection of program inputs is assumed to remain constant. However, intuition suggests that such a stringent requirement is unnecessary. If a component has been executed very many times in one environment without experiencing a failure, one would expect it to be relatively failure-free in other similar environments. This paper seeks to quantify that intuition. The question asked is, "how much can be said about a component's probability of failure in one environment after observing its operation in other environments?" Specifically, we develop bounds on the component's probability of failure in the new environment based on its past behavior.
{"title":"Reusing testing of reusable software components","authors":"C. Michael","doi":"10.1109/CMPASS.1997.613254","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613254","url":null,"abstract":"A software component that is reused in diverse settings can experience diverse operational environments. Unfortunately, a change in the operating environment can also invalidate past experience about the component's quality of performance. Indeed, most statistical methods for estimating software quality assume that the operating environment remains the same. Specifically, the probability density governing the selection of program inputs is assumed to remain constant. However, intuition suggests that such a stringent requirement is unnecessary. If a component has been executed very many times in one environment without experiencing a failure, one would expect it to be relatively failure-free in other similar environments. This paper seeks to quantify that intuition. The question asked is, \"how much can be said about a component's probability of failure in one environment after observing its operation in other environments?\" Specifically, we develop bounds on the component's probability of failure in the new environment based on its past behavior.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128299190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic generation of test vectors for SCR-style specifications
M. Blackburn, R. Busser, J. Fontaine
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613225
This paper provides the basis for integrating the Software Cost Reduction (SCR) specification method with the T-VEC (Test VECtor) test vector generator and specification analysis system. The SCR model is mapped to the T-VEC model to support automatic test vector generation for SCR specifications. The T-VEC system generated test vectors for an example SCR specification that was translated into the T-VEC language. The relationships between the models and the resulting test vectors are described. Two general guidelines for the translation process were identified that are fundamental for testing specifications that use event operators and for structuring the specifications to provide tests for all specified requirements.
{"title":"Automatic generation of test vectors for SCR-style specifications","authors":"M. Blackburn, R. Busser, J. Fontaine","doi":"10.1109/CMPASS.1997.613225","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613225","url":null,"abstract":"This paper provides the basis for integrating the Software Cost Reduction (SCR) specification method with the T-VEC (Test VECtor) test vector generator and specification analysis system. The SCR model is mapped to the T-VEC model to support automatic test vector generation for SCR specifications. The T-VEC system generated test vectors for an example SCR specification that was translated into the T-VEC language. The relationships between the models and the resulting test vectors are described. Two general guidelines for the translation process were identified that are fundamental for testing specifications that use event operators and for structuring the specifications to provide tests for all specified requirements.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127910479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using the B-toolkit to ensure safety in SCR specifications
I. Ray, P. Ammann
Pub Date: 1997-06-16 | DOI: 10.1109/CMPASS.1997.613186
SCR (Software Cost Reduction) specifications are useful for specifying event-driven systems. To use SCR effectively for critical applications, automated verification of safety properties is important. The fact that model checking approaches are sometimes problematic motivates us to further examine the alternative approach of theorem proving. Theorem proving, in general, is a difficult task; however, the regular structure of the proof obligations generated from SCR specifications suggests that relatively unsophisticated theorem provers can discharge many of these obligations. As a feasibility study, we use the B-Toolkit to detect safety violations in an example SCR specification. The B-Toolkit is a good choice because it is commercially available and supports verified refinement to executables in a commercial programming language (C). We convert the mode transition table in the example SCR specification to an AMN (Abstract Machine Notation) specification and analyze the result with the B-Toolkit. The B-Toolkit generates 120 proof obligations, of which 113 are automatically discharged by the theorem prover. The remaining 7 proof obligations are, in fact, not theorems and correspond to the 3 problems in the SCR specification detected by the model checking approaches. For the corrected SCR specification, the B-Toolkit automatically discharges all proof obligations. The example shows that even simple theorem provers are a viable approach to automated analysis for SCR specifications.
{"title":"Using the B-toolkit to ensure safety in SCR specifications","authors":"I. Ray, P. Ammann","doi":"10.1109/CMPASS.1997.613186","DOIUrl":"https://doi.org/10.1109/CMPASS.1997.613186","url":null,"abstract":"SCR (Software Cost Reduction) specifications are useful for specifying event-driven systems. To use SCR effectively for critical applications, automated verification of safety properties is important. The fact that model checking approaches are sometimes problematic motivates us to further examine the alternative approach of theorem proving. Theorem proving, in general, is a difficult task; however the regular structure of the proof obligations generated from SCR specifications suggests that relatively unsophisticated theorem provers can discharge many of these obligations. As a feasibility study, we use the B-Toolkit to detect safety violations in an example SCR specification. The B-Toolkit is a good choice because it is commercially available and Supports verified refinement to executables in a commercial programming language (C). We convert the mode transition table in the example SCR specification to an AMN (Abstract Machine Notation) specification and analyze the result with the B-Toolkit. The B-Toolkit generates 120 proof obligations of which 113 are automatically discharged by the theorem prover. The remaining 7 proof obligations are, in fact, not theorems and correspond to the 3 problems in the SCR specification detected by the model checking approaches. For the corrected SCR specification, the B-Toolkit automatically discharges all proof obligations. The example shows that even simple theorem provers are a viable approach to automated analysis for SCR specifications.","PeriodicalId":377266,"journal":{"name":"Proceedings of COMPASS '97: 12th Annual Conference on Computer Assurance","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115586387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}