Xiaoqiang Qiao and Jun Wei. "Implementing Service Collaboration Based on Decentralized Mediation." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.18

Service collaboration allows more complex business logic to be realized from existing services. Because Web services are generally designed by different organizations, mismatches arise that prevent them from fitting together. A mediation mechanism therefore plays an important role in service collaboration: it guarantees seamless interaction without changing the internal implementation of the services. This paper proposes a comprehensive decentralized mediation framework for collaboration among multiple services across organizational boundaries. We also present a novel technique for mediation existence checking and mediator synthesis based on interaction paths, which not only reduces the complexity of mediator synthesis but also provides parallel sub-processes for the multiple interacting parts, preserving the mediator's degree of parallelism.
G. Fraser and Andrea Arcuri. "Evolutionary Generation of Whole Test Suites." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.19

Recent advances in software testing allow automatic derivation of tests that reach almost any desired point in the source code. There is, however, a fundamental problem with the general idea of targeting one distinct test coverage goal at a time: coverage goals are neither independent of each other, nor is test generation for any particular coverage goal guaranteed to succeed. We present EvoSuite, a search-based approach that optimizes whole test suites towards satisfying a coverage criterion, rather than generating distinct test cases directed towards distinct coverage goals. Evaluated on five open source libraries and an industrial case study, we show that EvoSuite achieves up to 18 times the coverage of a traditional approach targeting single branches, with up to 44% smaller test suites.
Chen-Wei Wang, Alessandra Cavarra, and J. Davies. "Formal and Model-Based Testing of Concurrent Workflows." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.27

The design of an information system will involve a number of structural and semantic integrity constraints. One way to ensure that these constraints are maintained is to calculate and implement a guard for each operation: a condition sufficient for all integrity constraints to be maintained, checked before the operation is performed; if the guard evaluates to false, the operation is blocked or rejected. The information required to calculate operation guards can also be used to calculate the effect of workflows: compositions or patterns of guarded operations. The multiplication of states and entities in arbitrary, parallel compositions of operations and workflows makes exhaustive analysis impractical. This paper shows how the precise specification of operations and workflows can instead be used to select particular scenarios for calculating effects at the model level, or for generating test cases at the implementation level. The result is an analysis and testing methodology for guarded workflows.
H. Decker. "Data Quality Maintenance by Integrity-Preserving Repairs that Tolerate Inconsistency." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.34

To maintain the quality of stored data, their integrity should be enforced. Repairing violations of integrity constraints contributes to integrity enforcement and thus to quality maintenance. Inconsistencies in databases are unavoidable, however, and repairing all of them is often infeasible. We show that it is possible to get by with partial repairs that tolerate extant inconsistencies while preserving the consistent parts of the database. Such repairs are also integrity-preserving: they reduce the number of integrity constraint violations and hence improve the quality of the stored data.
H. Kaiya and A. Ohnishi. "Quality Requirements Analysis Using Requirements Frames." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.21

Defining quality requirements completely and correctly is more difficult than defining functional requirements because stakeholders do not state most quality requirements explicitly. We therefore propose a method to measure a requirements specification and identify the amount of quality requirements it contains. We also propose a second method to recommend quality requirements that should be defined in such a specification. We expect stakeholders to identify missing and unnecessary quality requirements when the measured quality requirements differ from the recommended ones. We use a semi-formal language called X-JRDL to represent requirements specifications because it is well suited to analyzing quality requirements. We applied our methods to a requirements specification and found that they help define quality requirements more completely and correctly.
A. Kazemi, A. Rostampour, A. Zamiri, Pooyan Jamshidi, H. Haghighi, and F. S. Aliee. "An Information Retrieval Based Approach for Measuring Service Conceptual Cohesion." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.24

High cohesion is a desirable principle in software design with a strong impact on software reuse, maintenance, and support. In service-oriented architecture (SOA), the focus of a service on a single business functionality is defined as conceptual cohesion. Current metrics for measuring service cohesion mostly reflect the structural aspect of cohesion and therefore cannot be used to measure the conceptual cohesion of services. Latent Semantic Indexing (LSI), on the other hand, is an information retrieval technique widely used to measure the degree of similarity between a set of text-based documents. In this paper, a metric named SCD is proposed that measures the conceptual cohesion of services based on the LSI technique. The metric considers both service functionality and operation sequence to measure conceptual cohesion. An evaluation of the metric against a set of cohesion principles and a comparison with previously proposed metrics are also provided.
J. Markovski. "Saving Time in a Space-Efficient Simulation Algorithm." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.26

We present an efficient algorithm for computing the simulation preorder and equivalence for labeled transition systems. The algorithm builds on an existing space-efficient algorithm and improves its time complexity by employing a variant of the stability condition and exploiting properties of the underlying relations and partitions. It has space and time complexity comparable to the most efficient counterpart algorithms for Kripke structures.
M. Zhigulin, N. Yevtushenko, S. Maag, and A. Cavalli. "FSM-Based Test Derivation Strategies for Systems with Time-Outs." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.30

The paper presents a method for deriving a complete test suite for a deterministic timed FSM with time-outs when only the upper bound on the number of states and the largest finite time-out at a state of the implementation under test are known. We also show that a test suite derived for a corresponding classical FSM is much longer than one obtained directly from an FSM with time-outs. A case study, the Loan Approval Service, illustrates how our approach can be applied to derive tests for compositions of timed FSMs.
Bandar M. Alshammari, C. Fidge, and D. Corney. "A Hierarchical Security Assessment Model for Object-Oriented Programs." 2011 11th International Conference on Quality Software (QSIC 2011). doi:10.1109/QSIC.2011.31

We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which `classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are mapped to well-known security design principles such as `assigning the least privilege' and `reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open-source Java programs, using a static analysis tool we developed to automatically extract the security metrics from compiled Java bytecode.