A rough set based self-adaptive Web search engine
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960642
Baowen Xu, Weifeng Zhang, Hongji Yang, W. Chu
Web search engines are very useful information service tools on the Internet. Current Web search engines produce search results based on the search terms and on the information the engines have collected. Since users' selections among the search results cannot affect future results, the results may not cover most people's interests. In this paper, feedback information derived from users' access lists is represented by a rough set and is allowed to influence the search results, so that the search engine becomes self-adaptive.
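To make the feedback idea above concrete, here is a minimal Python sketch, assuming documents are grouped into equivalence classes by a content signature (a topic label here) and that access logs record which results users actually opened; the function names, topic labels, and boost-by-region policy are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch (not the paper's implementation): re-rank a result list
# using the rough-set lower approximation / boundary region of the set of
# documents users actually accessed, where documents sharing a topic label
# are treated as indiscernible.
from collections import defaultdict

def rerank(results, doc_topics, accessed_docs):
    """results: doc ids in the engine's original order.
    doc_topics: dict doc_id -> topic label; same label = indiscernible.
    accessed_docs: set of doc ids opened by users for this query."""
    classes = defaultdict(list)
    for doc in results:
        classes[doc_topics.get(doc)].append(doc)

    lower, boundary, outside = [], [], []
    for docs in classes.values():
        hits = [d for d in docs if d in accessed_docs]
        if len(hits) == len(docs):      # class wholly inside the accessed set
            lower.extend(docs)
        elif hits:                      # class straddles the accessed set
            boundary.extend(docs)
        else:
            outside.extend(docs)

    # Boost the lower approximation, then the boundary region, keeping the
    # engine's original order within each region.
    order = {doc: i for i, doc in enumerate(results)}
    return (sorted(lower, key=order.get)
            + sorted(boundary, key=order.get)
            + sorted(outside, key=order.get))

results = ["d1", "d2", "d3", "d4"]
doc_topics = {"d1": "sports", "d2": "sports", "d3": "news", "d4": "news"}
print(rerank(results, doc_topics, {"d3", "d4"}))   # ['d3', 'd4', 'd1', 'd2']
```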
An architectural model for service-based flexible software
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960609
K. Bennett, Jie Xu, M. Munro, Zhuang Hong, P. Layzell, N. Gold, D. Budgen, P. Brereton
The urgent need to change software easily to meet evolving business requirements requires a radical shift in the development of software, with a more demand-centric view leading to software which will be delivered as a service, within the framework of an open marketplace. We describe a service architecture and its rationale, in which components may be bound instantly, just at the time they are needed, and then the binding may be disengaged. This allows highly flexible software services to be evolved in "internet time". The paper focuses on early results: some of the aims have been demonstrated and amplified through an experimental implementation based on e-Speak, an existing and available technology. It is concluded that technology such as e-Speak provides a useful infrastructure that rapidly enabled us to demonstrate the basic operation and viability of our approach.
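As a loose illustration of per-use binding (the paper's e-Speak-based implementation is not reproduced here), the following Python sketch binds a provider from a registry only at the moment of each call and retains no reference afterwards; the Registry class, the capability string, and the providers are hypothetical.

```python
# Hypothetical sketch of late, per-call service binding in an open marketplace:
# providers advertise a capability, a client binds one only when it needs it,
# uses it once, and holds nothing between calls.
import random

class Registry:
    def __init__(self):
        self._providers = {}                       # capability -> list of callables

    def advertise(self, capability, provider):
        self._providers.setdefault(capability, []).append(provider)

    def bind(self, capability):
        candidates = self._providers.get(capability)
        if not candidates:
            raise LookupError(f"no provider advertises {capability!r}")
        return random.choice(candidates)           # chosen only at the moment of need

def call_service(registry, capability, *args):
    provider = registry.bind(capability)           # bind just-in-time ...
    result = provider(*args)                       # ... use ...
    return result                                  # ... and retain no binding afterwards

registry = Registry()
registry.advertise("currency.convert", lambda amount: round(amount * 0.79, 2))
registry.advertise("currency.convert", lambda amount: round(amount * 0.80, 2))
print(call_service(registry, "currency.convert", 100.0))
```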
A memory-based reasoning approach for assessing software quality
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960603
R. Paul, Venkata U. B. Challagulla, F. Bastani, I. Yen
Several methods have been explored for assuring the reliability of mission critical systems (MCS), but no single method has proved to be completely effective. This paper presents an approach for quantifying the confidence in the probability that a program is free of specific classes of defects. The method uses memory-based reasoning techniques to admit a variety of data from a variety of projects for the purpose of assessing new systems. Once a sufficient amount of information has been collected, the statistical results can be applied to programs that are not in the analysis set to predict their reliabilities and guide the testing process. The approach is applied to the analysis of Y2K defects based on defect data generated using fault-injection simulation.
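A minimal sketch of memory-based reasoning over defect data, assuming past modules are stored as (metric vector, defective?) cases and a new module is assessed from its k nearest neighbours; the metric set, the distance function, and k are illustrative assumptions rather than the paper's configuration.

```python
# Illustrative memory-based reasoning: the "memory" is a list of past cases,
# and the defect probability of a new module is the defective fraction of its
# k nearest neighbours in metric space.
import math

def knn_defect_probability(history, module, k=5):
    """history: list of (metrics: tuple[float, ...], defective: bool).
    module:  metrics tuple for the module being assessed."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    nearest = sorted(history, key=lambda case: distance(case[0], module))[:k]
    return sum(1 for _, defective in nearest if defective) / len(nearest)

# Metrics here are assumed to be (lines of code, fan-out, comment ratio).
history = [((120, 4, 0.30), True),  ((80, 2, 0.10), False),
           ((200, 9, 0.60), True),  ((60, 1, 0.05), False),
           ((150, 5, 0.40), True),  ((90, 3, 0.20), False)]
print(knn_defect_probability(history, (140, 5, 0.35), k=3))
```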
Empirical software simulation for COTS glue code development and integration
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960630
Jongmoon Baik, N. Eickelmann, Chris Abts
The use of COTS (commercial-off-the-shelf) components for system development has been a growing trend in the software engineering community during the past decade. However, the engineering of COTS-based systems involves significant technical risks. A good indicator of the as yet unresolved difficulties in COTS-based system development is the frequently poor understanding of the processes associated with developing the "glue code" used to integrate COTS components into the system, as well as of the other integration activities. The main objective of the paper is to understand, for given parameters, how the glue code development process and the COTS component integration process affect each other and how they affect development effort and schedule throughout the development life cycle. This is pursued via software simulation, which provides a way to check our understanding of the real-world process and thus helps produce better results in the future.
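Purely to illustrate the kind of coupling the paper studies, here is a toy simulation loop in which undiscovered glue-code defects slow COTS integration and integration work in turn triggers glue-code rework; every rate, size, and feedback coefficient is an invented assumption, not calibrated against the paper's data.

```python
# Toy coupled-effort simulation (illustrative only): glue-code progress injects
# latent defects, latent defects slow integration, and integration surfaces
# defects as extra glue-code rework.
def simulate(glue_size=10.0, integration_size=20.0,
             glue_rate=1.0, integration_rate=2.0,
             defect_density=0.3, rework_per_defect=0.5, steps=100):
    glue_left, integration_left = glue_size, integration_size
    latent_defects, effort = 0.0, 0.0
    for _ in range(steps):
        if glue_left <= 0 and integration_left <= 0:
            break
        # Glue-code development injects latent defects proportionally to progress.
        glue_done = min(glue_rate, glue_left)
        glue_left -= glue_done
        latent_defects += defect_density * glue_done
        # Integration slows down while latent defects remain, and surfaces
        # some of them as additional glue-code rework.
        slowdown = 1.0 / (1.0 + latent_defects)
        integration_done = min(integration_rate * slowdown, integration_left)
        integration_left -= integration_done
        found = 0.2 * latent_defects
        latent_defects -= found
        glue_left += rework_per_defect * found
        effort += glue_done + integration_done
    return effort

print(simulate())   # rough total effort under the assumed parameters
```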
Automated generation of statistical test cases from UML state diagrams
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960618
P. Chevalley, P. Thévenod-Fosse
The adoption of object-oriented (OO) technology for the development of critical software raises important testing issues. This paper addresses one of these issues: how to create effective tests from OO specification documents. More precisely, the paper describes a technique that adapts a probabilistic method, called statistical functional testing, to the generation of test cases from UML state diagrams, using transition coverage as the testing criterion. Emphasis is put on defining an automatic way to produce both the input values and the expected outputs. The technique is automated with the aid of Rational Software Corporation's Rose RealTime tool. An industrial case study from the avionics domain, formally specified and implemented in Java, is used to illustrate the feasibility of the technique at the subsystem level. Results of initial test experiments are presented to exemplify the fault-revealing power of the generated statistical test cases.
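A minimal sketch of statistical test generation driven by transition coverage, assuming the state diagram has been flattened to a transition table and inputs are drawn from a uniform profile; the example machine, the walk length, and the stopping rule are assumptions, not the Rose RealTime-based implementation.

```python
# Illustrative statistical test generation: random walks over a state machine
# are collected until every transition has been exercised; each step records
# the expected target state as the test oracle.
import random

def generate_tests(transitions, initial, max_len=20, max_tests=100, seed=0):
    """transitions: dict state -> list of (event, next_state)."""
    rng = random.Random(seed)
    uncovered = {(s, e, t) for s, pairs in transitions.items() for e, t in pairs}
    tests = []
    while uncovered and len(tests) < max_tests:
        state, test = initial, []
        for _ in range(max_len):
            choices = transitions.get(state, [])
            if not choices:
                break
            event, nxt = rng.choice(choices)          # uniform input profile
            test.append((event, nxt))                 # (input, expected state)
            uncovered.discard((state, event, nxt))
            state = nxt
        tests.append(test)
    return tests

machine = {"Idle":   [("start", "Active")],
           "Active": [("pause", "Paused"), ("stop", "Idle")],
           "Paused": [("resume", "Active"), ("stop", "Idle")]}
for test in generate_tests(machine, "Idle"):
    print(test)
```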
A knowledge-oriented clustering technique based on rough sets
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960679
S. Hirano, S. Tsumoto
Presents a knowledge-oriented clustering method based on rough set theory. The method evaluates the simplicity of classification knowledge during the clustering process and produces readable clusters reflecting the global features of the objects. The method uses a newly introduced measure, the indiscernibility degree, to evaluate the importance of equivalence relations, which relates to the roughness of the classification knowledge. The indiscernibility degree is defined as the ratio of equivalence relations that give a common classification to the two objects under consideration. Two objects can be classified into the same class if they have a high indiscernibility degree, even in the presence of equivalence relations that differentiate them. Ignoring such equivalence relations amounts to generalizing the knowledge, and it yields simple clusters that can be represented by simple knowledge. An experiment was performed on artificially created numerical data sets. The results showed that objects were classified into the expected clusters when this modification was applied, whereas they were split into many small categories without it.
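Following the definition quoted above, here is a small sketch of the indiscernibility degree as the fraction of equivalence relations that classify two objects identically; the attribute-wise relations, the objects, and the merge threshold are illustrative assumptions.

```python
# Illustrative indiscernibility degree: the fraction of equivalence relations
# (here, attribute-wise partitions) that put two objects into the same class.
def indiscernibility_degree(x, y, relations):
    """relations: list of functions mapping an object to its class label."""
    same = sum(1 for classify in relations if classify(x) == classify(y))
    return same / len(relations)

# Objects are dicts of numeric attributes; each relation bins one attribute.
relations = [
    lambda o: o["a"] < 5,      # partition induced by attribute a
    lambda o: o["b"] < 5,      # partition induced by attribute b
    lambda o: o["c"] < 5,      # partition induced by attribute c
]
x, y = {"a": 1, "b": 2, "c": 9}, {"a": 2, "b": 3, "c": 1}
degree = indiscernibility_degree(x, y, relations)
print(degree)   # 2/3: the relations on a and b agree, the one on c does not

# A clustering step might merge x and y when the degree exceeds a threshold,
# ignoring the dissenting relation(s) and thereby generalizing the knowledge.
if degree >= 0.6:
    print("merge x and y into one cluster")
```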
An election based approach to fault-tolerant group membership in collaborative environments
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960617
James S. Pascoe, R. Loader, V. Sunderam
In this paper we present a novel approach to fault-tolerant group membership for use predominantly in collaborative computing environments. As an exemplar, we use the Collaborative Computing Transport Layer, which offers reliable atomic multicast capabilities for use in collaborative environments such as the Collaborative Computing Frameworks (CCF). Specific design goals of the approach are the elimination of processing overhead due to heartbeats, support for partial failures, and extensibility. These goals are satisfied in an approach which uses an IP multicast failure detector and two election-based algorithms. By basing failure detection on IP multicast, the need for explicit keep-alive packets is removed; thus, in the absence of failures, the approach imposes no overhead.
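As a rough, single-process illustration of the heartbeat-free idea (not the CCTL protocol itself), the sketch below records when ordinary multicast traffic was last seen from each peer, flags silent peers as suspects, and evicts a suspect once a majority of the remaining members has voted; the class, the silence window, and the vote rule are assumptions.

```python
# Illustrative membership view without keep-alive packets: failure suspicion is
# driven by the absence of ordinary application multicast traffic, and removal
# is decided by a majority vote of the surviving members.
import time

class MembershipView:
    def __init__(self, members, silence_window=5.0):
        self.members = set(members)
        self.silence_window = silence_window
        self.last_seen = {m: time.monotonic() for m in members}
        self.votes = {}                              # suspect -> set of voters

    def observe_traffic(self, sender):
        """Called whenever any application multicast arrives from `sender`."""
        self.last_seen[sender] = time.monotonic()

    def suspects(self):
        now = time.monotonic()
        return {m for m in self.members
                if now - self.last_seen[m] > self.silence_window}

    def cast_vote(self, voter, suspect):
        self.votes.setdefault(suspect, set()).add(voter)
        # Majority of the members other than the suspect is enough.
        needed = (len(self.members) - 1) // 2 + 1
        if len(self.votes[suspect]) >= needed:
            self.members.discard(suspect)
            self.votes.pop(suspect, None)
            return True                              # membership change committed
        return False
```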
Model driven process engineering
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960620
Erwan Breton, J. Bézivin
Within the domain of information systems, one of the main technological moves we may presently observe in industrial circles is the paradigm shift from object composition to model transformation. This is happening at a very fast pace. Among the concrete signs of this evolution one may mention the ongoing change at the Object Management Group from the OMA (Object Management Architecture) to the MDA (Model-Driven Architecture). A new information system landscape is emerging that will be more model-centered than object-oriented. Within this context, the information engineer will use and produce many models of low granularity and high abstraction. These models may describe static or dynamic aspects and may accordingly be characterized as product or process models. This paper focuses more particularly on the recent evolution of process models. It shows some aspects of the industrial state of the art in the domain and suggests some benefits that could be reaped from a well-mastered use of these techniques.
Investigating reinspection decision accuracy regarding product-quality and cost-benefit estimates
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960602
S. Biffl, M. Halling
After a software inspection, the project manager has to decide whether a product has sufficient quality to pass on to the next software development stage or whether a second inspection cycle, a reinspection, is likely to sufficiently improve its quality. Recent research on the reinspection decision has focused on estimating product quality after inspection, which does not take into account the effect of a reinspection. We therefore propose to use estimation models for the quality improvement during reinspection and for the cost and benefit of a reinspection as the basis for the reinspection decision. We evaluate the decision correctness of these models with time-stamped defect data from a large-scale controlled experiment on the inspection and reinspection of a software requirements document. The main finding of the investigation is that the product-quality criterion is likely to force products to be reinspected if a large number of defects were detected in the first inspection. Further, the product-quality criterion is especially sensitive to an underestimation of the number of defects in the product and will let bad products pass as good. The cost-benefit criterion is less sensitive to estimation error than the product-quality criterion and should in practice be used as a second opinion when a product-quality criterion is applied.
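To spell out the two criteria being compared, here is a small sketch in which the product-quality criterion triggers reinspection when the estimated number of remaining defects exceeds a threshold, while the cost-benefit criterion triggers it when the expected value of the defects a reinspection would remove exceeds its cost; the estimator, effectiveness figure, and cost figures are illustrative assumptions, not the experiment's calibrated values.

```python
# Illustrative reinspection decision criteria (assumed parameterization).
def product_quality_criterion(estimated_remaining_defects, quality_threshold):
    """Reinspect if the product is still estimated to hold too many defects."""
    return estimated_remaining_defects > quality_threshold

def cost_benefit_criterion(estimated_remaining_defects, reinspection_effectiveness,
                           cost_per_escaped_defect, reinspection_cost):
    """Reinspect if the defects a reinspection would remove are worth more
    than the reinspection itself costs."""
    expected_benefit = (estimated_remaining_defects * reinspection_effectiveness
                        * cost_per_escaped_defect)
    return expected_benefit > reinspection_cost

remaining = 12   # e.g. from a capture-recapture estimate after the first inspection
print(product_quality_criterion(remaining, quality_threshold=5))
print(cost_benefit_criterion(remaining, reinspection_effectiveness=0.4,
                             cost_per_escaped_defect=8.0, reinspection_cost=30.0))
```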
Requirement-based automated black-box test generation
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960658
L. Tahat, A. Bader, Boris Vaysburg, B. Korel
Testing large software systems is very laborious and expensive. Model-based test generation techniques are used to automatically generate tests for large software systems. However, these techniques require manually created system models that are used for test generation. In addition, the generated test cases are not associated with individual requirements. In this paper, we present a novel approach to requirement-based test generation. The approach accepts a software specification as a set of individual requirements expressed in textual and SDL formats (a common practice in the industry). From these requirements, a system model is automatically created with requirement information mapped onto the model. The system model is then used to automatically generate test cases related to individual requirements. Several test generation strategies are presented. The approach is extended to requirement-based regression test generation for changes at the requirement level. Our initial experience shows that this approach may provide significant benefits in terms of a reduction in the number of test cases and an increase in the quality of the test suite.
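A brief sketch of the requirement traceability this enables, assuming each model transition is tagged with the requirement it realises: generated tests record which requirements they exercise, and regression selection keeps only the tests that touch changed requirements; the model, tags, and test cases are made up for illustration.

```python
# Illustrative requirement traceability and regression selection.
def tests_to_requirements(tests, transition_requirements):
    """tests: dict test_id -> list of transition ids exercised by the test.
    transition_requirements: dict transition id -> requirement id."""
    return {test_id: {transition_requirements[t] for t in transitions}
            for test_id, transitions in tests.items()}

def select_regression_tests(tests, transition_requirements, changed_requirements):
    coverage = tests_to_requirements(tests, transition_requirements)
    return [test_id for test_id, reqs in coverage.items()
            if reqs & set(changed_requirements)]

transition_requirements = {"t1": "REQ-1", "t2": "REQ-2", "t3": "REQ-2", "t4": "REQ-3"}
tests = {"TC-01": ["t1", "t2"], "TC-02": ["t3"], "TC-03": ["t4"]}
print(select_regression_tests(tests, transition_requirements, ["REQ-2"]))
# ['TC-01', 'TC-02'] -- only the tests tracing to the changed requirement rerun.
```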