New method for dispatching waiting logical processors in virtual machine system
H. Umeno, M. Kiyama, T. Fukunaga, Takashige Kubo
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.112
A virtual machine system can run multiple conventional operating systems (OSs) on a single real host computer. A virtual machine is a logical computer with almost the same architecture as the host and may contain several logical processors. A hypervisor is the control program that controls the virtual machine system. Traditionally, the hypervisor has to receive an I/O interrupt pending for a waiting logical processor and then simulate that interrupt, incurring the simulation overhead of the I/O interrupt. To avoid this overhead, we present a new method that introduces a self-wait state distinct from the conventional wait state, provides a new instruction with which the hypervisor detects I/O interrupts pending for logical processors in the self-wait state, and, upon detection, dispatches those logical processors ahead of the ready queue. This new method eliminates the simulation overhead of those I/O interrupts and brings system performance close to native.
Nearest neighbor queries on extensible grid files using dimensionality reduction
Ryosuke Miyoshi, T. Miura, I. Shioya
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.111
Many applications for spatial information now manage high-dimensional data. When nearest neighbor search is performed in these applications with a multi-dimensional indexing structure, very often all pages must be accessed once the dimensionality exceeds about 10. This is the curse of dimensionality, which says that any indexing structure is outperformed by a simple linear search. In this investigation we propose an access mechanism for high-dimensional data based on extensible grid files combined with a dimensionality reduction (DR) technique. We analyze the error introduced by DR and recover the search space in the original dimensionality. We examine nearest neighbor search and discuss empirical results that show the usefulness of our approach.
A formal approach to designing a class-subclass structure using a partial-order on the functions
S. Kundu, N. Gwee
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.23
We present a formal method for designing the class structure based on a partial order on the functions, which is derived from the use-relationship between the functions and the various data items. We can regard this method as an initial step in building a theory of refactoring and design patterns. Our method can identify the functions that should be factored into subfunctions, including their desired signatures and a reduced use-complexity, in order to simplify the class-subclass structure. A similar remark holds for the decomposition or consolidation of data items. We illustrate our method with several examples.
Scoped broadcast in dynamic peer-to-peer networks
Hung-Chang Hsiao, C. King
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.134
Scoped broadcast disseminates a message to all the nodes within a designated physical or logical region of an overlay network. It can serve as a basic building block for applications such as information search, data broadcasting, and overlay structure diagnostics. In this paper, we study scoped broadcast in peer-to-peer (P2P) overlay networks based on distributed hash tables (DHTs). Since P2P networks behave very dynamically as peers join and depart, it is interesting to know how many peers can be reached with one scoped broadcast. This depends mainly on the cost we are willing to pay to maintain the geometric structure of the DHT-based overlay; the maintenance cost is affected primarily by the failure detection and failure recovery mechanisms. We evaluated the effects of maintenance overhead on scoped broadcast via extensive simulations. The evaluation shows that it is important to exploit fresh nodes as a node's neighbors, and that cooperative failure discovery and recovery can efficiently and effectively disseminate scoped broadcast messages.
Extended symbolic transition graphs with assignment
Weijia Deng, Huimin Lin
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.76
An extension of symbolic transition graphs with assignment is proposed which combines the advantages of both the assignment-before-action and the assignment-after-action approaches: like the former, it allows a simple set of rules to be designed for generating finite symbolic graphs from regular value-passing process descriptions; like the latter, it avoids creating multiple copies in the graph for a recursive process definition. Experience shows that, in most cases, considerable reductions in verification time and space can be achieved using the new approach.
The vital few versus the trivial many: examining the Pareto principle for software
Mechelle Gittens, Y. Kim, David Godwin
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.153
This paper discusses the Pareto principle as it relates to the distribution of software defects in code. We look at evidence in the context of both the software test team and users of the software. We also investigate two related principles: first, that the distribution of defects in code relates to the distribution of complexity in code; and second, that how we define complexity relates to the distribution of defects in code. We present this work as an empirical study of three general hypotheses investigated for large production-level software, and we show that the essence of the principle holds while the precise percentages do not.
Software Test Selection Patterns and Elusive Bugs
W. Howden
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.143
Traditional white-box and black-box testing methods are effective in revealing many kinds of defects, but the more elusive bugs slip past them. Model-based testing incorporates additional application concepts in the selection of tests, which may provide more refined bug detection, but it does not go far enough. Test selection patterns identify defect-oriented contexts in a program, and they also identify suggested tests for the risks associated with a specified context. A context and its risks form a kind of conceptual trap designed to corner a bug; the suggested tests will find the bug if it has been caught in the trap.
Transitioning from product line requirements to product line architecture
J. Savolainen, I. Oliver, M. Mannion, Hailang Zuo
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.160
Software product line development is a compromise between customer requirements, existing product line architectural constraints, and commercial needs. Managing variability is the key to successful product line development. Product line models of requirements and features can be constructed that contain variation points. New products can be derived by making requirement selections from a product line model of requirements, but as the product line evolves, selections are constrained by the design of the existing product line architecture and the cost of making changes to it. We present a set of rules that map the selection constraint values of requirements to the selection constraint values of features, which in turn map onto the selection constraint values of architectural assets. We illustrate the application of the rules using a worked example.
Self-configuring communication middleware model for multiple network interfaces
N. Mohamed
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.138
Communication middleware such as MuniCluster provides high-level communication mechanisms for networked applications by hiding low-level communication details from the applications. The MuniCluster model provides mechanisms to enhance network performance through message separation and parallel transfer. However, configuring these services requires various measurements and setup steps to efficiently utilize the available multiple network interfaces. In this paper we introduce and evaluate a self-configuring model that allows applications to transparently utilize the existing multiple network interfaces and networks. We present enhancements to the MuniCluster model that add this self-configuration mechanism. By discovering network resources and deciding how to efficiently utilize the multiple networks, the model enhances overall communication performance. The proposed techniques deal with the heterogeneity of interfaces and networks to enhance communication performance transparently to the applications.
MetaWSL and meta-transformations in the FermaT transformation system
Martin P. Ward, H. Zedan
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.107
A program transformation is an operation which can be applied to any program satisfying the transformation's applicability conditions and which returns a semantically equivalent program. In the FermaT transformation system, program transformations are carried out in a wide-spectrum language called WSL, and the transformations themselves are written in an extension of WSL called MetaWSL, which was specifically designed as a domain-specific language for writing program transformations. As a result, FermaT is capable of transforming its own source code via meta-transformations. This paper introduces MetaWSL and describes some applications of meta-transformations in the FermaT system.