Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960649
Jeong A. Kang, A. Cheng
A rule-based system must satisfy stringent timing constraints when applied to a real-time environment. The most critical performance factor in the implementation of a production system is the condition-testing algorithm. We present an approach that reduces the response time of rule-based expert systems by reducing RETE-based matching time. The method we propose has two steps: the first builds an index structure over the tokens to reduce the α-node-level join candidates; the second chooses the highest time tag for certain β-nodes to reduce the size of the β-memory while preserving the strategy of the RETE network. These steps reduce the combinatorial matching that is problematic in real-time production system applications.
{"title":"Reducing matching time for OPS5 production systems","authors":"Jeong A. Kang, A. Cheng","doi":"10.1109/CMPSAC.2001.960649","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960649","url":null,"abstract":"A rule-based system must satisfy stringent timing constraints when applied to a real-time environment. The most critical performance factor in the implementation of a production system is the condition-testing algorithm. We show an approach designed to reduce the response time of rule-based expert systems by reducing the matching time based on RETE. There are two steps in the method we propose: the first makes an index structure of the tokens to reduce the α-node-level join candidates; the second chooses the highest time tag for certain β-nodes to reduce the size of the β-memory and to keep the strategy of the RETE network. These steps reduce the amount of combinatorial matching that is problematic in a real-time production system application.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"167 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116591897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
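The token-indexing idea in the abstract above can be sketched in a few lines. This is not the authors' implementation: the class and attribute names are hypothetical, and it only illustrates how hashing α-memory tokens on the join attribute shrinks the candidate set a β-node join must consider.

```python
# Illustrative sketch (not the paper's code): index alpha-memory tokens by
# their join attribute so a beta join probes one hash bucket instead of
# scanning every token in the memory.
from collections import defaultdict

class AlphaMemory:
    def __init__(self, join_attr):
        self.join_attr = join_attr
        self.index = defaultdict(list)   # join value -> tokens

    def add(self, token):
        self.index[token[self.join_attr]].append(token)

    def candidates(self, value):
        # Only tokens sharing the join value are join candidates.
        return self.index[value]

wm = AlphaMemory("part")
wm.add({"part": "bolt", "time": 1})
wm.add({"part": "nut",  "time": 2})
wm.add({"part": "bolt", "time": 3})
print(len(wm.candidates("bolt")))  # 2 candidates instead of scanning 3 tokens
```

With many rules and a large working memory, the bucket probe replaces a linear scan, which is where the matching-time reduction would come from.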
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960634
H. Suganuma, Kinya Nakamura, Tsutomu Syomura
Proposes a new software testing and regression-testing process which includes the development of test cases and test data, and the multiple execution of such test cases. The process is practical enough for current emerging business application development, whose schedules are typically extremely short. In our process, most of the steps for testing are reversed: a test engineer starts from a test operation, which is followed by test data retrieval, test case retrieval, and matrix checklist creation by combining the test cases. In this way, we can significantly reduce the cost of preparing the automatic execution of test cases for regression testing. In addition, this technique enables the test engineer to develop test cases incrementally. The functionality of the tool to support this process is also presented. A preliminary case study of this technique on an actual software development project in our company showed improvements in the preparation of test execution as well as in the execution of regression testing.
{"title":"Test operation-driven approach on building regression testing environment","authors":"H. Suganuma, Kinya Nakamura, Tsutomu Syomura","doi":"10.1109/CMPSAC.2001.960634","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960634","url":null,"abstract":"Proposes a new software testing and regression testing process which includes the development of test cases and test data, and the multiple execution of such test cases. The process is practical enough for the current emerging business application development whose schedules are typically extremely short. In our process, most of the steps for testing are reversed. That is, a test engineer starts from a test operation, which is followed by test data retrieval, test case retrieval and matrix checklist creation by combining the test cases. In this way, we can significantly reduce the cost of preparing the automatic execution of test cases for regression testing. In addition, this technique enables the test engineer to develop test cases incrementally. The functionality of the tool to support this process is also presented. A preliminary case study of this technique on an actual software development project in our company showed improvements in the preparation of the test execution as well as the execution of regression testing.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130210388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
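The reversed, operation-first flow described in the abstract (operation → data → case → checklist) can be sketched roughly as follows. All function and field names here are invented for illustration; the paper's tool works on real recorded operations, not these toy dictionaries.

```python
# Hypothetical sketch of the operation-first flow: record an executed test
# operation with its observed result, derive a test case from it, and fold
# cases into a matrix-style checklist that drives regression runs.
def record_operation(name, inputs, observed):
    # the observed result of the recorded operation becomes the expectation
    return {"op": name, "inputs": inputs, "expected": observed}

def build_checklist(cases):
    # rows keyed by (operation, input set) -> expected result
    return {(c["op"], tuple(sorted(c["inputs"].items()))): c["expected"]
            for c in cases}

def regression_run(checklist, execute):
    # replay every recorded operation and compare against expectations
    return all(execute(op, dict(inp)) == expected
               for (op, inp), expected in checklist.items())

op = record_operation("login", {"user": "alice", "pw": "x"}, "welcome")
checklist = build_checklist([op])
print(regression_run(checklist, lambda op, inputs: "welcome"))  # True
```

The point of the reversal is visible even in the sketch: the expected result is captured from a working run, so no test case has to be written before the operation exists.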
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960668
Bo Zhang, Ke Ding, Jing Li
Current component interfaces, such as those defined by CORBA, lack additional constraints that would help in the task of composing software; therefore, the dependency relations and interaction protocols among components remain hidden within the implementations. This implicitness reduces composability and makes it difficult to reuse, validate and manage components. This paper presents a message-based architectural model and an XML-message-based Architecture Description Language (XADL). XADL promotes the description of dependency relations from the implementation level to the architectural level and enriches the interface specification with sequencing, behavior and quality constraints. Furthermore, we formulate the notion of architectural mismatch and propose a checking algorithm to protect systems from potential mismatches. The use of XML facilitates not only the specification descriptions but also the system implementations, and increases the openness of the whole system as well.
{"title":"An XML-message based architecture description language and architectural mismatch checking","authors":"Bo Zhang, Ke Ding, Jing Li","doi":"10.1109/CMPSAC.2001.960668","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960668","url":null,"abstract":"Current component interfaces, such as those defined by CORBA, lack additional constraints that would help in the task of composing software; therefore the dependency relations and interaction protocols among components are hidden within the implementations. This \"implicitness\" reduces the composability and makes it difficult to reuse, validate and manage components. This paper presents a message based architectural model and an XML-message based Architecture Description Language (XADL). XADL promotes the description of dependency relations from the implementation level to the architectural level and enhances the interface specification by adding the sequencing, behavior and quality constraints. Furthermore, we formulate the notion of architectural mismatch and propose a checking algorithm to prevent systems from potential mismatches. The use of XML facilitates not only the specification descriptions but also the system implementations, and increases the openness of the whole system as well.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130880168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
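The sequencing constraints and mismatch check that the abstract describes can be illustrated with a tiny protocol automaton. This is a hedged sketch, not XADL: the automaton encoding and the `open`/`read`/`close` interface are invented here, but the idea — validate a message sequence against an interface's declared protocol before composing components — is the one the paper lifts to the architectural level.

```python
# Hedged illustration of sequencing-constraint checking: an interface carries
# a small protocol automaton (state, message) -> next state, and a proposed
# interaction sequence is validated against it before composition.
def accepts(protocol, start, sequence):
    state = start
    for msg in sequence:
        if (state, msg) not in protocol:
            return False            # architectural mismatch: msg not allowed here
        state = protocol[(state, msg)]
    return True

# invented interface protocol: open must precede read/write; close ends it
proto = {("idle", "open"): "ready",
         ("ready", "read"): "ready",
         ("ready", "write"): "ready",
         ("ready", "close"): "idle"}

print(accepts(proto, "idle", ["open", "read", "close"]))   # True
print(accepts(proto, "idle", ["read"]))                    # False: mismatch
```

A mismatch checker over such declared protocols can reject an ill-ordered composition statically, which is exactly what stays invisible when the protocol lives only in the implementation.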
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960652
W. Chu, Jing Chen, Chun-Yuan Lee, Hongii Yang
Agent technology is becoming increasingly important because of its generality, flexibility, and modularity. Moreover, design patterns and software architectures are attracting attention in object-oriented software development. The authors propose a framework for an agent system, which is based on N-tier architecture and design patterns. A negotiation agent system is presented to show the feasibility of applying this framework.
{"title":"Implementing an agent system using N-tier pattern-based framework","authors":"W. Chu, Jing Chen, Chun-Yuan Lee, Hongii Yang","doi":"10.1109/CMPSAC.2001.960652","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960652","url":null,"abstract":"Agent technology is becoming increasingly important because of its generality, flexibility, and modularity. Moreover, design patterns and software architectures are attracting attention in object-oriented software development. The authors propose a framework for an agent system, which is based on N-tier architecture and design patterns. A negotiation agent system is presented to show the feasibility of applying this framework.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134316966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960661
Don-Lin Yang, Ching-Ting Pan, Yeh-Ching Chung
Association rule mining can be divided into two steps. The first step finds all frequent itemsets, whose occurrence counts are greater than or equal to a user-specified threshold. The second step generates reliable association rules from the frequent itemsets found in the first step. Identifying all frequent itemsets in a large database dominates the overall performance of association rule mining. In this paper, we propose an efficient hash-based method, HMFS, for discovering maximal frequent itemsets. The HMFS method combines the advantages of the DHP (Direct Hashing and Pruning) and Pincer-Search algorithms. This combination yields two advantages. First, the HMFS method can, in general, reduce the number of database scans. Second, HMFS can filter out infrequent candidate itemsets and use the filtered itemsets to find the maximal frequent itemsets. These two advantages reduce the overall computing time of finding the maximal frequent itemsets. In addition, the HMFS method provides an efficient mechanism for constructing the maximal frequent candidate itemsets, which reduces the search space. We have implemented the HMFS method along with the DHP and Pincer-Search algorithms on a Pentium III 800 MHz PC. The experimental results show that the HMFS method outperforms the DHP and Pincer-Search algorithms for most of the test cases. In particular, our method shows significant improvement over the DHP and Pincer-Search algorithms when the database is large and the longest itemset is relatively long.
{"title":"An efficient hash-based method for discovering the maximal frequent set","authors":"Don-Lin Yang, Ching-Ting Pan, Yeh-Ching Chung","doi":"10.1109/CMPSAC.2001.960661","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960661","url":null,"abstract":"The association rule mining can be divided into two steps. The first step is to find out all frequent itemsets, whose occurrences are greater than or equal to the user-specified threshold. The second step is to generate reliable association rules based on all frequent itemsets found in the first step. Identifying all frequent itemsets in a large database dominates the overall performance in the association rule mining. In this paper, we propose an efficient hash-based method, HMFS, for discovering the maximal frequent itemsets. The HMFS method combines the advantages of both the DHP (Direct Hashing and Pruning) and the Pincer-Search algorithms. The combination leads to two advantages. First, the HMFS method, in general, can reduce the number of database scans. Second, the HMFS can filter the infrequent candidate itemsets and can use the filtered itemsets to find the maximal frequent itemsets. These two advantages can reduce the overall computing time of finding the maximal frequent itemsets. In addition, the HMFS method also provides an efficient mechanism to construct the maximal frequent candidate itemsets to reduce the search space. We have implemented the HMFS method along with the DHP and the Pincer-Search algorithms on a Pentium III 800 MHz PC. The experimental results show that the HMFS method has better performance than the DHP and the Pincer-Search algorithms for most of the test cases. In particular, our method has significant improvement over the DHP and the Pincer-Search algorithms when the size of a database is large and the length of the longest itemset is relatively long.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"242 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133715011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
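The two notions the abstract relies on, frequent itemsets (support at or above a threshold) and maximal frequent itemsets (no frequent proper superset), can be made concrete with a brute-force sketch. This is not HMFS, DHP, or Pincer-Search; it is only a small definition-level illustration on a toy transaction set.

```python
# Brute-force illustration of "frequent" and "maximal frequent" itemsets.
# Exponential in the number of items, so usable only on toy data; HMFS exists
# precisely to avoid enumerating this space.
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= t)
            if support >= min_support:
                frequent.append(frozenset(cand))
    return frequent

def maximal(frequent):
    # frequent itemsets with no frequent proper superset
    return [f for f in frequent if not any(f < g for g in frequent)]

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
freq = frequent_itemsets(txns, min_support=2)
print(sorted(sorted(s) for s in maximal(freq)))  # [['a', 'b'], ['a', 'c'], ['b', 'c']]
```

Here {a, b, c} has support 1 and drops out, so the three pairs are maximal even though every singleton is also frequent; the maximal set compactly encodes all frequent itemsets, which is why algorithms target it.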
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960647
T. P. Plaks
Efficient parallelization and mapping of algorithms onto regular arrays requires algebraic transformations. The paper considers the Iso-plane method for a class of algebraic transformations on the polytope model of an algorithm. This method partitions the ranges of loop indices in order to increase the dimensionality of the problem representation, together with a specific reordering of computations. As a result, higher-dimensional arrays with improved time complexity are produced.
{"title":"Algebraic transformations in regular array design","authors":"T. P. Plaks","doi":"10.1109/CMPSAC.2001.960647","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960647","url":null,"abstract":"Efficient parallelizing and mapping of algorithms onto regular arrays requires algebraic transformations. The paper considers the Iso-plane method for a class of algebraic transformations on the polytope model of algorithm. This method uses the partitioning of the ranges of loop indices in order to increase the dimensionality of the problem representation and a specific reordering of computations. As a result, the higher dimensional arrays with improved time complexity are produced.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129604747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
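A loose flavour of what "partitioning the ranges of loop indices to increase dimensionality" means: splitting a 1-D iteration space into tiles turns one loop into a 2-D nest (tile index × offset) that computes the same result under a simple index map. This sketch is only an analogy for the reindexing step, not the Iso-plane method itself.

```python
# Partition a 1-D index range 0..N-1 into N//T tiles of size T: the loop nest
# gains a dimension (t, j) while the index map i = t*T + j preserves results.
N, T = 12, 4
a = list(range(N))

flat = [x * x for x in a]          # original 1-D loop

tiled = [0] * N
for t in range(N // T):            # new outer (tile) dimension
    for j in range(T):             # inner offset within the tile
        i = t * T + j              # map back into the 1-D iteration space
        tiled[i] = a[i] * a[i]

print(tiled == flat)  # True: same computation, higher-dimensional loop nest
```

In regular-array design the payoff of such reindexings is that the added dimensions can be assigned to processors or to time, which is what the paper's transformations optimize.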
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960622
Y. Domaratsky, Maxim Perevozchikov, Alexander Ingulets, Alexander Alkhovik
A new generation of highly dependable real-time control systems (such as automotive brake-by-wire and steer-by-wire) is under development. Specific application-domain requirements lead to new features that must be supported by the system software. These requirements are best supported by a time-triggered approach. Motorola is working on time-triggered fault-tolerant communication hardware and participates in a software standardization committee. This article covers back-end system software for highly dependable real-time control systems, including the operating system, the fault-tolerant communication layer and node-local configuration tools. System requirements, implementation strategy, the communication scheme and the system configuration mechanism are discussed.
{"title":"Back-end software for highly dependable real-time control systems","authors":"Y. Domaratsky, Maxim Perevozchikov, Alexander Ingulets, Alexander Alkhovik","doi":"10.1109/CMPSAC.2001.960622","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960622","url":null,"abstract":"A new generation of highly dependable real-time control systems (such as automotive brake-by-wire and steer-by-wire) is under development. Specific application domain requirements lead to the new features to be supported by the system software. These requirements are best supported by a time-triggered approach. Motorola is working on the time-triggered fault-tolerant communication hardware as well as participates in a software standardization committee. This article covers back-end system software for highly dependable real-time control systems including operating system, fault-tolerant communication layer and node-local configuration tools. System requirements, implementation strategy, communication scheme and system configuration mechanism are discussed.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132357403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960648
Hong Ki Thae, D. Hung
The authors present an approach to the design of hybrid systems that combines several comprehensive formalization techniques. We use Duration Calculus (DC) to specify the requirements and designs at the abstract level of system development. The high-level designs are then further refined using control theory. Formal verification may be done in DC where possible, or in predicate calculus using the semantics of DC or theorems from control theory. We demonstrate our techniques through a double-water-tank case study, one of the benchmark problems in modern process control engineering.
{"title":"A case study on formal design of hybrid control systems","authors":"Hong Ki Thae, D. Hung","doi":"10.1109/CMPSAC.2001.960648","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960648","url":null,"abstract":"The authors present an approach to the design of hybrid systems by a combination of several comprehensive formalization techniques. We use Duration Calculus (DC) to specify the requirement and design at abstract level of system development. Then the high level designs are further refined in control theory. A formal verification may be done either in DC if it is possible, or in predicate calculus using the semantics of DC or theorems from control theory. We show our techniques through a double water tank case study which is one of the benchmark problems for modern process control engineering.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134024137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
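To give a flavour of the kind of abstract-level DC requirement such a design starts from (the concrete formulas for the double tank are in the paper; this one is an invented analogue), a typical duration-style safety constraint bounds how long the state may leave a safe band within any sufficiently long observation interval:

```latex
% Illustrative Duration Calculus requirement, not taken from the paper:
% over every interval of length l >= T, the accumulated duration in which
% the water level is outside the safe band is at most a fraction eps of l.
\[
  \Box\bigl(\,\ell \ge T \;\Rightarrow\; \textstyle\int \neg\mathit{Safe} \;\le\; \varepsilon\,\ell\,\bigr)
\]
```

A controller refinement is then obligated, via the semantics of DC or control-theoretic arguments, to guarantee such a bound.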
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960654
James J. Han, Hairong Sun, H. Levendel
In this paper, we examine the availability requirement for the fault management server in high-availability communication systems. According to our study, we find that the availability of the fault management server does not need to be 99.999% in order to guarantee a 99.999% system availability as long as the fail-safe ratio (the probability that the failure of the fault management server will not bring the system down) and the fault coverage ratio (the probability that the failure in the system can be detected and recovered by the fault management server) are sufficiently high. Tradeoffs can be made among the availability of the fault management server, the fail-safe ratio and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed in this paper.
{"title":"Availability requirement for fault management server","authors":"James J. Han, Hairong Sun, H. Levendel","doi":"10.1109/CMPSAC.2001.960654","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960654","url":null,"abstract":"In this paper, we examine the availability requirement for the fault management server in high-availability communication systems. According to our study, we find that the availability of the fault management server does not need to be 99.999% in order to guarantee a 99.999% system availability as long as the fail-safe ratio (the probability that the failure of the fault management server will not bring the system down) and the fault coverage ratio (the probability that the failure in the system can be detected and recovered by the fault management server) are sufficiently high. Tradeoffs can be made among the availability of the fault management server, the fail-safe ratio and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed in this paper.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133813302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
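The tradeoff stated in the abstract can be made tangible with a deliberately simplified unavailability model. This is an assumption-laden sketch, not the paper's model: the two additive downtime terms and all the numeric inputs below are invented for illustration.

```python
# Hypothetical two-term unavailability model (NOT the paper's): downtime comes
# from (1) system faults the fault management server fails to cover, and
# (2) server failures that are not fail-safe. p_sys_fault is an assumed
# per-unit-time probability that the system raises a fault needing recovery.
def system_availability(a_srv, fail_safe, coverage, p_sys_fault=1e-3):
    u_faults = p_sys_fault * (1 - coverage)      # uncovered system faults
    u_server = (1 - a_srv) * (1 - fail_safe)     # non-fail-safe server failures
    return 1 - (u_faults + u_server)

# With 99.9% server availability but 99.9% fail-safe ratio and 99.9% coverage,
# the modeled system still clears five nines:
print(system_availability(0.999, 0.999, 0.999) >= 0.99999)  # True
```

Even in this toy model the paper's point shows up: pushing the fail-safe and coverage ratios up buys back system availability, so the server itself need not reach 99.999%.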
Pub Date: 2001-10-08 | DOI: 10.1109/CMPSAC.2001.960669
Shu‐Ching Chen, Chengcui Zhang, M. Shyu
In this paper, a novel unsupervised segmentation framework for texture image queries is presented. The proposed framework consists of an unsupervised segmentation method for texture images and a multi-filter query strategy. By applying the unsupervised segmentation method to each texture image, a set of texture feature parameters for that image can be extracted automatically. Based upon these parameters, an effective multi-filter query strategy which allows users to issue texture-based image queries is developed. Test results of the proposed framework on 318 texture images obtained from the MIT VisTex and Brodatz databases are presented to show its effectiveness.
{"title":"An unsupervised segmentation framework for texture image queries","authors":"Shu‐Ching Chen, Chengcui Zhang, M. Shyu","doi":"10.1109/CMPSAC.2001.960669","DOIUrl":"https://doi.org/10.1109/CMPSAC.2001.960669","url":null,"abstract":"In this paper a novel unsupervised segmentation framework for texture image queries is presented. The proposed framework consists of an unsupervised segmentation method for texture images, and a multi-filter query strategy. By applying the unsupervised segmentation method on each texture image, a set of texture feature parameters for that texture image can be extracted automatically. Based upon these parameters, an effective multi-filter query strategy which allows the users to issue texture-based image queries is developed. The test results of the proposed framework on 318 texture images obtained from the MIT VisTex and Brodatz database are presented to show its effectiveness.","PeriodicalId":269568,"journal":{"name":"25th Annual International Computer Software and Applications Conference. COMPSAC 2001","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134270581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}