Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5365886
Xiang Chen, Qing Gu, Xinping Wang, Ang Li, Daoxu Chen
Traditional interaction testing aims to build test suites that cover all t-way interactions of inputs. In many test scenarios, however, the entire test suite cannot be fully run due to a limited budget. It is therefore necessary to take the importance of interactions into account and prioritize the tests in the suite. In this paper, we use a hybrid approach to build prioritized pairwise interaction test suites (PITS). It adopts a one-test-at-a-time strategy to construct the final test suite: to generate a single test, it first generates a candidate test and then applies a specific metaheuristic search strategy to enhance it. We experiment with four different metaheuristic search strategies. In the experiments, we compare our approach to the weighted density algorithm (WDA), and we also analyze the effectiveness of the four search strategies and of increasing the number of iterations. Empirical results demonstrate the effectiveness of the proposed approach. Keywords: prioritized interaction test suites; greedy algorithm; metaheuristic search
{"title":"A Hybrid Approach to Build Prioritized Pairwise Interaction Test Suites","authors":"Xiang Chen, Qing Gu, Xinping Wang, Ang Li, Daoxu Chen","doi":"10.1109/CISE.2009.5365886","DOIUrl":"https://doi.org/10.1109/CISE.2009.5365886","url":null,"abstract":"Traditional interaction testing aims to build test suites that cover all t-way interactions of inputs. But in many test scenarios, the entire test suites cannot be fully run due to the limited budget. Therefore it is necessary to take the importance of interactions into account and prioritize these tests of the test suite. In the paper, we use the hybrid approach to build prioritized pairwise interaction test suites (PITS). It adopts a one-test-at-a-time strategy to construct final test suites. But to generate a single test it firstly generates a candidate test and then applies a specific metaheuristic search strategy to enhance this test. Here we experiment four different metaheuristic search strategies. In the experiments, we compare our approach to weighted density algorithm (WDA). Meanwhile, we also analyze the effectiveness of four different search strategies and the effectiveness of the increasing iterations. Empirical results demonstrate the effectiveness of our proposed approach. Keywords-prioritized interaction test suites; greedy algorithm; metaheuristic search","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122335781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
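The one-test-at-a-time framework this abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' algorithm: a plain hill-climb stands in for the four metaheuristic "enhancement" strategies the paper evaluates, and all names and the weight encoding are assumptions.

```python
import itertools
import random

def prioritized_pairwise(domains, weights, seed=0):
    """Greedy one-test-at-a-time construction of a prioritized pairwise
    test suite (illustrative sketch).  `domains[i]` lists the values of
    factor i; `weights[((i, u), (j, v))]` (with i < j) is the importance
    of that value pair.  The enhancement step here is a simple
    hill-climb standing in for the paper's metaheuristic strategies."""
    rng = random.Random(seed)
    uncovered = dict(weights)                    # weighted pairs still to cover

    def gain(test):
        labelled = list(enumerate(test))
        return sum(uncovered.get(p, 0.0)
                   for p in itertools.combinations(labelled, 2))

    suite = []
    while uncovered:
        # Seed the candidate with the heaviest uncovered pair, then
        # fill the remaining factors randomly.
        ((i, u), (j, v)), _w = max(uncovered.items(), key=lambda kv: kv[1])
        test = [rng.choice(d) for d in domains]
        test[i], test[j] = u, v
        improved = True
        while improved:                          # enhance the candidate
            improved = False
            for k, d in enumerate(domains):
                current = gain(test)
                best = max(d, key=lambda val: gain(test[:k] + [val] + test[k + 1:]))
                if gain(test[:k] + [best] + test[k + 1:]) > current:
                    test[k] = best
                    improved = True
        for p in itertools.combinations(list(enumerate(test)), 2):
            uncovered.pop(p, None)               # mark covered pairs
        suite.append(tuple(test))
    return suite
```

With uniform weights this degenerates to ordinary greedy pairwise generation; non-uniform weights push the heaviest interactions toward the front of the suite, which is the point of prioritization.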
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5364307
Xiaobing Pei, Changqing Chen
Decision rules are one of the most important problems in rough set theory and its applications. In this paper, a judgment theorem with respect to decision rules is obtained based on the divide-and-rule idea, from which an algorithm for mining the set of all decision rules is proposed. Finally, an illustrative example explains the mining algorithm.
{"title":"A Mining Algorithm Based on Dividing and Ruling for Decision Rules","authors":"Xiaobing Pei, Changqing Chen","doi":"10.1109/CISE.2009.5364307","DOIUrl":"https://doi.org/10.1109/CISE.2009.5364307","url":null,"abstract":"Decision rule is one of the most problems of rough set theory and application. In this paper, the judgment theorem with respect to decision rules is obtained based on dividing and ruling thought, from which an algorithm for the set of all decision rules is proposed. Finally, we show an illustrative example to explain the mining algorithm.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122383511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
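The objects such an algorithm enumerates can be illustrated with a minimal rough-set sketch: extracting the certain (consistent) decision rules of a decision table, i.e. one rule per condition-class whose objects all agree on the decision. This shows the setting only, not the paper's divide-and-rule strategy; the table encoding is an assumption.

```python
from collections import defaultdict

def certain_rules(table, condition_attrs, decision_attr):
    """Return the certain decision rules of a decision table, in the
    rough-set sense: condition-value tuples whose matching objects all
    share one decision value.  `table` is a list of dict rows.
    Illustrative sketch only, not the paper's mining algorithm."""
    classes = defaultdict(set)
    for row in table:
        key = tuple(row[a] for a in condition_attrs)
        classes[key].add(row[decision_attr])
    # Keep only consistent classes (exactly one decision value).
    return {key: vals.pop() for key, vals in classes.items() if len(vals) == 1}
```

Inconsistent condition-classes (same conditions, conflicting decisions) are exactly the ones a rough-set treatment would handle via lower/upper approximations rather than certain rules.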
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5363492
He Li, Zang Miao, C. Liang
ETL is an important process in building a data warehouse. When processing large volumes of data from heterogeneous data sources, efficiently shortening execution time while retaining generic access to those sources is a major challenge. Aimed at the special requirements of a telecommunication decision support system, this paper proposes a general ETL management platform that provides functions such as access to several heterogeneous data sources and monitoring of the ETL process. Considering the massive data volumes characteristic of telecom systems, the platform improves on plain multithreading by means of thread pool technology to raise ETL efficiency and resource utilization. Finally, system results show the solution to be efficient and valid.
{"title":"Research and Implementation of a Universal ETL Management Platform Based on Telecom Industry","authors":"He Li, Zang Miao, C. Liang","doi":"10.1109/CISE.2009.5363492","DOIUrl":"https://doi.org/10.1109/CISE.2009.5363492","url":null,"abstract":"ETL is an important process in building data warehouse. For processing large volume of data and accessing to heterogeneous multi-data sources, how to efficiently shorten the execution time and possess the ability of general accessing data sources is a big challenge. Aiming at the special requirements of telecommunication decision support system, this paper proposes a solution of a general ETL management platform, realizing such functions as accessing several heterogeneous data sources and monitoring ETL process. Considering the characteristic of massive data in telecom system, by means of thread pool technologies, this ETL management platform improves the multithreading technology to raise the ETL efficiency and resource utilization. At last, the system results prove this solution is efficient and valid.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122569110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
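The thread-pool idea behind the platform can be sketched in a few lines with a standard-library pool: each heterogeneous source runs its extract-transform-load pipeline on a pooled worker, so extraction from slow sources overlaps. This is a generic illustration under assumed interfaces (callable sources, a `transform` function, a `load` sink), not the paper's platform.

```python
from concurrent.futures import ThreadPoolExecutor

def run_etl(sources, transform, load, max_workers=4):
    """Run one ETL pipeline per source on a shared thread pool.
    `sources` is a list of callables, each returning an iterable of
    records; `transform` maps a record to its loadable form; `load`
    consumes one transformed record.  Illustrative sketch only."""
    def pipeline(extract):
        for record in extract():          # extract
            load(transform(record))       # transform, then load
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(pipeline, s) for s in sources]
        for f in futures:
            f.result()                    # re-raise any worker exception
```

A real platform would add per-source connectors, batching, and the monitoring hooks the abstract mentions; the pool simply caps concurrent threads so resource use stays bounded as sources are added.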
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5366586
Wei Liu, Wenhui Li
Content-based image retrieval (CBIR) solutions with a regular Euclidean metric usually cannot achieve satisfactory performance due to the semantic gap. Hence, relevance feedback has been adopted as a promising approach to improving search performance. In this paper, we propose the novel idea of learning from historical relevance-feedback log data and adopt a new methodology called "Collaborative Image Retrieval" (CIR). To effectively exploit the log data, we propose a novel semi-supervised distance metric learning technique, called "Laplacian Regularized Metric Learning" (LRML), for learning robust distance metrics for CIR. Unlike previous methods, the proposed LRML method integrates both log data and unlabeled data through an effective graph regularization framework. We show that reliable metrics can be learned from real log data even though they may be noisy and limited at the beginning stage of a CIR system. Keywords: semi-supervised learning; Collaborative Image Retrieval; semantic gap
{"title":"A Novel Semi-Supervised Learning for Collaborative Image Retrieval","authors":"Wei Liu, Wenhui Li","doi":"10.1109/CISE.2009.5366586","DOIUrl":"https://doi.org/10.1109/CISE.2009.5366586","url":null,"abstract":"Content-based image retrieval (CBIR) solutions with regular Euclidean metric usually cannot achieve satisfactory performance due to the semantic gap. Hence, relevance feedback has been adopted as a promising approach to improve the search performance. In this paper, we propose a novel idea of learning with historical relevance feedback log-data,and adopt a new methodology called“Collaborative Image Retrieval” (CIR). To effectively search the log data,we propose a novel semisupervised distance metric learning technique, called “Laplacian Regularized Metric Learning” (LRML), for learning robust distance metrics for CIR.Different from previous methods,the proposed LRML method integrates both log data and unlabeled data information through an effective graph regularization framework. We show that reliable metrics can be learned from real log data eventhey may be noisy and limited at the beginning stage of a CIR system. Keywordssemi-supervised learning ; Collaborative Image Retrieval ; semantic gap","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122824996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
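The graph-regularization ingredient of Laplacian-style metric learning can be made concrete with a small numeric sketch: the smoothness term tr(X L Xᵀ M), where L = D − W is the graph Laplacian of an affinity matrix W over the (unlabeled) data and M is the Mahalanobis matrix being learned. The form of the term is an assumption for illustration; LRML's full objective also incorporates the log-data constraints.

```python
import numpy as np

def laplacian_smoothness(X, W, M):
    """Graph-regularization term tr(X L X^T M) for a Mahalanobis matrix
    M, with L = D - W the Laplacian of affinity matrix W and X the
    d x n data matrix (one column per sample).  For symmetric W and
    M = I this equals (1/2) * sum_ij W_ij * ||x_i - x_j||^2, so small
    values mean the metric keeps graph neighbours close."""
    L = np.diag(W.sum(axis=1)) - W        # graph Laplacian
    return float(np.trace(X @ L @ X.T @ M))
```

Minimizing this term over M (subject to the feedback-log constraints) is what pulls the learned metric toward respecting the data manifold.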
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5366664
Yanbin Liu, Xiaodong Zhu, Zhiming Sun, Yigang Wang, Fei Ye
After a software fault is detected by a runtime monitor, fault localization is often very difficult. A new fault-localization method based on a dual-slices algorithm is proposed. The algorithm reduces the software fault area by first slicing the faulty trace into segments and then slicing those segments based on trace slicing. It consists of two main steps. First, the faulty run trace is divided into segments by analyzing the differences between a correct run and the faulty run, and only the segments that induce the differences between the dual traces are regarded as the suspicious fault area. Second, the suspicious fault area is further reduced by trace slicing, yielding a more accurate fault area. This method overcomes some drawbacks of manual debugging and increases the efficiency of fault localization. Keywords: fault localization; dual-slices algorithm; runtime monitoring; edit distance; trace slice
{"title":"Dual-Slices Algorithm for Software Fault Localization","authors":"Yanbin Liu, Xiaodong Zhu, Zhiming Sun, Yigang Wang, Fei Ye","doi":"10.1109/CISE.2009.5366664","DOIUrl":"https://doi.org/10.1109/CISE.2009.5366664","url":null,"abstract":"After software fault is detected by runtime monitor, fault localization is always very difficult. A new method to fault localization based on dual-slices algorithm is proposed. The algorithm reduces software fault area by slicing faulty trace into segments firstly and then slicing the trace segments based on trace slice. It mainly includes two steps: Firstly, the faulty run trace is divided into segments by analyzing the differences between correct run and faulty run, and only the segments that inducing the differences between dual-traces will be regarded as suspicious fault-area; Secondly, the suspicious fault-area will be further sliced by trace slice to reduce the fault-area, and the more accuracy fault-area will be gained finally. This method could overcome some drawbacks of manual debugging, and increase the efficiency of fault localization. Keywords-–fault localization; dual-slices algorithm; runtime monitoring; edit distance; trace slice.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122876926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
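The first step, aligning a correct run against a faulty run and keeping only the differing segments, can be sketched with an edit-distance-style alignment from the standard library. This illustrates the segmentation idea only; the paper's exact segmentation and the second trace-slicing step are not reproduced, and the trace representation (a list of statement labels) is assumed.

```python
from difflib import SequenceMatcher

def suspicious_segments(correct_trace, faulty_trace):
    """Align the two traces and return the faulty-trace segments that
    differ from the correct run, as the suspicious fault area.
    Illustrative sketch of the dual-trace comparison step only."""
    sm = SequenceMatcher(a=correct_trace, b=faulty_trace, autojunk=False)
    return [faulty_trace[j1:j2]
            for tag, i1, i2, j1, j2 in sm.get_opcodes()
            if tag != "equal" and j1 != j2]   # keep replace/insert spans
```

Segments shared by both runs are discarded up front, so any later (more expensive) slicing only has to examine the portions of the faulty trace that actually diverged.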
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5363477
Bei Wu, Xing-yuan Chen, Yongfu Zhang
Policy conflict detection is an important and difficult technique in policy research. A key step in policy conflict detection is comparing the relationships between policies. Existing approaches to policy comparison are mainly based on logical reasoning and Boolean function comparison. Such approaches are computationally expensive and do not scale well to large heterogeneous distributed environments, so a lightweight and effective approach is needed to improve the efficiency of policy relationship evaluation. Since policy rule comparison is the basis of policy comparison, this paper first introduces the concept of rule dissimilarity and applies fuzzy theory to analyzing and computing it, thereby addressing rule relationship comparison. The rule dissimilarity measure provides a lightweight way to pre-compile a large number of rules and return only the most similar ones for further policy relationship evaluation and conflict detection. Detailed algorithms are presented for the rule dissimilarity computation. Results of our case study demonstrate the efficiency and practical value of our approach.
{"title":"A Policy Rule Dissimilarity Evaluation Approach Based on Fuzzy Theory","authors":"Bei Wu, Xing-yuan Chen, Yongfu Zhang","doi":"10.1109/CISE.2009.5363477","DOIUrl":"https://doi.org/10.1109/CISE.2009.5363477","url":null,"abstract":"Policy conflict detection is an important and difficult technique in policy research. A key step in policy conflict detection is to compare the relationship between policies. Existing approaches to address the problem of policy comparison are mainly based on logical reasoning and Boolean function comparison. Such approaches are computationally expensive and don't scale well for large heterogeneous distributed environments. Consequently a lightweight and effective approach is needed to improve the efficiency of policy relationship evaluation. Considering that policy rule comparison is the basis of policy comparison, we introduce the concept of rule dissimilarity in this paper and apply fuzzy theory to analyzing and computing rule dissimilarity to address the rule relationship comparison firstly. Rule dissimilarity measure provides a lightweight approach to pre-compile a large amount of rules and only return the most similar rules for further policy relationship evaluation and policy conflict detection. Detailed algorithms are presented for the rule dissimilarly computation. Results of our case study demonstrate the efficiency and practical value of our approach.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122982888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
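To make the pre-filtering role of a dissimilarity score concrete, here is a toy measure between rules represented as attribute-to-value-set maps: one minus the Jaccard overlap, averaged over attributes. The measure itself is invented for illustration; the paper derives its dissimilarity from fuzzy theory.

```python
def rule_dissimilarity(rule_a, rule_b):
    """Toy rule-dissimilarity score in [0, 1]: average, over the union
    of attributes, of one minus the Jaccard overlap of the two rules'
    value sets for that attribute.  0 means identical rules, 1 means
    no attribute values in common.  Hypothetical measure, for
    illustration of the pre-filtering idea only."""
    attrs = set(rule_a) | set(rule_b)
    if not attrs:
        return 0.0
    total = 0.0
    for attr in attrs:
        va, vb = rule_a.get(attr, set()), rule_b.get(attr, set())
        union = va | vb
        overlap = len(va & vb) / len(union) if union else 1.0
        total += 1.0 - overlap
    return total / len(attrs)
```

A conflict detector would compute such scores cheaply over all rule pairs and pass only the lowest-scoring (most similar) pairs to the expensive logical comparison.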
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5363454
Wenyou Fan, Yanzhao Lu
GIS technology is already used in hydrological forecasting and early warning. However, because of the variety of GIS software, differing data formats, and obstacles to function sharing, people cannot use these tools conveniently, which has seriously hindered the application of GIS technology in the hydrological industry. To attack this problem, the author proposes a solution based on Data Warehouse technology and the Service-Oriented Architecture (SOA) of the Functional Warehouse in MapGIS. The paper expounds how to store multi-source, heterogeneous data using the Data Warehouse and how to achieve function sharing using the Functional Warehouse, and presents a new method for building a Digital Hydrology Platform that makes applying GIS technology in the hydrology industry more convenient. Keywords: MapGIS Data Center; hydrological digital platform; Data Warehouse; Function Warehouse
{"title":"GIS Technology in the Hydrological Industry","authors":"Wenyou Fan, Yanzhao Lu","doi":"10.1109/CISE.2009.5363454","DOIUrl":"https://doi.org/10.1109/CISE.2009.5363454","url":null,"abstract":"GIS technology has already used in the Hydrological forecast and early-warning. But because of various GIS software, different formats and obstacle of functions sharing, people can not use them conveniently. It seriously influenced the application of GIS technology in hydrological Industry. In order to attack this problem, based on the technology of Data Warehouse and Serviceoriented Architecture (SOA) of Functional Warehouse in MapGIS technology, the author has proposed solutions. In the paper the author has expounded how to storage multisource and heterogeneous data by using Data Warehouse, and how to achieve functions sharing by utilizing Functional Warehouse. The author brings a new method that builds a Digital Hydrology Platform application of GIS technology in hydrology industry more convenient. Keyword: MapGIS Data Center, Hydrological digital platform, Data Warehouse, Function Warehouse","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114171676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5367079
Xianghong Wang
Computer course teaching is becoming more and more important in science and engineering universities, along with the wide application of computers. With the rapid development of computer technology, reform and innovation in computer course education are necessary. In this paper, reform projects for both the theory and experiment teaching of computer courses are discussed, including teaching content reform, textbook selection, the combination of research and teaching, and the application of modern teaching techniques. After the teaching reform, not only is the students' design ability improved effectively, but their capacity for independent innovation is also cultivated. Keywords: college education; education reform; education innovation
{"title":"Study and Practice about Teaching Reform of University Computer Courses","authors":"Xianghong Wang","doi":"10.1109/CISE.2009.5367079","DOIUrl":"https://doi.org/10.1109/CISE.2009.5367079","url":null,"abstract":"Computer courses’ teaching is becoming more and more important in science and engineering university nowadays along with the wide application of computers. As the rapid development of computer technology, the education reform and innovation about the computer courses are necessary. In this paper, the reform projects for theory teaching and experiment teaching of computer coursers are debated, including teaching content reform, textbook choosing, researching and teaching combination and modern teaching techniques applications. After the teaching reform, not only is the students’ designing ability improved effectively, but also their independence innovation is cultivated. Keywords-College education; Education reform; Education innovation","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114596748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5364072
Wei Wang, T. Xia
Determining the vertical ultimate bearing capacity (VUBC) of concrete piles is very important to the design and management of geotechnical engineering in soft soil areas. However, the problem is not well solved because the VUBC increases with time after pile installation. In this paper, a conventional model for time-VUBC relationships is introduced, and a new grey model is proposed to predict time-VUBC relationships based on optimized field investigation data. The correctness and applicability of the grey model are then analyzed. The proposed grey model can replace long-term field investigation with short-term investigation, saving both cost and time in engineering construction. Finally, good agreement is found between field investigations and grey model simulations. The results of this study lay a good foundation for the design and management of corresponding geotechnical engineering.
{"title":"Grey Model for Time-Dependent Vertical Ultimate Bearing Capacity of Concrete Pile in Soft Soil Area","authors":"Wei Wang, T. Xia","doi":"10.1109/CISE.2009.5364072","DOIUrl":"https://doi.org/10.1109/CISE.2009.5364072","url":null,"abstract":"Determining of vertical ultimate bearing capacity (VUBC) of concrete pile is very important to design and management of geotechnical engineering in soft soil area. However, it is not well solved because the VUBC increases with time after pile installation. In this paper, conventional model for time-VUBC relationships is introduced, and one new grey model is proposed to predict time-VUBC relationships based on optimized field investigation data. Then, correctness and applicability of the grey model are analyzed. The proposed grey model can instead of long-time field investigation with short-time investigation, which can save both cost and time for engineering construction. Finally, good agreements have been found between field investigations and grey model simulations. Results of this study can put good foundation for design and management of corresponding geotechnical engineering.","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121929285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
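The grey-model machinery behind such predictions can be sketched with the classic GM(1,1): accumulate the series, fit the grey differential equation by least squares, and difference the fitted exponential back to forecasts. This shows standard GM(1,1) only, not the paper's optimized time-VUBC variant.

```python
import numpy as np

def gm11(x0, steps=0):
    """Classic GM(1,1) grey model sketch: fit a short positive series
    and return the fitted values plus `steps` forecasts.  x0 is the raw
    series; internally x1 is its accumulated (cumulative-sum) series,
    and the model x0(k) + a*z1(k) = b is solved by least squares."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated series
    z1 = -(x1[:-1] + x1[1:]) / 2.0              # negated mean background values
    B = np.column_stack([z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # whitened solution
    return np.concatenate([[x0[0]], np.diff(x1_hat)])   # back to raw scale
```

For a monotonically growing series such as post-installation bearing capacity, the fitted exponential lets a short measurement window extrapolate the long-term trend, which is exactly the cost-saving the abstract claims.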
Pub Date : 2009-12-28DOI: 10.1109/CISE.2009.5364446
Hongchan Zheng, Meigui Hu, Guohua Peng
Based on Lagrange polynomials and variation of constants, we devise a novel (2n-1)-point interpolatory ternary subdivision scheme that reproduces polynomials of degree 2n-2. We illustrate the technique with a 3-point ternary interpolatory subdivision scheme, which reproduces Hassan and Dodgson's interpolating 3-point ternary subdivision scheme, and a new 5-point ternary interpolatory subdivision scheme, which can achieve C-continuity. The smoothness of the new schemes is proved using the Laurent polynomial method. Keywords: ternary subdivision; Lagrange polynomial; variation of constants
{"title":"Constructing 2n-1-Point Ternary Interpolatory Subdivision Schemes by Using Variation of Constants","authors":"Hongchan Zheng, Meigui Hu, Guohua Peng","doi":"10.1109/CISE.2009.5364446","DOIUrl":"https://doi.org/10.1109/CISE.2009.5364446","url":null,"abstract":"Based on Lagrange polynomials and variation of constants, we devise a novel 2n-1-point interpolatory ternary subdivision scheme that reproduces polynomials of degree 2n-2. We illustrate the technique with a 3-point ternary interpolatory subdivision scheme which can rebuild Hassan and Dodgson’s interpolating 3-point ternary subdivision scheme and a new 5point ternary interpolatory subdivision scheme which can achieve C-continuity. The smoothness of the new schemes is proved using Laurent polynomial method. Keywordsternary subdivision; Lagrange polynomial; variation of constant","PeriodicalId":135441,"journal":{"name":"2009 International Conference on Computational Intelligence and Software Engineering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121995604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
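The 3-point case of the Lagrange-based construction can be sketched directly: evaluate the degree-2 Lagrange interpolant through p[i-1], p[i], p[i+1] at the new ternary parameters i - 1/3 and i + 1/3, keeping p[i] itself. The masks {2/9, 8/9, -1/9} below are those Lagrange evaluations; this is an assumed reconstruction of the 3-point scheme on a closed polygon, not the paper's general (2n-1)-point formula.

```python
def ternary_step(points):
    """One refinement step of a 3-point interpolatory ternary
    subdivision scheme on a closed (periodic) polygon of scalars.
    For each old point p[i], emit the degree-2 Lagrange interpolant
    through (p[i-1], p[i], p[i+1]) evaluated at i - 1/3, then p[i]
    itself (interpolatory), then the evaluation at i + 1/3."""
    n = len(points)
    out = []
    for i in range(n):
        pm, p, pp = points[i - 1], points[i], points[(i + 1) % n]
        out.append(2/9 * pm + 8/9 * p - 1/9 * pp)   # interpolant at i - 1/3
        out.append(p)                               # original point kept
        out.append(-1/9 * pm + 8/9 * p + 2/9 * pp)  # interpolant at i + 1/3
    return out
```

Because each mask sums to 1 and comes from a degree-2 interpolant, the step reproduces constants exactly and locally reproduces quadratics, consistent with the degree 2n-2 = 2 claim for n = 2.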