This paper proposes an interface between MapReduce and different storage systems. A MapReduce-based computing platform can access various file systems through the interface without modifying the existing computing system, which simplifies the construction of distributed applications. Different file systems can be configured and switched quickly through the interface. The interface was integrated into Hadoop for the experiments. The results show that the interface supports switching among storage systems, and that data access efficiency improves as data volume increases. Further experiments with larger data volumes are planned, and deeper development of the interface is under way in our computing platform.
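The pluggable-storage idea the abstract describes can be sketched as an abstract file-system interface with backends registered under a URI scheme, so switching storage systems is a configuration change rather than a code change. This is a minimal illustrative sketch in the spirit of Hadoop's `FileSystem` abstraction; the class and method names are assumptions, not the paper's API.

```python
# Minimal sketch of a pluggable storage interface: MapReduce code talks to
# an abstract FileSystem, and concrete backends register under a URI scheme.
from abc import ABC, abstractmethod

class FileSystem(ABC):
    registry = {}  # scheme -> backend class

    @classmethod
    def register(cls, scheme):
        def deco(impl):
            cls.registry[scheme] = impl
            return impl
        return deco

    @classmethod
    def get(cls, uri):
        # pick the backend from the URI scheme, e.g. "mem://a.txt" -> "mem"
        scheme = uri.split("://", 1)[0]
        return cls.registry[scheme]()

    @abstractmethod
    def read(self, path): ...

@FileSystem.register("mem")
class InMemoryFS(FileSystem):
    data = {"mem://a.txt": b"hello"}
    def read(self, path):
        return self.data[path]

@FileSystem.register("null")
class NullFS(FileSystem):
    def read(self, path):
        return b""
```

Calling code never names a concrete backend, so adding or swapping a storage system only touches the registry.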
Dadan Zeng, Zhebing Chen, Jianpu Wang, Minqi Zhou, Aoying Zhou, "Different File Systems Data Access Support on MapReduce," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5363360
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5366373
Bing Hu, Hongsheng Li, Sumin Liu
The localization and tracking of mobile nodes is a research focus in wireless sensor network applications. This paper studies the localization of mobile nodes in wireless sensor networks and proposes a mobile-node localization algorithm based on the ant colony algorithm. The algorithm uses the distribution probability and the node-to-node transition probability of the ant colony algorithm to calculate the weight coefficients of samples, and then determines the possible locations of mobile nodes. Simulation results show that using a probability-selection strategy to estimate node locations yields lower computational complexity, higher positioning accuracy, and stronger robustness. Keywords: wireless sensor networks; mobile nodes; node localization; ant colony algorithm.
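The sample-weighting step can be illustrated with an ant-colony-style probability: each candidate position sample gets a weight combining a pheromone term and a heuristic term (here, agreement with a measured range to an anchor), and the node position is estimated as the weighted mean. This is an illustrative sketch under those assumptions, not the paper's algorithm; all names and the heuristic are invented for the example.

```python
# Illustrative ant-colony-style sample weighting for node localization.
import math

def weight(sample, anchor, measured_range, pheromone, alpha=1.0, beta=2.0):
    # heuristic: samples whose predicted range to the anchor agrees with
    # the measured range get larger weights
    predicted = math.dist(sample, anchor)
    heuristic = 1.0 / (1.0 + abs(predicted - measured_range))
    return (pheromone ** alpha) * (heuristic ** beta)

def estimate(samples, anchor, measured_range, pheromones):
    # weighted mean of the 2D samples = estimated node position
    ws = [weight(s, anchor, measured_range, p)
          for s, p in zip(samples, pheromones)]
    total = sum(ws)
    return tuple(sum(w * s[i] for w, s in zip(ws, samples)) / total
                 for i in (0, 1))
```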
Bing Hu, Hongsheng Li, Sumin Liu, "Research on Localization Algorithm of Mobile Nodes in Wireless Sensor Networks," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5366373
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5365557
Li Li, Da-Yong Wang, Xiangheng Shen, Ming Yang
Combinatorial explosion is a key cause of planning failures in many planners. To avoid it, we modified the IPP planner to divide its fact file into several small parts; we call this method goal decomposition. We also extended IPP's algorithms. The modified planner, called MF-IPP, can handle multiple fact files and thereby avoids combinatorial explosion. We applied the method to GUI test case generation: an initial test case is first produced by the planner, and a solution-expansion step then reinforces the generation. Finally, we compared the performance of the two planners; the results show that MF-IPP avoids combinatorial explosion well.
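The goal-decomposition idea can be sketched as follows: instead of planning for all goals at once (where the search space grows combinatorially with the goal set), solve one sub-goal at a time from the state the previous sub-plan reached, then concatenate the sub-plans. This is an illustrative toy with a trivial breadth-first planner, not IPP's code; the state and action encoding is an assumption.

```python
# Toy goal-decomposed planning: each sub-goal ("small fact file") is
# planned separately from the state the previous sub-plan reached.
from collections import deque

def plan_one(state, goal, actions):
    # trivial BFS planner over frozenset states; actions are
    # (name, preconditions, add-effects) triples
    queue, seen = deque([(state, [])]), {state}
    while queue:
        s, plan = queue.popleft()
        if goal <= s:
            return s, plan
        for name, pre, add in actions:
            if pre <= s and (ns := s | add) not in seen:
                seen.add(ns)
                queue.append((ns, plan + [name]))
    raise ValueError("no plan")

def plan_decomposed(state, goals, actions):
    plan = []
    for g in goals:  # one sub-goal at a time keeps each search small
        state, sub = plan_one(state, g, actions)
        plan += sub
    return plan
```

Each call to `plan_one` searches a space sized by one sub-goal rather than by the conjunction of all goals, which is the effect the decomposition is after.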
Li Li, Da-Yong Wang, Xiangheng Shen, Ming Yang, "A Method for Combinatorial Explosion Avoidance of AI Planner and the Application on Test Case Generation," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5365557
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5363884
Linyuan Liu, Zhiqiu Huang, Dongqing Xie
Web service collaborations are highly automated, dynamic, and heterogeneous, and they lack protection against corruption of the process. These characteristics impose high levels of risk on the interacting parties. To improve system reliability, the privacy authorization of each service must be established at design time. This paper proposes a role-based web services privacy delegation model, which delegates privacy authorization based on trust relationships between services, and gives corresponding algorithms to check the validity of a privacy delegation.
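A validity check for trust-based delegation can be sketched as two conditions on each link of a delegation chain: the delegator must trust the delegatee above a threshold, and no service may delegate privacy permissions it does not itself hold. This is a hedged illustration of the general idea; the rule set, names, and threshold are assumptions, not the paper's algorithms.

```python
# Illustrative validity check for a privacy-delegation chain.
def chain_valid(chain, trust, threshold=0.5):
    # chain: list of (role, permission_set) from the data owner outward
    # trust: {(delegator, delegatee): trust_level}
    for (giver, g_perms), (taker, t_perms) in zip(chain, chain[1:]):
        if trust.get((giver, taker), 0.0) < threshold:
            return False          # insufficient trust on this link
        if not t_perms <= g_perms:
            return False          # permissions widened during delegation
    return True
```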
Linyuan Liu, Zhiqiu Huang, Dongqing Xie, "A Role-Based Model for Web Services Privacy Delegation," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5363884
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5366421
Binbin Hao
Abstract—Practice teaching is an important part of talent training in engineering colleges, and it is key to whether the aim of training highly educated talent is realized. This article is based on the CDIO engineering-education model of "learning by doing". It surveys practice teaching, analyzes its problems in Chinese engineering colleges and universities, and proposes measures to improve university students' capacity for engineering practice.
Binbin Hao, "Deepen the Concept of CDIO and Strengthen Practice Teaching of Engineering Colleges," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5366421
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5366668
Wenjuan Wang, Jianxu Luo
Process neural networks (PNNs) are significant for solving time-dependent industrial modeling problems, but they are time-consuming on high-dimensional nonlinear modeling problems. This paper proposes an improved process neural network based on KPCA and the Walsh transform (IPNN-KPW). KPCA and the discrete Walsh transform are used to reduce the network's time cost, while a momentum factor and a self-adapting learning rate accelerate convergence and damp the network's oscillation. IPNN-KPW is applied to modeling the average molecular weight of polyacrylonitrile (PAN) during polymerization. The results verify the effectiveness of the algorithm: a more accurate model is obtained in less time. Keywords: KPCA; Walsh transform; process neural network; modeling.
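The two training tricks mentioned, a momentum factor and a self-adapting learning rate, can be shown in isolation on a toy loss: the velocity term carries past updates forward, and the step size grows when the loss falls and shrinks when it rises. This is an illustrative sketch of those two mechanisms only, not the paper's network; the adaptation factors are assumptions.

```python
# Gradient descent with a momentum factor and a self-adapting learning
# rate: grow the step when the loss improved, shrink it when it got worse.
def train(grad, loss, w, lr=0.1, momentum=0.5, steps=100):
    v, prev = 0.0, loss(w)
    for _ in range(steps):
        v = momentum * v - lr * grad(w)   # momentum smooths the updates
        w = w + v
        cur = loss(w)
        lr *= 1.05 if cur < prev else 0.7  # self-adapting learning rate
        prev = cur
    return w
```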
Wenjuan Wang, Jianxu Luo, "Research on Modeling of Improved Process Neural Network Based on KPCA and Discrete Walsh Transform," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5366668
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5363946
Bo Sun, Xiao-song Wu, Q. Xia, W. Cai
Based on the combustor model and experimental data of Ref. 2, numerical simulations of three combustor geometries, representing three stages of solid-fuel regression, were conducted using the FLUENT software. The combustor inlet airflow had a Mach number of 1.5, a total temperature of 1270 K, and a total pressure of 30 atm. HTPB fuel and a global one-step reaction mechanism were used. The non-reacting computations reveal that the airflow velocity decreases over most of the combustor as the solid-fuel boundary regresses. The reacting computations reveal that the supersonic zone in the divergent section of all three cases is larger than in the non-reacting case. Combustion takes place in the vicinity of the solid-fuel wall, with combustion efficiency in the range of 35%-45%. Specific thrust and specific impulse both decrease with fuel regression, and both are lower than the experimental results.
Bo Sun, Xiao-song Wu, Q. Xia, W. Cai, "Numerical Analysis of Solid Fuel Scramjet Combustors," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5363946
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5362906
Shaoke Chen, H. Chen
Based on an analysis of current triangulation methods, a new 3D triangulation method based on boundary extension is proposed. A binning method is first applied to thin the original point cloud; triangulation then starts from selected seed triangles, and the triangular meshes extend outward by continuously linking the most suitable points along the boundary edges of the meshed area. Data points on non-convex, complex object surfaces, such as objects with interior holes, can be triangulated directly without manually dividing the surface into several convex patches. Application results indicate that this approach is feasible and efficient for modeling 3D point clouds.
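The binning (thinning) step the abstract describes can be sketched as keeping one representative point per 3D grid cell; the boundary-extension triangulation then meshes this thinned set. A minimal sketch, assuming the simplest policy of keeping the first point seen in each cell (the paper's exact binning rule is not specified here).

```python
# Thin a dense point cloud: one representative point per 3D grid cell.
def bin_points(points, cell):
    kept = {}
    for p in points:
        # integer cell index along each axis
        key = tuple(int(c // cell) for c in p)
        kept.setdefault(key, p)  # keep the first point seen in each bin
    return list(kept.values())
```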
Shaoke Chen, H. Chen, "Research on Triangulation Method of Object Surface with Holes in Reverse Engineering," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5362906
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5363572
Jing Gao, Yuqing Lan
This paper proposes an agent-based distributed automated testing framework and, to address the lack of automatic test-task allocation in such frameworks, an allocation approach based on test-node ability. A complex test task is broken down into atomic test tasks, which testers represent in a test definition file using an "and-or tree" and ECA rules. The test master agent automatically obtains and calculates the execution ability of each test node, that is, the computing power and system resources the node owns, and then allocates test tasks according to the "and-or tree" and the ECA rules in the test definition file. The paper presents the general architecture of the framework and elaborates the ability-based allocation method. The method has been applied to the framework to enhance its test-automation capabilities.
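The ability-based allocation step can be illustrated with a greedy scheme: score each node's execution ability and assign each atomic test task to the node with the most remaining capacity. This is an illustrative sketch; the scoring and one-unit-per-task capacity model are assumptions, not the paper's method, and the and-or tree/ECA logic is omitted.

```python
# Greedy ability-based allocation of atomic test tasks to test nodes.
import heapq

def allocate(tasks, nodes):
    # nodes: {name: ability score}; a max-heap (negated scores) always
    # offers the node with the most remaining capacity
    heap = [(-ability, name) for name, ability in nodes.items()]
    heapq.heapify(heap)
    assignment = {}
    for task in tasks:
        neg, name = heapq.heappop(heap)
        assignment[task] = name
        heapq.heappush(heap, (neg + 1, name))  # one unit of capacity used
    return assignment
```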
Jing Gao, Yuqing Lan, "Automatic Test Task Allocation in Agent-Based Distributed Automated Testing Framework," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5363572
Pub Date: 2009-12-28. DOI: 10.1109/CISE.2009.5363473
B. Gaudin, P. Nixon, Keith Bines, Fulvio Busacca, N. Casey
Models for fault diagnosis can help reduce the time taken to accurately identify faults, but the complexity of modern enterprise systems means that manually building such models is itself very time-consuming. We study the relevance of bootstrapping a diagnostic model that domain experts can then manually refine and augment. We present an approach to model construction developed by analyzing log traces from a real data center, and we compare the automatically bootstrapped model against a manually constructed reference model for the same problem set in order to measure how much of the model can be built automatically. An experiment with an Oracle Enterprise System shows that approximately 15% of the model, diagnosing 30% of the related issues, can be built automatically. Keywords: system diagnostics, enterprise systems, machine learning.
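The bootstrapping idea of mining log traces for candidate diagnosis rules can be sketched as a simple co-occurrence count: emit a rule mapping an error signature to a fault when the pair recurs often enough, leaving the experts to refine the result. This is a hedged toy illustration under that assumption; the data shape, field names, and support threshold are invented, and the paper's actual analysis is more involved.

```python
# Bootstrap candidate diagnosis rules from labeled log traces by counting
# how often an error signature co-occurs with a fault label.
from collections import Counter

def bootstrap_rules(traces, min_support=2):
    # traces: list of (error_signature, fault_label) pairs
    counts = Counter(traces)
    return {sig: fault
            for (sig, fault), n in counts.items()
            if n >= min_support}
```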
B. Gaudin, P. Nixon, Keith Bines, Fulvio Busacca, N. Casey, "Model Bootstrapping for Auto-Diagnosis of Enterprise Systems," 2009 International Conference on Computational Intelligence and Software Engineering, 2009-12-28. DOI: 10.1109/CISE.2009.5363473