The reuse of design knowledge in CAD systems is a promising way to reduce time and cost during the design cycle. To support this, a semantic core for a novel type of information infrastructure supporting CAD systems is introduced, which allows arbitrary subparts of the information base to be extracted and used efficiently. The key problem is the automated setup and classification of information pieces within knowledge domains. This is solved by tightly connecting dependent design methods to information sources outside the actual CAD design environment, with a focus on knowledge generation, distribution and application. As a result, the system provides problem-solving capabilities in the geometric domain, supported by context-based classification of information.
{"title":"Semantic Core to Acquire and Distribute Design Information","authors":"S. Opletal, D. Roller, S. Ruger","doi":"10.1109/ADVCOMP.2008.32","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.32","url":null,"abstract":"The reuse of design knowledge for use in CAD systems is a promising way to reduce time and cost during the design cycle. To support this, a semantic core for a novel type of informational infrastructure supporting CAD systems is introduced, which allows to extract arbitrary subparts of the information base and use it efficiently. The key problem is the automated setup and classification of information piece within knowledge domains. This is solved by connecting depending design methods strongly to information sources outside the actual CAD design environment with a focus on knowledge generation, distribution and application. As a result it will provide problem solving capabilities within the geometric area supported by a system that can classify information based on context.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122790436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Crespo, M. Piqueras, J. M. Aulló, W. Diaz-Villanueva
The "Grupo de Quimica Teorica y Computacional de la Universitat de Valencia" (GQTC/UVEG) Computational Chemistry Grid is a virtual organization that provides access to high performance computing resources for computational chemistry through a desktop client application (GridQTC). The GQTC/UVEG environment is based on a three-tiered architecture that includes a client, a grid middleware server and a set of distributed, high-end computational resources. The GridQTC desktop client is an open source application that allows the researcher to submit jobs to high performance compute resources, without learning the intricacies of the different operating systems, environments or resources. It presents what is available on the chosen system to the user and provides a simple and intuitive interface incorporating chemistry functionality that computational chemists need to conduct their work, including pre- and post-processing tools. We show with a use case how GridQTC provides an easy-to-use integrated computing environment for academic software packages such as NWChem.
{"title":"GridQTC: A Desktop Client for the Computational Chemistry Grid Infrastructure","authors":"R. Crespo, M. Piqueras, J. M. Aulló, W. Diaz-Villanueva","doi":"10.1109/ADVCOMP.2008.25","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.25","url":null,"abstract":"The \"Grupo de Quimica Teorica y Computacional de la Universitat de Valencia\" (GQTC/UVEG) Computational Chemistry Grid is a virtual organization that provides access to high performance computing resources for computational chemistry through a desktop client application (GridQTC). The GQTC/UVEG environment is based on a three-tiered architecture that includes a client, a grid middleware server and a set of distributed, high-end computational resources. The GridQTC desktop client is an open source application that allows the researcher to submit jobs to high performance compute resources, without learning the intricacies of the different operating systems, environments or resources. It presents what is available on the chosen system to the user and provides a simple and intuitive interface incorporating chemistry functionality that computational chemists need to conduct their work, including pre- and post-processing tools. We show with a use case how GridQTC provides an easy-to-use integrated computing environment for academic software packages such as NWChem.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115927351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous computing is a term with a biological connotation. Its ultimate aim is to create computer systems capable of self-management. Given the importance of self-healing in self-management, a number of research efforts have tried to prevent failures of software components by modifying the components' code. Apart from requiring a priori knowledge of, and access to, the components' code in order to make them self-heal, this approach does not allow the reuse of off-the-shelf components whose code is not disclosed. Furthermore, none of the current preventive mechanisms for self-healing are biologically inspired. This paper introduces a biologically inspired mechanism for the self-healing of distributed components. Self-healing is achieved by central managers in cooperation with a number of connectors, where each manager resides on a single node of a distributed system. Components are treated purely as black boxes, with no need for their modification.
{"title":"A Biologically-Inspired Preventive Mechanism for Self-Healing of Distributed Software Components","authors":"M. Bisadi, M. Sharifi","doi":"10.1109/ADVCOMP.2008.36","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.36","url":null,"abstract":"Autonomous computing is a term with biological connection. Its ultimate aim is to create computer systems capable of self-management. Given the importance of self healing in self management, a number of researches have tried to prevent the failures of software components by modifying the code of components. Apart from the requirement to know a-priori and access the code of components in order to make them self heal, the approach does not allow the reuse of off-the-shelf components whose codes are not disclosed. Furthermore, none of the current preventive mechanisms for self healing are biologically inspired. This paper introduces a biologically inspired mechanism for self healing of distributed components. Self healing is achieved by central managers in cooperation with a number of connectors, where each manager resides on a single node of a distributed system. Components are seen purely as black boxes, with no need for their modification.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115795445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, an intelligent electronic nose (EN) system using cheap metal oxide gas sensors (MOGS) is designed to detect fires at an early stage. The time series signals obtained from the same source of fire are highly correlated, and different sources of fire exhibit unique patterns in the time series data. Therefore, the error back-propagation (BP) method can be used effectively to classify the tested smell. An accuracy of 99.6% is achieved using only a single training dataset from each source of fire. The accuracy achieved with the k-means algorithm is 98.3%, which also shows the high ability of the EN to detect the early stage of fire from various sources.
{"title":"Intelligent Electronic Nose Systems for Fire Detection Systems Based on Neural Networks","authors":"T. Fujinaka, M. Yoshioka, S. Omatu","doi":"10.1109/ADVCOMP.2008.47","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.47","url":null,"abstract":"In this paper, an intelligent electronic nose (EN)system designed using cheap metal oxide gas sensors (MOGS) is designed to detect fires at an early stage. The time series signals obtained from the same source of fire are highly correlated, and different sources of fire exhibit unique patterns in the time series data. Therefore, the error back propagation (BP) method can be effectively used for the classification of the tested smell. The accuracy of 99.6% is achieved by using only a single training dataset from each source of fire. The accuracy achieved with the k-means algorithm is 98.3%, which also shows the high ability of the EN in detecting the early stage of fire from various sources.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132423933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic computing aims to connect the intention of humans with computational content. We present a study of a problem of this type: extracting information from a large number of similar linguistic Web resources to compute various aggregations (sum, average, ...). In our motivating example, we calculate the number of people injured in traffic accidents in a certain period in a certain region. We restrict ourselves to pages written in Czech. Our solution exploits existing linguistic tools created originally for a syntactically annotated corpus, the Prague Dependency Treebank (PDT 2.0). We propose a solution that learns tree queries to extract data from PDT 2.0 annotations and transforms the data into an ontology. The method is not limited to Czech and can be used with any structured linguistic representation. We present a proof of concept of our method, which enables various aggregations to be computed over linguistic Web resources.
{"title":"Computing Aggregations from Linguistic Web Resources: A Case Study in Czech Republic Sector/Traffic Accidents","authors":"J. Dedek, P. Vojtás","doi":"10.1109/ADVCOMP.2008.17","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.17","url":null,"abstract":"Semantic computing aims to connect the intention of humans with computational content. We present a study of a problem of this type: extract information from large number of similar linguistic Web resources to compute various aggregations (sum, average,...). In our motivating example we calculate the sum of injured people in traffic accidents in a certain period in a certain region. We restrict ourselves to pages written in Czech language. Our solution exploits existing linguistic tools created originally for a syntactically annotated corpus, Prague Dependency Treebank (PDT 2.0). We propose a solutions which learns tree queries to extract data from PDT2.0 annotations and transforms the data in an ontology. This method is not limited to Czech language and can be used with any structured linguistic representation. We present a proof of concept of our method. This enables to compute various aggregations over linguistic Web resources.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132204285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. M. Chaves-González, M. A. Vega-Rodríguez, D. Domínguez-González, J. Gómez-Pulido, J. M. Sánchez-Pérez
The frequency assignment problem (FAP) is a very important issue in the field of telecommunications, especially in GSM (Global System for Mobile Communications) networks. In this work, we present the Population-Based Incremental Learning (PBIL) algorithm applied to a particular variant of FAP: the Minimum Span Frequency Assignment Problem (MS-FAP), which tries to minimize the range of frequencies needed in a certain area to cover the communications that take place there. The paper presents the problem and explains the methodology that solves it. We have performed a complete set of experiments using seven well-known variants of PBIL and seven types of MS-FAP problems. Finally, the results are presented and compared to conclude which PBIL variant provides the best solution to the MS-FAP.
{"title":"Population-Based Incremental Learning to Solve the FAP Problem","authors":"J. M. Chaves-González, M. A. Vega-Rodríguez, D. Domínguez-González, J. Gómez-Pulido, J. M. Sánchez-Pérez","doi":"10.1109/ADVCOMP.2008.10","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.10","url":null,"abstract":"Frequency assignment problem (FAP) is a very important issue in the field of telecommunications (especially in GSM-Global System for Mobile-Networks). In this work, we present the Population-Based Incremental Learning (PBIL) algorithm to solve a particular branch of the FAP problem (MS-FAP). MS-FAP (Minimum Span Frequency Assignment Problem) tries to minimize the range of frequencies which is necessary in a certain area to cover the communications which take place there. In this paper it is presented the problem and it is explained the methodology which solve it. We have performed tests with a complete set of experiments using seven well known variations of PBIL and 7 types of MS-FAP problems. At the end, the results are presented and we compare them to conclude which variation of PBIL provides the best solution to the MS-FAP problem.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130055426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Vicente, J. Agulleiro, E. M. Garzón, J. Fernández
Tomography allows the structure of an object to be determined from its projections. Weighted backprojection (WBP) is by far the standard method for tomographic reconstruction. The single-tilt acquisition geometry turns the 3D reconstruction problem into a set of independent 2D reconstruction problems for the slices that form the volume. These 2D reconstruction problems can be solved by WBP and modelled as sparse matrix-vector products, where the coefficient matrix is shared by all the 2D problems. However, the standard implementation of WBP recomputes the coefficients when needed, because of the huge memory requirements. Modern computers now include enough memory to store the coefficients in a sparse matrix data structure. In this work, implementations of WBP based on matrix precomputation and efficient management of the memory hierarchy have been evaluated on modern architectures. The results clearly show that the matrix implementations significantly outperform the standard WBP.
{"title":"Matrix Weighted Back-Projection Accelerates Tomographic Reconstruction","authors":"E. Vicente, J. Agulleiro, E. M. Garzón, J. Fernández","doi":"10.1109/ADVCOMP.2008.44","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.44","url":null,"abstract":"Tomography allows structure determination of an object from its projections. Weighted backprojection (WBP) is by far the standard method for tomographic reconstruction. The single-tilt acquisition geometry turns the 3D reconstruction problem into a set of independent 2D reconstruction problems of the slices that form the volume. These 2D reconstruction problems can be solved by WBP and modelled as sparse-matrix vector products, where the coefficient matrix are shared by the 2D problems. However, the standard implementation of WBP is based on recomputation of the coefficients when needed, because of the huge memory requirements. Modern computers now include enough memory to store the coefficients into a sparse matrix data structure. In this work, implementations of WBP based on matrix precomputation and efficient management of the memory hierarchy have been evaluated on modern architectures. The results clearly show that the matrix implementations significantly outperform the standard WBP.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127761749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the most important obstacles when porting an application to the Grid is the Grid's highly heterogeneous nature. This heterogeneity usually increases the cost of both the application porting cycle and the operation of the infrastructure. Moreover, the effective number of resources available to a user is also limited by this heterogeneity. In this paper we present two approaches to tackle these problems: (i) a straightforward deployment of custom virtual machines to support the application execution; and (ii) a new architecture for provisioning computing elements that allows them to be dynamically adapted to changing VO demands. Experimental results for both approaches on a prototype testbed are discussed. In particular, the on-demand provisioning of computing elements shows less than an 11% overall performance loss, including the hypervisor overhead.
{"title":"Dynamic Deployment of Custom Execution Environments in Grids","authors":"R. Montero, E. Huedo, I. Llorente","doi":"10.1109/ADVCOMP.2008.8","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.8","url":null,"abstract":"One of the most important obstacles when porting an application to the Grid is its highly heterogeneous nature. This heterogeneity usually means an increase of the cost of both the application porting cycle and the operational cost of the infrastructure. Moreover, the effective number of resources available to a user are also limited by this heterogeneity. In this paper we presents two approaches to tackle these problems: (i) an straightforward deployment of custom virtual machines to support the application execution; (ii) and a new architecture to provision computing elements that allows to dynamically adapt them to changing VO demands. Experimental results for both approaches on prototyped testbed are discussed. In particular, the on-demand provision of computing elements show less than a 11% overall performance loss including the hypervisor overhead.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130642418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Carmen Cotelo Queijo, Andrés Gómez Tato, Ignacio López Cabido, José Manuel Cotos Yañez
The RETELAB project is devoted to the design, development and deployment of a Grid infrastructure for the Spanish oceanographic research community, which has strong requirements for satellite image processing and numerical simulation. The infrastructure is currently under development, but an initial architecture is presented here, together with the requirements related to the execution of oceanographic models based on ROMS. In addition, a new JSDL extension for parallel jobs in an HTC environment is proposed conceptually.
{"title":"Adapting ROMS to Execute on GRID Using a Hybrid Parallelization Model","authors":"Carmen Cotelo Queijo, Andrés Gómez Tato, Ignacio López Cabido, José Manuel Cotos Yañez","doi":"10.1109/ADVCOMP.2008.24","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.24","url":null,"abstract":"RETELAB project is devoted to the design, develop and deployment of a GRID infrastructure for the Spanish oceanographic research community. They have strong requirements for processing satellite images and numeric simulation. Currently, the infrastructure is under development but there exists an initial architecture that is presented here, together with the requirements related to the execution of oceanographic models based on ROMS. Also, a new JSDL extension is proposed conceptually for parallel jobs in a HTC environment.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128570978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. da Silva Maximiano, M. A. Vega-Rodríguez, J. Gómez-Pulido, J. M. Sánchez-Pérez
Frequency assignment is a very important real-world problem, especially in GSM networks. These networks are widely used in telecommunications (by mid-2006, GSM services were used by more than 1.8 billion subscribers across 210 countries, representing approximately 77% of the world's cellular market). In this paper we solve a real-world instance of this problem using a differential evolution (DE) algorithm hybridized with a local search method. We also analyze the impact of several configuration parameters, because the performance of optimization algorithms is highly dependent on the specific properties of the problem to be solved. Several experiments were carried out to find the best set of parameters for the DE algorithm implemented in this work. The final results obtained by DE are very good.
{"title":"Analysis of Parameter Settings for Differential Evolution Algorithm to Solve a Real-World Frequency Assignment Problem in GSM Networks","authors":"M. da Silva Maximiano, M. A. Vega-Rodríguez, J. Gómez-Pulido, J. M. Sánchez-Pérez","doi":"10.1109/ADVCOMP.2008.18","DOIUrl":"https://doi.org/10.1109/ADVCOMP.2008.18","url":null,"abstract":"Frequency assignment is a very important real-world problem, specially in GSM networks. These networks are very used in the telecommunication area (by mid 2006 GSM services were used by more than 1.8 billion subscribers across 210 countries, representing approximately 77% of the world's cellular market). In this paper we solve a real-world instance of this problem, using a differential evolution (DE) algorithm hybridized with a local search method. We also analyze the performance of the several configuration parameters, because the performance of optimization algorithms is highly dependent on the specific properties of the problem to be solved. Several experiments were carried out to find the best set of parameters for the DE algorithm implemented in this work. The final results obtained by DE are very good.","PeriodicalId":269090,"journal":{"name":"2008 The Second International Conference on Advanced Engineering Computing and Applications in Sciences","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129631781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}