Gabriela Montiel-Moreno, J. Zechinelli-Martini, Genoveva Vargas-Solar
This article describes the general architecture and construction principles for developing a Biology virtual laboratory through SISELS. SISELS is a mediation system that enables the configuration of virtual laboratories to provide transparent access to distributed biological resources (data or services). SISELS exploits the metadata associated with the subscribed resources to classify and organize them with respect to their structure and content. In this way, it is possible to generate subspaces of resources, called views, with respect to the requirements of a group of experts studying a biological problem. A view represents the semantic requirements of a group of experts together with a subset of relevant resources that partially or totally satisfy these requirements. SISELS uses three main levels of metadata to model the knowledge domain of the virtual laboratory, based on the resources' metadata and the semantic correspondences between the resources and the domain.
"SISELS: Semantic Integration System for Exploitation of Biological Resources." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.27
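As a rough illustration only (not the authors' implementation), a view in the sense described above can be modelled as the subset of subscribed resources whose metadata overlaps a group's semantic requirements; all names and the keyword-overlap criterion below are hypothetical simplifications.

```python
def build_view(requirements, resources):
    """Select resources whose metadata keywords overlap the requirements.

    `resources` maps a resource name to its set of metadata keywords;
    `requirements` is the set of keywords expressing a group's needs.
    Each selected resource is returned with the requirements it covers,
    so a view may satisfy the requirements partially or totally.
    """
    view = {}
    for name, keywords in resources.items():
        covered = requirements & keywords
        if covered:  # partial or total satisfaction
            view[name] = covered
    return view
```

For example, with resources annotated by keywords such as `{"dna", "sequence"}`, a requirement set `{"dna", "protein"}` would select only the resources covering at least one required keyword.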
L. Martínez-Medina, Christophe Bibineau, J. Zechinelli-Martini
Query optimization is a widely studied problem, and a variety of query optimization techniques have been suggested. These approaches are framed within classical query evaluation procedures that rely on cost models heavily dependent on metadata (e.g., statistics and cardinality estimates) and that are typically restricted to execution time estimation. There are computational environments where metadata acquisition and maintenance are very expensive. Additionally, execution time is not the only optimization objective of interest. A ubiquitous computing environment is a clear example where classical query optimization techniques are no longer useful. To address this problem, this article presents a query optimization technique based on learning, in particular on case-based reasoning. Given a query, the knowledge acquired from previous experiences is exploited in order to propose reasonable solutions. The system can learn from each new experience in order to suggest better solutions for future queries.
"Query Optimization Using Case-Based Reasoning in Ubiquitous Environments." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.42
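The retrieve/retain cycle of case-based reasoning applied to query optimization can be sketched as follows. This is a minimal illustration, not the paper's system: the feature encoding, the Jaccard similarity metric, and the example plan names are all assumptions made here for concreteness.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    features: dict   # query descriptors, e.g. tables, predicates, device state
    plan: str        # the execution plan chosen for that query
    cost: float      # observed cost (not necessarily time: could be energy, memory)

def similarity(a: dict, b: dict) -> float:
    """Jaccard similarity over feature items (a simple stand-in metric)."""
    ia, ib = set(a.items()), set(b.items())
    return len(ia & ib) / len(ia | ib) if ia | ib else 0.0

class CaseBase:
    def __init__(self):
        self.cases = []

    def retrieve(self, query_features: dict) -> Optional[Case]:
        """Retrieve the most similar past case, if any."""
        return max(self.cases,
                   key=lambda c: similarity(c.features, query_features),
                   default=None)

    def retain(self, case: Case) -> None:
        """Learn from a new experience by storing it for future queries."""
        self.cases.append(case)
```

A new query is answered by reusing the plan of the most similar stored case; once the query has been executed and its cost observed, the experience is retained, so the case base improves over time.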
G. Alor-Hernández, U. Juárez-Martínez, R. Posada-Gómez, A. Trejo, Jose Saul Rocha-Aragon
Service-Oriented Architecture (SOA) has become a new paradigm for distributed application development. SOA allows legacy systems to be integrated with newly developed applications and makes it possible to create more flexible and adaptive applications. It also helps to decrease implementation and maintenance times. SOA uses Web Services technology as a solution for business application integration. This paper introduces a service-oriented architecture for solving integration problems with legacy systems for stock quote management at a sugar mill. The different service levels of the proposed architecture are described. A case study is also presented in which the functionality of the proposed SOA is explained. Finally, we highlight our contribution.
"Defining an SOA for Stock Quote Management." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.35
M. Anzures-García, L. A. Sánchez-Gálvez, Miguel J. Hornos, P. Paderewski
The development of collaborative applications should take into account static and dynamic issues, as well as the different technological aspects that allow such applications to be adapted either to the needs of several working groups or to new collaborative scenarios, since these applications must keep running correctly on a continuous basis. For this reason, this paper presents a service-based layered architectural model that provides the appropriate infrastructure to support the inherent complexity of developing long-lived, scalable, complex and adaptable collaborative applications in heterogeneous environments, so that these applications allow group work to be carried out effectively. Every layer of the model is briefly explained, and an example of a collaborative application built with our architectural model is shown. The application we have chosen for this purpose is a Conference Management System.
"Service-Based Layered Architectural Model for Building Collaborative Applications in Heterogeneous Environments." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.37
This paper presents a new conceptual indexing technique intended to overcome the major problems resulting from the use of Term Frequency (TF) based approaches. To resolve the semantic problems related to TF approaches, the proposed technique disambiguates the words contained in a document and creates a list of superordinates based on an external knowledge source. In order to reduce the dimension of the document vector, the final set of index values is created by extracting, from the list of hypernyms, a set of common concepts shared by multiple related words. Subsequently, a weight is assigned to each concept index by considering its position in the knowledge source's hierarchical tree (i.e., its distance from the substituted words) and its number of occurrences. By applying the proposed technique, we were able to disambiguate words within different contexts, extrapolate concepts from documents while assigning appropriate normalised weights, and significantly reduce the vector dimension.
S. Barresi, S. Nefti-Meziani, Y. Rezgui. "A New Conceptual Approach to Document Indexing." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.50
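The concept-extraction and weighting steps can be sketched with a toy hypernym table. This is only an illustration of the general idea: the tiny `HYPERNYMS` map stands in for the external knowledge source (which in practice would be a large resource such as WordNet), word-sense disambiguation is omitted, and the depth-attenuated weighting scheme is an assumption, not the paper's exact formula.

```python
from collections import Counter

# Toy hypernym hierarchy standing in for an external knowledge source.
# Lists go from the nearest superordinate to the most general concept.
HYPERNYMS = {
    "dog":    ["canine", "mammal", "animal"],
    "cat":    ["feline", "mammal", "animal"],
    "salmon": ["fish", "animal"],
}

def concept_index(words):
    """Replace words by shared ancestor concepts and weight them.

    A concept's raw score grows with its number of occurrences and is
    attenuated by its distance from the substituted words in the tree;
    only concepts shared by at least two distinct words are kept, and
    the final weights are normalised to sum to 1.
    """
    scores = Counter()
    for w in words:
        for depth, concept in enumerate(HYPERNYMS.get(w, []), start=1):
            scores[concept] += 1.0 / depth  # nearer ancestors weigh more
    shared = {c: s for c, s in scores.items()
              if sum(c in HYPERNYMS.get(w, []) for w in set(words)) >= 2}
    total = sum(shared.values()) or 1.0
    return {c: s / total for c, s in shared.items()}
```

For the words "dog", "cat" and "salmon", the index collapses to the two shared concepts "mammal" and "animal", reducing the vector from three word dimensions to two concept dimensions.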
Aparecido Fabiano Pinatti de Carvalho, J. C. A. Silva, S. Zem-Mascarenhas
This paper illustrates the use of common sense knowledge, acquired from volunteers through the web, to support teachers in planning learning activities that fit the pedagogical issues presented in renowned Learning Theories, so that effective learning can take place. The paper discusses how common sense knowledge relates to four Learning Theories, proposed by well-known authors in the pedagogical area (Freire, Freinet, Ausubel and Gagné), and how computational technologies can make the use of this kind of knowledge by teachers viable.
"Planning Learning Activities Pedagogically Suitable by Using Common Sense Knowledge." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.54
Gabriel Alberto García-Mireles, Irene Rodríguez-Castillo
A method proposal named MECIS is presented, whose purpose is to evaluate the learning level reached in software engineering courses in an undergraduate university program. To build the body of knowledge to be evaluated, the practices of the MoProSoft model corresponding to the operations category at levels 1 and 2 were taken as a reference, together with the SE2004 topics related to them. Five knowledge areas to evaluate are established, and a questionnaire is developed, containing 236 questions distributed across three different learning categories. The proposed method is based on EvalProSoft. A summary of the results of the pilot test in a computer science degree program is presented.
"Software Engineering Area Curricular Evaluation Method Based in MoProSoft." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.19
A group of 17 students applied five unit verification techniques to a simple Java program as training for a formal experiment. The verification techniques applied are desktop inspection, equivalence partitioning with boundary-value analysis, decision tables, linearly independent paths, and multiple condition coverage. The first is a static technique, while the others are dynamic. JUnit test cases are generated when the dynamic techniques are applied. Both the defects found and the execution time are recorded; execution time is considered a cost measure for the techniques. Preliminary results yield three relevant conclusions. First, performance defects are not easily found. Second, unit verification is rather costly and the percentage of defects it detects is low. Finally, desktop inspection detects a greater variety of defects than the other techniques.
Diego Vallespir, Juliana Herbert. "Effectiveness and Cost of Verification Techniques: Preliminary Conclusions on Five Techniques." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.11
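Two of the dynamic techniques named above, equivalence partitioning and boundary-value analysis, can be sketched briefly. The experiment used JUnit on a Java program; the Python example below is only an analogous illustration, and `classify_age` is a hypothetical unit under test, not code from the study.

```python
def classify_age(age: int) -> str:
    """Hypothetical unit under test: maps an age to a category."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 18:
        return "minor"
    return "adult"

# Equivalence partitions: invalid (< 0), minor [0, 18), adult [18, ...).
# Boundary-value analysis picks test inputs at the edges of each partition.
BOUNDARY_CASES = [(-1, ValueError), (0, "minor"), (17, "minor"),
                  (18, "adult"), (120, "adult")]

def run_boundary_tests():
    """Run each boundary case; True means the unit behaved as expected."""
    results = []
    for value, expected in BOUNDARY_CASES:
        try:
            results.append(classify_age(value) == expected)
        except Exception as exc:
            results.append(isinstance(expected, type) and isinstance(exc, expected))
    return results
```

The value of the partitioning is that one representative per class, plus the values at each boundary, covers the input space with few test cases; off-by-one defects in the comparisons (e.g. `<=` instead of `<`) are caught precisely by the boundary inputs 17 and 18.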
As mobile applications become more pervasive and the demand for affordable mobile broadband Internet access grows, bandwidth sharing solutions based on IEEE 802.11 technology are populating the telecommunications market. Many of the existing solutions rely on the willingness of Internet user communities to share their broadband connection at home. However, some other access sharing models have introduced incentives for users who share their Internet connection. Another aspect that differentiates current solutions is the approach used to deploy such a service. In general, these solutions can be grouped into three categories: those that follow a guerrilla approach, those that partner with ISPs, and the sharing solutions that originate inside the telcos. These options are compared in this paper from both business and technology angles, discussing the pros and cons of each. In addition, a broadband sharing enabling solution, called Extended HotSpots, is described. This solution was evaluated during a field trial in the city of Berlin, and the collected results are included in this paper.
Pablo Vidales, Alexander Manecke, M. Solarski. "Metropolitan Public WiFi Access Based on Broadband Sharing." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.22
This paper describes the principal aspects of the development of a speaker verification system based on a Spanish corpus. The main goal is to obtain classification results and behaviour using Support Vector Machines (SVM) as the classification technique. The most relevant aspects involved in developing a Spanish corpus are given. For the front-end processing, a novel method to suppress silences between words is proposed and successfully applied. The validation of the complete system is made using both randomly selected feature vectors and vectors taken from continuous sequences of the voice signal. Additionally, Gaussian Mixture Models (GMM) and Artificial Neural Networks (ANN) are also used as classifiers to compare and validate the results.
J. Bernal, A. Prieto-Guerrero, John Goddard Close. "A Speaker Verification System Using SVM over a Spanish Corpus." 2009 Mexican International Conference on Computer Science, 2009-09-21. doi:10.1109/ENC.2009.53
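Silence suppression in the front end is commonly done by dropping low-energy frames. The paper's novel method is not reproduced here; the sketch below is only the generic energy-threshold baseline such a method would be compared against, and the frame length and threshold values are arbitrary assumptions.

```python
def drop_silent_frames(samples, frame_len=160, threshold=1e-3):
    """Remove low-energy frames from a waveform given as a list of floats.

    The signal is cut into fixed-length frames; a frame whose mean power
    falls below `threshold` is treated as silence and discarded, so only
    the voiced portions reach feature extraction and the classifier.
    """
    voiced = []
    for start in range(0, len(samples), frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / len(frame)  # mean power
        if energy >= threshold:
            voiced.extend(frame)
    return voiced
```

With a 16 kHz signal, a 160-sample frame corresponds to 10 ms, a typical granularity for this kind of pre-processing.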