Efficiently Coordinating Services for Querying Data in Dynamic Environments
Víctor Cuevas-Vicenttín, Genoveva Vargas-Solar, C. Collet, P. Bucciol
DOI: 10.1109/ENC.2009.34
In this paper we present our vision of, and discuss the main research problems and prospective solutions for, the evaluation of queries over data that is available and processable through services. In particular, we address queries in dynamic environments involving aspects such as mobility and continuous data. Such queries have the potential to offer users ubiquitous access to relevant information in the form of value-added services. The core of our proposed approach lies in the development of a services coordination framework complemented with proven traditional query processing techniques adapted to service-based environments. From this we derive the blueprint for an efficient solution for query evaluation in service-based contexts and their related application areas.
{"title":"Efficiently Coordinating Services for Querying Data in Dynamic Environments","authors":"Víctor Cuevas-Vicenttín, Genoveva Vargas-Solar, C. Collet, P. Bucciol","doi":"10.1109/ENC.2009.34","DOIUrl":"https://doi.org/10.1109/ENC.2009.34","url":null,"abstract":"In this paper we present our vision, and discuss the main research problems along with prospective solutions, on the evaluation of queries over data available and processable through services. In particular, we address queries in dynamic environments concerning aspects such as mobility and continuous data. These queries entail the potential of offering users ubiquitous access to relevant information in the form of value-added services. The core of our proposed approach, lies in the development of a services coordination framework complemented with proven traditional query processing techniques adapted to service-based environments. Thus, we derive the blueprint for an efficient solution for query evaluation in service-based contexts and their related application areas.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128716102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finding Bioelectronics Correlations in Retro-transcribing Viral Proteomic Sequences Using an Evolutionary Clustering Technique
R. Garza-Domínguez, E. Bautista-Thompson
DOI: 10.1109/ENC.2009.44
This paper describes a cluster analysis of a set of retro-transcribing viral proteomic sequences. A lysine-arginine concentration vector is calculated from the sequences and analyzed to identify correlations among species. The computational strategy is based on the K-Means algorithm, which partitions the data into disjoint sets of points; a search method based on Evolutionary Programming is incorporated to optimize the cluster structures. Experimental results show a number of interesting and unexpected similarities, which could suggest bioelectronic relationships in the context of the electronic mobility theory.
{"title":"Finding Bioelectronics Correlations in Retro-transcribing Viral Proteomic Sequences Using an Evolutionary Clustering Technique","authors":"R. Garza-Domínguez, E. Bautista-Thompson","doi":"10.1109/ENC.2009.44","DOIUrl":"https://doi.org/10.1109/ENC.2009.44","url":null,"abstract":"A cluster analysis on a set of Retro-Transcribing viral proteomic sequences is described in this paper. A Lysine-Arginine concentration vector is calculated from the sequences and analyzed to identify correlations among species. The computational strategy is based on the K-Means algorithm to partition the data into disjoint sets of points. A search method based on Evolutionary Programming is incorporated, in order to optimize the cluster structures. Experimental results show a number of interesting and unexpected similarities. These similarities could suggest bioelectronics relationships, in the context of the electronic mobility theory.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126918180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Protecting Data against Consecutive Disk Failures in RAID-5
M. S. Suárez-Castañón, Jehan-François Pâris, C. Aguilar-Ibáñez
DOI: 10.1109/ENC.2009.56
In this letter we present a reorganization method that protects against data loss when one or two disks fail in a RAID level 5 array. The main advantage of the proposed method is that it remains robust against a second failure occurring before the first failed disk has been replaced. Our proposal is motivated by the fact that new disks have a high probability of failing during their first year of operation, and that during this period there is enough free space to rebuild the data lost on the failed disk and, through a reorganization, store it on the remaining disks.
{"title":"Protecting Data against Consecutive Disk Failures in RAID-5","authors":"M. S. Suarez-Castaon, Jehan-Francois Pâris, C. Aguilar-Ibaez","doi":"10.1109/ENC.2009.56","DOIUrl":"https://doi.org/10.1109/ENC.2009.56","url":null,"abstract":"In this letter we present a reorganization method to protect against data loss when one or two disks fail in a RAID level 5. The main advantage of the proposed method is that it is robust against a second failure if a first failed disk has not been replaced yet. Our proposal is motivated by the fact that new disks have a high possibility to fail during their first year of operation and during this period there is enough free space to rebuild the lost data in the failed disk and store it by a reorganization in the remaining disks.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128971086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Customization of Natural Language Interfaces to Databases: Beyond Domain Portability
J. A. Z. Marceleo, Alexander Gelbukh, Rodolfo A. Pazos Rangel
DOI: 10.1109/ENC.2009.52
The first Natural Language Interfaces to Databases were designed and built for specific domains, and their customization required source code manipulation. Open systems and database interoperability later enabled these interfaces to be independent of the operating system and database management system, and the separation of the knowledge base from the translation process allowed for domain portability. Although commercial interfaces incorporate semi-automatic configuration wizards that help configure the interface without knowledge of its inner workings or its source code, it is still difficult to customize these interfaces for a given database, owing to confusion about what information must be supplied to the interface's knowledge base to enable it to answer a given category of queries. To solve this problem, we propose an ontology whose design is simple and flexible enough to assist the customizer's work. This paper describes the design of the ontology, as well as an empirical evaluation of this approach against the customization process of a commercial interface. In spite of the difficulties encountered in carrying out the evaluations, and some unquestionable advantages offered by commercial interfaces, the evaluation was useful for detecting problems with the different types of queries used to retrieve information from a specific database.
{"title":"Customization of Natural Language Interfaces to Databases: Beyond Domain Portability","authors":"J. A. Z. Marceleo, Alexander Gelbukh, Rodolfo A. Pazos Rangel","doi":"10.1109/ENC.2009.52","DOIUrl":"https://doi.org/10.1109/ENC.2009.52","url":null,"abstract":"The first Natural Language Interfaces to Databases were built and designed for specific domains, and their customization processes implied source code manipulation. Open systems and database inter-operability enabled these interfaces to be independent of the operating system and database management system, and the separation of the knowledge base from the translation process allowed for domain portability. Although commercial interfaces incorporate semi-automatic configuration wizards that help configure the interface without knowledge of its inner workings or its source code, it is still difficult to customize these interfaces for a given database, due to confusion on the information that is necessary to provide to the knowledge base of the interface in order to make it able to answer some query category. For solving this problem, we propose an ontology whose design is simple and flexible enough to assist the customizer’s work. This paper describes the design of the ontology, as well as an empirical evaluation of this approach versus the customization process of a commercial interface. The evaluation was useful to detect problems with different types of queries used to retrieve information from a specific database. In spite of the difficulties found to make the evaluations and some unquestionable advantages offered by commercial interfaces.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132004040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finding and Analyzing Social Collaboration Networks in the Mexican Computer Science Community
L. García-Bañuelos, Edgar Alberto Portilla-Flores, A. Chávez-Aragón, O. F. Reyes-Galaviz, Huberto Ayanegui-Santiago
DOI: 10.1109/ENC.2009.17
Collaboration among peers is rather common in some scientific communities and is being facilitated by advances in telecommunication and computer networking technologies. In this paper, we analyze the collaboration networks formed among Mexican computer science scholars (we prefer the term "scholar" to "researcher" because the REMIDEC census covers PhD holders, some of whom are not actively involved in research activities), using social network analysis techniques. A series of measurements is performed to identify patterns of collaboration both among individuals and among Mexican academic institutions. The data for our measurements were taken from two freely available sources: DBLP, a public digital library that indexes computer science conferences and journals, and the census of Mexican scholars compiled by REMIDEC. To gain insight into the impact of publications, we used the CORE ranking of computer science conferences and journals. Our aim is to gain a better understanding of the working practices of the Mexican computer science community.
{"title":"Finding and Analyzing Social Collaboration Networks in the Mexican Computer Science Community","authors":"L. García-Bañuelos, Edgar Alberto Portilla-Flores, A. Chávez-Aragón, O. F. Reyes-Galaviz, Huberto Ayanegui-Santiago","doi":"10.1109/ENC.2009.17","DOIUrl":"https://doi.org/10.1109/ENC.2009.17","url":null,"abstract":"Collaboration of peers is rather common in some scientific communities and is being facilitated with the advances in telecommunication and computer networking technologies. In this paper, we analyze the collaboration networks formed among Mexican computer science scholarsfootnote{We preferred to use the term emph{scholar} instead of emph{researcher}, because the REMIDEC census is on PhD holders even if some of them are not actively involved in research activities.}, using social network analysis techniques. A series of measurements are performed to identify some patterns of collaboration both among individuals and among Mexican academic institutions. The data for our measurements was taken from two freely available sources: DBLP, a public digital library which indexes computer science related conferences and journals; and the census of Mexican scholars made by REMIDEC. In order to have an insight about the impact of publications, we used the CORE ranking for computer science conferences and journals. Our aim is to gain a better understanding of the working practices exhibited by the Mexican computer science community.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"2002 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128288464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VEGITO: A Virtual Enterprise Generator
G. Alor-Hernández, R. Posada-Gómez, U. Juárez-Martínez, S. G. Peláez-Camarena, M. Abud-Figueroa
DOI: 10.1109/ENC.2009.36
Web services have been used to respond to new and emerging requirements: they promise the dynamic creation of loosely coupled information systems and flexible business applications, and a growing number of commercial enterprises are redefining their business processes around this technology. The platform and language independence of Web service programming interfaces enables the seamless integration of heterogeneous Web-based systems. In this work, we propose a Web service-based virtual enterprise generator for B2C e-commerce. The main contribution is a Web-based system, named VEGITO, which builds and generates B2C Web portals through a set of GUIs. With our proposal, we believe that small organizations can automate many of their business processes for B2C e-commerce without making large investments in software development and deployment.
{"title":"VEGITO: A Virtual Enterprise Generator","authors":"G. Alor-Hernández, R. Posada-Gómez, U. Juárez-Martínez, S. G. Peláez-Camarena, M. Abud-Figueroa","doi":"10.1109/ENC.2009.36","DOIUrl":"https://doi.org/10.1109/ENC.2009.36","url":null,"abstract":"Web services have been used to respond to the new emerging requirements. Web services promise the dynamic creation of loosely coupled information systems and flexible business applications. Nowadays, a growing number of commercial enterprises are redefining their business processes under this technology. The platform and language independence of the web services programming interfaces enable the seamless integration of heterogeneous Web basedsystems.In this work, we propose a Web service-based virtual enterprise generator for B2C e-commerce. The main contribution of this work consists in a Web-based system namely VEGITO which builds and generates B2C Web portal through a GUI set. Under our proposal, we believe that small organizations can automate many of their business processes for B2C ecommerce without making large investments in software development and deployment.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133898189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Study of the Robust Bayesian Regularization Technique for Remote Sensing Imaging in Geophysical Applications
I. Villalón-Turrubiates, Adalberto Herrera-Nuñez
DOI: 10.1109/ENC.2009.30
In this paper, a performance study of a methodology for the reconstruction of high-resolution remote sensing imagery is presented. The method is the robust version of the Bayesian regularization (BR) technique, referred to as RBR, which performs image reconstruction as a solution of the ill-conditioned inverse spatial spectrum pattern (SSP) estimation problem with model uncertainties, unifying the Bayesian minimum risk (BMR) estimation strategy with a maximum entropy (ME) randomized a priori image model and other projection-type regularization constraints imposed on the solution. The results of an extended comparative simulation study of a family of image formation/enhancement algorithms that employ the RBR method for high-resolution reconstruction of the SSP are presented. Moreover, the computational complexity of the different methods is analyzed and reported together with the scene imaging protocols. The advantages of the remote sensing imaging experiment that employs the RBR-based estimator over more poorly designed experiments (which employ conventional matched spatial filtering or least squares techniques) are verified through the simulation study. Finally, the application of this estimator in geophysical uses of remote sensing imagery is described.
{"title":"Performance Study of the Robust Bayesian Regularization Technique for Remote Sensing Imaging in Geophysical Applications","authors":"I. Villalón-Turrubiates, Adalberto Herrera-Nuñez","doi":"10.1109/ENC.2009.30","DOIUrl":"https://doi.org/10.1109/ENC.2009.30","url":null,"abstract":"In this paper, a performance study of a methodology for reconstruction of high-resolution remote sensing imagery is presented. This method is the robust version of the Bayesian regularization (BR) technique, which performs the image reconstruction as a solution of the ill-conditioned inverse spatial spectrum pattern (SSP) estimation problem with model uncertainties via unifying the Bayesian minimum risk (BMR) estimation strategy with the maximum entropy (ME) randomized a priori image model and other projection-type regularization constraints imposed on the solution. The results of extended comparative simulation study of a family of image formation/enhancement algorithms that employ the RBR method for high-resolution reconstruction of the SSP is presented. Moreover, the computational complexity of different methods are analyzed and reported together with the scene imaging protocols. The advantages of the remote sensing imaging experiment (that employ the BR-based estimator) over the cases of poorer designed experiments (that employ the conventional matched spatial filtering as well as the least squares techniques) are verified trough the simulation study. Finally, the application of this estimator in geophysical applications of remote sensing imagery is described.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126062527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Descriptive/Comparative Study of the Evolution of Process Models of Software Development Life Cycles (PM-SDLCs)
L. Rodríguez-Martínez, Manuel Mora Tavarez, Francisco Javier Álvarez Ramírez
DOI: 10.1109/ENC.2009.45
A relevant knowledge area [24] (and consequently research area) is the study of software lifecycle process models (PM-SDLCs). Such process models have been defined at three abstraction levels: (i) full organizational software lifecycle process models (e.g., ISO 12207, ISO 15504, CMMI/SW); (ii) lifecycle framework models (e.g., waterfall, spiral, RAD, and others); and (iii) detailed software development lifecycle processes (e.g., the unified process, TSP, MBASE, and others). This paper focuses on levels (ii) and (iii) and reports the results of a descriptive/comparative study of 13 PM-SDLCs that permits a plausible explanation of their evolution in terms of common, distinctive, and unique elements, as well as of their specification rigor and agility attributes. For this purpose, a conceptual research approach and a software process lifecycle meta-model are used. Findings from the conceptual analysis are reported. The paper ends with a description of research limitations and recommendations for further research.
{"title":"A Descriptive/Comparative Study of the Evolution of Process Models of Software Development Life Cycles (PM-SDLCs)","authors":"L. Rodríguez-Martínez, Manuel Mora Tavarez, Francisco Javier Álvarez Ramírez","doi":"10.1109/ENC.2009.45","DOIUrl":"https://doi.org/10.1109/ENC.2009.45","url":null,"abstract":"A relevant knowledge [24] (and consequently research area) is the study of software lifecycle process models (PM-SDLCs). Such process models have been defined in three abstraction levels: (i) full organizational software lifecycles process models (e.g. ISO 12207, ISO 15504, CMMI/SW); (ii) lifecycles frameworks models (e.g. waterfall, spiral, RAD, and others) and (iii) detailed software development life cycles process (e.g. unified process, TSP, MBASE, and others). This paper focuses on (ii) and (iii) levels and reports the results of a descriptive/comparative study of 13 PM-SDLCs that permits a plausible explanation of their evolution in terms of common, distinctive, and unique elements as well as of the specification rigor and agility attributes. For it, a conceptual research approach and a software process lifecycle meta-model are used. Findings from the conceptual analysis are reported. Paper ends with the description of research limitations and recommendations for further research.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133244672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arabic-English Automatic Ontology Mapping Based on Machine Readable Dictionary
Mustafa A. Abusalah, J. Tait, M. Oakes
DOI: 10.1109/ENC.2009.51
Ontologies are the backbone of the semantic web and allow software agents to interoperate effectively. An ontology can represent and clarify concepts and inter-concept relationships, and can serve as a framework for representing underlying domain concepts expressed in many different languages. One way to do this is to map ontologies in different languages using an inter-lingual index. In this paper we present a new methodology for mapping ontologies written in human languages with different scripts (Arabic/English). We identify the steps for extracting concepts from both ontologies and automatically mapping them based on a Machine Readable Dictionary (MRD) and Word Sense Disambiguation (WSD) tools. The paper also discusses a unique tool that automatically extracts unmapped concepts and uses the MRD and WSD to match them and create semantic bridges between the ontologies.
{"title":"Arabic-English Automatic Ontology Mapping Based on Machine Readable Dictionary","authors":"Mustafa A. Abusalah, J. Tait, M. Oakes","doi":"10.1109/ENC.2009.51","DOIUrl":"https://doi.org/10.1109/ENC.2009.51","url":null,"abstract":"Ontologies are the backbone of the semantic web and allow software agents to interoperate effectively. An ontology is able to represent and to clarify concepts and inter-concept relationships and can be used as a framework to represent underlying domain concepts expressed in many different languages. One way to do this is by mapping Ontologies in different languages using an inter-lingual index. In this paper we present a new methodology for ontology mapping in different script human languages (Arabic/English). We identify the steps of extracting concepts on both ontologies and automatically mapping them based on Machine Readable Dictionary (MRD) and Word Sense Disambiguation (WSD) tools. The paper also discusses a unique tool that automatically extracts unmapped concepts and uses MRD and WSD to match them and create semantic bridges between the ontologies.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131981089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teaching Requirements Elicitation within the Context of Global Software Development
M. Romero, A. Vizcaíno, M. Piattini
DOI: 10.1109/ENC.2009.29
The typical problems of the requirements elicitation stage increase when stakeholders are working on a Global Software Development (GSD) project. To meet the challenge of successfully carrying out the requirements elicitation process in a GSD environment, requirements specialists need suitable preparation, which in turn requires updating the contents, techniques, and tools used in teaching the requirements elicitation process. In this paper we discuss these issues, present a list of knowledge and skills desirable for requirements elicitation engineers in GSD (obtained from a review of the literature), and propose a simulator environment for developing certain skills appropriate for students and engineers involved in GSD requirements elicitation.
{"title":"Teaching Requirements Elicitation within the Context of Global Software Development","authors":"M. Romero, A. Vizcaíno, M. Piattini","doi":"10.1109/ENC.2009.29","DOIUrl":"https://doi.org/10.1109/ENC.2009.29","url":null,"abstract":"The typical problems of the requirements elicitation stage increase when stakeholders are working on a Global software Development project. In order to fulfil the challenge of successfully carrying out the requirements elicitation process in a GSD environment, requirements specialists need suitable preparation. An improvement in this preparation necessitates an update of the contents, techniques and tools used in the teaching of the requirements elicitation process. In this paper we discuss these issues, show a list of knowledge and skills which are desirable for requirements elicitation engineers in GSD (obtained from a review of literature), and we also propose a simulator environment with which to develop certain skills that are appropriate for students and engineers in GSD requirements elicitation.","PeriodicalId":273670,"journal":{"name":"2009 Mexican International Conference on Computer Science","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124078172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}