Javier Cabezas, M. Araya-Polo, Isaac Gelado, N. Navarro, E. Morancho, J. Cela
Partial Differential Equations (PDEs) are at the heart of most simulations in many scientific fields, from Fluid Mechanics to Astrophysics. One of the most popular mathematical schemes for solving a PDE is Finite Difference (FD). In this work we map a PDE-FD algorithm called Reverse Time Migration to a GPU using CUDA. This seismic imaging (Geophysics) algorithm is widely used in the oil industry. GPUs are natural contenders in the aftermath of the clock race, in particular for High-Performance Computing (HPC). Due to GPU characteristics, the parallelism paradigm shifts from the classical threads-plus-SIMD model to Single Program Multiple Data (SPMD). The NVIDIA GTX 280 implementation outperforms homogeneous CPUs by up to 9x (Intel Harpertown E5420) and up to 14x (IBM PPC 970). These preliminary results confirm that GPUs are a real option for HPC, from performance to programmability.
{"title":"High-Performance Reverse Time Migration on GPU","authors":"Javier Cabezas, M. Araya-Polo, Isaac Gelado, N. Navarro, E. Morancho, J. Cela","doi":"10.1109/SCCC.2009.19","DOIUrl":"https://doi.org/10.1109/SCCC.2009.19","url":null,"abstract":"Partial Differential Equations (PDE) are the heart of most simulations in many scientific fields, from Fluid Mechanics to Astrophysics. One the most popular mathematical schemes to solve a PDE is Finite Difference (FD). In this work we map a PDE-FD algorithm called Reverse Time Migration to a GPU using CUDA. This seismic imaging (Geophysics) algorithm is widely used in the oil industry. GPUs are natural contenders in the aftermath of the clock race, in particular for High-performance Computing (HPC). Due to GPU characteristics, the parallelism paradigm shifts from the classical threads plus SIMD to Single Program Multiple Data (SPMD). The NVIDIA GTX 280 implementation outperforms homogeneous CPUs up to 9x (Intel Harpertown E5420) and up to 14x (IBM PPC 970). These preliminary results confirm that GPUs are a real option for HPC, from performance to programmability.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121156029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Villarroel, Yessica Gómez, Roman Gajardo, Oscar Rodriguez
Formalizing and institutionalizing software processes has become a necessity in recent years: companies must manage and enhance software production and, at the same time, achieve certification in accordance with international standards. Due to the lack of collaboration tools in Small and Medium-sized Enterprises (SMEs) that could contribute to the improvement of software processes, different proposals have been made to enable these companies to develop and grow. This paper presents the experimental implementation of an improvement cycle in an internal area of a small company, considering the basic profile of the Competisoft process model with support from the Tutelkan platform. Through this experiment, it was noted that Competisoft supplied the basic elements to formalize and institutionalize the processes and that Tutelkan was a good complement in achieving this aim.
{"title":"Implementation of an Improvement Cycle Using the Competisoft Methodological Framework and the Tutelkan Platform","authors":"R. Villarroel, Yessica Gómez, Roman Gajardo, Oscar Rodriguez","doi":"10.19153/cleiej.13.1.2","DOIUrl":"https://doi.org/10.19153/cleiej.13.1.2","url":null,"abstract":"Formalizing and institutionalizing software processes has become a necessity in recent years requiring the management and enhancement of software production and, at the same time, achieving certification in accordance with international standards. Due to the lack of collaboration tools in Small and Medium-sized Enterprises (SMEs) which could contribute to the improvement of software processes, different proposals have been made to enable these companies to develop and grow. This paper presents the experimental implementation of an improvement cycle in an internal area of a small company, considering the basic profile of the Competisoft process model with support on the Tutelkan platform. Through this experiment, it was noted that Competisoft supplied the basic elements to formalize and institutionalize the processes and that Tutelkan was a good complement to achieving this aim.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116360712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition algorithms commonly assume that face images are well aligned and have a similar pose -- yet in many practical applications it is impossible to meet these conditions. Extending face recognition to unconstrained face images has therefore become an active area of research. To this end, histograms of Local Binary Patterns (LBP) have proven to be highly discriminative descriptors for face recognition. Nonetheless, most LBP-based algorithms use a rigid descriptor-matching strategy that is not robust against pose variation and misalignment. We propose two face recognition algorithms designed to deal with pose variations and misalignment, and we also incorporate an illumination normalization step that increases robustness against lighting variations. The proposed algorithms use descriptors based on histograms of LBP and perform descriptor matching with spatial pyramid matching (SPM) and Naive Bayes Nearest Neighbor (NBNN), respectively. Our contribution is the inclusion of flexible spatial matching schemes that use an image-to-class relation to provide improved robustness with respect to intra-class variations. We compare the accuracy of the proposed algorithms against Ahonen's original LBP-based face recognition system and two baseline holistic classifiers on four standard datasets. Our results indicate that the algorithm based on NBNN outperforms the other solutions, and does so more markedly in the presence of pose variations.
{"title":"Face Recognition with Local Binary Patterns, Spatial Pyramid Histograms and Naive Bayes Nearest Neighbor Classification","authors":"Daniel Maturana, D. Mery, Á. Soto","doi":"10.1109/SCCC.2009.21","DOIUrl":"https://doi.org/10.1109/SCCC.2009.21","url":null,"abstract":"Face recognition algorithms commonly assume that face images are well aligned and have a similar pose -- yet in many practical applications it is impossible to meet these conditions. Therefore extending face recognition to unconstrained face images has become an active area of research. To this end, histograms of Local Binary Patterns (LBP) have proven to be highly discriminative descriptors for face recognition. Nonetheless, most LBP-based algorithms use a rigid descriptor matching strategy that is not robust against pose variation and misalignment. We propose two algorithms for face recognition that are designed to deal with pose variations and misalignment. We also incorporate an illumination normalization step that increases robustness against lighting variations. The proposed algorithms use descriptors based on histograms of LBP and perform descriptor matching with spatial pyramid matching (SPM) and Naive Bayes Nearest Neighbor (NBNN), respectively. Our contribution is the inclusion of flexible spatial matching schemes that use an image-to-class relation to provide an improved robustness with respect to intra-class variations. We compare the accuracy of the proposed algorithms against Ahonen's original LBP-based face recognition system and two baseline holistic classifiers on four standard datasets. Our results indicate that the algorithm based on NBNN outperforms the other solutions, and does so more markedly in presence of pose variations.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122240898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic data structures are sensitive to insertion order, particularly tree-based data structures. In this paper we present a buffering heuristic that allows delayed root selection (once enough data has arrived to compute valid statistics), useful for hierarchical indexes. Initially, while fewer than $M$ objects have been inserted, queries are answered from the buffer itself using an online-friendly algorithm, which can be simulated with AESA (Approximating and Eliminating Search Algorithm) or implemented with the dynamic data structure being optimized. When the buffer is full, the tree root can be selected in a more informed way using the distances between the $M$ objects in the buffer. Buffering has an additional use: multiple routing strategies can be designed depending on statistics of the query. A complete picture of the technique includes a recursive best-root selection with many more parameters. We focus on the Dynamic Spatial Approximation Tree (DSAT), investigating the improvement obtained in the first level of the tree (the root and its children). Notice that if the buffering strategy is repeated recursively, we can obtain a boost in performance once the data structure reaches a stable state; for this reason even a very small improvement is significant. With our buffering strategies we obtain a systematic improvement in query complexity on several real-world, publicly available data sets from the SISAP repository.
{"title":"Delayed Insertion Strategies in Dynamic Metric Indexes","authors":"Edgar Chávez, Nora Reyes, Patricia Roggero","doi":"10.1109/SCCC.2009.23","DOIUrl":"https://doi.org/10.1109/SCCC.2009.23","url":null,"abstract":"Dynamic data structures are sensitive to insertion order, particularly tree-based data structures. In this paper we present a buffering heuristic allowing delayed root selection (when enough data has arrived to have valid statistics) useful for hierarchical indexes. Initially, when less than $M$ objects have been inserted queries are answered from the buffer itself using an online-friendly algorithm which can be simulated by AESA (Approximating and Eliminating Search Algorithm) or can be implemented with the dynamic data structure being optimized. When the buffer is full the tree root can be selected in a more informed way using the distances between the $M$ objects in the buffer. Buffering has an additional usage, multiple routing strategies can be designed depending on statistics of the query. A complete picture of the technique includes doing a recursive best-root selection with much more parameters. We focus on the Dynamic Spatial Approximation Tree ({em DSAT}) investigating the improvement obtained in the first level of the tree (the root and its children). Notice that if the buffering strategy is repeated recursively we can obtain a boosting on the performance when the data structure reaches a stable state. For this reason even a very small improvement in performance is significant. We present a systematic improvement in the query complexity for several real time, publicly available data sets from the SISAP repository with our buffering strategies.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115500646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nicolas A. Barriga, M. Solar, Mauricio Araya-López
Probabilistic sampling methods have become very popular for solving single-shot path planning problems. Rapidly-exploring Random Trees (RRTs) in particular have been shown to be very efficient at solving high-dimensional problems. Even though several RRT variants have been proposed to tackle the dynamic replanning problem, these methods only perform well in environments with infrequent changes. This paper addresses the dynamic path planning problem by combining simple techniques in a multi-stage probabilistic algorithm. The algorithm uses an RRT to obtain an initial solution, informed local search to repair infeasible paths, and a simple greedy optimizer; it is capable of recognizing when the local search is stuck and subsequently restarting the RRT. We show that this combination of simple techniques provides better responses to a highly dynamic environment than the dynamic RRT variants.
{"title":"Combining a Probabilistic Sampling Technique and Simple Heuristics to Solve the Dynamic Path Planning Problem","authors":"Nicolas A. Barriga, M. Solar, Mauricio Araya-López","doi":"10.1109/SCCC.2009.11","DOIUrl":"https://doi.org/10.1109/SCCC.2009.11","url":null,"abstract":"Probabilistic sampling methods have become very popular to solve single-shot path planning problems. Rapidly-exploring Random Trees (RRTs) in particular have been shown to be very efficient in solving high dimensional problems. Even though several RRT variants have been proposed to tackle the dynamic replanning problem, these methods only perform well in environments with infrequent changes. This paper addresses the dynamic path planning problem by combining simple techniques in a multi-stage probabilistic algorithm. This algorithm uses RRTs as an initial solution, informed local search to fix unfeasible paths and a simple greedy optimizer. The algorithm is capable of recognizing when the local search is stuck, and subsequently restart the RRT. We show that this combination of simple techniques provides better responses to a highly dynamic environment than the dynamic RRT variants.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114625957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In order to explore new patterns for the classification of cardiac signals taken from the electrocardiogram (ECG), the circular statistics approach is introduced. Features are extracted from the instantaneous phase of the ECG signal using the analytic signal model based on Hilbert transform theory. Feature vectors are used as patterns to distinguish among different ECG signals. Five types of ECG signals are obtained from the MIT-BIH database. Preliminary results show that the proposed features can be used for the ECG signal classification problem.
{"title":"Feature Extraction Based on Circular Summary Statistics in ECG Signal Classification","authors":"Gustavo Soto, Sergio Torres","doi":"10.1109/SCCC.2009.24","DOIUrl":"https://doi.org/10.1109/SCCC.2009.24","url":null,"abstract":"In order to explore new patterns for classification of cardiac signals, taken from the electrocardiogram (ECG), the circular statistic approach is introduced. Features are extracted from instantaneous phase of ECG signal using the analytic signal model based on the Hilbert transform theory. Feature vectors are used as patterns to distinguish among different ECG signals. Five types of ECG signals are obtained from MIT-BIH database. Preliminar results shown that the proposed features can be used on ECG signal classification problem.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130677499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Mobile Information Device Profile (MIDP) of the Java Platform, Micro Edition (JME) provides a standard run-time environment for mobile phones and personal digital assistants. The third and latest version of MIDP introduces a new dimension in the security model of MIDP at the application level. For the second version of MIDP, Zanella, Betarte and Luna proposed a formal specification of the security model in the Calculus of Inductive Constructions using the Coq Proof Assistant. This paper presents an extension of that formal specification that incorporates the changes introduced in the third version of MIDP. The obtained specification is proven to preserve the security properties of the second version of MIDP and enables research into new security properties for version 3.0 of the profile.
{"title":"Formal Specification and Analysis of the MIDP 3.0 Security Model","authors":"Gustavo Mazeikis, Gustavo Betarte, C. Luna","doi":"10.1109/SCCC.2009.18","DOIUrl":"https://doi.org/10.1109/SCCC.2009.18","url":null,"abstract":"The Mobile Information Device Profile (MIDP) of the Java Platform Micro Edition (JME), provides a standard run-time environment for mobile phones and personal digital assistants. The third and latest version of MIDP introduces anew dimension in the security model of MIDP at the application level. For the second version of MIDP, Zanella, Betarte and Luna had proposed a formal specification of the security model in the Calculus of Inductive Constructions using the Coq Proof Assistant. This paper presents an extension of that formal specification that incorporates the changes introduced in the third version of MIDP. The obtained specification it is proven to preserve the security properties of the second version of MIDP and enables the research of new security properties for the version 3.0 of the profile.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122637258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pedro Cortez Cargill, Cristobal Undurraga Rius, D. Mery, Á. Soto
In computer vision, there has been strong progress in creating new image descriptors. A descriptor that has recently appeared is the Covariance Descriptor, but there have not been any studies of the different methodologies for its construction. To address this problem, we have conducted an analysis of the contribution of diverse image features to the descriptor, and therefore to the detection of varied targets, in our case faces and pedestrians. We have defined a methodology to evaluate the performance of the covariance matrix built from different sets of characteristics, which allows us to determine the best set of features for each problem. We have also established that not every combination of features can be used, since some features may not be correlated with each other. Finally, when the analysis is performed with the best set of features, we reach a performance of 99% for the face detection problem and 85% for the pedestrian detection problem. With this we hope to have built a more solid basis for choosing features for this descriptor, allowing progress toward other topics such as object recognition or tracking.
{"title":"Performance Evaluation of the Covariance Descriptor for Target Detection","authors":"Pedro Cortez Cargill, Cristobal Undurraga Rius, D. Mery, Á. Soto","doi":"10.1109/SCCC.2009.7","DOIUrl":"https://doi.org/10.1109/SCCC.2009.7","url":null,"abstract":"In computer vision, there has been a strong advance in creating new image descriptors. A descriptor that has recently appeared is the Covariance Descriptor, but there have not been any studies about the different methodologies for its construction. To address this problem we have conducted an analysis on the contribution of diverse features of an image to the descriptor and therefore their contribution to the detection of varied targets, in our case: faces and pedestrians. That is why we have defined a methodology to determinate the performance of the covariance matrix created from different characteristics. Now we are able to determinate the best set of features for face and people detection, for each problem. We have also achieved to establish that not any kind of combination of features can be used because it might not exist a correlation between them. Finally, when an analysis is performed with the best set of features, for the face detection problem we reach a performance of 99%, meanwhile for the pedestrian detection problem we reach a performance of 85%. With this we hope we have built a more solid base when choosing features for this descriptor, allowing to move forward to other topics such as object recognition or tracking.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121772206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an optimal adaptive algorithm for context queries in tagged content. The queries consist of locating instances of a tag within a context specified by the query, using patterns with preorder, ancestor-descendant and proximity operators over the document tree implied by the tagged content. The time taken to resolve a query $Q$ on a document tree $T$ is logarithmic in the size of $T$, proportional to the size of $Q$, and to the difficulty of the combination of $Q$ with $T$, as measured by the minimal size of a certificate of the answer. The performance of the algorithm is no worse than the classical worst-case optimum, while provably better on simpler queries and corpora. More formally, the algorithm runs in time $O(\delta k \lg(n/(\delta k)))$ in the standard RAM model and in time $O(\delta k \lg\lg\min(n,\sigma))$ in the $\Theta(\lg n)$-word RAM model, where $k$ is the number of edges in the query, $\delta$ is the minimum number of operations required to certify the answer to the query, $n$ is the number of nodes in the tree, and $\sigma$ is the number of labels indexed.
{"title":"Efficient Algorithms for Context Query Evaluation over a Tagged Corpus","authors":"Jérémy Félix Barbay, A. López-Ortiz","doi":"10.1109/SCCC.2009.16","DOIUrl":"https://doi.org/10.1109/SCCC.2009.16","url":null,"abstract":"We present an optimal adaptive algorithm for context queries in tagged content. The queries consist of locating instances of a tag within a context specified by the query using patterns with preorder, ancestor-descendant and proximity operators in the document tree implied by the tagged content. The time taken to resolve a query $Q$ on a document tree $T$ is logarithmic in the size of $T$, proportional to the size of $Q$, and to the difficulty of the combination of $Q$ with $T$, as measured by the minimal size of a certificate of the answer. The performance of the algorithm is no worse than the classical worst-case optimal, while provably better on simpler queries and corpora. More formally, the algorithm runs in time $bigo(difficultynbkeywordslg(nbobjects/difficultynbkeywords))$ in the standard RAM model and in time $bigo(difficultynbkeywordslglgmin(nbobjects,nblabels))$ in the $Theta(lg(nbobjects))$-word RAM model, where $nbkeywords$ is the number of edges in the query, $difficulty$ is the minimum number of operations required to certify the answer to the query, $nbobjects$ is the number of nodes in the tree, and $nblabels$ is the number of labels indexed.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129241826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Global Model Management (GMM) is a model-based approach for managing large sets of interrelated, heterogeneous and complex MDE artifacts. Such artifacts are usually represented as models; however, since many Domain Specific Languages have a textual concrete syntax, GMM also supports textual entities and model-to-text/text-to-model transformations, which are projectors that bridge the MDE technical space and the Grammarware technical space. As the transformations supported by GMM are executable artifacts, typing is critical for preventing type errors during execution. We previously proposed the cGMM calculus, which formalizes the notion of typing in GMM. In this work, we extend cGMM with new types and rules to support textual entities and projectors. With this extension, those artifacts may participate in transformation compositions that address larger transformation problems. We illustrate the new constructs in the context of an interoperability case study.
{"title":"Typing Textual Entities and M2T/T2M Transformations in a Model Management Environment","authors":"Andrés Vignaga","doi":"10.1109/SCCC.2009.25","DOIUrl":"https://doi.org/10.1109/SCCC.2009.25","url":null,"abstract":"Global Model Management (GMM) is a model-based approach for managing large sets of interrelated heterogeneous and complex MDE artifacts. Such artifacts are usually represented as models, however as many Domain Specific Languages have a textual concrete syntax, GMM also supports textual entities and model-to-text/text-to-model transformations which are projectors that bridge the MDE technical space and the Grammarware technical space. As the transformations supported by GMM are executable artifacts, typing is critical for preventing type errors during execution. We proposed the cGMM calculus which formalizes the notion of typing in GMM. In this work, we extend cGMM with new types and rules for supporting textual entities and projectors. With such an extension, those artifacts may participate in transformation compositions addressing larger transformation problems. We illustrate the new constructs in the context of an interoperability case study.","PeriodicalId":398661,"journal":{"name":"2009 International Conference of the Chilean Computer Science Society","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121293126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}