{"title":"Session details: Model transformation (MT 2006)","authors":"J. Bézivin, A. Pierantonio, Antonio Vallecillo","doi":"10.1145/3245482","DOIUrl":"https://doi.org/10.1145/3245482","url":null,"abstract":"","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132422788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A-Brain: the multiple problems solver","authors":"Mihai Oltean","doi":"10.1145/1141277.1141502","DOIUrl":"https://doi.org/10.1145/1141277.1141502","url":null,"abstract":"Intelligence is strongly related to the ability to solve different problems with a single system. General problem solvers such as Artificial Neural Networks, Evolutionary Algorithms, Particle Swarm, etc., have traditionally been tested on one problem at a time. The purpose of this research is to build a complex and adaptive system able to solve multiple (and different) problems. The proposed system, called A-Brain, consists of several connected components (a Decision Maker, a Trainer and several Problem Solvers) which provide a base for building complex problem solvers. The A-Brain system is applied to some well-known problems in the field of symbolic regression. Numerical experiments show that the A-Brain system performs very well on the considered test problems.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132475029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a model-driven join point model","authors":"W. Cazzola, A. Cicchetti, A. Pierantonio","doi":"10.1145/1141277.1141580","DOIUrl":"https://doi.org/10.1145/1141277.1141580","url":null,"abstract":"Aspect-Oriented Programming (AOP) is increasingly being adopted by developers to better modularize object-oriented design by introducing crosscutting concerns. However, due to the tight coupling of existing approaches with the implementing code and to the poor expressiveness of the pointcut languages, a number of problems have become evident. We believe that such problems could be solved by shifting the focus of software development from a programming-language-specific implementation to application design. This work presents a possible solution based on modeling aspects at a higher level of abstraction; these models are, in turn, transformed to specific targets.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132039618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementing an embedded GPU language by combining translation and generation","authors":"Calle Lejdfors, L. Ohlsson","doi":"10.1145/1141277.1141654","DOIUrl":"https://doi.org/10.1145/1141277.1141654","url":null,"abstract":"Dynamic languages typically allow programs to be written at a very high level of abstraction. But their dynamic nature makes such languages very hard to compile, meaning that a price has to be paid in terms of performance. However, under certain restricted conditions, compilation is possible. In this paper we describe how a domain-specific language for image processing in Python can be compiled for execution on high-speed graphics processing units. Previous work on similar problems has used either translative or generative compilation methods, each of which has its limitations. We propose a strategy which combines these two methods, thereby achieving the benefits of both.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132099098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic context adaptation in multimedia documents","authors":"P. Bertolotti, O. Gaggi, M. Sapino","doi":"10.1145/1141277.1141595","DOIUrl":"https://doi.org/10.1145/1141277.1141595","url":null,"abstract":"Multimedia documents are collections of media objects, synchronized by means of sets of temporal and spatial constraints. Any multimedia document definition is valid as long as the referred media objects are available and the constraints are satisfiable. Document validity depends on the context in which the document has to be presented. In this paper, we introduce a framework to characterize context adaptation in the presence of both physical and user-oriented context requirements. We define semantically equivalent presentation fragments as alternatives to undeliverable ones. In the absence of equivalence, undeliverable media are replaced with candidates that minimize the loss of information/quality in the presentation.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132263651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A branch and prune algorithm for the approximation of non-linear AE-solution sets","authors":"A. Goldsztejn","doi":"10.1145/1141277.1141665","DOIUrl":"https://doi.org/10.1145/1141277.1141665","url":null,"abstract":"Non-linear AE-solution sets are a special case of parametric systems of equations where universally quantified parameters appear first. They make it possible to model many practical situations. A new branch and prune algorithm dedicated to the approximation of non-linear AE-solution sets is proposed. It is based on a new generalized interval (intervals whose bounds are not constrained to be ordered) parametric Hansen-Sengupta operator. In spite of some restrictions on the form of the AE-solution set that can be approximated, it solves problems that were previously out of reach of numerical methods. Some promising experiments are presented.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126678145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Expanding the taxonomies of bibliographic archives with persistent long-term themes","authors":"R. Schult, M. Spiliopoulou","doi":"10.1145/1141277.1141419","DOIUrl":"https://doi.org/10.1145/1141277.1141419","url":null,"abstract":"As document collections accumulate over time, some of the discussion subjects in them become outdated, while new ones emerge. In this paper, we address the challenge of finding such emerging and persistent \"themes\", i.e. subjects that live long enough to be incorporated into a taxonomy or ontology describing the document collection. Our method is based on similarity-based clustering and cluster label construction and focuses on the identification of cluster labels that \"survive\" changes in the constitution of the underlying population of documents, including changes in the feature space of dominant words. We conducted a set of promising experiments on the identification of themes that manifested themselves in the ACM library within the last decade.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126734395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An integrated computational proteomics method to extract protein targets for Fanconi Anemia studies","authors":"J. Chen, Sarah L. Pinkerton, Changyu Shen, Mu Wang","doi":"10.1145/1141277.1141316","DOIUrl":"https://doi.org/10.1145/1141277.1141316","url":null,"abstract":"Fanconi Anemia (FA) is a rare autosomal genetic disease with multiple birth defects and severe childhood complications for its patients. The lack of sequence homology among the FA Complementation Group proteins, such as FANCC, FANCG, and FANCA, makes them extremely difficult to characterize using conventional bioinformatics methods. In this work, we describe how to use computational methods to extract protein targets for FA, using a protein interaction data set collected for the FANC group C protein (FANCC). We first generated an initial set of 130 FA-interacting proteins as \"FANCC seed proteins\" by merging an in-house experimental set of FANCC Tandem Affinity Purification (TAP) pulldown proteomics data identified by mass spectrometry with publicly available human FANCC-interacting proteins. Next, we expanded the FANCC seed proteins using a nearest-neighbor method to generate a FANCC protein interaction subnetwork of 948 proteins in 903 protein interactions. We show that this network is statistically significant, with high indices of aggregation and separation. We also show a visualization of the network, supporting the evidence that many well-connected proteins exist in the network. Further, we developed and applied an interaction network protein scoring algorithm, which allows us to calculate a ranked list of significant FA proteins. Our results support further biological investigation by the disease biologists on our team. We believe our method can be generalized to other disease biology studies with similar problems.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126287665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the architectural alignment of ATL and QVT","authors":"F. Jouault, I. Kurtev","doi":"10.1145/1141277.1141561","DOIUrl":"https://doi.org/10.1145/1141277.1141561","url":null,"abstract":"Transforming models is a critical activity in Model Driven Engineering (MDE). With the expected adoption of the OMG QVT standard for model transformation languages, it is anticipated that experience in applying model transformations in various cases will increase. However, the QVT standard is just one possible approach to solving model transformation problems. In parallel with the QVT activity, many research groups and companies have been working on their own model transformation approaches and languages. It is important for software developers to be able to compare and select the most suitable languages and tools for a particular problem. This paper compares the proposed QVT language and the ATLAS Transformation Language (ATL) as a step toward gathering knowledge about existing model transformation approaches. The focus is on the major language components (sublanguages and their features, execution tools, etc.) and how they are related. Both languages expose a layered architecture for organizing their components. The paper analyzes the layers and compares them according to various categories. Furthermore, motivations for interoperability between the languages and the related tools are given. Possible solutions for interoperability are identified and discussed.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126833435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SMART-TV: a fast and scalable nearest neighbor based classifier for data mining","authors":"T. Abidin, W. Perrizo","doi":"10.1145/1141277.1141403","DOIUrl":"https://doi.org/10.1145/1141277.1141403","url":null,"abstract":"K-nearest neighbors (KNN) is the simplest method for classification. Given a set of objects in a multi-dimensional feature space, the method assigns a category to an unclassified object based on the plurality category of its k-nearest neighbors. The closeness between objects is determined using a distance measure, e.g. Euclidean distance. Despite its simplicity, KNN has some drawbacks: 1) it suffers from expensive computational cost in training when the training set contains millions of objects; 2) its classification time is linear in the size of the training set. The larger the training set, the longer it takes to search for the k-nearest neighbors. In this paper, we propose a new algorithm, called SMART-TV (Small Absolute difference of Total Variation), that approximates a set of potential candidate nearest neighbors by examining the absolute difference of total variation between each data object in the training set and the unclassified object. Then, the k-nearest neighbors are searched from that candidate set. We empirically evaluate the performance of our algorithm on both real and synthetic datasets and find that SMART-TV is fast and scalable. The classification accuracy of SMART-TV is high and comparable to the accuracy of the traditional KNN algorithm.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115272712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
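The SMART-TV record above describes a two-stage scheme: prune the training set to a small candidate pool by comparing a scalar "total variation" summary, then run exact KNN only within that pool. A minimal Python sketch of the idea follows; it is not the authors' implementation, and since the abstract does not define "total variation", the feature-sum used here is a hypothetical stand-in, as are the function and parameter names.

```python
import numpy as np

def smart_tv_knn(train_X, train_y, query, k=5, candidates=50):
    """Sketch of the SMART-TV idea (hypothetical implementation):
    filter by absolute difference of a scalar 'total variation'
    summary (here: the feature sum), then run exact Euclidean KNN
    on the surviving candidates only."""
    tv_train = train_X.sum(axis=1)   # scalar summary per training object
    tv_query = query.sum()
    # Stage 1: keep the objects whose summary is closest to the query's
    cand = np.argsort(np.abs(tv_train - tv_query))[:candidates]
    # Stage 2: exact KNN restricted to the candidate pool
    dists = np.linalg.norm(train_X[cand] - query, axis=1)
    nearest = cand[np.argsort(dists)[:k]]
    # Plurality vote among the k nearest neighbors
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

Classification cost drops from scanning all n training objects with a full distance computation to one cheap scalar comparison per object plus k-selection within the candidate pool, which matches the scalability claim in the abstract.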