ARTIST Methodology and Framework: A Novel Approach for the Migration of Legacy Software on the Cloud
A. Menychtas, Christina Santzaridou, George Kousiouris, T. Varvarigou, Leire Orue-Echevarria Arrieta, Juncal Alonso, Jesús Gorroñogoitia, H. Brunelière, Oliver Strauß, Tatiana Senkova, B. Pellens, P. Stuer. SYNASC 2013. DOI: 10.1109/SYNASC.2013.62
Nowadays, Cloud Computing is considered the ideal environment for engineering, hosting and provisioning applications. A continuously growing set of cloud-based solutions is available to application owners and developers for tailoring their applications to exploit the advanced features of this paradigm for elasticity, high availability and performance. Although these offerings provide many benefits to new applications, they also impose constraints on the modernization and migration of legacy applications by mandating specific technologies and explicit architectural design approaches. The modernization and adaptation of legacy applications to cloud environments is a great challenge for all involved stakeholders, not only from the technical perspective but also at the business level, where the business processes and models of the modernized application, which will from now on be offered as a service, need to be adapted. In this paper we present a novel model-driven approach for the migration of legacy applications to modern cloud environments which covers all aspects and phases of the migration process, as well as an integrated framework that supports the entire migration process.
{"title":"ARTIST Methodology and Framework: A Novel Approach for the Migration of Legacy Software on the Cloud","authors":"A. Menychtas, Christina Santzaridou, George Kousiouris, T. Varvarigou, Leire Orue-Echevarria Arrieta, Juncal Alonso, Jesús Gorroñogoitia, H. Brunelière, Oliver Strauß, Tatiana Senkova, B. Pellens, P. Stuer","doi":"10.1109/SYNASC.2013.62","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.62","url":null,"abstract":"Nowadays Cloud Computing is considered as the ideal environment for engineering, hosting and provisioning applications. A continuously increasing set of cloud-based solutions is available to application owners and developers to tailor their applications exploiting the advanced features of this paradigm for elasticity, high availability and performance. Although these offerings provide many benefits to new applications, they also incorporate constrains to the modernization and migration of legacy applications by obliging the use of specific technologies and explicit architectural design approaches. The modernization and adaptation of legacy applications to cloud environments is a great challenge for all involved stakeholders, not only from the technical perspective, but also in business level with the need to adapt the business processes and models of the modernized application that will be offered from now on, as a service. In this paper we present a novel model-driven approach for the migration of legacy applications in modern cloud environments which covers all aspects and phases of the migration process, as well as an integrated framework that supports all migration process.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129270573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An Efficient Computational Framework for Studying Dynamical Systems
Islam ElShaarawy, W. Gomaa. SYNASC 2013. DOI: 10.1109/SYNASC.2013.26
In this paper, we introduce a computational framework for studying dynamical systems. The framework can be used to automatically prove the existence of certain behaviours in a given dynamical system at any finite (limited) resolution. The proposed framework is based on approximating the phase-space topology of a given dynamical system at a finite resolution by adaptively partitioning it at rational points. Dyadic rationals and partition elements with disjoint interiors are employed to build a transparent partition that enables constructing an ideal combinatorial representation of a given dynamical system. Moreover, we introduce a new algorithmic strategy that overcomes the dependence on initial conditions, supports deriving ubiquitous conclusions, enables finding bifurcation points up to a certain precision, and (most importantly) is computationally efficient. A set of simple yet powerful dynamic graph algorithms that were developed to support the new strategy is described in detail. As an application, invariant sets and bifurcation points of the logistic map were computed.
{"title":"An Efficient Computational Framework for Studying Dynamical Systems","authors":"Islam ElShaarawy, W. Gomaa","doi":"10.1109/SYNASC.2013.26","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.26","url":null,"abstract":"In this paper, we introduce a computational framework for studying dynamical systems. This framework can be used to prove the existence of certain behaviour in a given dynamical system at any finite (limited) resolution automatically. The proposed framework is based on approximating the phase space topology of a given dynamical system at a finite resolution by adaptively partitioning it at rational points. Dyadic rationals and partition elements with disjoint interiors are employed to build a transparent partition that enables constructing an ideal combinatorial representation of a given dynamical system. Moreover, we introduce a new algorithmic strategy that overcomes the dependence on initial conditions, supports deriving ubiquitous conclusions, enables finding bifurcation points up to certain precision, and (most importantly) is computationally efficient. A set of simple yet powerful dynamic graph algorithms that were developed to support the new strategy are described in details. As an application, invariant sets and bifurcation points of the logistic map were computed.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131658257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Context Matching for Ambient Intelligence Applications
Andrei Olaru. SYNASC 2013. DOI: 10.1109/SYNASC.2013.42
Reliable and scalable Ambient Intelligence requires a distributed system of agents that are capable of working together or autonomously, depending on the requirements of the situation. In previous research we have argued in favor of a representation for context information that can be distributed among agents, so that each agent knows only the information that is relevant to its activity. Interesting information and relevant situations are recognized by means of context patterns -- graph patterns with potentially unknown nodes and with edges labeled with regular expressions. In this setting, a major challenge is for agents to use a graph-matching algorithm that is suited to the capabilities of the devices on which the agents run. Moreover, the algorithm must be able to provide partial matches. This paper presents an algorithm specifically designed for this problem, which grows partial matches to obtain the maximum subgraph of the context graph that matches (part of) the context pattern. Experiments were performed with the algorithm and its performance has been compared with that of other algorithms adapted to our problem.
{"title":"Context Matching for Ambient Intelligence Applications","authors":"Andrei Olaru","doi":"10.1109/SYNASC.2013.42","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.42","url":null,"abstract":"Reliable and scalable Ambient Intelligence means a distributed system of agents that are capable of working together or autonomously, depending on the requirements of the situation. In previous research we have argued in favor of the use of a representation for context information that can be distributed among agents, so that each agent knows only the information that is relevant to its activity. Recognizing interesting information or relevant situations is done by using context patterns -- graph patterns with potentially unknown nodes and edges labeled with regular expressions. In this context, a major challenge is for agents to use a graph matching algorithm that is adequate to the possibilities of the devices on which the agents are running. Moreover, it is necessary that the algorithm is able to provide partial matches. This paper presents an algorithm specifically designed for this problem, that uses growing partial matches to obtain the maximum sub-graph of the context graph that matches (part of) the context pattern. Experiments were performed with the algorithm and its performance has been compared with that of other algorithms adapted to our problem.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116127575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Cph CT Toolbox: CT Reconstruction for Education, Research and Industrial Applications
J. Bardino, Martin Rehr, B. Vinter. SYNASC 2013. DOI: 10.1109/SYNASC.2013.48
This paper introduces the Cph CT Toolbox -- an open-source collection of the most commonly used filtered back-projection algorithms for CT image reconstruction. The toolbox targets teaching, research and production environments, providing simple codes for teaching and highly optimized codes for research and industrial applications. The toolbox has a flexible plugin infrastructure, making it applicable to all existing CT systems that use filtered back-projection reconstruction. The package is available online for download and modification under the GPLv2 license.
{"title":"Cph CT Toolbox: CT Reconstruction for Education, Research and Industrial Applications","authors":"J. Bardino, Martin Rehr, B. Vinter","doi":"10.1109/SYNASC.2013.48","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.48","url":null,"abstract":"This paper introduces the Cph CT Toolbox - an open source collection of the most commonly used filtered back-projection algorithms for CT image reconstruction. The toolbox targets teaching, research and production environments providing simple codes for teaching and highly optimized codes for research and industrial applications. The toolbox has a flexible plugin infrastructure making it applicable to all existing CT systems using filtered back-projection reconstruction. The package is available online for download and modification under the GPLv2 license.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115484387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Fresh-Variable Automata: Application to Service Composition
W. Belkhir, Yannick Chevalier, M. Rusinowitch. SYNASC 2013. DOI: 10.1109/SYNASC.2013.28
We introduce fresh-variable automata, a natural extension of finite-state automata over an infinite alphabet. In this model the transitions are labeled with constants or with variables that can be refreshed in some specified states. We prove several closure properties for this class of automata and study their decision problems. We show the applicability of our model to Web services that handle data from an infinite domain. We introduce a notion of simulation that enables us to reduce the Web service composition problem to the construction of a simulation of a target service by the asynchronous product of the existing services, and prove that this construction is computable.
{"title":"Fresh-Variable Automata: Application to Service Composition","authors":"W. Belkhir, Yannick Chevalier, M. Rusinowitch","doi":"10.1109/SYNASC.2013.28","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.28","url":null,"abstract":"We introduce fresh-variable automata, a natural extension of finite-state automata over infinite alphabet. In this model the transitions are labeled with constants or variables that can be refreshed in some specified states. We prove several closure properties for this class of automata and study their decision problems. We show the applicability of our model in modeling Web services handling data from an infinite domain. We introduce a notion of simulation that enables us to reduce the Web service composition problem to the construction of a simulation of a target service by the asynchronous product of existing services, and prove that this construction is computable.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129386233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Algorithm for Cooperative CPU-GPU Computing
Razvan-Mihai Aciu, H. Ciocarlie. SYNASC 2013. DOI: 10.1109/SYNASC.2013.53
Many applications have modules which could benefit greatly from the massively parallel numeric computing power provided by GPUs; renderers, signal processors and simulators are only a few such applications. Due to GPU weaknesses such as the stackless execution model or the poor capabilities for pointer exchange with the host, it is sometimes not feasible to convert an entire algorithm for the GPU, even if it is highly parallel and some of its parts can be greatly accelerated on the GPU. In such situations programmers need a framework which allows them to split the code flow of a thread into parts, so that each part runs on the most suitable computing resource, CPU or GPU. For GPU execution, data from multiple host threads is collected, run on the GPU, and the results are returned to the original threads so they can resume execution on the host. In this paper we propose such an algorithm, analyze it and evaluate its practical results.
{"title":"Algorithm for Cooperative CPU-GPU Computing","authors":"Razvan-Mihai Aciu, H. Ciocarlie","doi":"10.1109/SYNASC.2013.53","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.53","url":null,"abstract":"Many applications have modules which could benefit greatly from the massive parallel numeric computing power provided by GPUs. Renderers, signal processing or simulators are only a few such applications. Due to the weaknesses of the GPUs such as stackless execution model or poor capabilities for pointer exchange with the host, sometimes is not feasible to convert an entire algorithm for GPU, even if it is highly parallel and some of its parts can be greatly accelerated on GPU. In such situations a programmer should have a framework which allows him to split the code flow of a thread in parts and each of these parts will run on the most suitable computing resource, CPU or GPU. For GPU execution, multiple data from host threads will be collected, run on GPU and the results returned to the original threads so they will be able to resume execution on host. In this paper we propose such an algorithm, analyze it and evaluate its practical results.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126565811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Engineering Hoare Logic-Based Program Verification in K Framework
Andrei Arusoaie. SYNASC 2013. DOI: 10.1109/SYNASC.2013.31
In this paper we describe a methodology for the easy development of Hoare Logic verification tools using the K (operational) semantics of programming languages. We exploit the relationship between Hoare Logic and Matching Logic Reachability, which allows us to translate Hoare triples into reachability rules. We then use the symbolic execution support to check the derived reachability rules. A Hoare triple holds w.r.t. partial correctness if and only if the execution of its reachability rule is successful. The methodology consists of enriching the operational semantics of a programming language with syntax and semantics for the additional constructs required when using Hoare Logic. The obtained semantics is then used by the K Framework to verify annotated programs. We instantiate our methodology on a simple imperative language, describing each step separately, and then test the obtained tool on the KeY-Hoare test suite.
{"title":"Engineering Hoare Logic-Based Program Verification in K Framework","authors":"Andrei Arusoaie","doi":"10.1109/SYNASC.2013.31","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.31","url":null,"abstract":"In this paper we describe a methodology for easy development of Hoare Logic verification tools using the K (operational) semantics of programming languages. We exploit the relationship between the Hoare Logic and Matching Logic Reachability, which allows us to translate Hoare triples into reachability rules. Then we use the symbolic execution support to check the derived reachability rules. A Hoare triple holds w.r.t. the partial correctness if and only if the execution of its reachability rule is successful. The methodology consists in enriching the operational semantics of a programming language with syntax and semantics for additional constructs required when using Hoare Logic. The obtained semantics is then used by the K Framework to verify annotated programs. We instantiate our methodology on a simple imperative language, by describing each step separately, and then we test the obtained tool over the KeY-Hoare tests suite.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129216719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Efficient Computation of the Isotropy Group of a Finite Graph: A Combinatorial Approach
Marcin Gąsiorek. SYNASC 2013. DOI: 10.1109/SYNASC.2013.21
We continue a Coxeter spectral study of finite posets and edge-bipartite graphs (a class of signed graphs in the sense of Harary and Zaslavsky). Here we are interested in two problems. First, whether the incidence matrices C_I and C_J of two connected positive posets I and J are Z-congruent if and only if the Coxeter spectra of I and J coincide. Second, whether any square integer matrix A ∈ M_n(Z) is Z-congruent with its transpose A^tr. We show that these problems can be effectively solved using the right action * : M_n(Z) × Gl(n, Z)_D → M_n(Z), (A, B) ↦ A * B := B^tr · A · B, of the isotropy group Gl(n, Z)_D of a simply laced Dynkin diagram D ∈ {A_n, D_n, E_6, E_7, E_8}. We present an efficient algorithm for computing the isotropy group Gl(n, Z)_D. In particular, we show that symbolic and numerical computer calculations in Python and Cython allow us to present a complete description of the isotropy group Gl(n, Z)_D for |D| ≤ 10. Furthermore, we discuss optimisation techniques that are important from the point of view of calculation efficiency.
{"title":"Efficient Computation of the Isotropy Group of a Finite Graph: A Combinatorial Approach","authors":"Marcin Gąsiorek","doi":"10.1109/SYNASC.2013.21","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.21","url":null,"abstract":"We continue a Coxeter spectral study of finite posets and edge-bipartite graphs (a class of signed graphs in the sense of Harary and Zaslavsky). Here we are interested in two problems. First: whether the incidence matrices CI and CJ of two connected positive posets I and J are Z-congruent if and only if the Coxeter spectra of I and J coincide. Second: the problem if any square integer matrix A E Mn(Z) is Z-congruent with its transpose Atr. We show that these problems can be effectively solved using the right action * : M<sub>n</sub>(Z) × Gl(n, Z)<sub>D</sub> → M<sub>n</sub>(Z), A → A * B := Btr · A · B, of the isotropy group Gl(n, Z)D of a simply laced Dynkin diagram D E {A<sub>n</sub>, D<sub>n</sub>, E<sub>6</sub>, E<sub>7</sub>, E<sub>8</sub>}. We present an efficient algorithm for computing the isotropy group Gl(n, Z)D. In particular, we show that symbolic and numerical computer calculations in Python and Cython allow us to present a complete description of the isotropy group Gl(n, Z)D with |D| ≤ 10. Furthermore, we discuss optimisation techniques that are important from the calculation efficiency point of view.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129662193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Assessing SLA Compliance from Palladio Component Models
Juan F. Pérez, G. Casale. SYNASC 2013. DOI: 10.1109/SYNASC.2013.60
Service providers face the challenge of meeting service-level agreements (SLAs) under uncertainty about the actual performance of their applications. This performance heavily depends on the characteristics of the hardware on which the application is deployed, on the application architecture, and on the user workload. Although many models have been proposed for the performance prediction of software applications, most of them focus on average measures, e.g., mean response times. However, SLAs are often set in terms of percentiles, so that a given portion of requests receives a predefined service level; e.g., 95% of the requests should experience a response time of at most 10 ms. To enable the effective prediction of this type of measure, in this paper we use fluid models to compute the probability distribution of the performance measures relevant for SLAs. Our models are built automatically from a Palladio Component Model (PCM) instance, thus allowing SLA assessment directly from the PCM specification. This provides a scalable alternative for SLA assessment within the PCM framework, which is currently supported by means of simulation only.
{"title":"Assessing SLA Compliance from Palladio Component Models","authors":"Juan F. Pérez, G. Casale","doi":"10.1109/SYNASC.2013.60","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.60","url":null,"abstract":"Service providers face the challenge of meeting service-level agreements (SLAs) under uncertainty on the application actual performance. The performance heavily depends on the characteristics of the hardware on which the application is deployed, on the application architecture, as well as on the user workload. Although many models have been proposed for the performance prediction of software applications, most of them focus on average measures, e.g., mean response times. However, SLAs are often set in terms of percentiles, such that a given portion of requests receive a predefined service level, e.g., 95% of the requests should face a response time of at most 10 ms. To enable the effective prediction of this type of measures, in this paper we use fluid models for the computation of the probability distribution of performance measures relevant for SLAs. Our models are automatically built from a Palladio Component Model (PCM) instance, thus allowing the SLA assessment directly from the PCM specification. This provides an scalable alternative for SLA assessment within the PCM framework, as currently this is supported by means of simulation only.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127417300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Ontology-Based Recommender for Distributed Machine Learning Environment
Daniel Pop, Caius Bogdanescu. SYNASC 2013. DOI: 10.1109/SYNASC.2013.76
Domain experts in different areas have a large number of options for approaching their specific data analysis problems. When exploring large data sets on HPC systems, choosing which method to use, or how to tune the parameters of an algorithm to achieve good results, are challenging tasks for data analysts. In this paper, we propose a recommendation module for a distributed machine learning environment aimed at helping end-users obtain optimized results for their data analysis / machine learning problems.
{"title":"Ontology-Based Recommender for Distributed Machine Learning Environment","authors":"Daniel Pop, Caius Bogdanescu","doi":"10.1109/SYNASC.2013.76","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.76","url":null,"abstract":"Domain experts in different areas have a large number of options for approaching their specific data analysis problem. In exploration of large data sets on HPC systems, choosing which method to use, or how to tune the parameters of an algorithm to achieve good results are challenging tasks for data analysts themselves. In this paper, we propose a recommendation module for a distributed machine learning environment aiming at helping the end-users to obtain optimized results for their data analysis / machine learning problem.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126044013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}