A hash iteration technique takes as input a hash compression function that works on fixed-length binary strings and outputs a hash function that works on arbitrary-length binary strings. In this paper we introduce token-free bounded delay codes and then use them to define a hash iteration technique. The newly created scheme, when applied to a hash compression function, preserves the following security properties: preimage resistance (Pre), always preimage resistance (aPre), everywhere preimage resistance (ePre) and collision resistance (Coll). The proofs for the preservation of second-preimage resistance (Sec), always second-preimage resistance (aSec), and everywhere second-preimage resistance (eSec) are part of our future work. Comparisons with other iteration techniques are also provided.
{"title":"Token Free Bounded Delay Codes and Hash Iteration","authors":"Sebastian Codrin Ditu","doi":"10.1109/SYNASC.2013.58","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.58","url":null,"abstract":"A hash iteration technique takes as input a hash compression function which works on fixed length binary strings and outputs a hash function which works on arbitrary length binary strings. In this paper we introduce token-free bounded delay codes and then use them to define a hash iteration technique. The newly created schema, when applied to a hash compression function, preserves the following security properties: preimage resistance (Pre), always preimage-resistance (aPre), everywhere preimage-resistance (ePre) and collision-resistance (Coll). The proofs for the preservation of the second-preimage resistance (Sec), always second-preimage resistance (aSec), and everywhere second-preimage resistance (eSec) are part of our future work. Comparisons with other iteration techniques are also provided.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130366643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Terminating an Evolutionary Algorithm (EA) once it reaches its steady state, so that useless iterations are not performed, is a key point for its efficient application to black-box problems. Many EAs keep evolving while there is still diversity in their population and could therefore be terminated by analyzing the behavior of measures of EA population diversity. This paper presents a numeric approximation to steady states that can be used to detect the moment the EA population has lost its diversity, and thus to terminate the EA. Our diversity-based condition has been applied to three EA paradigms and a selection of functions covering the properties most relevant for EA convergence. Experiments show that our condition works regardless of the search space dimension and function landscape.
{"title":"Detecting Loss of Diversity for an Efficient Termination of EAs","authors":"D. Roche, D. Gil, J. Giraldo","doi":"10.1109/SYNASC.2013.79","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.79","url":null,"abstract":"Termination of Evolutionary Algorithms (EA) at its steady state so that useless iterations are not performed is a main point for its efficient application to black-box problems. Many EA algorithms evolve while there is still diversity in their population and, thus, they could be terminated by analyzing the behavior some measures of EA population diversity. This paper presents a numeric approximation to steady states that can be used to detect the moment EA population has lost its diversity for EA termination. Our condition has been applied to 3 EA paradigms based on diversity and a selection of functions covering the properties most relevant for EA convergence. Experiments show that our condition works regardless of the search space dimension and function landscape.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123787738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The advent of cloud computing is an opportunity for companies offering client-server services to migrate to a Software as a Service (SaaS) business model, in which companies offer services on the cloud that are accessible through web interfaces and protocols. This contrasts with the traditional client-server model, in which software packages need to be downloaded, installed and maintained directly by clients. SaaS could thus allow the definition of high-level services, removing from clients the burden of configuring and managing servers. From the point of view of the service providers, this transition is not easy: concerns such as vendor-neutral design, scalability, (self-)adaptation and monitoring of running applications need to be dealt with. The MODAClouds FP7 EU project proposes to address all these challenges and will use ModelioSaaS as a use case. ModelioSaaS is a software-as-a-service product to be offered by SOFTEAM through the migration of its existing client-server based products. The main contribution of this paper is therefore an account, from the industrial point of view, of the context surrounding this migration and the constraints it needs to comply with. These constraints are presented in the form of functional and non-functional requirements along with their rationale. The paper also presents our current view of the ModelioSaaS architecture that will enable this move and the gaps that we intend to fill by means of the MODAClouds platform.
{"title":"From the Desktop to the Multi-clouds: The Case of ModelioSaaS","authors":"M. A. D. Silva, Antonin Abhervé, A. Sadovykh","doi":"10.1109/SYNASC.2013.67","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.67","url":null,"abstract":"The advent of cloud computing is an opportunity to companies offering client-server services to migrate to a Software as a Service (SaaS) kind of business model. This kind of business model is based on having companies offering services on the cloud accessible by means of web interfaces and protocols. This comes in opposition to the traditional (client-server) model in which software packages need to be downloaded, installed and maintained directly by clients. Therefore, the SaaS could allow the definition of high level services, removing the burden of configuring and managing servers from clients. In the point of view of the service providers, this transition is not easy. Concerns such as vendor neutral design, scalability, (self-)adaptation and monitoring of running applications need to be dealt with. The MODAClouds FP7 EU project proposes to deal with all these challenges and it is going to use ModelioSaaS as a use case. ModelioSaaS is a software as a service product to be offered by SOFTEAM by means of the migration of its existing client-server based products. The main contributions of this paper are therefore providing an account, from the industrial point of view, of the context surrounding this migration and the constraints it needs to comply to. These constraints will be presented in the form of functional and non-functional requirements along with their rationale. This paper presents our current view of the architecture of ModelioSaaS that will enable this move and the gaps that we intend to fill be means of the MODAClouds platform.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114756222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we describe a methodology for the easy development of Hoare Logic verification tools using the K (operational) semantics of programming languages. We exploit the relationship between Hoare Logic and Matching Logic Reachability, which allows us to translate Hoare triples into reachability rules. We then use the symbolic execution support to check the derived reachability rules: a Hoare triple holds w.r.t. partial correctness if and only if the execution of its reachability rule is successful. The methodology consists of enriching the operational semantics of a programming language with syntax and semantics for the additional constructs required when using Hoare Logic. The obtained semantics is then used by the K Framework to verify annotated programs. We instantiate our methodology on a simple imperative language, describing each step separately, and then test the obtained tool on the KeY-Hoare test suite.
{"title":"Engineering Hoare Logic-Based Program Verification in K Framework","authors":"Andrei Arusoaie","doi":"10.1109/SYNASC.2013.31","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.31","url":null,"abstract":"In this paper we describe a methodology for easy development of Hoare Logic verification tools using the K (operational) semantics of programming languages. We exploit the relationship between the Hoare Logic and Matching Logic Reachability, which allows us to translate Hoare triples into reachability rules. Then we use the symbolic execution support to check the derived reachability rules. A Hoare triple holds w.r.t. the partial correctness if and only if the execution of its reachability rule is successful. The methodology consists in enriching the operational semantics of a programming language with syntax and semantics for additional constructs required when using Hoare Logic. The obtained semantics is then used by the K Framework to verify annotated programs. We instantiate our methodology on a simple imperative language, by describing each step separately, and then we test the obtained tool over the KeY-Hoare tests suite.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129216719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce fresh-variable automata, a natural extension of finite-state automata over an infinite alphabet. In this model the transitions are labeled with constants or variables that can be refreshed in some specified states. We prove several closure properties for this class of automata and study their decision problems. We show the applicability of our model to Web services handling data from an infinite domain. We introduce a notion of simulation that enables us to reduce the Web service composition problem to the construction of a simulation of a target service by the asynchronous product of existing services, and prove that this construction is computable.
{"title":"Fresh-Variable Automata: Application to Service Composition","authors":"W. Belkhir, Yannick Chevalier, M. Rusinowitch","doi":"10.1109/SYNASC.2013.28","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.28","url":null,"abstract":"We introduce fresh-variable automata, a natural extension of finite-state automata over infinite alphabet. In this model the transitions are labeled with constants or variables that can be refreshed in some specified states. We prove several closure properties for this class of automata and study their decision problems. We show the applicability of our model in modeling Web services handling data from an infinite domain. We introduce a notion of simulation that enables us to reduce the Web service composition problem to the construction of a simulation of a target service by the asynchronous product of existing services, and prove that this construction is computable.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129386233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliable and scalable Ambient Intelligence requires a distributed system of agents that are capable of working together or autonomously, depending on the requirements of the situation. In previous research we have argued in favor of a representation for context information that can be distributed among agents, so that each agent knows only the information that is relevant to its activity. Interesting information or relevant situations are recognized by means of context patterns: graph patterns with potentially unknown nodes and with edges labeled with regular expressions. In this context, a major challenge is for agents to use a graph matching algorithm that is suited to the capabilities of the devices on which the agents are running. Moreover, the algorithm must be able to provide partial matches. This paper presents an algorithm specifically designed for this problem, which grows partial matches to obtain the maximum sub-graph of the context graph that matches (part of) the context pattern. Experiments were performed with the algorithm and its performance was compared with that of other algorithms adapted to our problem.
{"title":"Context Matching for Ambient Intelligence Applications","authors":"Andrei Olaru","doi":"10.1109/SYNASC.2013.42","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.42","url":null,"abstract":"Reliable and scalable Ambient Intelligence means a distributed system of agents that are capable of working together or autonomously, depending on the requirements of the situation. In previous research we have argued in favor of the use of a representation for context information that can be distributed among agents, so that each agent knows only the information that is relevant to its activity. Recognizing interesting information or relevant situations is done by using context patterns -- graph patterns with potentially unknown nodes and edges labeled with regular expressions. In this context, a major challenge is for agents to use a graph matching algorithm that is adequate to the possibilities of the devices on which the agents are running. Moreover, it is necessary that the algorithm is able to provide partial matches. This paper presents an algorithm specifically designed for this problem, that uses growing partial matches to obtain the maximum sub-graph of the context graph that matches (part of) the context pattern. Experiments were performed with the algorithm and its performance has been compared with that of other algorithms adapted to our problem.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116127575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-equidistant discretization of real signals has a wide range of applications, for instance in computer graphics, Fourier analysis, and identification and control theory; these applications also share the ability to describe dynamical systems. In this paper we provide a fast algorithm, based on an existing mathematical model, to compute a non-uniform grid for representing different types of signals. To do so, we introduce new concepts for constructing an effective numerical solution. Additionally, two experiments are performed to investigate the accuracy of the method. Finally, we also present a parallel implementation in CUDA which can further improve the execution time.
{"title":"Fast Computing of Non-uniform Sampling Positions for Real Signals","authors":"P. Kovács, Viktor Vad","doi":"10.1109/SYNASC.2013.27","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.27","url":null,"abstract":"There is a wide range of applications of non-equidistant discretization of real signals. For instance, in computer graphics, Fourier analysis, identification and control theories, etc. They have the common ability to describe dynamical systems as well. In this paper we provide a fast algorithm based on an existing mathematical model to compute a non-uniform grid for representing different types of signals. In order to do that we need new concepts for constructing an effective numerical solution. Additionally, two experiments are performed to investigate the accuracy of the method. Finally, we also present a parallel implementation in CUDA which can further improve the execution time.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128951154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces the Cph CT Toolbox, an open-source collection of the most commonly used filtered back-projection algorithms for CT image reconstruction. The toolbox targets teaching, research and production environments, providing simple code for teaching and highly optimized code for research and industrial applications. It has a flexible plugin infrastructure, making it applicable to all existing CT systems that use filtered back-projection reconstruction. The package is available online for download and modification under the GPLv2 license.
{"title":"Cph CT Toolbox: CT Reconstruction for Education, Research and Industrial Applications","authors":"J. Bardino, Martin Rehr, B. Vinter","doi":"10.1109/SYNASC.2013.48","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.48","url":null,"abstract":"This paper introduces the Cph CT Toolbox - an open source collection of the most commonly used filtered back-projection algorithms for CT image reconstruction. The toolbox targets teaching, research and production environments providing simple codes for teaching and highly optimized codes for research and industrial applications. The toolbox has a flexible plugin infrastructure making it applicable to all existing CT systems using filtered back-projection reconstruction. The package is available online for download and modification under the GPLv2 license.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115484387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many applications have modules that could benefit greatly from the massive parallel numeric computing power provided by GPUs; renderers, signal-processing tools and simulators are only a few such applications. Due to GPU weaknesses such as the stackless execution model or the poor capabilities for pointer exchange with the host, it is sometimes not feasible to convert an entire algorithm to the GPU, even if it is highly parallel and some of its parts can be greatly accelerated on the GPU. In such situations a programmer needs a framework that allows the code flow of a thread to be split into parts, each of which runs on the most suitable computing resource, CPU or GPU. For GPU execution, data from multiple host threads is collected, run on the GPU, and the results are returned to the original threads so that they can resume execution on the host. In this paper we propose such an algorithm, analyze it and evaluate its practical results.
{"title":"Algorithm for Cooperative CPU-GPU Computing","authors":"Razvan-Mihai Aciu, H. Ciocarlie","doi":"10.1109/SYNASC.2013.53","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.53","url":null,"abstract":"Many applications have modules which could benefit greatly from the massive parallel numeric computing power provided by GPUs. Renderers, signal processing or simulators are only a few such applications. Due to the weaknesses of the GPUs such as stackless execution model or poor capabilities for pointer exchange with the host, sometimes is not feasible to convert an entire algorithm for GPU, even if it is highly parallel and some of its parts can be greatly accelerated on GPU. In such situations a programmer should have a framework which allows him to split the code flow of a thread in parts and each of these parts will run on the most suitable computing resource, CPU or GPU. For GPU execution, multiple data from host threads will be collected, run on GPU and the results returned to the original threads so they will be able to resume execution on host. In this paper we propose such an algorithm, analyze it and evaluate its practical results.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126565811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Service providers face the challenge of meeting service-level agreements (SLAs) under uncertainty about the application's actual performance. The performance depends heavily on the characteristics of the hardware on which the application is deployed, on the application architecture, and on the user workload. Although many models have been proposed for the performance prediction of software applications, most of them focus on average measures, e.g., mean response times. However, SLAs are often set in terms of percentiles, such that a given portion of requests receives a predefined service level, e.g., 95% of the requests should experience a response time of at most 10 ms. To enable the effective prediction of this type of measure, in this paper we use fluid models to compute the probability distribution of performance measures relevant for SLAs. Our models are automatically built from a Palladio Component Model (PCM) instance, thus allowing SLA assessment directly from the PCM specification. This provides a scalable alternative for SLA assessment within the PCM framework, which is currently supported by means of simulation only.
{"title":"Assessing SLA Compliance from Palladio Component Models","authors":"Juan F. Pérez, G. Casale","doi":"10.1109/SYNASC.2013.60","DOIUrl":"https://doi.org/10.1109/SYNASC.2013.60","url":null,"abstract":"Service providers face the challenge of meeting service-level agreements (SLAs) under uncertainty on the application actual performance. The performance heavily depends on the characteristics of the hardware on which the application is deployed, on the application architecture, as well as on the user workload. Although many models have been proposed for the performance prediction of software applications, most of them focus on average measures, e.g., mean response times. However, SLAs are often set in terms of percentiles, such that a given portion of requests receive a predefined service level, e.g., 95% of the requests should face a response time of at most 10 ms. To enable the effective prediction of this type of measures, in this paper we use fluid models for the computation of the probability distribution of performance measures relevant for SLAs. Our models are automatically built from a Palladio Component Model (PCM) instance, thus allowing the SLA assessment directly from the PCM specification. This provides an scalable alternative for SLA assessment within the PCM framework, as currently this is supported by means of simulation only.","PeriodicalId":293085,"journal":{"name":"2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127417300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}