{"title":"Intelligent Stock Market Analysis System - A Fundamental and Macro-economical Analysis Approach","authors":"M. Tirea, V. Negru","doi":"10.1109/SYNASC.2014.75","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.75","url":null,"abstract":"Stock market forecasting involves a series of techniques that help determine how a stock price will evolve. This paper describes a multi-agent system that uses numerical, financial and economic data to evaluate a company's market position, profitability, performance and expected evolution. Determining the effect of political, governmental and social decisions, together with detecting how the price is constructed from technical and fundamental analysis methods and the bid/ask situation, helps produce more precise buy/sell signals, reduce false signals and identify risk/gain positions over different periods of time. A prototype was developed to validate the results.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114026118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reducing Partial Equivalence to Partial Correctness","authors":"Stefan Ciobaca","doi":"10.1109/SYNASC.2014.30","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.30","url":null,"abstract":"Two programs P and Q are partially equivalent if, when both terminate on the same input, they end up with equivalent outputs. Establishing partial equivalence is useful in, e.g., compiler verification, where P is the source program and Q is the target program, or in compiler optimisation, where P is the initial program and Q is the optimised program. A program R is partially correct if, when it terminates, it ends up in a \"good\" state. We show that, somewhat surprisingly, the problem of establishing partial equivalence can be reduced to the problem of showing partial correctness in an aggregated language, where programs R consist of pairs of programs 〈P, Q〉. Our method relies crucially on the recently introduced matching logic, which allows the operational semantics of any language to be defined faithfully. We show that the aggregated language can be constructed mechanically from the semantics of the initial languages. Furthermore, matching logic gives us, for free, a proof system for partial correctness in the resulting language, which can then be used to prove partial equivalence.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117058812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
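As an illustration of the notion of partial equivalence in the abstract above (not of the paper's matching-logic construction), here is a minimal, hypothetical Python sketch: a source program P and an optimised program Q are partially equivalent when they agree on every input on which both terminate; since both toy programs below always terminate, agreement can be checked directly. All names are illustrative.

```python
# Toy illustration of partial equivalence: P is a "source" program and
# Q an "optimised" one; they are partially equivalent if they produce
# equal outputs whenever both terminate on the same input.

def p_source(n):
    # sum 0..n-1 with a loop
    total = 0
    for i in range(n):
        total += i
    return total

def q_optimised(n):
    # closed-form version a compiler might produce
    return n * (n - 1) // 2

# Both programs terminate on every input here, so partial equivalence
# on this input range reduces to checking output agreement directly.
assert all(p_source(n) == q_optimised(n) for n in range(100))
```

A verifier following the paper's approach would instead prove a partial-correctness property of the aggregated pair 〈P, Q〉 once, rather than test finitely many inputs.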
{"title":"Using Models at Runtime to Support Adaptable Monitoring of Multi-clouds Applications","authors":"Lorenzo Cianciaruso, Francesco di Forenza, E. D. Nitto, Marco Miglierina, Nicolas Ferry, Arnor Solberg","doi":"10.1109/SYNASC.2014.60","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.60","url":null,"abstract":"The ability to run and manage multi-cloud applications (i.e., applications that run on multiple clouds) allows the peculiarities of each cloud solution to be exploited, improving non-functional aspects such as availability, cost, and scalability. Monitoring such multi-cloud applications is fundamental both to track the health of the applications and their underlying infrastructures and to decide when and how to adapt their behaviour and deployment. Clearly, not only the application but also the corresponding monitoring infrastructure should adapt dynamically in order to (i) be optimised for the application context (e.g., adapting the monitoring frequency to reduce network load) and (ii) co-evolve with the cloud application (e.g., if a service migrates from one provider to another, the monitoring activities have to be adapted accordingly). In this paper, we present a model-based platform for the dynamic provisioning, deployment, and monitoring of multi-cloud applications whose monitoring activities can be automatically and dynamically adapted to best fit the actual deployment of the application.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122062186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Expressing BBUFs Lookup Using the π-Calculus","authors":"Gabriel Ciobanu, Dan Cojocar","doi":"10.1109/SYNASC.2014.74","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.74","url":null,"abstract":"In this paper we express the Babes-Bolyai University File System (BBUFs) lookup mechanism using the π-calculus. We describe the lookup process in a decentralized peer-to-peer system: how a request message is forwarded from a client to a system node, and how the response is returned. The formally specified protocol is verified using the Mobility Workbench model checker.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126215231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization Techniques within the Hadoop Eco-system: A Survey","authors":"Giulia Rumi, Claudia Colella, D. Ardagna","doi":"10.1109/SYNASC.2014.65","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.65","url":null,"abstract":"Nowadays, we live in a digital world producing data at an impressive speed: data are large, change quickly, and are often too complex to be processed by existing tools. The problem is to extract knowledge from all these data in an efficient way. MapReduce is a data-parallel programming model for clusters of commodity machines that was created to address this problem. In this paper we provide an overview of the Hadoop ecosystem. We introduce the most significant approaches supporting automatic, on-line resource provisioning. Moreover, we analyse optimization approaches proposed in frameworks built on top of MapReduce, such as Pig and Hive, which point out the importance of scheduling techniques in MapReduce when multiple workflows are executed concurrently. The default Hadoop schedulers are therefore discussed, along with some enhancements proposed by the research community. The analysis highlights how research contributions try to address common Hadoop points of weakness. As our comparison shows, none of the frameworks surpasses the others, and a fair evaluation is difficult to perform; the choice of framework must be driven by the specific application goal, as there is no single solution that addresses all the issues typical of MapReduce.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126573382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
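As background for the MapReduce model surveyed in the abstract above, here is a minimal, purely local Python sketch of the map/shuffle/reduce phases on the classic word-count problem; function names are illustrative, and a real Hadoop job would distribute these phases across a cluster.

```python
from collections import defaultdict
from itertools import chain

# Minimal local illustration of the MapReduce programming model that
# Hadoop implements: user-supplied map and reduce functions, with the
# framework responsible for the shuffle (grouping values by key).

def map_phase(record):
    # emit a (word, 1) pair for every word in an input line
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # group all values emitted under the same key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # combine the grouped values for one key
    return (key, sum(values))

lines = ["to be or not to be", "to be is to do"]
mapped = chain.from_iterable(map_phase(line) for line in lines)
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(mapped).items())
print(counts["to"])  # prints 4
```

The schedulers discussed in the survey decide when and where each map and reduce task of such a job runs when many jobs share a cluster.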
{"title":"Using the Distribution of Cells by Dimension in a Cylindrical Algebraic Decomposition","authors":"D. Wilson, M. England, R. Bradford, J. Davenport","doi":"10.1109/SYNASC.2014.15","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.15","url":null,"abstract":"We investigate the distribution of cells by dimension in cylindrical algebraic decompositions (CADs). We find that they follow a standard distribution which seems largely independent of the underlying problem or CAD algorithm used. Rather, the distribution is inherent to the cylindrical structure and determined mostly by the number of variables. This insight is then combined with an algorithm that produces only full-dimensional cells to give an accurate method of predicting the number of cells in a complete CAD. Since constructing only full-dimensional cells is relatively inexpensive (involving no costly algebraic number calculations) this leads to heuristics for helping with various questions of problem formulation for CAD, such as choosing an optimal variable ordering. Our experiments demonstrate that this approach can be highly effective.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"2 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114026491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Topological Image Analysis and (Normalised) Representations for Plant Phenotyping","authors":"Ines Janusch, W. Kropatsch, Wolfgang Busch","doi":"10.1109/SYNASC.2014.83","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.83","url":null,"abstract":"This paper discusses the use of topological image analysis to derive characteristics needed in plant phenotyping. Due to certain features of root systems (deformation over time, overlaps of branches in a 2D image of the root system) a topological analysis is needed to correctly derive these characteristics. The advantages of such a topological analysis are highlighted in this paper and root phenotyping is presented as a new application for computational topology. Characteristics used in plant phenotyping that can be derived from root images using methods of topological image analysis are further presented. A Reeb graph based representation of root images is shown as an example for such a topological analysis. Based on a graph representation a new, normalised representation of root images is introduced.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133358957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spiking Neural P Systems - A Quick Survey and Some Research Topics","authors":"G. Paun","doi":"10.1109/SYNASC.2014.11","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.11","url":null,"abstract":"After presenting the basic definition of spiking neural P systems (SN P systems), illustrated with two examples, we recall some results concerning the computing power and the size of universal SN P systems. We end this note with a couple of research topics.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116725323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Population-Based Incremental Learning Method for Constrained Portfolio Optimisation","authors":"Yan Jin, R. Qu, J. Atkin","doi":"10.1109/SYNASC.2014.36","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.36","url":null,"abstract":"This paper investigates a hybrid algorithm which utilizes exact and heuristic methods to optimise asset selection and capital allocation in portfolio optimisation. The proposed method is composed of a customised population based incremental learning procedure and a mathematical programming application. It is based on the standard Markowitz model with additional practical constraints such as cardinality on the number of assets and quantity of the allocated capital. Computational experiments have been conducted and analysis has demonstrated the performance and effectiveness of the proposed approach.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129423981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
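For readers unfamiliar with the population-based incremental learning (PBIL) procedure named in the abstract above, here is a minimal, hypothetical Python sketch of standard PBIL on a toy maximise-the-ones problem; the paper's customised procedure, with Markowitz-model cardinality and quantity constraints, is considerably more involved, and all parameter values below are illustrative.

```python
import random

# Standard PBIL: maintain a probability vector over bit positions,
# sample a population from it, and shift the vector towards the best
# sample each generation. Here a 1-bit stands in for "asset selected"
# and fitness is simply the number of selected bits.

def pbil(n_bits=16, pop_size=30, lr=0.1, generations=60, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # probability vector
    best, best_fit = None, -1
    for _ in range(generations):
        # sample a population of bit strings from the probability vector
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)     # fitness = number of 1-bits
        elite = pop[0]
        if sum(elite) > best_fit:
            best, best_fit = elite, sum(elite)
        # move the probability vector towards the elite sample
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, elite)]
    return best, best_fit

best, fit = pbil()
print(fit)  # typically converges to the optimum of n_bits
```

In the constrained portfolio setting, the sampled bit strings would additionally be repaired or penalised to satisfy the cardinality constraint, with capital allocation handled by the mathematical-programming component.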
{"title":"Interactive Data Exploration for High-Performance Fluid Flow Computations through Porous Media","authors":"N. Perovic, J. Frisch, R. Mundani, E. Rank","doi":"10.1109/SYNASC.2014.68","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.68","url":null,"abstract":"The huge volumes of data produced by high-performance computing (HPC) applications such as fluid flow simulations usually hinder the interactive processing and exploration of simulation results. Such interactive data exploration not only allows scientists to 'play' with their data but also to visualise huge (distributed) data sets in an efficient and easy way. We therefore propose an HPC data exploration service based on a sliding-window concept that enables researchers to access remote data (available on a supercomputer or cluster) during simulation runtime without exceeding any bandwidth limitations between the HPC back-end and the user front-end.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127375489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}