Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266894
V. Ustimenko
The talk is dedicated to the ideas of homomorphic encryption and multivariate key-dependent cryptography. We survey recent theoretical results on these topics together with their applications to cloud security. Post-quantum cryptography cannot rely on many security tools based on number theory, because of the factorization algorithm developed by Peter Shor. This fact, together with the fast development of computer algebra, makes multivariate cryptography an important direction of research. The idea of key-dependent cryptography looks promising for applications in clouds, because the size of the key allows one to control both execution speed and security level. We will discuss recent results on key-dependent multivariate cryptography. Finally, special classes of finite rings have turned out to be very useful in homomorphic encryption and for the development of multivariate key-dependent algorithms.
{"title":"On some mathematical aspects of data protection in cloud computing","authors":"V. Ustimenko","doi":"10.1109/HPCSim.2012.6266894","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266894","url":null,"abstract":"The talk is dedicated to ideas of Holomorphic Encryption and Multivariate Key Dependent Cryptography. We observe recent theoretical results on the mentioned above topics together with their Applications to Cloud Security. Post Quantum Cryptography could not use many security tools based on Number Theory, because of the factorization algorithm developed by Peter Shor. This fact and fast development of Computer algebra make multivariate cryptography an important direction of research. The idea of key dependent cryptography looks promising for applications in Clouds, because the size of the key allows to control the speed of execution and security level. We will discuss recent results on key dependent multivariate cryptography. Finally, special classes of finite rings turned out to be very useful in holomorphic encryption and for the development of multivariate key dependent algorithms.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121501010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
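The homomorphic property the talk refers to can be illustrated with textbook RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This is a generic didactic sketch with tiny, insecure parameters; it does not reproduce the multivariate or ring-based schemes the talk actually discusses.

```python
# Toy illustration of a homomorphic property: textbook RSA is
# multiplicatively homomorphic. Tiny primes, purely didactic, insecure.

p, q = 61, 53                 # toy primes
n = p * q                     # modulus 3233
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 6, 7
c1, c2 = enc(m1), enc(m2)

# multiplying ciphertexts multiplies the underlying plaintexts
assert dec(c1 * c2 % n) == (m1 * m2) % n
print("dec(c1*c2) =", dec(c1 * c2 % n))
```

Fully homomorphic schemes extend this idea so that both addition and multiplication (and hence arbitrary circuits) can be evaluated on encrypted data, which is what makes the concept attractive for cloud security.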
Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266919
Paulina A. León, G. J. Martínez, S. V. C. Vergara
De Bruijn diagrams have been a useful tool for the systematic analysis of one-dimensional cellular automata (CA). They can be used to calculate particular kinds of configurations: ancestors, complex patterns, cycles, Garden of Eden configurations, and formal languages. However, there has been little progress in two dimensions because the complexity increases exponentially. In this paper, we offer a way to explore such patterns systematically with de Bruijn diagrams, starting from initial configurations. The analysis concentrates mainly on two evolution rules: the famous Game of Life (a complex CA) and the Diffusion Rule (a chaotic CA). We display some preliminary results and the benefits of using de Bruijn diagrams for these CA.
{"title":"Complex dynamics in life-like rules described with de Bruijn diagrams: Complex and chaotic cellular automata","authors":"Paulina A. León, G. J. Martínez, S. V. C. Vergara","doi":"10.1109/HPCSim.2012.6266919","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266919","url":null,"abstract":"De Bruijn diagrams have been used as a useful tool for the systematic analysis of one-dimensional cellular automata (CA). They can be used to calculate particular kind of configurations, ancestors, complex patterns, cycles, Garden of Eden configurations and formal languages. However, there is few progress in two dimensions because its complexity increases exponentially. In this paper, we will offer a way to explore systematically such patterns by de Bruijn diagrams from initial configurations. Such analysis is concentrated mainly in two evolution rules: the famous Game of Life (complex CA) and the Diffusion Rule (chaotic CA). We will display some preliminary results and benefits to use de Bruijn diagrams in these CA.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115509318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
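The one-dimensional construction the paper builds on can be sketched in a few lines: for an elementary (binary, radius-1) CA, the de Bruijn diagram has the 2-bit overlaps of neighborhoods as nodes, and the edge from (a,b) to (b,c) carries the rule's output on neighborhood (a,b,c); paths whose edge labels spell a configuration describe its ancestors. The rule number below is just an example; the paper's two-dimensional case is far larger.

```python
# De Bruijn diagram of an elementary CA: nodes are 2-bit overlaps,
# the edge (a,b) -> (b,c) is labeled with the rule output on (a,b,c).

def de_bruijn_edges(rule_number):
    rule = [(rule_number >> i) & 1 for i in range(8)]   # Wolfram numbering
    edges = []
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                edges.append(((a, b), (b, c), rule[(a << 2) | (b << 1) | c]))
    return edges

# Subdiagram of 0-labeled edges for the complex elementary Rule 110:
# cycles through these edges yield ancestors of the all-zeros configuration.
for src, dst, out in de_bruijn_edges(110):
    if out == 0:
        print(src, "->", dst)
```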
Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266914
R. Alonso-Sanz
In conventional discrete dynamical systems, the new configuration depends solely on the configuration at the preceding time step. This contribution considers an extension of the standard framework that takes past history into account in a simple way: the mapping defining the transition rule of the system remains unaltered, but it is applied to a certain summary of past states. This kind of embedded memory implementation, which is straightforward to code, allows an easy and systematic study of the effect of memory in discrete dynamical systems, and may inspire useful ideas for using discrete systems with memory (DSM) as a tool for modeling non-Markovian phenomena. Beyond their potential applications, DSM have an aesthetic and mathematical interest of their own, which will be briefly reviewed. The contribution focuses on systems that are discrete par excellence, i.e., with space, time, and state variable all discrete. These discrete universes are known as cellular automata (CA) in their more structured forms and as Boolean networks (BN) more generally. Thus, the mappings that define the rules of CA (or BN) are not formally altered when implementing embedded memory, but they are applied to cells (or nodes) that exhibit trait states computed as a function of their own previous states; so to say, cells (or nodes) "canalize" memory to the mapping. Automata on networks and on proximity graphs, together with structurally dynamic cellular automata, will also be studied with memory. If time permits, systems that remain discrete in space and time, but not in the state variable (e.g., maps and spatial games), will also be scrutinized with memory. A list of references on DSM may be found at http://uncomp.uwe.ac.uk/alonso-sanz.
{"title":"Cellular automata and other discrete dynamical systems with memory","authors":"R. Alonso-Sanz","doi":"10.1109/HPCSim.2012.6266914","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266914","url":null,"abstract":"In conventional discrete dynamical systems, the new configuration depends solely on the configuration at the preceding time step. This contribution considers an extension to the standard framework of dynamical systems by taking into consideration past history in a simple way: the mapping defining the transition rule of the system remains unaltered, but it is applied to a certain summary of past states. This kind of embedded memory implementation, of straightforward computer codification, allows for an easy systematic study of the effect of memory in discrete dynamical systems, and may inspire some useful ideas in using discrete systems with memory (DSM) as a tool for modeling non-markovian phenomena. Besides their potential applications, DSM have an aesthetic and mathematical interest on their own, as will be briefly over viewed. The contribution focuses on the study of systems discrete par excellence, i.e., with space, time and state variable being discrete. These discrete universes are known as cellular automata (CA) in their more structured forms, and Boolean networks (BN) in a more general way. Thus, the mappings which define the rules of CA (or BN) are not formally altered when implementing embedded memory, but they are applied to cells (or nodes) that exhibit trait states computed as a function of their own previous states. So to say, cells (or nodes) - canalize - memory to the mapping. Automata on networks and on proximity graphs, together with structurally dynamic cellular automata, will be also studied with memory. If time permits, systems that remain discrete in space and time, but not in the state variable (e.g., maps and spatial games), will be also scrutinized with memory. A list of references on DSM may be found in http://uncomp.uwe.ac.uk/alonso-sanz.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124746430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
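The memory mechanism can be sketched for an elementary CA: the transition rule itself is untouched, but it is fed per-cell trait states, here the majority of each cell's own history with ties resolved to the most recent state, which is one of the simplest summaries considered in this line of work. The rule number and initial configuration below are arbitrary examples.

```python
# Elementary CA with embedded "majority" memory: the rule is unchanged,
# but is applied to trait states summarizing each cell's past.

def step(config, rule_number):
    """One ahistoric step of an elementary CA on a ring."""
    rule = [(rule_number >> i) & 1 for i in range(8)]
    n = len(config)
    return [rule[(config[(i - 1) % n] << 2) | (config[i] << 1) | config[(i + 1) % n]]
            for i in range(n)]

def run_with_memory(initial, rule_number, steps):
    history = [list(initial)]
    for _ in range(steps):
        traits = []
        for i in range(len(initial)):
            ones = sum(h[i] for h in history)   # majority of the cell's past
            traits.append(1 if 2 * ones > len(history)
                          else 0 if 2 * ones < len(history)
                          else history[-1][i])  # tie -> most recent state
        history.append(step(traits, rule_number))
    return history

for row in run_with_memory([0, 0, 0, 1, 0, 0, 0], 90, 5):
    print("".join("#" if c else "." for c in row))
```

With a single step of history the trait states coincide with the current configuration, so the first step reproduces the ahistoric (memoryless) CA; memory effects appear from the second step on.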
Pub Date: 2012-07-02 DOI: 10.1109/HPCSIM.2012.6266908
P. Radeva, M. Drozdzal, S. Seguí, L. Igual, C. Malagelada, F. Azpiroz, Jordi Vitrià
Today, a robust learner trained in a real supervised machine learning application should count on a rich collection of positive and negative examples. Although in many applications it is not difficult to obtain huge amounts of data, labeling those data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example is medical imaging, where annotating anomalies like tumors, polyps, atherosclerotic plaque, or informative frames in wireless endoscopy requires highly trained experts. Building a representative training set from medical videos (e.g., Wireless Capsule Endoscopy) means that thousands of frames must be labeled by an expert. It is quite normal that data in new videos differ and thus are not represented by the training set. In this paper, we review the main approaches to active learning and illustrate how active learning can help reduce expert effort in constructing training sets. We show that by applying active learning criteria, the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of Wireless Capsule Endoscopy videos containing more than 30,000 frames each with fewer than 100 expert "clicks".
{"title":"Active labeling: Application to wireless endoscopy analysis","authors":"P. Radeva, M. Drozdzal, S. Seguí, L. Igual, C. Malagelada, F. Azpiroz, Jordi Vitrià","doi":"10.1109/HPCSIM.2012.6266908","DOIUrl":"https://doi.org/10.1109/HPCSIM.2012.6266908","url":null,"abstract":"Today, robust learners trained in a real supervised machine learning application should count with a rich collection of positive and negative examples. Although in many applications, it is not difficult to obtain huge amount of data, labeling those data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such cases are data from medical imaging applications where annotating anomalies like tumors, polyps, atherosclerotic plaque or informative frames in wireless endoscopy need highly trained experts. Building a representative set of training data from medical videos (e.g. Wireless Capsule Endoscopy) means that thousands of frames to be labeled by an expert. It is quite normal that data in new videos come different and thus are not represented by the training set. In this paper, we review the main approaches on active learning and illustrate how active learning can help to reduce expert effort in constructing the training sets. We show that applying active learning criteria, the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of Wireless Capsule Endoscopy video containing more than 30000 frames each one with less than 100 expert ”clicks”.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127945169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
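The kind of criterion reviewed in the paper can be sketched with pool-based uncertainty sampling: at each round the learner queries the expert on the item it is least certain about. The 1-D "frames", the threshold model, and the labeling oracle below are hypothetical stand-ins for the real video features and the human expert, not the paper's system.

```python
# Pool-based active learning with uncertainty sampling: query the item
# closest to the current decision boundary, then refit the boundary.
import random

random.seed(0)
pool = [random.random() for _ in range(1000)]    # unlabeled "frames"

def oracle(x):                                   # the expert's hidden rule
    return int(x > 0.6)

labeled = []
for _ in range(10):                              # 10 expert "clicks"
    if labeled:                                  # boundary: midpoint between classes
        pos = [x for x, y in labeled if y] or [1.0]
        neg = [x for x, y in labeled if not y] or [0.0]
        thr = (min(pos) + max(neg)) / 2
    else:
        thr = 0.5
    x = min(pool, key=lambda v: abs(v - thr))    # most uncertain pool item
    pool.remove(x)
    labeled.append((x, oracle(x)))

pos = [x for x, y in labeled if y]
neg = [x for x, y in labeled if not y]
thr = (min(pos) + max(neg)) / 2
print("threshold learned from 10 labels:", round(thr, 3))
```

Because each query bisects the region of uncertainty, ten labels locate the hidden boundary far more precisely than ten randomly chosen labels would, which is the effort reduction the paper quantifies on endoscopy video.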
Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266970
Sébastien Limet
Efficient parallel programming has always been very tricky, and only expert programmers are able to make the most of the computing power of modern computers. This situation is an obstacle to the spread of high performance computing in other sciences as well as in industry. The fast changes in computer architecture (multicores, manycores, GPUs, clusters, ...) make it even more difficult, even for an experienced programmer, to remain at the forefront of these evolutions. On the other hand, a huge amount of work has been done to develop programming languages and libraries that help programmers write parallel programs that are more or less efficient. The key point in this line of research is to find a good balance between the simplicity of programming and the efficiency of the resulting programs. Many approaches have been proposed, but none really prevails over the others. This paper is a small overview of some directions that seem promising for both simplifying parallel programming and producing very efficient programs.
{"title":"High level languages for efficient parallel programming","authors":"Sébastien Limet","doi":"10.1109/HPCSim.2012.6266970","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266970","url":null,"abstract":"Efficient parallel programming has always been very tricky and only expert programmers are able to take the most of the computing power of modern computers. Such a situation is an obstacle to the development of the high performance computing in other sciences as well as in the industry. The fast changes in the computer architecture (multicores, manycores, GPU, clusters, ...) make even more difficult, even for an experienced programmer, to remain at the forefront of these evolutions. On the other hand, a huge amount of work has been done to develop programming languages or libraries that tend to help the programmers to write parallel programs which are more or less efficient. The key point in this kind of research is to find a good balance between the simplicity of the programming and the efficiency of the resulting programs. Many approaches have been proposed but none really prevail over the others. This paper is a small overview of some directions that seem promising to both simplify parallel programming and produce very efficient programs.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128098519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266938
A. Duran, Michael Klemm
In recent years, an observable trend in High Performance Computing (HPC) architectures has been the inclusion of accelerators, such as GPUs and field-programmable gate arrays (FPGAs), to improve the performance of scientific applications. To rise to this challenge, Intel announced the Intel® Many Integrated Core Architecture (Intel® MIC Architecture). In contrast with other accelerated platforms, the Intel MIC Architecture is a general-purpose manycore coprocessor that improves the programmability of such devices by supporting the well-known shared-memory execution model that underlies most nodes in HPC machines. In this presentation, we will introduce key properties of the Intel MIC Architecture and cover programming models for the parallelization and vectorization of applications targeting this architecture.
{"title":"The Intel® Many Integrated Core Architecture","authors":"A. Duran, Michael Klemm","doi":"10.1109/HPCSim.2012.6266938","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266938","url":null,"abstract":"In recent years, an observable trend in High Performance Computing (HPC) architectures has been the inclusion of accelerators, such as GPUs and field programmable arrays (FPGAs), to improve the performance of scientific applications. To rise to this challenge Intel announced the Intel® Many Integrated Core Architecture (Intel® MIC Architecture). In contrast with other accelerated platforms, the Intel MIC Architecture is a general purpose, manycore coprocessor that improves the programmability of such devices by supporting the well-known shared-memory execution model that is the base of most nodes in HPC machines. In this presentation, we will introduce key properties of the Intel MIC Architecture and we will also cover programming models for parallelization and vectorization of applications targeting this architecture.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127024420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266940
D. Bartuschat, D. Ritter, U. Rüde
We present an algorithm for the multi-physics simulation of charged particles in electrokinetic flows. It couples the simulation of charged rigid particles with fluid flow in an electric field. The parallel simulation algorithm is implemented in the WALBERLA software framework. To solve the partial differential equation that models the electric potential, a cell-centered multigrid algorithm has been incorporated into the framework. After an introduction to the central concepts of WALBERLA, we describe the simulation setup and the simulation algorithm. Finally, we show the parallel scaling behavior of the algorithm on a high performance computer, with emphasis on the multigrid implementation.
{"title":"Parallel multigrid for electrokinetic simulation in particle-fluid flows","authors":"D. Bartuschat, D. Ritter, U. Rüde","doi":"10.1109/HPCSim.2012.6266940","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266940","url":null,"abstract":"We present an algorithm for multi-physics simulation of charged particles in electrokinetic flows. It includes a coupled simulation of charged rigid particles in fluid flows in an electric field. The parallel simulation algorithm is implemented in the WALBERLA software framework. For solving the partial differential equation that models the electric potential, a cell-centered multigrid algorithm has been incorporated into the framework. After an introduction to the central concepts of WALBERLA, we describe the simulation setup and the simulation algorithm. Finally, we show the parallel scaling behavior of the algorithm on a high performance computer, with emphasis on the multigrid implementation.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133580353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
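The multigrid idea behind the potential solver can be sketched on the simplest model problem: a 1-D Poisson equation -u'' = f with zero Dirichlet boundaries, smoothed with weighted Jacobi and corrected recursively on coarser grids. This is a toy, vertex-centered, serial analogue written for illustration; it is not the paper's cell-centered 3-D implementation in WALBERLA.

```python
# Minimal 1-D geometric multigrid V-cycle for -u'' = f on (0,1),
# zero Dirichlet boundaries, weighted-Jacobi smoothing.
import numpy as np

def smooth(u, f, h, iters=3, w=2/3):             # weighted Jacobi
    for _ in range(iters):
        u[1:-1] += w * 0.5 * (f[1:-1] * h * h + u[:-2] + u[2:] - 2 * u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    n = len(u) - 1
    if n <= 2:                                   # coarsest grid: exact solve
        u[1] = f[1] * h * h / 2
        return u
    u = smooth(u, f, h)                          # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)                    # restriction (full weighting)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    e = np.zeros_like(u)                         # prolongation (linear interp.)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)                   # post-smoothing

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)               # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```

The attraction, and the reason multigrid scales to the machine sizes discussed in the paper, is that each V-cycle reduces the error by a roughly grid-independent factor at O(n) cost.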
Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266934
Peiqing Zhang, B. Helvik
Nowadays, information and communication technology (ICT) has become more and more energy conscious. In this paper, we focus on peer-to-peer (P2P) systems, which contribute a major fraction of Internet traffic. The paper proposes analytical models of energy consumption in P2P systems. The models consider content pollution, the most common attack on P2P systems, which has received little attention in previous work on green P2P. Analysis of the models shows that the popular sleep method in green computing can degrade peer-to-peer performance: when the online time of clean-copy holders is cut too far, the system collapses. To find the balance between energy saving and system maintenance, the concept of energy effectiveness is introduced, and an approach for controlling energy consumption while keeping the system stable is suggested. We show that the whole system benefits if some altruistic, smart peers are willing to spend a little extra on energy when most peers cut their power-on time too far. This approach can complement the popular sleep methods in green computing.
{"title":"Towards green P2P: Analysis of energy consumption in P2P and approaches to control","authors":"Peiqing Zhang, B. Helvik","doi":"10.1109/HPCSim.2012.6266934","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266934","url":null,"abstract":"Nowadays, information and communication technology (ICT) has become more and more energy conscious. In this paper, we focus on peer-to-peer systems which contribute a major fraction of the Internet traffic. This paper proposes analytical models of energy consumption in P2P system. The model considers content pollution, the most common attack in P2P system, which has received little attention in previous work on green P2P. The analysis of the models shows that the popular sleep method in green computing potentially affects peer-to-peer performance. When the online time of clean copy holders is over cut, the system collapses. To find the balance between energy saving and system maintenance, the concept energy effectiveness is introduced. An approach for controlling energy consumption while keeping the system stable is suggested. We show that the whole system can be benefited if some warm-hearted and smart peers are willing to spend a little extra cost on energy, when most peers over cut their power on time. This approach can perfectly complement the popular sleep methods in green computing.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131263926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266927
T. Frikha, N. B. Amor, K. Loukil, Agnès Ghorbel, M. Abid, J. Diguet
The emergence of multimedia applications, particularly in mobile embedded systems, poses new challenges for the design of such systems. The major difficulty is the embedded system's reduced energy and computational resources, which must be used carefully to execute complex applications, often in unpredictable environments. The system architecture must therefore be energy efficient and flexible enough to adapt resources to application requirements and to manage environment and mobility constraints. Augmented reality is a very promising 3D embedded multimedia application, based on overlaying specific 3D animations on a video stream. In this paper, we describe our concept of a flexible architecture and give implementation results based on a Pixel Shader Accelerator. This is the first step of the project, and we compare various hardware and software implementations.
{"title":"Hardware accelerator for self adaptive augmented reality systems","authors":"T. Frikha, N. B. Amor, K. Loukil, Agnès Ghorbel, M. Abid, J. Diguet","doi":"10.1109/HPCSim.2012.6266927","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266927","url":null,"abstract":"The emergency of multimedia applications particularly in mobile embedded systems puts new challenges for the design of such systems. The major difficulty is the embedded system's reduced energy and computational resources that must be carefully used to execute complex application often in unpredictable environments. So the system architecture must be energy efficient and flexible enough to adapt resources to application requirements to manage the environment architectures and mobile's constraints. The augmented reality is a very promising 3D embedded multimedia application. It's based on the addition of specific 3D's animations on a video flow. In this paper, we describe our concept of flexible architecture and we give implementation results based on Pixel Shader Accelerator. This is the first step of the project and we compare various hardware and software implementation.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"515 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116333971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 DOI: 10.1109/HPCSim.2012.6266990
Sayanta Mallick, Gaétan Hains, C. Deme
Monitoring and predicting resource consumption is a fundamental need when running a virtualized system. Prediction is necessary because cloud infrastructures use virtual resources on demand. Current monitoring tools are insufficient for predicting the resource usage of virtualized systems; without proper monitoring, virtualized systems can suffer downtime, which directly affects the cloud infrastructure. We propose a new modelling approach to the problem of resource prediction, with models based on historical data that forecast short-term resource usage. We present three of our prediction models in detail, and we show experimental results using real-life data together with an overall evaluation of the approach.
{"title":"A resource prediction model for virtualization servers","authors":"Sayanta Mallick, Gaétan Hains, C. Deme","doi":"10.1109/HPCSim.2012.6266990","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266990","url":null,"abstract":"Monitoring and predicting resource consumption is a fundamental need when running a virtualized system. Predicting resources is necessary because cloud infrastructures use virtual resources on demand. Current monitoring tools are insufficient to predict resource usage of virtualized systems so, without proper monitoring, virtualized systems can suffer down time, which can directly affect cloud infrastructure. We propose a new modelling approach to the problem of resource prediction. Models are based on historical data to forecast short-term resource usages. We present here in detail three of our prediction models to forecast and monitor resources. We also show experimental results by using real-life data and an overall evaluation of this approach.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124993985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
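Forecasting short-term resource usage from historical samples can be sketched with simple exponential smoothing, one of the classic techniques for this kind of model. The CPU trace and the smoothing constant below are hypothetical; the paper's own three models are not reproduced here.

```python
# Short-term resource forecasting from historical usage samples with
# simple (single) exponential smoothing.

def ses_forecast(history, alpha=0.5):
    """One-step-ahead forecast: the level is updated as a weighted
    average of each new observation and the previous level."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

cpu_percent = [22, 25, 24, 30, 35, 33, 40, 42]   # hypothetical CPU samples
print("next-interval CPU forecast:", ses_forecast(cpu_percent))
```

A larger alpha tracks recent load more aggressively, while a smaller one damps transient spikes, which is exactly the trade-off a virtualization monitor must tune when deciding whether a usage trend warrants reallocation.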