Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266894
V. Ustimenko
The talk is dedicated to the ideas of homomorphic encryption and multivariate key-dependent cryptography. We survey recent theoretical results on these topics together with their applications to cloud security. Post-quantum cryptography cannot rely on many security tools based on number theory, because of the factorization algorithm developed by Peter Shor. This fact, together with the fast development of computer algebra, makes multivariate cryptography an important direction of research. The idea of key-dependent cryptography looks promising for applications in clouds, because the size of the key makes it possible to control the speed of execution and the security level. We discuss recent results on key-dependent multivariate cryptography. Finally, special classes of finite rings turn out to be very useful in homomorphic encryption and for the development of multivariate key-dependent algorithms.
{"title":"On some mathematical aspects of data protection in cloud computing","authors":"V. Ustimenko","doi":"10.1109/HPCSim.2012.6266894","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266894","url":null,"abstract":"The talk is dedicated to ideas of Holomorphic Encryption and Multivariate Key Dependent Cryptography. We observe recent theoretical results on the mentioned above topics together with their Applications to Cloud Security. Post Quantum Cryptography could not use many security tools based on Number Theory, because of the factorization algorithm developed by Peter Shor. This fact and fast development of Computer algebra make multivariate cryptography an important direction of research. The idea of key dependent cryptography looks promising for applications in Clouds, because the size of the key allows to control the speed of execution and security level. We will discuss recent results on key dependent multivariate cryptography. Finally, special classes of finite rings turned out to be very useful in holomorphic encryption and for the development of multivariate key dependent algorithms.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121501010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266919
Paulina A. León, G. J. Martínez, S. V. C. Vergara
De Bruijn diagrams are a useful tool for the systematic analysis of one-dimensional cellular automata (CA). They can be used to calculate particular kinds of configurations: ancestors, complex patterns, cycles, Garden of Eden configurations, and formal languages. However, there has been little progress in two dimensions, because the complexity increases exponentially. In this paper, we offer a way to explore such patterns systematically with de Bruijn diagrams from initial configurations. The analysis concentrates mainly on two evolution rules: the famous Game of Life (a complex CA) and the Diffusion Rule (a chaotic CA). We display some preliminary results and the benefits of using de Bruijn diagrams in these CA.
{"title":"Complex dynamics in life-like rules described with de Bruijn diagrams: Complex and chaotic cellular automata","authors":"Paulina A. León, G. J. Martínez, S. V. C. Vergara","doi":"10.1109/HPCSim.2012.6266919","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266919","url":null,"abstract":"De Bruijn diagrams have been used as a useful tool for the systematic analysis of one-dimensional cellular automata (CA). They can be used to calculate particular kind of configurations, ancestors, complex patterns, cycles, Garden of Eden configurations and formal languages. However, there is few progress in two dimensions because its complexity increases exponentially. In this paper, we will offer a way to explore systematically such patterns by de Bruijn diagrams from initial configurations. Such analysis is concentrated mainly in two evolution rules: the famous Game of Life (complex CA) and the Diffusion Rule (chaotic CA). We will display some preliminary results and benefits to use de Bruijn diagrams in these CA.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115509318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266914
R. Alonso-Sanz
In conventional discrete dynamical systems, the new configuration depends solely on the configuration at the preceding time step. This contribution considers an extension of the standard framework of dynamical systems that takes past history into account in a simple way: the mapping defining the transition rule of the system remains unaltered, but it is applied to a certain summary of past states. This kind of embedded memory implementation, of straightforward computer codification, allows for an easy systematic study of the effect of memory in discrete dynamical systems, and may inspire useful ideas for using discrete systems with memory (DSM) as a tool for modeling non-Markovian phenomena. Besides their potential applications, DSM have an aesthetic and mathematical interest of their own, as will be briefly reviewed. The contribution focuses on the study of systems discrete par excellence, i.e., with space, time, and state variable all being discrete. These discrete universes are known as cellular automata (CA) in their more structured forms, and Boolean networks (BN) in a more general way. Thus, the mappings that define the rules of CA (or BN) are not formally altered when implementing embedded memory, but they are applied to cells (or nodes) that exhibit trait states computed as a function of their own previous states. So to speak, cells (or nodes) “canalize” memory to the mapping. Automata on networks and on proximity graphs, together with structurally dynamic cellular automata, will also be studied with memory. If time permits, systems that remain discrete in space and time, but not in the state variable (e.g., maps and spatial games), will also be scrutinized with memory. A list of references on DSM may be found in http://uncomp.uwe.ac.uk/alonso-sanz.
{"title":"Cellular automata and other discrete dynamical systems with memory","authors":"R. Alonso-Sanz","doi":"10.1109/HPCSim.2012.6266914","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266914","url":null,"abstract":"In conventional discrete dynamical systems, the new configuration depends solely on the configuration at the preceding time step. This contribution considers an extension to the standard framework of dynamical systems by taking into consideration past history in a simple way: the mapping defining the transition rule of the system remains unaltered, but it is applied to a certain summary of past states. This kind of embedded memory implementation, of straightforward computer codification, allows for an easy systematic study of the effect of memory in discrete dynamical systems, and may inspire some useful ideas in using discrete systems with memory (DSM) as a tool for modeling non-markovian phenomena. Besides their potential applications, DSM have an aesthetic and mathematical interest on their own, as will be briefly over viewed. The contribution focuses on the study of systems discrete par excellence, i.e., with space, time and state variable being discrete. These discrete universes are known as cellular automata (CA) in their more structured forms, and Boolean networks (BN) in a more general way. Thus, the mappings which define the rules of CA (or BN) are not formally altered when implementing embedded memory, but they are applied to cells (or nodes) that exhibit trait states computed as a function of their own previous states. So to say, cells (or nodes) - canalize - memory to the mapping. Automata on networks and on proximity graphs, together with structurally dynamic cellular automata, will be also studied with memory. If time permits, systems that remain discrete in space and time, but not in the state variable (e.g., maps and spatial games), will be also scrutinized with memory. 
A list of references on DSM may be found in http://uncomp.uwe.ac.uk/alonso-sanz.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124746430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
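The mechanism is easy to code. In the sketch below, the transition rule is untouched; it is simply fed each cell's "trait" state, a majority summary of that cell's own history. Rule 90 and the unbounded majority memory (with ties resolved to the most recent state) are illustrative choices, not the contribution's only memory type.

```python
# Sketch of embedded memory in an elementary CA: the rule is unaltered but is
# applied to each cell's "trait" state, the majority of that cell's own past
# states (ties -> most recent state). Rule 90 and the ring size are illustrative.
RULE = 90
rule = lambda a, b, c: (RULE >> (a * 4 + b * 2 + c)) & 1

def step_with_memory(history):
    """Append and return the next ring configuration; history is newest-last."""
    n, t = len(history[0]), len(history)
    traits = []
    for i in range(n):
        ones = sum(cfg[i] for cfg in history)          # cell i's own history
        traits.append(1 if 2 * ones > t else 0 if 2 * ones < t else history[-1][i])
    nxt = [rule(traits[(i - 1) % n], traits[i], traits[(i + 1) % n])
           for i in range(n)]
    history.append(nxt)
    return nxt

history = [[0, 0, 1, 0, 0]]          # single seed on a ring of 5 cells
print(step_with_memory(history))     # first step coincides with ahistoric rule 90
print(step_with_memory(history))     # from here on, memory summaries feed the rule
```

With fewer than three stored steps the trait states coincide with the current configuration, so memory effects only appear from the third time step onward.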
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSIM.2012.6266908
P. Radeva, M. Drozdzal, S. Seguí, L. Igual, C. Malagelada, F. Azpiroz, Jordi Vitrià
Today, training robust learners for a real supervised machine learning application requires a rich collection of positive and negative examples. Although in many applications it is not difficult to obtain huge amounts of data, labeling those data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example is medical imaging, where annotating anomalies such as tumors, polyps, atherosclerotic plaque, or informative frames in wireless endoscopy requires highly trained experts. Building a representative training set from medical videos (e.g., Wireless Capsule Endoscopy) means that thousands of frames must be labeled by an expert, and data in new videos often differ from, and thus are not represented by, the training set. In this paper, we review the main approaches to active learning and illustrate how active learning can help reduce expert effort in constructing training sets. We show that by applying active learning criteria, the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of a Wireless Capsule Endoscopy video containing more than 30,000 frames with fewer than 100 expert “clicks”.
{"title":"Active labeling: Application to wireless endoscopy analysis","authors":"P. Radeva, M. Drozdzal, S. Seguí, L. Igual, C. Malagelada, F. Azpiroz, Jordi Vitrià","doi":"10.1109/HPCSIM.2012.6266908","DOIUrl":"https://doi.org/10.1109/HPCSIM.2012.6266908","url":null,"abstract":"Today, robust learners trained in a real supervised machine learning application should count with a rich collection of positive and negative examples. Although in many applications, it is not difficult to obtain huge amount of data, labeling those data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such cases are data from medical imaging applications where annotating anomalies like tumors, polyps, atherosclerotic plaque or informative frames in wireless endoscopy need highly trained experts. Building a representative set of training data from medical videos (e.g. Wireless Capsule Endoscopy) means that thousands of frames to be labeled by an expert. It is quite normal that data in new videos come different and thus are not represented by the training set. In this paper, we review the main approaches on active learning and illustrate how active learning can help to reduce expert effort in constructing the training sets. We show that applying active learning criteria, the number of human interventions can be significantly reduced. 
The proposed system allows the annotation of informative/non-informative frames of Wireless Capsule Endoscopy video containing more than 30000 frames each one with less than 100 expert ”clicks”.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127945169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
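The core active-learning loop can be seen in a hypothetical 1-D toy: the "classifier" is a threshold, the expert is an oracle function, and uncertainty sampling queries the unlabeled point closest to the current decision boundary. All names and numbers are illustrative; the paper's setting (endoscopy frames, real classifiers) is far richer.

```python
# Toy uncertainty sampling: query the pool point nearest the current decision
# boundary instead of labeling at random. (Hypothetical 1-D setup.)
def oracle(x):                             # stands in for the human expert
    return int(x > 0.5)

pool = [i / 1000 for i in range(1000)]     # unlabeled items, as 1-D scores
labeled = {0.0: 0, 0.999: 1}               # tiny seed set, one per class
queries = 0
for _ in range(12):
    lo = max(x for x, y in labeled.items() if y == 0)
    hi = min(x for x, y in labeled.items() if y == 1)
    boundary = (lo + hi) / 2
    # Active-learning criterion: ask the expert about the most uncertain point.
    x = min((p for p in pool if p not in labeled),
            key=lambda p: abs(p - boundary))
    labeled[x] = oracle(x)
    queries += 1

lo = max(x for x, y in labeled.items() if y == 0)
hi = min(x for x, y in labeled.items() if y == 1)
print(queries, round(hi - lo, 3))          # a dozen queries pin down the boundary
```

Random labeling would need on the order of 1000 labels to localize the boundary this tightly; uncertainty sampling needs about log2(1000) ≈ 10 of them, which is the effect the paper exploits at scale.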
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266970
Sébastien Limet
Efficient parallel programming has always been very tricky, and only expert programmers are able to make the most of the computing power of modern computers. This situation is an obstacle to the development of high-performance computing in other sciences as well as in industry. The fast changes in computer architecture (multicores, manycores, GPUs, clusters, ...) make it even more difficult, even for an experienced programmer, to remain at the forefront of these evolutions. On the other hand, a huge amount of work has been done to develop programming languages and libraries that help programmers write parallel programs that are more or less efficient. The key point in this kind of research is to find a good balance between the simplicity of the programming and the efficiency of the resulting programs. Many approaches have been proposed, but none really prevails over the others. This paper is a small overview of some directions that seem promising to both simplify parallel programming and produce very efficient programs.
{"title":"High level languages for efficient parallel programming","authors":"Sébastien Limet","doi":"10.1109/HPCSim.2012.6266970","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266970","url":null,"abstract":"Efficient parallel programming has always been very tricky and only expert programmers are able to take the most of the computing power of modern computers. Such a situation is an obstacle to the development of the high performance computing in other sciences as well as in the industry. The fast changes in the computer architecture (multicores, manycores, GPU, clusters, ...) make even more difficult, even for an experienced programmer, to remain at the forefront of these evolutions. On the other hand, a huge amount of work has been done to develop programming languages or libraries that tend to help the programmers to write parallel programs which are more or less efficient. The key point in this kind of research is to find a good balance between the simplicity of the programming and the efficiency of the resulting programs. Many approaches have been proposed but none really prevail over the others. This paper is a small overview of some directions that seem promising to both simplify parallel programming and produce very efficient programs.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128098519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266938
A. Duran, Michael Klemm
In recent years, an observable trend in High Performance Computing (HPC) architectures has been the inclusion of accelerators, such as GPUs and field-programmable gate arrays (FPGAs), to improve the performance of scientific applications. To rise to this challenge, Intel announced the Intel® Many Integrated Core Architecture (Intel® MIC Architecture). In contrast with other accelerated platforms, the Intel MIC Architecture is a general-purpose manycore coprocessor that improves the programmability of such devices by supporting the well-known shared-memory execution model that is the basis of most nodes in HPC machines. In this presentation, we introduce key properties of the Intel MIC Architecture and cover programming models for the parallelization and vectorization of applications targeting this architecture.
{"title":"The Intel® Many Integrated Core Architecture","authors":"A. Duran, Michael Klemm","doi":"10.1109/HPCSim.2012.6266938","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266938","url":null,"abstract":"In recent years, an observable trend in High Performance Computing (HPC) architectures has been the inclusion of accelerators, such as GPUs and field programmable arrays (FPGAs), to improve the performance of scientific applications. To rise to this challenge Intel announced the Intel® Many Integrated Core Architecture (Intel® MIC Architecture). In contrast with other accelerated platforms, the Intel MIC Architecture is a general purpose, manycore coprocessor that improves the programmability of such devices by supporting the well-known shared-memory execution model that is the base of most nodes in HPC machines. In this presentation, we will introduce key properties of the Intel MIC Architecture and we will also cover programming models for parallelization and vectorization of applications targeting this architecture.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127024420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266940
D. Bartuschat, D. Ritter, U. Rüde
We present an algorithm for the multi-physics simulation of charged particles in electrokinetic flows, coupling the simulation of charged rigid particles with fluid flow in an electric field. The parallel simulation algorithm is implemented in the WALBERLA software framework. For solving the partial differential equation that models the electric potential, a cell-centered multigrid algorithm has been incorporated into the framework. After an introduction to the central concepts of WALBERLA, we describe the simulation setup and the simulation algorithm. Finally, we show the parallel scaling behavior of the algorithm on a high-performance computer, with emphasis on the multigrid implementation.
{"title":"Parallel multigrid for electrokinetic simulation in particle-fluid flows","authors":"D. Bartuschat, D. Ritter, U. Rüde","doi":"10.1109/HPCSim.2012.6266940","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266940","url":null,"abstract":"We present an algorithm for multi-physics simulation of charged particles in electrokinetic flows. It includes a coupled simulation of charged rigid particles in fluid flows in an electric field. The parallel simulation algorithm is implemented in the WALBERLA software framework. For solving the partial differential equation that models the electric potential, a cell-centered multigrid algorithm has been incorporated into the framework. After an introduction to the central concepts of WALBERLA, we describe the simulation setup and the simulation algorithm. Finally, we show the parallel scaling behavior of the algorithm on a high performance computer, with emphasis on the multigrid implementation.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133580353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266934
Peiqing Zhang, B. Helvik
Nowadays, information and communication technology (ICT) has become more and more energy conscious. In this paper, we focus on peer-to-peer (P2P) systems, which contribute a major fraction of Internet traffic, and propose analytical models of energy consumption in P2P systems. The models consider content pollution, the most common attack on P2P systems, which has received little attention in previous work on green P2P. Analysis of the models shows that the popular sleep method in green computing can degrade peer-to-peer performance: when the online time of clean-copy holders is cut too aggressively, the system collapses. To find the balance between energy saving and system maintenance, the concept of energy effectiveness is introduced, and an approach for controlling energy consumption while keeping the system stable is suggested. We show that the whole system can benefit if some altruistic, smart peers are willing to spend a little extra on energy when most peers cut their power-on time too far. This approach complements the popular sleep methods in green computing.
{"title":"Towards green P2P: Analysis of energy consumption in P2P and approaches to control","authors":"Peiqing Zhang, B. Helvik","doi":"10.1109/HPCSim.2012.6266934","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266934","url":null,"abstract":"Nowadays, information and communication technology (ICT) has become more and more energy conscious. In this paper, we focus on peer-to-peer systems which contribute a major fraction of the Internet traffic. This paper proposes analytical models of energy consumption in P2P system. The model considers content pollution, the most common attack in P2P system, which has received little attention in previous work on green P2P. The analysis of the models shows that the popular sleep method in green computing potentially affects peer-to-peer performance. When the online time of clean copy holders is over cut, the system collapses. To find the balance between energy saving and system maintenance, the concept energy effectiveness is introduced. An approach for controlling energy consumption while keeping the system stable is suggested. We show that the whole system can be benefited if some warm-hearted and smart peers are willing to spend a little extra cost on energy, when most peers over cut their power on time. This approach can perfectly complement the popular sleep methods in green computing.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131263926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266900
J. Toutouh, Sergio Nesmachnow, E. Alba
This work addresses the reduction of the power consumption of the AODV routing protocol in vehicular networks as an optimization problem. Nowadays, network designers focus on energy-aware communication protocols, especially when deploying wireless networks. Here, we introduce an automatic method that searches for energy-efficient AODV configurations by using an evolutionary algorithm, with parallel Monte-Carlo simulations to improve the accuracy of the evaluation of tentative solutions. The experimental results demonstrate that significant power-consumption improvements over the standard configuration can be attained, with no noteworthy loss in quality of service.
{"title":"Evolutionary power-aware routing in VANETs using Monte-Carlo simulation","authors":"J. Toutouh, Sergio Nesmachnow, E. Alba","doi":"10.1109/HPCSim.2012.6266900","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266900","url":null,"abstract":"This work addresses the reduction of power consumption of the AODV routing protocol in vehicular networks as an optimization problem. Nowadays, network designers focus on energy-aware communication protocols, specially to deploy wireless networks. Here, we introduce an automatic method to search for energy-efficient AODV configurations by using an evolutionary algorithm and parallel Monte-Carlo simulations to improve the accuracy of the evaluation of tentative solutions. The experimental results demonstrate that significant power consumption improvements over the standard configuration can be attained, with no noteworthy loss in the quality of service.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"311 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122231378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266917
D. Désérable, R. Hoffmann
This paper presents a comparative study of the XY-routing protocol in the square grid “S”, examined herein, with the XYZ protocol in the triangular grid “T”, examined elsewhere, both with toroidal connections and N nodes. The routing problem (also called multiple-target searching) is performed in a partitioned cellular-automata network with agents (or messages) moving from sources to targets, preferably along their minimal routes. The network in S consists of N nodes with 4 buffers per node. Buffers with the same names are connected to their neighboring nodes via unidirectional links. Each buffer may host an agent, and each agent situated in a buffer has a computed direction defining the new buffer in the adjacent node. Two scenarios are examined: (i) N−1 agents moving to a common target (also called “all-to-one gathering”); (ii) N/2 agents moving to N/2 targets. It is shown that in both cases the T grid is 1.5 times faster than the S grid. The deterministic minimal routing protocols were also randomized, with agents choosing a random direction in order to cope with congestion and deadlocks. It is shown that randomization can slightly shorten the transfer time in case of congestion but, more importantly, that deadlocks can be resolved.
{"title":"Rectangular vs. triangular minimal routing and performance study","authors":"D. Désérable, R. Hoffmann","doi":"10.1109/HPCSim.2012.6266917","DOIUrl":"https://doi.org/10.1109/HPCSim.2012.6266917","url":null,"abstract":"This paper presents a comparative study of the XY-routing protocol in the square grid “S” examined herein with the XYZ protocol in the triangular grid “T ” examined elsewhere, both with toroidal connections and N nodes. The routing problem (also called multiple target searching) is performed in a partitioned cellular automata network with agents (or messages) moving from sources to targets, preferably on their minimal routes. The network in S consists of N nodes with 4 buffers per node. Buffers with the same names are connected to their neighboring nodes via unidirectional links. Each buffer may host an agent and each agent situated in a buffer has a computed direction defining the new buffer in the adjacent node. Two scenarios are examined: (i) N-1 agents are moving to a common target (also called “all-to-one gathering”) (ii) N/2 agents are moving to N/2 targets. It is shown that in both cases the T grid is 1.5 times faster than the S grid. - The deterministic minimal routing protocols were also randomized, with agents choosing a random direction in order to cope with congestion and deadlocks. 
It is shown that randomization can slightly shorten the transfer time in case of congestion, but, more important, deadlocks can be resolved.","PeriodicalId":428764,"journal":{"name":"2012 International Conference on High Performance Computing & Simulation (HPCS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124022428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
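The XY dimension-order rule underlying the square-grid protocol can be sketched in a few lines: correct the x-coordinate first, taking the shorter wrap-around direction on the torus, then the y-coordinate. Grid size and endpoints below are illustrative; the paper's buffer mechanics and agent dynamics are omitted.

```python
# One XY-routing step on an n x n torus: move along x (shorter wrap direction)
# until aligned with the target, then along y. Illustrative sketch only.
def xy_step(src, dst, n):
    (sx, sy), (tx, ty) = src, dst
    def shorter(a, b):                      # -1, 0, or +1 along a ring of size n
        d = (b - a) % n
        return 0 if d == 0 else (1 if d <= n - d else -1)
    dx = shorter(sx, tx)
    if dx:
        return ((sx + dx) % n, sy)          # x is corrected first
    return (sx, (sy + shorter(sy, ty)) % n)

pos, path = (0, 0), [(0, 0)]
while pos != (3, 1):
    pos = xy_step(pos, (3, 1), 4)
    path.append(pos)
print(path)   # x wraps the short way: [(0, 0), (3, 0), (3, 1)]
```

Because every hop lies on a minimal route, path length equals the toroidal Manhattan distance; the randomized variant studied in the paper deviates from this greedy choice only to escape congestion and deadlocks.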