Pub Date: 2015-10-18 · DOI: 10.1109/SBAC-PADW.2015.9
A Parallel Algorithm for the Facility Location Problem Applied to Oil and Gas Logistics
T. Pinheiro, M. D. Castro
One of the most relevant problems faced by large organizations is choosing locations at which to establish facilities, distribution centers, or retail stores. This logistics issue involves a strategic decision that can have a significant impact on the effective cost of the product. Several papers have tackled this issue, known as the Facility Location Problem. The objective of this paper is to analyze applicable heuristics previously developed by other authors and to define a mathematical formulation for the fuel distribution industry in Brazil. The work starts from an analysis of the upstream and downstream flows practiced in this segment and of how the corresponding transportation costs, including taxes, are formed. We then propose the use of parallel programming techniques based on the Message Passing Interface (MPI) with the objective of reducing transportation costs within a reasonable execution time. Results show that this approach provides interesting performance gains when compared to serial execution.
Proceedings. Symposium on Computer Architecture and High Performance Computing, pp. 97-102
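The industry-specific model described above is not reproduced in this listing; for orientation only, the classical uncapacitated facility location formulation that such models typically extend is sketched below in generic notation (F, C, f_i, c_{ij}, y_i, x_{ij} are placeholder symbols, not the authors', and taxes and fuel-specific flows are omitted):

\begin{align*}
\min \quad & \sum_{i \in F} f_i\, y_i \;+\; \sum_{i \in F} \sum_{j \in C} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{i \in F} x_{ij} = 1 \qquad \forall j \in C, \\
& x_{ij} \le y_i \qquad \forall i \in F,\; j \in C, \\
& x_{ij},\, y_i \in \{0, 1\},
\end{align*}

where F is the set of candidate sites, C the set of demand points, f_i the fixed cost of opening site i, and c_{ij} the cost of serving demand point j from an open site i. A parallel heuristic search over which sites to open can then partition the candidate space across MPI ranks and reduce to the best cost found.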
Pub Date: 2015-10-01 · DOI: 10.1109/SBAC-PAD.2015.13
Efficient Irregular Wavefront Propagation Algorithms on Intel® Xeon Phi™
Jeremias M Gomes, George Teodoro, Alba de Melo, Jun Kong, Tahsin Kurc, Joel H Saltz
We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP's irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations.
Proceedings. Symposium on Computer Architecture and High Performance Computing, pp. 25-32
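To illustrate the propagation pattern itself, a minimal serial sketch of IWPP applied to grayscale morphological reconstruction is given below; it is not the authors' vectorized Xeon Phi code, and the function name, the FIFO queue, the dense initial seeding, and the 8-bit pixel type are simplifying assumptions of this sketch.

#include <algorithm>
#include <cstdint>
#include <queue>
#include <vector>

// Serial IWPP sketch: repeatedly pop an active element and try to raise its
// 4-neighbors, clipped by the mask, until no element can grow any further.
// Assumes marker[p] <= mask[p] for every pixel p (the usual precondition of
// morphological reconstruction by dilation).
void iwpp_reconstruct(std::vector<uint8_t>& marker,
                      const std::vector<uint8_t>& mask,
                      int width, int height) {
    std::queue<int> wavefront;                        // active elements
    for (int p = 0; p < width * height; ++p) wavefront.push(p);

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};

    while (!wavefront.empty()) {
        const int p = wavefront.front();
        wavefront.pop();
        const int px = p % width, py = p / width;
        for (int k = 0; k < 4; ++k) {
            const int nx = px + dx[k], ny = py + dy[k];
            if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
            const int q = ny * width + nx;
            const uint8_t v = std::min(marker[p], mask[q]);  // propagated value
            if (v > marker[q]) {
                marker[q] = v;          // q grew, so it becomes active again
                wavefront.push(q);
            }
        }
    }
}

The irregularity the paper targets lives in this queue: which elements are active is entirely data-dependent, and when several threads try to raise the same marker[q] concurrently the update must be atomic, which is the operation that has no SIMD counterpart on the Xeon Phi and motivates the reformulated algorithm.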
Pub Date: 2013-10-23 · DOI: 10.1109/SBAC-PAD.2013.15
Fast LH*
Juan Chabkinian, Thomas J. E. Schwarz
Linear Hashing is a widely used and efficient form of extensible hashing. LH* is a distributed version of Linear Hashing that stores key-indexed records on up to hundreds of thousands of sites in a distributed system. LH* implements the dictionary data structure efficiently because it uses no central component for the key-based operations of insertion, deletion, update, and retrieval, nor for the scan operation. LH* allows a client or a server to commit an addressing error by sending a request to the wrong server. In this case, the receiving server forwards the request to the correct server either directly or through one additional forwarding step. We discuss methods to avoid this double forward, which is rare but might breach quality-of-service guarantees. We compare our methods with LH* P2P, which pushes information about changes in the file structure to clients, whether they are active or not.
Proceedings. Symposium on Computer Architecture and High Performance Computing, pp. 57-64
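For context, the standard Linear Hashing addressing rule that an LH* client (or server) applies to its possibly outdated image of the file is sketched below; the power-of-two hash family and the names are illustrative assumptions, and the paper's forward-avoidance methods are not shown.

#include <cstdint>

// h_j(key) = key mod 2^j, the usual family of split-compatible LH hash functions.
static uint64_t h(uint64_t key, uint32_t j) { return key & ((1ULL << j) - 1); }

// Address a key under a (possibly stale) image of the file: level i and split
// pointer n. Buckets below the split pointer have already been split, so they
// are addressed with the next-level hash function.
uint64_t lh_address(uint64_t key, uint32_t i, uint64_t n) {
    uint64_t a = h(key, i);
    if (a < n) a = h(key, i + 1);
    return a;
}

When a client's image (i, n) lags behind the real file state, this computation can point at the wrong server; the receiving server detects the error from its own bucket state and forwards the request, in rare cases through a second forward, which is the double forward the methods above seek to avoid.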
Pub Date: 2010-10-27 · DOI: 10.1109/SBAC-PAD.2010.16
Mapping Pipelined Applications with Replication to Increase Throughput and Reliability
A. Benoit, L. Marchal, Y. Robert, O. Sinnen
Mapping and scheduling an application onto the processors of a parallel system is a difficult problem. This is already true when performance is the only objective, and it becomes harder still when a second optimization criterion, such as reliability, is involved. In this paper we investigate the problem of mapping an application consisting of several consecutive stages, i.e., a pipeline, onto heterogeneous processors while considering both performance, measured as throughput, and reliability. The mechanism of replication, which refers to mapping an application stage onto more than one processor, can be used to increase throughput but also to increase reliability. Finding the right replication trade-off plays a pivotal role in this bi-criteria optimization problem. Our formal model includes processors that are heterogeneous both in execution speed and in reliability. We study the complexity of the various sub-problems and show how a solution can be obtained for the polynomial cases. For the general NP-hard problem, heuristics are presented and experimentally evaluated. We further propose an exact algorithm based on A* state-space search, which allows us to evaluate the performance of our heuristics on small problem instances.
Proceedings. Symposium on Computer Architecture and High Performance Computing, pp. 55-62
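To make the bi-criteria trade-off concrete, one simplified way to write the two objectives for a mapping in which every replica of a stage processes every data set is (our notation and assumptions, not necessarily the paper's exact model; failures are taken as independent):

\[
  P \;=\; \max_{s} \frac{w_s}{\min_{p \in R_s} v_p},
  \qquad
  \mathcal{T} \;=\; \frac{1}{P},
  \qquad
  \mathcal{R} \;=\; \prod_{s} \Bigl( 1 - \prod_{p \in R_s} f_p \Bigr),
\]

where stage s has work w_s and is replicated on the processor set R_s, and v_p and f_p are the speed and failure probability of processor p. The period P is dictated by the slowest replica of the slowest stage, and the pipeline survives only if every stage keeps at least one live replica, so adding replicas to a stage can raise reliability while occupying processors that could otherwise shorten the period of other stages; variants in which the replicas share the data sets round-robin instead convert replication into throughput.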
Pub Date: 2004-01-01 · DOI: 10.1109/CAHPC.2004.23
memu: Unifying Application Modeling and Cluster Exploitation
A. Alves, A. Pina, J. Exposto, J. Rufino
Proceedings. Symposium on Computer Architecture and High Performance Computing, pp. 132-139
Pub Date: 2004-01-01 · DOI: 10.1109/CAHPC.2004.27
On the Combined Scheduling of Malleable and Rigid Jobs
Jan Hungershöfer
Proceedings. Symposium on Computer Architecture and High Performance Computing, pp. 206-213