Shuai Che, Gregory P. Rodgers, Bradford M. Beckmann, S. Reinhardt
Graphics processing units (GPUs) have been increasingly used to accelerate irregular applications such as graph and sparse-matrix computation. Graph coloring is a key building block for many graph applications. The first step of many graph applications is graph coloring/partitioning to obtain sets of independent vertices for subsequent parallel computations. However, parallelization and optimization of coloring for GPUs have been a challenge for programmers. This paper studies approaches to implementing graph coloring on a GPU and characterizes their program behaviors with different graph structures. We also investigate load imbalance, which can be the main cause for performance bottlenecks. We evaluate the effectiveness of different optimization techniques, including the use of work stealing and the design of a hybrid algorithm. We are able to improve graph coloring performance by approximately 25% compared to a baseline GPU implementation on an AMD Radeon HD 7950 GPU. We also analyze some important factors affecting performance.
{"title":"Graph Coloring on the GPU and Some Techniques to Improve Load Imbalance","authors":"Shuai Che, Gregory P. Rodgers, Bradford M. Beckmann, S. Reinhardt","doi":"10.1109/IPDPSW.2015.74","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.74","url":null,"abstract":"Graphics processing units (GPUs) have been increasingly used to accelerate irregular applications such as graph and sparse-matrix computation. Graph coloring is a key building block for many graph applications. The first step of many graph applications is graph coloring/partitioning to obtain sets of independent vertices for subsequent parallel computations. However, parallelization and optimization of coloring for GPUs have been a challenge for programmers. This paper studies approaches to implementing graph coloring on a GPU and characterizes their program behaviors with different graph structures. We also investigate load imbalance, which can be the main cause for performance bottlenecks. We evaluate the effectiveness of different optimization techniques, including the use of work stealing and the design of a hybrid algorithm. We are able to improve graph coloring performance by approximately 25% compared to a baseline GPU implementation on an AMD Radeon HD 7950 GPU. We also analyze some important factors affecting performance.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116518469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-tiered transactional web applications are frequently used in enterprise systems. Due to their inherently distributed nature, pre-deployment testing for high availability and varying concurrency is important for post-deployment performance. Accurate performance modeling of such applications can help estimate values for future deployment variations as well as validate experimental results. To model the performance of multi-tiered applications theoretically, we use queuing networks and Mean Value Analysis (MVA) models. While MVA has been shown to work well with closed queuing networks, it has particular limitations in cases where the service demands vary with concurrency. This is further complicated by the use of multi-server queues in multi-core CPUs, which are not traditionally captured in MVA. We compare the performance of a multi-server MVA model against actual performance-testing measurements and demonstrate this deviation. Using spline interpolation of collected service demands, we show that a modified version of the MVA algorithm (called MVASD) that accepts an array of service demands can provide superior estimates of maximum throughput and response time. Results are demonstrated on multi-tier vehicle insurance registration and e-commerce web applications. The mean deviations of predicted throughput and response time are shown to be less than 3% and 9%, respectively. Additionally, we analyze the effect of spline interpolation of service demands as a function of throughput on the prediction results.
{"title":"Performance Modeling of Multi-tiered Web Applications with Varying Service Demands","authors":"A. Kattepur, M. Nambiar","doi":"10.1109/IPDPSW.2015.28","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.28","url":null,"abstract":"Multi-tiered transactional web applications are frequently used in enterprise based systems. Due to their inherent distributed nature, pre-deployment testing for high-availability and varying concurrency are important for post-deployment performance. Accurate performance modeling of such applications can help estimate values for future deployment variations as well as validate experimental results. In order to theoretically model performance of multi-tiered applications, we use queuing networks and Mean Value Analysis (MVA) models. While MVA has been shown to work well with closed queuing networks, there are particular limitations in cases where the service demands vary with concurrency. This is further contrived by the use of multi-server queues in multi-core CPUs, that are not traditionally captured in MVA. We compare performance of a multi-server MVA model alongside actual performance testing measurements and demonstrate this deviation. Using spline interpolation of collected service demands, we show that a modified version of the MVA algorithm (called MVASD) that accepts an array of service demands, can provide superior estimates of maximum throughput and response time. Results are demonstrated over multi-tier vehicle insurance registration and e-commerce web applications. The mean deviations of predicted throughput and response time are shown to be less the 3% and 9%, respectively. Additionally, we analyze the effect of spline interpolation of service demands as a function of throughput on the prediction results.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115285543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sanem Arslan, H. Topcuoglu, M. Kandemir, Oguz Tosun
Modern architectures are increasingly susceptible to transient and permanent faults due to continuously shrinking transistor sizes and faster operating frequencies. The probability of soft-error occurrence is relatively high in cache structures, whose logic occupies a large area compared to other parts of the chip. Applying fault tolerance indiscriminately to all caches incurs significant performance and energy overheads. In this study, we propose asymmetrically reliable caches that aim to provide the required reliability with just enough extra hardware under performance and energy constraints. In our framework, a chip multiprocessor consists of one reliability-aware core, whose data cache has ECC protection for critical data, and a set of less reliable cores with unprotected data caches to which noncritical data is mapped. Experimental results for selected applications show that the proposed technique provides 21% better reliability for only 6% more energy consumption compared to traditional caches.
{"title":"Performance and Energy Efficient Asymmetrically Reliable Caches for Multicore Architectures","authors":"Sanem Arslan, H. Topcuoglu, M. Kandemir, Oguz Tosun","doi":"10.1109/IPDPSW.2015.113","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.113","url":null,"abstract":"Modern architectures are increasingly susceptible to transient and permanent faults due to continuously decreasing transistor sizes and faster operating frequencies. The probability of soft error occurrence is relatively high on cache structures due to the large area of the logic compared to other parts. Applying fault tolerance unselectively for all caches has a significant overhead on performance and energy. In this study, we propose asymmetrically reliable caches aiming to provide required reliability using just enough extra hardware under the performance and energy constraints. In our framework, a chip multiprocessor consists of one reliability-aware core which has ECC protection on its data cache for critical data and a set of less reliable cores with unprotected data caches to map noncritical data. The experimental results for selected applications show that our proposed technique provides 21% better reliability for only 6% more energy consumption compared to traditional caches.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115715127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Device-to-Device (D2D) communication underlaying cellular technology not only increases system capacity but also exploits the physical proximity of communicating devices to support proximity services, offload traffic from the base station (BS), and more. However, efficient proximity discovery and synchronization among devices pose new research challenges for cellular networks. Existing algorithms for synchronization and service-interest discovery among devices, based on bio-inspired heuristics modeled on the synchronization behaviour of fireflies found in nature, suffer from long convergence times and heavy message exchange. We therefore propose an improved O(n log n) distributed firefly algorithm for large-scale D2D networks that uses a tree-based topology built with an RSSI-based ranging scheme.
{"title":"Firefly Inspired Improved Distributed Proximity Algorithm for D2D Communication","authors":"A. Pratap, R. Misra","doi":"10.1109/IPDPSW.2015.64","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.64","url":null,"abstract":"Device-to-Device (i.e. D2D) communication under-laying cellular technology not only increases system capacity but also utilizes the advantage of physical proximity of communicating devices to support services like proximity services, offload traffic from Base Station (i.e. BS) etc. But proximity discovery and synchronization among devices efficiently poses new research challenges for cellular networks. Inspired by the synchronization behaviour of fire fly found in nature, the reported algorithms based on bio-inspired firefly heuristics for synchronization among devices as well as service interest among them having drawback of large convergence time and large message exchanges. Therefore, we propose an improved O (n log n) distributed firefly algorithm for D2D large scale networks using tree based topological mechanism using RSSI based ranging scheme.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121742933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the continued scaling of many cores per chip, off-chip memory has increasingly become a system bottleneck due to inter-thread contention. The memory access streams emerging from many cores and simultaneously executing threads exhibit increasingly limited locality. Large, high-density DRAMs contribute significantly to system power consumption and data overfetch. We develop a fine-grained Victim Row-Buffer (VRB) memory system to increase memory-system performance. The VRB mechanism helps reuse data already fetched from the memory banks, avoids unnecessary data transfers, and mitigates memory contention, and thus can improve system throughput and system fairness by decoupling row-buffer contention. Through full-system, cycle-accurate simulations of multithreaded applications, we demonstrate that our proposed VRB technique achieves up to 19% (8.4% on average) system-level throughput improvement and up to 20% (7.2% on average) system fairness improvement, while saving 6.8% of power consumption across the whole suite.
{"title":"Decoupling Contention with Victim Row-Buffer on Multicore Memory Systems","authors":"Ke Gao, Dongrui Fan, Jie Wu, Zhiyong Liu","doi":"10.1109/IPDPSW.2015.30","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.30","url":null,"abstract":"With continued performance scaling of many cores per chip, an on-chip, off-chip memory has increasingly become a system bottleneck due to inter-thread contention. The memory access streams emerging from many cores and the simultaneously executed threads, exhibit increasingly limited locality. Large and high-density DRAMs contribute significantly to system power consumption and data over fetch. We develop a fine-grained Victim Row-Buffer (VRB) memory system to increase performance of the memory system. The VRB mechanism helps reuse the data accessed from the memory banks, avoids unnecessary data transfers, mitigates memory contentions, and thus can improve system throughput and system fairness by decoupling row-buffer contentions. Through full-system cycle-accurate simulations of many threads applications, we demonstrate that our proposed VRB technique achieves an up to 19% (8.4% on average) system-level throughput improvement, an up to 20% (7.2% on average) system fairness improvement, and it saves 6.8% of power consumption across the whole suite.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123983184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Several approaches to reducing the power consumption of data centers have been described in the literature, most of which aim to improve energy efficiency by trading off performance for reduced power consumption. However, these approaches do not always provide means for administrators and users to specify how they want to explore such trade-offs. This work provides techniques for assigning jobs to distributed resources, exploring energy-efficient resource provisioning. We use middleware-level mechanisms to adapt resource allocation according to energy-related events and user-defined rules. The proposed framework enables developers, users, and system administrators to specify and explore energy-efficiency and performance trade-offs without detailed knowledge of the underlying hardware platform. Evaluation of the proposed solution under three scheduling policies shows gains of 25% in energy efficiency with minimal impact on overall application performance. We also evaluate the reactivity of the adaptive resource provisioning.
{"title":"Energy-Aware Server Provisioning by Introducing Middleware-Level Dynamic Green Scheduling","authors":"Daniel Balouek-Thomert, E. Caron, L. Lefèvre","doi":"10.1109/IPDPSW.2015.121","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.121","url":null,"abstract":"Several approaches to reduce the power consumption of data enters have been described in the literature, most of which aim to improve energy efficiency by trading off performance for reducing power consumption. However, these approaches do not always provide means for administrators and users to specify how they want to explore such trade-offs. This work provides techniques for assigning jobs to distributed resources, exploring energy efficient resource provisioning. We use middleware-level mechanisms to adapt resource allocation according to energy-related events and user-defined rules. A proposed framework enables developers, users and system administrators to specify and explore energy efficiency and performance trade-offs without detailed knowledge of the underlying hardware platform. Evaluation of the proposed solution under three scheduling policies shows gains of 25% in energy-efficiency with minimal impact on the overall application performance. We also evaluate reactivity in the adaptive resource provisioning.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129563381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stream processing accelerators are often applied in MPSoCs for software-defined radios. Sharing these accelerators between different streams could improve their utilization and thereby reduce hardware cost, but doing so is challenging under real-time constraints. In this paper we introduce entry- and exit-gateways that are responsible for multiplexing blocks of data over accelerators under real-time constraints. These gateways check for the availability of sufficient data and space and thereby enable the derivation of a dataflow model of the application. The dataflow model is used to verify the worst-case temporal behaviour based on the sizes of the blocks of data used for multiplexing. We demonstrate that the required buffer capacities are non-monotone in the block size. Therefore, an ILP is presented to compute minimum block sizes and sufficient buffer capacities. The benefits of sharing accelerators are demonstrated using a multi-core system implemented on a Virtex 6 FPGA. In this system, a stereo audio stream from a PAL video signal is demodulated in real time, with two accelerators shared within and between two streams. Sharing reduces the number of accelerators by 75% and the number of logic cells by 63%.
{"title":"Real-Time Multiprocessor Architecture for Sharing Stream Processing Accelerators","authors":"B. Dekens, M. Bekooij, G. Smit","doi":"10.1109/IPDPSW.2015.147","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.147","url":null,"abstract":"Stream processing accelerators are often applied in MPSoCs for software defined radios. Sharing of these accelerators between different streams could improve their utilization and reduce thereby the hardware cost but is challenging under real-time constraints. In this paper we introduce entry- and exit-gateways that are responsible for multiplexing blocks of data over accelerators under real-time constraints. These gateways check for the availability of sufficient data and space and thereby enable the derivation of a dataflow model of the application. The dataflow model is used to verify the worst-case temporal behaviour based on the sizes of the blocks of data used for multiplexing. We demonstrate that required buffer capacities are non-monotone in the block size. Therefore, an ILP is presented to compute minimum block sizes and sufficient buffer capacities. The benefits of sharing accelerators are demonstrated using a multi-core system that is implemented on a Virtex 6 FPGA. A stereo audio stream from a PAL video signal is demodulated in this system in real-time where two accelerators are shared within and between two streams. In this system sharing reduces the number of accelerators by 75% and reduced the number of logic cells with 63%.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129720659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The solution of real symmetric dense eigenvalue problems is one of the fundamental matrix computations. To date, several new high-performance eigensolvers have been developed for peta- and post-peta-scale systems. One of these, the EigenExa eigensolver, has been developed in Japan. EigenExa provides two routines: eigen_s, which is based on traditional tridiagonalization, and eigen_sx, which employs a new method via a pentadiagonal matrix. Recently, we conducted a detailed performance evaluation of EigenExa using 4,800 nodes of the Oakleaf-FX supercomputer system. In this paper, we report the results of our evaluation, which focuses mainly on investigating the differences between the two routines. The results clearly indicate both the advantages and disadvantages of eigen_sx over eigen_s, which will contribute to further performance improvement of EigenExa. The obtained results are also expected to be useful for other parallel dense matrix computations beyond eigenvalue problems.
{"title":"Performance Evaluation of the Eigen Exa Eigensolver on Oakleaf-FX: Tridiagonalization Versus Pentadiagonalization","authors":"Takeshi Fukaya, Toshiyuki Imamura","doi":"10.1109/IPDPSW.2015.128","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.128","url":null,"abstract":"The solution of real symmetric dense Eigen value problems is one of the fundamental matrix computations. To date, several new high-performance Eigen solvers have been developed for peta and postpeta scale systems. One of these, the Eigen Exa Eigen solver, has been developed in Japan. Eigen Exa provides two routines: eigens, which is based on traditional tridiagonalization, and eigensx, which employs a new method via a pentadiagonal matrix. Recently, we conducted a detailed performance evaluation of Eigen Exa by using 4,800 nodes of the Oak leaf-FX supercomputer system. In this paper, we report the results of our evaluation, which is mainly focused on investigating the differences between the two routines. The results clearly indicate both the advantages and disadvantages of eigensx over eigens, which will contribute to further performance improvement of Eigen Exa. The obtained results are also expected to be useful for other parallel dense matrix computations, in addition to Eigen value problems.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"69 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130439804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jiayuan Meng, T. Uram, V. Morozov, V. Vishwanath, Kalyan Kumaran
Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scales to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged to utilize the hardware efficiently. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, and neighboring inner loops may exhibit different concurrency patterns (e.g., reduction vs. forall) yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer-loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, and the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.
{"title":"Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism","authors":"Jiayuan Meng, T. Uram, V. Morozov, V. Vishwanath, Kalyan Kumaran","doi":"10.1109/IPDPSW.2015.55","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.55","url":null,"abstract":"Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scale to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, neigh boring inner loops may exhibit different concurrency patterns (e.g. Reduction vs. Forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique to be integrated into future compilers or optimization frameworks for autotuning.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130093885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. W. Bethel, David Camp, D. Donofrio, Mark Howison
Many data-intensive algorithms -- particularly in visualization, image processing, and data analysis -- operate on structured data, that is, data organized in multidimensional arrays. While many of these algorithms are quite numerically intensive, by and large their performance is limited by the cost of memory accesses. As we move towards the exascale regime of computing, one central research challenge is finding ways to minimize data movement through the memory hierarchy, particularly within a node in a shared-memory parallel setting. We study the runtime performance gains that an alternative in-memory data layout achieves by reducing the amount of data moved through the memory hierarchy. We focus the study on shared-memory parallel implementations of two algorithms common in visualization and analysis: a stencil-based convolution kernel, which uses a structured memory access pattern, and ray-casting volume rendering, which uses a semi-structured memory access pattern. The question we study is to what degree an alternative memory layout, when used by these key algorithms, improves runtime performance and memory-system utilization. Our approach uses a layout based on a Z-order (Morton-order) space-filling curve, and we measure and report runtime along with various metrics and counters associated with memory-system utilization. Our results show nearly uniform improvements in runtime performance and in utilization of the memory hierarchy across varying levels of concurrency for the applications we tested. This approach is complementary to other memory optimization strategies like cache blocking, but may also be more general and widely applicable to a diverse set of applications.
{"title":"Improving Performance of Structured-Memory, Data-Intensive Applications on Multi-core Platforms via a Space-Filling Curve Memory Layout","authors":"E. W. Bethel, David Camp, D. Donofrio, Mark Howison","doi":"10.1109/IPDPSW.2015.71","DOIUrl":"https://doi.org/10.1109/IPDPSW.2015.71","url":null,"abstract":"Many data-intensive algorithms -- particularly in visualization, image processing, and data analysis -- operate on structured data, that is, data organized in multidimensional arrays. While many of these algorithms are quite numerically intensive, by and large, their performance is limited by the cost of memory accesses. As we move towards the exascale regime of computing, one central research challenge is finding ways to minimize data movement through the memory hierarchy, particularly within a node in a shared-memory parallel setting. We study the effects that an alternative in-memory data layout format has in terms of runtime performance gains resulting from reducing the amount of data moved through the memory hierarchy. We focus the study on shared-memory parallel implementations of two algorithms common in visualization and analysis: a stencil-based convolution kernel, which uses a structured memory access pattern, and ray casting volume rendering, which uses a semi-structured memory access pattern. The question we study is to better understand to what degree an alternative memory layout, when used by these key algorithms, will result in improved runtime performance and memory system utilization. Our approach uses a layout based on a Z-order (Morton-order) space-filling curve data organization, and we measure and report runtime and various metrics and counters associated with memory system utilization. Our results show nearly uniform improved runtime performance and improved utilization of the memory hierarchy across varying levels of concurrency the applications we tested. This approach is complementary to other memory optimization strategies like cache blocking, but may also be more general and widely applicable to a diverse set of applications.","PeriodicalId":340697,"journal":{"name":"2015 IEEE International Parallel and Distributed Processing Symposium Workshop","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126726405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}