Pub Date: 2017-12-01, DOI: 10.1109/TPDS.2017.2740285
P. D. Sanzo
Transactional Memory (TM) is a practical programming paradigm for developing concurrent applications. Performance is a critical factor for TM implementations, and various studies have demonstrated that specialised transaction/thread scheduling support is essential for building performance-effective TM systems. After a decade of research, this article reviews the wide variety of scheduling techniques proposed for Software Transactional Memories. Based on the peculiarities and differences of the adopted scheduling strategies, we propose a classification of the existing techniques and discuss the specific characteristics of each one. We also analyse the results of previous evaluation and comparison studies, and we present the results of a new experimental study encompassing techniques based on different scheduling strategies. Finally, we identify potential strengths and weaknesses of the different techniques, as well as the issues that require further investigation.
{"title":"Analysis, Classification and Comparison of Scheduling Techniques for Software Transactional Memories","authors":"P. D. Sanzo","doi":"10.1109/TPDS.2017.2740285","DOIUrl":"https://doi.org/10.1109/TPDS.2017.2740285","url":null,"abstract":"Transactional Memory (TM) is a practical programming paradigm for developing concurrent applications. Performance is a critical factor for TM implementations, and various studies demonstrated that specialised transaction/thread scheduling support is essential for implementing performance-effective TM systems. After one decade of research, this article reviews the wide variety of scheduling techniques proposed for Software Transactional Memories. Based on peculiarities and differences of the adopted scheduling strategies, we propose a classification of the existing techniques, and we discuss the specific characteristics of each technique. Also, we analyse the results of previous evaluation and comparison studies, and we present the results of a new experimental study encompassing techniques based on different scheduling strategies. Finally, we identify potential strengths and weaknesses of the different techniques, as well as the issues that require to be further investigated.","PeriodicalId":13128,"journal":{"name":"IEEE Trans. Parallel Distributed Syst.","volume":"9 1","pages":"3356-3373"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84781917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Delporte-Gallet, H. Fauconnier, J. Hélary, M. Raynal
The Global Data Computation problem consists of providing each process with the same vector (with one entry per process) such that each entry is filled with a value provided by the corresponding process. This paper presents a protocol that solves this problem in an asynchronous distributed system where processes can crash but which is equipped with a perfect failure detector. This protocol requires that processes execute asynchronous computation rounds. The number of rounds is upper bounded by min(f+2, t+1, n), where n, t, and f represent the total number of processes, the maximum number of processes that can crash, and the number of processes that actually crash, respectively. This value is a lower bound for the number of rounds when t < n-1. To our knowledge, this protocol is the first to achieve this lower bound. Interestingly, this protocol meets the same lower bound as the one required in synchronous systems.
{"title":"Early stopping in global data computation","authors":"C. Delporte-Gallet, H. Fauconnier, J. Hélary, M. Raynal","doi":"10.1145/571825.571871","DOIUrl":"https://doi.org/10.1145/571825.571871","url":null,"abstract":"The Global Data Computation problem consists of providing each process with the same vector (with one entry per process) such that each entry is filled by a value provided by the corresponding process. This paper presents a protocol that solves this problem in an asynchronous distributed system where processes can crash, but equipped with a perfect failure detector. This protocol requires that processes execute asynchronous computation rounds. The number of rounds is upper bounded by min(f+2, t+1, n), where n, t, and f represent the total number of processes, the maximum number of processes that can crash, and the number of processes that actually crash, respectively. This value is a lower bound for the number of rounds when t<n-1. To our knowledge, this protocol is the first to achieve this lower bound. Interestingly, this protocol meets the same lower bound as the one required in synchronous systems.","PeriodicalId":13128,"journal":{"name":"IEEE Trans. Parallel Distributed Syst.","volume":"100 1","pages":"909-921"},"PeriodicalIF":0.0,"publicationDate":"2002-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72926277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
N. Sundar, Doddaballapur Narasimha-Murthy Jayasimha, D. Panda
Parallel algorithms for several common problems, such as sorting and the FFT, involve a personalized exchange of data among all the processors. Past approaches to complete exchange have followed one of two broad strategies: direct exchange or indirect message-combining. While combining approaches reduce the number of message startups, direct exchange minimizes the volume of data transmitted. This paper presents a family of hybrid algorithms for wormhole-routed 2D meshes that can effectively combine the complementary strengths of these two approaches to complete exchange. The performance of hybrid algorithms using Cyclic Exchange (26) and Scott's Direct Exchange (23) is studied using analytical models, simulation, and implementation on a Cray T3D system. The results show that hybrids achieve lower completion times than either pure algorithm for a range of mesh sizes, data block sizes, and message startup costs. It is also demonstrated that barriers may be used to enhance performance by reducing message contention, whether or not the target system provides hardware support for barrier synchronization. The analytical models are shown to be useful in selecting the optimum hybrid for any given combination of system parameters (mesh size, message startup time, flit transfer time, and barrier cost) and the problem parameter (data block size).
{"title":"Hybrid algorithms for complete exchange in 2D meshes","authors":"N. Sundar, Doddaballapur Narasimha-Murthy Jayasimha, D. Panda","doi":"10.1145/237578.237602","DOIUrl":"https://doi.org/10.1145/237578.237602","url":null,"abstract":"Parallel algorithms for several common problems such as sorting and the FFT involve a personalized exchange of data among all the processors. Past approaches to doing complete exchange have taken one of two broad approaches: direct exchange or the indirect message-combining approaches. While combining approaches reduce the number of message startups, direct exchange minimizes the volume of data transmitted. This paper presents a family of hybrid algorithms for wormhole-routed 2D meshes that can effectively utilize the complementary strengths of these two approaches to complete exchange. The performance of hybrid algorithms using Cyclic Exchange (26) and Scott's Direct Exchange (23) are studied using analytical models, simulation, and implementation on a Cray T3D system. The results show that hybrids achieve lower completion times than either pure algorithm for a range of mesh sizes, data block sizes, and message startup costs. It is also demonstrated that barriers may be used to enhance performance by reducing message contention, whether or not the target system provides hardware support for barrier synchronization. The analytical models are shown useful in selecting the optimum hybrid for any given combination of system parameters (mesh size, message startup time, flit transfer time, and barrier cost) and the problem parameter (data block size).","PeriodicalId":13128,"journal":{"name":"IEEE Trans. Parallel Distributed Syst.","volume":"41 1","pages":"1201-1218"},"PeriodicalIF":0.0,"publicationDate":"2001-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72686879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current group communication services have mostly been implemented on a homogeneous, distributed computing environment. This limits their applicability because most modern distributed computing environments are heterogeneous in nature. This paper describes the design, implementation, and performance evaluation of a CORBA group communication service. Using CORBA to implement a group communication service enables the service to operate in a heterogeneous, distributed computing environment. To evaluate the effect of CORBA on the performance of a group communication service, this paper provides a detailed comparison of the performance measured from three implementations of an atomic broadcast protocol and a group membership protocol. Two of these implementations use CORBA, while the third uses UDP sockets for interprocess communication. The main conclusion is that heterogeneity can be achieved in group communication services by implementing them using CORBA, but there is a substantial performance cost. This performance cost can be reduced to a certain extent by carefully choosing a design and tuning various protocol parameters such as buffer sizes and timer values.
{"title":"On group communication support in CORBA","authors":"Shivakant Mishra, Lanlan Fei, Xiao Lin, Guming Xing","doi":"10.1145/371209.371538","DOIUrl":"https://doi.org/10.1145/371209.371538","url":null,"abstract":"Current group communication services have mostly been implemented on a homogeneous, distributed computing environment. This limits their applicability because most modern distributed computing environment are heterogeneous in nature. This paper describes the design, implementation, and performance evaluation of a CORBA group communication service. Using CORBA to implement a group communication service enables that group communication service to operate in a heterogeneous, distributed computing environment. To evaluate the effect of CORBA on the performance of a group communication service, this paper provides a detailed comparison of the performance measured from three implementations of an atomic broadcast protocol and a group membership protocol. Two of these implementations use CORBA, while the third uses UDP sockets for interprocess communication. The main conclusion is that heterogeneity can be achieved in group communication services by implementing them using CORBA, but there is a substantial performance cost. This performance cost can be reduced to a certain extent by carefully choosing a design and tuning various protocol parameters such as buffer sizes and timer values.","PeriodicalId":13128,"journal":{"name":"IEEE Trans. Parallel Distributed Syst.","volume":"49 1","pages":"193-208"},"PeriodicalIF":0.0,"publicationDate":"2001-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72782596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on multiprocessor interconnection networks has primarily focused on wormhole switching, virtual channel flow control, and routing algorithms to enhance their performance. The rationale behind this research is that, by alleviating the network latency at high network loads, the overall system performance would improve. Many studies have used synthetic workloads to support this claim. However, such workloads may not necessarily capture the behavior of real applications. In this paper, we have used parallel applications for a closer examination of the network behavior. In particular, the performance benefit from enhancing a 2D mesh with virtual channels (VCs) and a fully adaptive routing algorithm is examined with a set of shared-memory and message-passing applications. Execution time and average message latency of shared-memory applications are measured using execution-driven simulation and by varying many architectural attributes that affect the network workload. The communication traces of message-passing applications, collected on an IBM-SP2, are used to run a trace-driven simulation of the mesh architecture to obtain message latency. Simulation results show that VCs and adaptive routing can reduce the network latency to varying degrees depending on the application. However, these modest benefits do not translate into significant improvements in the overall execution time because the load on the network is not high enough to exploit the advantages of the network enhancements. Moreover, this benefit may be negated if the architectural enhancements increase the network cycle time. Rather, emphasis should be placed on improving the raw network bandwidth and providing faster network interfaces. Index Terms—Adaptive routing, architectural simulation, interconnection network, mesh network, performance evaluation, virtual channels.
{"title":"Impact of virtual channels and adaptive routing on application performance","authors":"A. S. Vaidya, A. Sivasubramaniam, C. Das","doi":"10.1145/371209.371542","DOIUrl":"https://doi.org/10.1145/371209.371542","url":null,"abstract":"Research on multiprocessor interconnection networks has primarily focused on wormhole switching, virtual channel flow control, and routing algorithms to enhance their performance. The rationale behind this research is that by alleviating the network latency for high network loads, the overall system performance would improve. Many studies have used synthetic workloads to support this claim. However, such workloads may not necessarily capture the behavior of real applications. In this paper, we have used parallel applications for a closer examination of the network behavior. In particular, the performance benefit from enhancing a 2D mesh with virtual channels (VCs) and a fully adaptive routing algorithm is examined with a set of shared-memory and message passing applications. Execution time and average message latency of shared memory applications are measured using execution-driven simulation and by varying many architectural attributes that affect the network workload. The communication traces of message passing applications, collected on an IBM-SP2, are used to run a trace-driven simulation of the mesh architecture to obtain message latency. Simulation results show that VCs and adaptive routing can reduce the network latency to varying degrees depending on the application. However, these modest benefits do not translate to significant improvements in the overall execution time because the load on the network is not high enough to exploit the advantages of the network enhancements. Moreover, this benefit may be negated if the architectural enhancements increase the network cycle time. Rather, emphasis should be placed on improving the raw network bandwidth and faster network interfaces. Index Terms—Adaptive routing, architectural simulation, interconnection network, mesh network, performance evaluation, virtual channels. E","PeriodicalId":13128,"journal":{"name":"IEEE Trans. Parallel Distributed Syst.","volume":"24 1","pages":"223-237"},"PeriodicalIF":0.0,"publicationDate":"2001-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90564857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ibraheem Al-Furaih, S. Aluru, Sanjay Goil, S. Ranka
The multidimensional binary search tree (abbreviated k-d tree) is a popular data structure for the organization and manipulation of spatial data. The data structure is useful in several applications, including graph partitioning, hierarchical applications such as molecular dynamics and n-body simulations, and databases. In this paper, we study efficient parallel construction of k-d trees on coarse-grained distributed-memory parallel computers. We consider several algorithms for parallel k-d tree construction and analyze them theoretically and experimentally, with a view towards identifying the algorithms that are practically efficient. We have carried out detailed implementations of all the algorithms discussed on the CM-5 and report on experimental results. Index Terms—k-d trees, hypercubes, meshes, multidimensional binary search trees, parallel algorithms, parallel computers.
{"title":"Parallel construction of multidimensional binary search trees","authors":"Ibraheem Al-Furaih, S. Aluru, Sanjay Goil, S. Ranka","doi":"10.1145/237578.237605","DOIUrl":"https://doi.org/10.1145/237578.237605","url":null,"abstract":"Multidimensional binary search tree (abbreviated k-d tree) is a popular data structure for the organization and manipulation of spatial data. The data structure is useful in several applications including graph partitioning, hierarchical applications such as molecular dynamics and n-body simulations, and databases. In this paper, we study efficient parallel construction of k-d trees on coarse-grained distributed memory parallel computers. We consider several algorithms for parallel k-d tree construction and analyze them theoretically and experimentally, with a view towards identifying the algorithms that are practically efficient. We have carried out detailed implementations of all the algorithms discussed on the CM-5 and report on experimental results. Index Terms—k-d trees, hypercubes, meshes, multidimensional binary search trees, parallel algorithms, parallel computers.","PeriodicalId":13128,"journal":{"name":"IEEE Trans. Parallel Distributed Syst.","volume":"262 1","pages":"136-148"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76438278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}