Taiwan's Information Superhighway: Technical Issues and Social Impacts
Pub Date: 1994-12-19 · DOI: 10.1109/ICPADS.1994.589884
Chien-Ming Ker, D. Du, George Spix, Lance Wu, S. Chang, Jin-Tuu Wang
The advent of the Internet has been one of the most exciting major events of the second half of the 20th century. The ancient dream that “a scholar knows all things happening in the world without venturing outdoors” has finally become a reality. The Internet began to take off in 1993. At present, it has spread to more than 180 countries and regions, connecting more than 600,000 domestic networks of various types and hooking up more than 20 million computers available to 120 million users (2% of the entire global population). Within the Internet lie the information treasures shared by all human civilizations.
{"title":"Taiwan's Information Superhighway: Technical Issues and Social Impacts","authors":"Chien-Ming Ker, D. Du, George Spix, Lance Wu, S. Chang, Jin-Tuu Wang","doi":"10.1109/ICPADS.1994.589884","DOIUrl":"https://doi.org/10.1109/ICPADS.1994.589884","url":null,"abstract":"The advent of the Internet has been one of the most exciting major events in the second half of the 20 century. The ancient dream of “a scholar knows all things happening in the world without venturing outdoors” has finally become a reality. Since 1993, the Internet started to take off. At present, the Internet has spread to more than 180 countries and regions, connecting more than 600,000 domestic networks of various types, hooking up more than 20 million computers available to 120 million users (2% of the entire global population). Within the Internet are the information treasures shared by all human civilizations.","PeriodicalId":281075,"journal":{"name":"International Conference on Parallel and Distributed Systems","volume":"150-151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125182563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel Processing: What Have We Done Wrong?
Pub Date: 1994-12-19 · DOI: 10.1109/ICPADS.1994.589887
Lionel M. Ni, Kuo-Wei Wu, Ken Kennedy, H. Siegel, George Spix, Steven J. Wallach, Hans P. Zima
Parallel processing has been a subject of extensive research for over 20 years. Especially in the last 10 years, many commercial parallel machines have become available, from small-scale parallel machines to massively parallel machines. At one time, it was claimed that parallel machines would become the mainstream computers. More recently, however, some parallel computer vendors have gone out of business and others are struggling. Some pessimists have even claimed that this is a dying field. So, what’s wrong? Five distinguished panelists are invited to share their views on this issue. The panelists are also expected to address what could and should be done to make parallel computers truly mainstream computers.
{"title":"Parallel Processing: What Have We Done Wrong?","authors":"Lionel M. Ni, Kuo-Wei Wu, Ken Kennedy, H. Siegel, George Spix, Steven J. Wallach, Hans P. Zima","doi":"10.1109/ICPADS.1994.589887","DOIUrl":"https://doi.org/10.1109/ICPADS.1994.589887","url":null,"abstract":"Parallel processing has been a subject of extensive research for over 20 years, especially in the last 10 years, with many commercial parallel machines becoming available, from small scale parallel machines to massively parallel machines. At one time, it was claimed that parallel machines will become the mainstream computers. However, more recently, some parallel computer vendors have gone out of business and some others are struggling. Some pessimists even claimed that this is a dying field. So, what’s wrong? Five distinguished panelists are invited to share their views on this issue. The panelists are also expected to address what could be done and could be done in order to make parallel computers truly mainstream computers. Panelists","PeriodicalId":281075,"journal":{"name":"International Conference on Parallel and Distributed Systems","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116051706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generalizing the Unimodular Approach
Pub Date: 1994-12-19 · DOI: 10.1109/ICPADS.1994.590345
D. Chesney, B. Cheng
Most of the available parallelism in source code is contained in loops and is exploited by applying a sequence of loop transformations. Different methods of representing and ordering sequences of transformations have been developed, including the use of unimodular transformations, which unify loop permutation, loop reversal, and loop skewing of perfectly nested loops. This paper presents three extensions to the unimodular approach that make it applicable to a wider range of source code structures. First, the unimodular transformations are extended to represent additional loop transformation techniques, namely loop fission, loop fusion, loop blocking (tiling), strip mining, cycle shrinking, loop coalescing, and loop collapsing. Second, the application of unimodular transformations is generalized to handle both perfectly and imperfectly nested loops. Third, attractive properties of the original unimodular transformations are preserved by the generalized model.
{"title":"Generalizing the Unimodular Approach","authors":"D. Chesney, B. Cheng","doi":"10.1109/ICPADS.1994.590345","DOIUrl":"https://doi.org/10.1109/ICPADS.1994.590345","url":null,"abstract":"Most of the available parallelism in source code is contained in loops and is exploited by applying a sequence of loop transformations. Diflerent methods of representing and ordering sequences oftransformations have been developed, including the use of unimodular transformations, which unify loop permutation, loop reversal, and loop skewing of perfectly nested loops. This paper presents three extensions to the unimodular approach that make it applicable to a wider range of source code structures. First, the unimodular transformations are extended to represent additional loop transformation techniques, namely loop fission, loop fusion, loop blocking (tiling), strip mining, cycle shrinking, loop coalescing, and loop collapsing. Second, the application of unimodular transformations is generalized to handle both perfectly and imperfectly nested loops. Third, attractive properties of the original unimodular transformations are preserved by the generalized model.","PeriodicalId":281075,"journal":{"name":"International Conference on Parallel and Distributed Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122878276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object Technology and Distributed Operating Systems
Pub Date: 1994-12-19 · DOI: 10.1109/ICPADS.1994.589879
E. Manning
{"title":"Object Technology and Distributed Operating Systems","authors":"E. Manning","doi":"10.1109/ICPADS.1994.589879","DOIUrl":"https://doi.org/10.1109/ICPADS.1994.589879","url":null,"abstract":"","PeriodicalId":281075,"journal":{"name":"International Conference on Parallel and Distributed Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131251320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Declustering Scheme With Guaranteed Worst-Case Additive Error O(k^{1/(d-1)})
DOI: 10.1109/ICPADS.2005.16
Fouad B. Chedid
Declustering schemes for range queries have been widely used in parallel storage systems to allow fast access to multidimensional data. A declustering scheme distributes data blocks among several devices (e.g., disks) so that the number of parallel block accesses needed per query is minimized. Given a system of k disks, a query that accesses m blocks needs at least OPT = ⌈m/k⌉ parallel block accesses. In the literature, the performance of a declustering scheme is measured by its worst-case additive deviation from OPT. A number of asymptotically optimal declustering schemes are known for 2-dimensional range queries. The case of higher dimensions appears intrinsically very difficult, and none of the proposed schemes provide any non-trivial performance guarantees in higher dimensions. In this paper, we describe a declustering scheme with a guaranteed worst-case performance of OPT + O(k^{1/(d-1)}) parallel block accesses for d dimensions. Our scheme is a generalization of a 2-dimensional scheme proposed by Atallah and Prabhakar in 2000.
NVMCFS: Complex File System for Hybrid NVM
DOI: 10.1109/ICPADS.2016.0082
Tao Cai, Dejiao Niu, Yao He, Yeqing Zhu
Due to price limitations and the limited number of DIMM slots, byte-addressable and block-addressable NVM devices should coexist in massive Storage Class Memory (SCM). But they differ in many respects, such as interface, access granularity, I/O performance, and storage capacity, so existing main memory and file system management algorithms cannot be applied to them directly. In this paper, we present a complex file system for hybrid NVM named NVMCFS. A head-tail layout and space management based on a two-layer radix tree are provided to unify the logical space across the two types of NVM devices. Complex file structures, a dynamic file data distribution strategy, per-file buffering, and an asymmetric call-in strategy are used to speed up access response and improve I/O performance. A hybrid consistency mechanism is given that reduces the performance loss of NVMCFS. Finally, a prototype of NVMCFS is implemented and evaluated with various benchmarks. Compared to Ext2 and Ext4 on PMBD, NVMCFS improves sequential read speed by 4.4x and 5x, sequential write speed by 2.8x and 1.9x, and IOPS by 45% and 62%, and it has I/O performance similar to PMFS. At the same time, NVMCFS reduces the total overhead of consistency by 50%~92% compared to Ext4.
{"title":"NVMCFS: Complex File System for Hybrid NVM","authors":"Tao Cai, Dejiao Niu, Yao He, Yeqing Zhu","doi":"10.1109/ICPADS.2016.0082","DOIUrl":"https://doi.org/10.1109/ICPADS.2016.0082","url":null,"abstract":"Due to the price limitation and the number of DIMM slot, Byte and Block addressable NVM devices should coexist in the massive Storage Class Memory(SCM). But they have many differences such as interface, access granularity, I/O performance and storage capacity. Therefore, the existing main memory and file system management algorithms cannot be applied in it directly. In this paper, we present a complex file system named NVMCFS for Hybrid NVM. The head-tail layout and space management based on two layer radix-tree is provided to unify logic space between two type NVM devices. The complex file structures, dynamic file data distributed strategy, buffer for an individual file and asymmetric call in strategy are used to speed up the access response and improve I/O performance. The hybrid consistent mechanism is given and it can reduce the performance loss NVMCFS. Finally, the prototype of NVMCFS is implemented and evaluated by various benchmark. Compared to Ext2 and Ext4 on PMBD, NVMCFS improves sequential read speed 4.4x and 5x, sequential write speed 2.8x and 1.9x, IOPS 45% and 62%, and has the similar I/O performance with PMFS. At the same time, NVMCFS reduces the total overhead of consistency by 50%~92% compared to Ext4.","PeriodicalId":281075,"journal":{"name":"International Conference on Parallel and Distributed Systems","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121690719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting deadline monotonic policy over 802.11 average service time analysis
DOI: 10.1109/ICPADS.2007.4447836
I. Korbi, L. Saïdane
In this paper, we propose a real-time scheduling policy over the 802.11 DCF protocol called Deadline Monotonic (DM). We evaluate the performance of this policy for a simple scenario where two stations with different delay constraints contend for the channel. For this scenario, a Markov chain based analytical model is proposed. From the mathematical model, we derive expressions for the average medium access delay, called the service time. Analytical results are validated by simulations using the ns-2 network simulator.
{"title":"Supporting deadline monotonic policy over 802.11 average service time analysis","authors":"I. Korbi, L. Saïdane","doi":"10.1109/ICPADS.2007.4447836","DOIUrl":"https://doi.org/10.1109/ICPADS.2007.4447836","url":null,"abstract":"In this paper, we propose a real time scheduling policy over 802.11 DCF protocol called Deadline Monotonic (DM). We evaluate the performance of this policy for a simple scenario where two stations with different delay constraints contend for the channel. For this scenario a Markov chain based analytical model is proposed. From the mathematical model, we derive expressions of the average medium access delay called service time. Analytical results are validated by simulation results using the ns-2 network simulator.","PeriodicalId":281075,"journal":{"name":"International Conference on Parallel and Distributed Systems","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128026351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Sniff Scheduling Policy for Power Saving in Bluetooth Piconet
DOI: 10.1109/ICPADS.2005.52
Xiang Li, Xiaozong Yang
After analyzing sniff mode, a low-power operation mode of Bluetooth, a learning function is used to approximate the distribution of the incoming traffic at a master-slave pair. Based on the inter-arrival times of data packets obtained from the learning function, the mean of these inter-arrival times is taken as the candidate sniff interval; according to the backlogged packets in the buffer and a forecast of the next burst of data traffic, a cost model is used to approximate the slot occupancy assigned to a slave. The sniff attempt slots are then calculated, and the link enters sniff mode if the conditions are satisfied. Finally, computer simulation results validate that the proposed scheduling policy can save about 38.6% of power consumption compared to the always-active mode.
Constructing Double-Erasure HoVer Codes Using Latin Squares
DOI: 10.1109/ICPADS.2008.55
Gang Wang, X. Liu, Sheng Lin, Gu-Ya Xie, Jing Liu
Storage applications are in urgent need of multi-erasure codes, but there is no consensus on the best coding technique. Hafner has presented a class of multi-erasure codes named HoVer codes [1]. These codes have a unique data/parity layout that provides a range of implementation options covering a large portion of the performance/efficiency trade-off space, so they can be applied to many scenarios by simple tuning. In this paper, we give a combinatorial representation of a family of double-erasure HoVer codes by creating a mapping between this family of codes and Latin squares. We also present two families of double-erasure HoVer codes, based respectively on column-Hamiltonian Latin squares (of odd order) and on a family of Latin squares of even order. Compared with the double-erasure HoVer codes presented in [1], the new codes enable greater flexibility in the performance/efficiency trade-off.
Message from the program chair
DOI: 10.1109/ICPADS.2007.4447708
C. King
Welcome to the proceedings of the 10th International Conference on High Performance Computing, HiPC 2003. This year, we were delighted to have 164 papers submitted to this conference from 20 different countries, including countries in North America, South America, Europe, Asia, and the Middle East. Of these, 48 papers from 11 different countries were accepted for presentation at the conference and publication in the conference proceedings. Fewer than 30% of the submitted papers were accepted this year, with each paper receiving a minimum of three reviews. Although the selection process was quite competitive, we were pleased to accommodate 10 (parallel) technical sessions of high-quality contributed papers. In addition to the contributed paper sessions, this year’s conference also featured a poster session, an industrial track session, five keynote addresses, five tutorials and seven workshops. It was a pleasure putting this program together with the help of five excellent Program Vice-Chairs and the 65-person Program Committee. Although the hard work of all the program committee members is deeply appreciated, I especially wish to acknowledge the dedicated effort made by the Vice-Chairs: Rajiv Gupta (Architecture), Jose Moreira (System Software), Stephan Olariu (Communication Networks), Yuanyuan Yang (Algorithms), and Xiaodong Zhang (Applications). Without their help and timely work, the quality of the program would not have been as high nor would the process have run so smoothly. I also wish to thank the other members of the supporting cast who helped in putting together this program, including those who organized the keynotes, tutorials, workshops, poster session, and industrial track session, and those who performed the administrative functions that were essential to the success of this conference. The work of Sushil Prasad in putting together the conference proceedings is also acknowledged, as well as the support provided by Jeonghee Shin in maintaining the CyberChair on-line paper submission and evaluation software. Last, but certainly not least, I express heartfelt thanks to our General Co-chair, Viktor Prasanna, for all his useful advice and for giving me the opportunity to serve as the program chair of this conference. This truly was a very rewarding experience for me. I trust you find this proceedings volume to be as informative and stimulating as we endeavored to make it. If you attended HiPC 2003, I hope you found time to enjoy the rich cultural experience provided by this interesting city of Hyderabad, India!
{"title":"Message from the program chair","authors":"C. King","doi":"10.1109/ICPADS.2007.4447708","DOIUrl":"https://doi.org/10.1109/ICPADS.2007.4447708","url":null,"abstract":"Welcome to the proceedings of the 10th International Conference on High Performance Computing, HiPC 2003. This year, we were delighted to have 164 papers submitted to this conference from 20 different countries, including countries in North America, South America, Europe, Asia, and the Middle East. Of these, 48 papers from 11 different countries were accepted for presentation at the conference and publication in the conference proceedings. Less than 30% of the submitted papers were accepted this year, with each paper receiving a minimum of three reviews. Although the selection process was quite competitive, we were pleased to accomodate 10 (parallel) technical sessions of high-quality contributed papers. In addition to the contributed paper sessions, this year’s conference also featured a poster session, an industrial track session, five keynote addresses, five tutorials and seven workshops. It was a pleasure putting this program together with the help of five excellent Program Vice-Chairs and the 65-person Program Committee. Although the hard work of all the program committee members is deeply appreciated, I especially wish to acknowledge the dedicated effort made by the Vice-Chairs: Rajiv Gupta (Architecture), Jose Moreira (System Software), Stephan Olariu (Communication Networks), Yuanyuan Yang (Algorithms), and Xiaodong Zhang (Applications). Without their help and timely work, the quality of the program would not have been as high nor would the process have run so smoothly. I also wish to thank the other members of the supporting cast who helped in putting together this program, including those who organized the keynotes, tutorials, workshops, poster session, and industrial track session, and those who performed the administrative functions that were essential to the success of this conference. The work of Sushil Prasad in putting together the conference proceedings is also acknowledged, as well as the support provided by Jeonghee Shin in maintaining the CyberChair on-line paper submission and evaluation software. Last, but certainly not least, I express heartfelt thanks to our General Co-chair, Viktor Prasanna, for all his useful advice and for giving me the opportunity to serve as the program chair of this conference. This truly was a very rewarding experience for me. I trust you find this proceedings volume to be as informative and stimulating as we endeavored to make it. If you attended HiPC 2003, I hope you found time to enjoy the rich cultural experience provided by this interesting city of Hyderabad, India!","PeriodicalId":281075,"journal":{"name":"International Conference on Parallel and Distributed Systems","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131750077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}