A BSP approach to the scheduling of tightly-nested loops
R. Calinescu
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580954
This paper addresses the scheduling of uniform-dependence loop nests within the framework of the bulk-synchronous parallel (BSP) model. Two broad classes of tightly-nested loops are identified and scheduled according to the BSP discipline, and the resulting schedules are analysed in terms of the BSP cost model.
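For readers unfamiliar with the cost model the schedules are analysed in, the standard BSP formulation charges each superstep for its local work, its communication volume, and a barrier. A minimal sketch under that standard model (the function name and parameter layout are illustrative, not taken from the paper):

```python
def bsp_cost(supersteps, g, l):
    """Standard BSP cost model: a superstep with maximum local work w and
    maximum per-processor fan-in/fan-out h costs w + g*h + l, where g is
    the per-word communication gap and l is the barrier latency.
    `supersteps` is a list of (w, h) pairs; total cost is the sum."""
    return sum(w + g * h + l for w, h in supersteps)
```

A schedule that merges supersteps trades extra work `w` against saved barriers `l`, which is exactly the kind of trade-off a BSP cost analysis exposes.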
The sparse cyclic distribution against its dense counterparts
G. Bandera, M. Ujaldón, M. A. Trenas, E. Zapata
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580969
Several methods have been proposed in the literature for the distribution of data on distributed-memory machines, oriented to either dense or sparse structures. Many real applications, however, deal with both kinds of data jointly. The paper presents techniques for integrating dense and sparse array accesses in a way that optimizes locality and further allows efficient loop partitioning within a data-parallel compiler. The approach is evaluated through an experimental survey with several compilers and parallel platforms. The results demonstrate the benefits of the BRS sparse distribution when combined with CYCLIC in mixed algorithms, and the poor efficiency achieved by well-known distribution schemes when sparse elements arise in the source code.
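For context, the dense distributions in play are the HPF-style BLOCK and CYCLIC mappings of array elements to processors; the sparse BRS scheme applies the same cyclic idea to the entries of a sparse matrix. A minimal sketch of the two dense owner computations (function names are illustrative, not from the paper):

```python
def block_owner(i, n, p):
    """BLOCK distribution: element i of an n-element array lives on the
    processor owning the contiguous chunk of ceil(n/p) elements holding i."""
    chunk = -(-n // p)  # ceil(n / p) using integer arithmetic
    return i // chunk

def cyclic_owner(i, p):
    """CYCLIC distribution: elements are dealt round-robin, owner = i mod p."""
    return i % p
```

CYCLIC's round-robin dealing is what balances load when nonzeros cluster, at the price of locality; that tension is the subject of the paper's experiments.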
A randomized sorting algorithm on the BSP model
A. Gerbessiotis, Constantinos J. Siniolakis
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580912
The authors present a new randomized sorting algorithm on the bulk-synchronous parallel (BSP) model. The algorithm improves upon the parallel slack of previous algorithms to achieve optimality. Tighter probabilistic bounds are also established. It uses sample sorting and utilizes recently introduced search algorithms for a class of data structures on the BSP model. Moreover, the methods are within a 1+o(1) multiplicative factor of the respective sequential methods in terms of speedup for a wide range of the BSP parameters.
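The sample-sorting idea at the core of such algorithms is simple to state sequentially: splitters drawn from a random sample partition the input into p buckets of roughly equal size, one per processor. The sketch below is only this sequential skeleton, not the paper's BSP algorithm; the oversampling ratio and names are made up for illustration:

```python
import random

def sample_sort(data, p=4, oversample=8):
    """Sequential sketch of sample sort: draw a random sample, pick p-1
    splitters from it, partition into p buckets, sort each bucket. In a
    BSP setting each bucket would be sent to one processor; here the
    sorted buckets are simply concatenated."""
    if len(data) <= p:
        return sorted(data)
    sample = sorted(random.sample(data, min(len(data), p * oversample)))
    # p-1 splitters at regular positions in the sorted sample.
    splitters = [sample[i * len(sample) // p] for i in range(1, p)]
    buckets = [[] for _ in range(p)]
    for x in data:
        # Bucket index = number of splitters not exceeding x.
        i = sum(s <= x for s in splitters)  # linear scan; bisect in practice
        buckets[i].append(x)
    return [y for b in buckets for y in sorted(b)]
```

Oversampling is what makes the bucket sizes concentrate near n/p with high probability, which is where the "tighter probabilistic bounds" of such analyses come in.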
A parallel priority data structure with applications
G. Brodal, J. Träff, C. Zaroliagis
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580979
The authors present a parallel priority data structure that improves the running time of certain algorithms for problems that lack a fast and work-efficient parallel solution. As a main application, they give a parallel implementation of Dijkstra's (1959) algorithm which runs in O(n) time while performing O(m log n) work on a CREW PRAM. This is a logarithmic-factor improvement in running time compared with previous approaches. The main feature of the data structure is that the operations needed in each iteration of Dijkstra's algorithm can be supported in O(1) time.
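To see which per-iteration operations the data structure must support, it helps to look at the sequential baseline: each iteration extracts the minimum and relaxes the outgoing edges, updating priorities. A standard binary-heap sketch of that baseline (not the paper's parallel structure):

```python
import heapq

def dijkstra(adj, source):
    """Sequential Dijkstra with a binary heap, using lazy deletion (stale
    entries are skipped on pop). The paper's contribution replaces this
    queue with a structure whose per-iteration operations take O(1)
    parallel time on a CREW PRAM. adj: {u: [(v, weight), ...]}."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry left by an earlier decrease
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

The n extract-min and m decrease-key operations are exactly what the O(1)-per-iteration support targets, turning the n iterations into O(n) parallel time.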
O(log log n) time algorithms for Hamiltonian-suffix and min-max-pair heap operations on hypercube multicomputers
Sajal K. Das, M. C. Pinotti
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580947
We present an efficient mapping of a min-max-pair heap of size N onto a hypercube multicomputer of p processors in such a way that the load on each processor's local memory is balanced and no additional communication overhead is incurred for the implementation of the single insertion, deletemin and deletemax operations. Our novel approach is based on an optimal mapping of the paths of a binary heap into a hypercube such that in O(log N/p + log p) time we can compute the Hamiltonian-suffix, defined as a pipelined suffix-minima computation on an O(log N)-length heap path embedded into the Hamiltonian path of the hypercube according to the binary reflected Gray codes. Notably, the binary tree underlying the heap data structure is not altered by the mapping process.
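The binary reflected Gray code driving the embedding has a one-line closed form: the i-th codeword is i XOR (i >> 1). Consecutive codewords differ in exactly one bit, which is precisely why the code sequence traces a Hamiltonian path through the hypercube (whose nodes are bit strings, with edges between strings at Hamming distance 1). A quick sketch:

```python
def gray(i):
    """i-th binary reflected Gray code word: i XOR (i >> 1).
    gray(0), gray(1), ..., gray(2**d - 1) visits every node of the
    d-dimensional hypercube, moving along one edge at each step."""
    return i ^ (i >> 1)
```

Embedding a heap path into this Hamiltonian path is what lets the suffix-minima computation be pipelined across neighbouring processors.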
Empirical evaluation of distributed mutual exclusion algorithms
S. Fu, N. Tzeng, Zhiyuan Li
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580904
We evaluate various distributed mutual exclusion algorithms on the IBM SP2 machine and the Intel iPSC/860 system. The empirical results are compared in terms of such criteria as the number of message exchanges and the response time. Our results indicate that the Star algorithm (M.L. Neilsen and M. Mizuno, 1991) achieves the shortest response time in most cases among all the algorithms on a small- to medium-sized system, when processors request the critical section many times before reaching any barrier synchronization. On the other hand, if every processor enters the critical section only once before encountering a barrier, the improved Ring algorithm (S.S. Fu and N.-F. Tzeng, 1995) is found to outperform the others under a heavy load, whereas the Star algorithm and the CSL algorithm (Y.I. Chang et al., 1990) prevail when the request rate is light. The best solution to mutual exclusion in distributed-memory systems is thus determined by how participating sites generate their mutual exclusion requests.
Evaluating the performance of software distributed shared memory as a target for parallelizing compilers
A. Cox, S. Dwarkadas, Honghui Lu, W. Zwaenepoel
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580943
In this paper we evaluate the use of software distributed shared memory (DSM) on a message-passing machine as the target for a parallelizing compiler. We compare this approach to compiler-generated message passing, hand-coded software DSM and hand-coded message passing. For this comparison, we use six applications: four that are regular and two that are irregular. Our results are gathered on an 8-node IBM SP/2 using the TreadMarks software DSM system. We use the APR shared-memory (SPF) compiler to generate the shared-memory programs and the APR XHPF compiler to generate message-passing programs. The hand-coded message-passing programs run with the IBM PVMe optimized message-passing library. On the regular programs, both the compiler-generated and the hand-coded message passing outperform the SPF/TreadMarks combination: the compiler-generated message passing by 5.5% to 40%, and the hand-coded message passing by 7.5% to 49%. On the irregular programs, the SPF/TreadMarks combination outperforms the compiler-generated message passing by 38% and 89%, and only slightly underperforms the hand-coded message passing, differing by 4.4% and 16%. We also identify the factors that account for the performance differences, estimate their relative importance, and describe methods to improve the performance.
Geometric data structures on a reconfigurable mesh, with applications
A. Datta
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580983
We present several geometric data structures and algorithms for problems on a planar set of rectangles and for bipartitioning problems on a two-dimensional point set, on a reconfigurable mesh of size n × n. The problems for rectangles include computing the measure, contour, perimeter and maximum clique of the union of a set of rectangles. The bipartitioning problems for a two-dimensional point set are solved in the L∞ and L1 metrics. We solve all these problems in O(log n) time.
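As a reference point for the "measure" problem (the area covered by the union of axis-aligned rectangles), here is a plain sequential slab-sweep sketch. The paper solves this in O(log n) time on an n × n reconfigurable mesh; the quadratic sketch below only pins down what is being computed (names are illustrative):

```python
def union_area(rects):
    """Area of the union of axis-aligned rectangles (x1, y1, x2, y2).
    Sweep over the compressed x-intervals; for each vertical slab, merge
    the y-intervals of the rectangles spanning it and accumulate
    covered-height * slab-width."""
    xs = sorted({x for r in rects for x in (r[0], r[2])})
    area = 0
    for x1, x2 in zip(xs, xs[1:]):
        # y-intervals of the rectangles fully covering this slab
        ys = sorted((r[1], r[3]) for r in rects if r[0] <= x1 and r[2] >= x2)
        covered, hi = 0, float("-inf")
        for lo, up in ys:
            if up > hi:
                covered += up - max(lo, hi)  # add only the uncovered part
                hi = up
        area += covered * (x2 - x1)
    return area
```

The contour, perimeter and maximum-clique problems are variations on the same slab decomposition, which is why one family of data structures serves all of them.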
Adaptive fault-tolerant wormhole routing algorithms for hypercube and mesh interconnection networks
Jau-Der Shih
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580923
The author presents adaptive fault-tolerant deadlock-free routing algorithms for hypercubes and meshes using only three and two virtual channels, respectively. Based on the concept of unsafe nodes, the author designs a routing algorithm for hypercubes that can tolerate at least n-1 node faults and can route a message via a path whose length exceeds the Hamming distance between the source and destination by at most four. The author also develops a routing algorithm for meshes that can tolerate any block faults, as long as the distance between any two nodes in different faulty blocks is at least 2 in each dimension.
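For context, the fault-free baseline on a hypercube is dimension-order (e-cube) routing, whose path length equals the Hamming distance between source and destination labels; the bound above says faults cost at most four extra hops over this. A sketch of the fault-free baseline only (names are illustrative, not the paper's algorithm):

```python
def ecube_route(src, dst):
    """Dimension-order (e-cube) route between hypercube nodes labelled by
    integers: correct differing address bits lowest-dimension first.
    The returned path has length equal to the Hamming distance."""
    path, cur = [src], src
    diff = src ^ dst          # bits where the two labels disagree
    bit = 1
    while cur != dst:
        if diff & bit:
            cur ^= bit        # traverse the edge in this dimension
            path.append(cur)
        bit <<= 1
    return path
```

Fixing the dimension order is what makes the baseline deadlock-free but non-adaptive; the paper's algorithms recover adaptivity around faults with a small number of virtual channels.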
Wide-sense nonblocking Clos networks under packing strategy
Yuanyuan Yang, Jianchao Wang
Pub Date: 1997-04-01. DOI: 10.1109/IPPS.1997.580844
In this paper, we study wide-sense nonblocking conditions under the packing strategy for the three-stage Clos network, or v(m, n, r) network. Wide-sense nonblocking networks are generally believed to have lower network cost than strictly nonblocking networks; however, the analysis of wide-sense nonblocking conditions is usually more difficult. Moore proved that a v(m, n, 2) network is nonblocking under the packing strategy if the number of middle-stage switches m ≥ ⌈3n/2⌉. This result has been widely cited in the literature, and is even taken in some papers as the wide-sense nonblocking condition under the packing strategy for general v(m, n, r) networks. In fact, it is still not known whether the condition m ≥ ⌈3n/2⌉ holds for v(m, n, r) networks when r ≥ 3. In this paper, we introduce a systematic approach to the analysis of wide-sense nonblocking conditions under the packing strategy for general v(m, n, r) networks with any value of r. We first translate the problem of finding the necessary and sufficient nonblocking conditions for v(m, n, r) networks into a set of linear programming problems. We then solve this special type of linear programming problem and obtain an elegant closed-form optimum solution. We prove that the necessary and sufficient condition for a v(m, n, r) network to be nonblocking under the packing strategy is m ≥ ⌈(2 - 1/F_{2r-1})n⌉, where F_{2r-1} is a Fibonacci number. We believe that the systematic approach developed in this paper can also be used for analyzing other wide-sense nonblocking control strategies.
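The closed-form bound is easy to evaluate: for r = 2 it reduces to Moore's ⌈3n/2⌉, since F_3 = 2. A sketch computing the minimum m exactly in integer arithmetic, using ⌈(2 − 1/F)n⌉ = ⌈(2F − 1)n / F⌉ (function names are illustrative):

```python
def fib(k):
    """k-th Fibonacci number with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

def min_middle_switches(n, r):
    """Minimum number of middle-stage switches m for a v(m, n, r) Clos
    network to be wide-sense nonblocking under the packing strategy,
    per the paper's formula m >= ceil((2 - 1/F(2r-1)) * n).
    Computed as ceil((2F - 1) * n / F) to avoid floating-point error."""
    f = fib(2 * r - 1)
    return -(-(2 * f - 1) * n // f)  # ceiling division in pure integers
```

As r grows, F_{2r-1} grows and the bound approaches 2n from below, the strictly nonblocking requirement for Clos networks, which matches the intuition that packing helps less as the number of first-stage switches increases.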