An Approach to Compiling Single-point Iterative Programs for Distributed Memory Computers
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.556313
D. Socha
This paper proposes a scheme for compiling an important class of iterative algorithms into efficient code for distributed memory computers. The programmer provides a description of the problem in Spot: a data parallel SIMD language that uses iterations as the unit of synchronization and is based on grids of data points. The data parallel description is in terms of a single point of the data space, with implicit communication semantics, and a set of numerical boundary conditions. The compiler eliminates the need for multi-tasking by "expanding" the single-point code into multiple-point code that executes over rectangular regions of points. Using rectangle intersection and difference operations on these regions allows the compiler to automatically insert the required communication calls and to hide communication latency by overlapping computation and communication. The multiple-point code may be specialized, at compile-time, to the size and shape of different allocations, or it may use table-driven for-loops to adapt, at run-time, to the shape and size of the allocations. We show how to generalize this strategy to produce code for the near-rectangular shaped allocations required for balanced partitionings of rectangular arrays.
{"title":"An Approach to Compiling Single-point Iterative Programs for Distributed Memory Computers","authors":"D. Socha","doi":"10.1109/DMCC.1990.556313","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556313","url":null,"abstract":"This paper proposes a scheme for compiling an important class of iterative algorithms into efficient code for distributed memory computers. The programmer provides a description of the problem in Spot: a data parallel SIMD language that uses iterations as the unit of synchronization and is based on grids of data points. The data parallel description is in terms of a single point of the data space, with implicit communication semantics, and a set of numerical boundary conditions. The compiler eliminates the need for multi-tasking by ‘(expanding” the single-point code into multiplepoint code that executes over rectangular regions of points. Using rectangle intersection and difference operations on these regions allows the compiler to automatically insert the required communication calls and to hide communication latency by overlapping comput at ion and communication. The multiple-point code may be specialized, at compile-time, to the size and shape of different allocations, or it may use table-driven for-loops to adapt, at run-time, to the shape and size of the allocations. We show how to generalize this strategy to produce code for the near-rectangular shaped allocations required for balanced partitionings of rectangular arrays.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126014590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mapping Data to Processors in Distributed Memory Computations
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.556295
M. Rosing, R. P. Weaver
The authors present a structured scheme for allowing a programmer to specify the mapping of data to distributed memory multiprocessors. This scheme lets the programmer specify information about communication patterns as well as information about distributing data structures onto processors (including partitioning with replication). This mapping scheme allows the user to map arrays of data to arrays of processors. The user specifies how each axis of the data structure is mapped onto an axis of the processor structure. This mapping may either be one to one or one to many depending on the parallelism, load balancing, and communication requirements. The authors discuss the basics of how this scheme is implemented in the DINO language, the areas in which it has worked well, the few areas in which there were significant problems, and some ideas for future improvements.
{"title":"Mapping Data to Processors in Distributed Memory Computations","authors":"M. Rosing, R. P. Weaver","doi":"10.1109/DMCC.1990.556295","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556295","url":null,"abstract":"Abstract : The authors present a structured scheme for allowing a programmer to specify the mapping of data to distributed memory multiprocessors. This scheme lets the programmer specify information about communication patterns as well as information about distributing data structures onto processors (including partitioning with replication). This mapping scheme allows the user to map arrays of data to arrays of processors. The user specifies how each axis of the data structure is mapped onto an axis of the processor structure. This mapping may either be one to one or one to many depending on the parallelism, load balancing, and communication requirements. The authors discuss the basics of how this scheme is implemented in the DINO language, the areas in which it has worked well, the few areas in which there were significant problems, and some ideas for future improvements.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127401473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Re-Configurable Reduced-Bus Multiprocessor Interconnection Network
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.556274
T. Ramesh, S. Ganesan
Multiple-bus multiprocessor interconnection networks are still considered a cost-effective and easily expandable processor-memory interconnection. However, a fully connected multiple-bus network requires all buses to be connected to each processor and memory module, increasing the physical connection count and the bus load on each memory. Reduced-bus connections with different connection topologies, such as rhombus and trapezoidal, were presented in [1]. In this paper, a general single network topology is presented that can be reconfigured to any one of the reduced-bus connection schemes. The reconfigurability is achieved through the arbitration of combinations of simple link switches in a ring structure. The motive behind developing this reconfigurable structure is to offer flexibility in matching the reduced-bus connection schemes to the structure of parallel algorithms. The paper presents a mapping scheme to arbitrate the link switches for various connection patterns. Also, a comparison of the effective memory bandwidth of each connection scheme is shown. Expandability of the system to larger sizes is also addressed.
{"title":"A Re-Configurable Reduced-Bus Multiprocessor Interconnection Network","authors":"T. Ramesh, S. Ganesan","doi":"10.1109/DMCC.1990.556274","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556274","url":null,"abstract":"Multiple-bus multiprocessor interconnection networks are still considered as a cost effective and easily expandable processor-memory interconnection. But, a fully connected multiple bus network requires all busses to be connected to each processor and memory module thus increasing the physical connection and the bus load on each memo y. Reduced-bus connections with different connection topology such as rhombus, trapezoidal etc., were presented in [l]. In this paper a general single network topology has been presented that can be reconfigured to any one of the reduced-bus connection schemes. The re-configurability is achieved through the arbitration of combinations of simple link switches in a ring structure. The motive behind developing this reconfigurable structure is to offer a flexibility in matching the reduced-bus connection schemes to structure of parallel algorithms. The paper presents a mapping scheme to arbitrate the link switches for various connection pattems. Also, a comparison of effective memoy bandwidth of each connection scheme is shown. Expandability of the system to larger sizes are addressed.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132632523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Load Balancing in a Concurrent Plasma PIC Code on the JPL/Caltech Mark III Hypercube
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.556302
P. Liewer, E. W. Leaver, V. Decyk, J. Dawson
Dynamic load balancing has been implemented in a concurrent one-dimensional electromagnetic plasma particle-in-cell (PIC) simulation code using a method which adds very little overhead to the parallel code. In PIC codes, the orbits of many interacting plasma electrons and ions are followed as an initial value problem as the particles move in electromagnetic fields calculated self-consistently from the particle motions. The code was implemented using the GCPIC algorithm in which the particles are divided among processors by partitioning the spatial domain of the simulation. The problem is load-balanced by partitioning the spatial domain so that each partition has approximately the same number of particles. During the simulation, the partitions are dynamically recreated as the spatial distribution of the particles changes in order to maintain processor load balance.
{"title":"Dynamic Load Balancing in a Concurrent Plasma PIC Code on the JPL/Caltech Mark III Hypercube","authors":"P. Liewer, E. W. Leaver, V. Decyk, J. Dawson","doi":"10.1109/DMCC.1990.556302","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556302","url":null,"abstract":"Dynamic load balancing has been implemented in a concurrent one-dimensional electromagnetic plasma particle-in-cell (PIC) simulation code using a method which adds very little overhead to the parallel code. In PIC codes, the orbits of many interacting plasma electrons and ions are followed as an initial value problem as the particles move in electromagnetic fields calculated self-consistently from the particle motions. The code was implemented using the GCPIC algorithm in which the particles are divided among processors by partitioning the spatial domain of the simulation. The problem is load-balanced by partitioning the spatial domain so that each partition has approximately the same number of particles. During the simulation, the partitions are dynamically recreated as the spatial distribution of the particles changes in order to maintain processor load balance.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130498587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The 600 Megaflops Performance of the QCD Code on the Mark IIIfp Hypercube
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.556389
H. Ding
{"title":"The 600 Megaflops Performance of the QCD Code on the Mark IIIfp Hypercube","authors":"H. Ding","doi":"10.1109/DMCC.1990.556389","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556389","url":null,"abstract":"","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117177565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local Search Variants for Hypercube Embedding
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.556399
Woei-kae Chen, Matthias F. Stallmann
The hypercube embedding problem, a restricted version of the general mapping problem, is the problem of mapping a set of communicating processes to a hypercube multiprocessor. The goal is to find a mapping that minimizes the average length of the paths between communicating processes. Iterative improvement heuristics for hypercube embedding, including local search, Kernighan-Lin, and simulated annealing, are evaluated under different options including neighborhoods (all-swaps versus cube-neighbors), initial solutions (random versus greedy), and enhancements of terminating conditions (flat moves and uphill moves). By varying these options we obtain a wide range of tradeoffs between execution time and solution quality.
{"title":"Local Search Variants for Hypercube Embedding","authors":"Woei-kae Chen, Matthias F. Stallmann","doi":"10.1109/DMCC.1990.556399","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556399","url":null,"abstract":"The hypercube embedding problem, a restricted ver- sion of the general mapping problem, is the problem of mapping a set of communicating processes to a hy- percube multiprocessor. The goal is to find a map- ping that minimizes the average length of the paths between communicating processes. Iterative improve- ment heuristics for hypercube embedding, including a local search, a Kernighan-Lin, and a simulated an- nealing, are evaluated under different options includ- ing neighborhoods (all-swaps versus cube-neighbors), initial solutions (random versus greedy), and enhance- ments on terminating conditions (flat moves and up- hill moves). By varying these options we obtain a wide range of tradeoffs between execution time and solution quality.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128421556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing Inner Product Computation in the Parallel One-Sided Jacobi Algorithm
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.555398
C. Romine, K. Sigmon
{"title":"Reducing Inner Product Computation in the Parallel One-Sided Jacobi Algorithm","authors":"C. Romine, K. Sigmon","doi":"10.1109/DMCC.1990.555398","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555398","url":null,"abstract":"","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113977152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Serial and Parallel Subcube Recognition in Hypercubes
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.555363
S. Al-Bassam, H. El-Rewini, B. Bose, T. Lewis
We develop an efficient subcube recognition algorithm that recognizes all the possible subcubes. The algorithm is based on exploiting more subcubes at different levels of the buddy tree. In exploiting the different levels, the algorithm checks any subcube at most once. Moreover, many unavailable subcubes are not considered as candidates and hence are not checked for availability. This makes the algorithm fast at recognizing subcubes. The number of recognized subcubes, for different subcube sizes, can be easily adjusted by restricting the search level down the buddy tree. Previously known algorithms become special cases of this general approach. When one level is searched, this algorithm performs as the original buddy system. When two levels are searched, it recognizes the same subcubes as the algorithm in [4], but faster. When all the levels are searched, complete subcube recognition is obtained. In a multi-processing system, each processor can execute this algorithm on a different tree. Given a number of processors in a multi-processing system, we give a method of constructing the trees that maximizes the overall number of recognized subcubes. Finally, we introduce a "best fit" allocation method that reduces hypercube fragmentation. Simulation results and performance comparisons between this method and the traditional "first fit" are presented.
{"title":"Efficient Serial and Parallel Subcube Recognition in Hypercubes","authors":"S. Al-Bassam, H. El-Rewini, B. Bose, T. Lewis","doi":"10.1109/DMCC.1990.555363","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555363","url":null,"abstract":"We develop an efficient subcube recognition algorithm that recognizes all the possible subcubes. The algorithm is based on exploiting more subcubes at different levels of the buddy tree. In exploiting the different levels, the algorithm checks any subcube at most once. Moreover, many unavailable subcubes are not considered as candidates and hence not checked for availability. This makes the algorithm fast in recognizing the subcubes. The number of recognized subcubes, for different subcube sizes, can be easily adjusted by restricting the search level down the buddy tree. The previous known algorithms become a special case of this general approach. When one level is searched, this algorithm perfoms as the original buddy system. When two levels are searched, it will recognized the Same subcubes as the ones in [4] with a faster speed. When all the levels are searched, a complete subcube recognition is obtained. In a multi-processing system, each processor can execute this algorithm on a different tree. Using a given number of processors in a multi-processing system, we give a method of constructing the trees that maximizes the overall number of recognized subcubes. Finally, we introduce an allocation method \"best fit\" that reduces hypercube fragmentation. Simulation results and performance comparisons between this method and the traditional \"first fit\" are presented.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123955406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hypercubes for Critical Space Flight Command Operations
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.555355
J. Horvath, T. Tang, L. P. Perry, R. Cole, D. B. Olster, J. Zipse
Controlling interplanetary spacecraft and planning their activities, as currently practiced, requires massive amounts of computer time and personnel. To improve this situation, we want to use advanced computing to speed up and automate the commanding process. Several design and prototype efforts have been underway at JPL to understand the appropriate roles for concurrent processors in future interplanetary spacecraft operations. Here we report on an effort to identify likely candidates for parallelism among existing software systems that both generate commands to be sent to the spacecraft and simulate what the spacecraft will do with these commands when it receives them. We also describe promising results from efforts to create parallel prototypes of representative portions of these software systems on the JPL/Caltech Mark III hypercube.
{"title":"Hypercubes for Critical Space Flight Command Operations","authors":"J. Horvath, T. Tang, L. P. Perry, R. Cole, D.B. OIster, J. Zipse","doi":"10.1109/DMCC.1990.555355","DOIUrl":"https://doi.org/10.1109/DMCC.1990.555355","url":null,"abstract":"Controlling interplanetary spacecraft and planning their activities, as currently practiced, requires massive amounts of computer time and personnel. To improve this situation, it is desired to use advanced computing to speed up and automate the commanding process. Several design and prototype efforts have been underway at JPL to understand the appropriate roles for concurrent processors in future interplanetary spacecraft operations. Here we report on an effort to identify likely candidates for parallelism among existing software systems that both generate commands to be sent to the spacecraft and simulate what the spacecraft will do with these commands when it receives them. We also describe promising results from efforts to create parallel prototypes of representative portions of these software systems on the JPL/Caltech Mark 111 hypercube.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127664534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Load Sharing In Hypercube Multicomputers In The Presence Of Node Failures
Pub Date: 1990-04-08 | DOI: 10.1109/DMCC.1990.556410
Yi-Chieh Chang, K. Shin
This paper discusses and analyzes two load sharing (LS) issues: adjusting preferred lists and implementing a fault-tolerant mechanism in the presence of node failures. In an earlier paper, we proposed ordering the nodes in each node's proximity into a preferred list for the purpose of load sharing in distributed real-time systems. The preferred list of each node is constructed in such a way that each node will be selected as the kth preferred node by one and only one other node. Such lists are proven to allow tasks to be evenly distributed in a system. However, the presence of faulty nodes will destroy the original structure of a preferred list if the faulty nodes are simply skipped. An algorithm is therefore proposed to modify each preferred list to retain its original features regardless of the number of faulty nodes in the system. The communication overhead introduced by this algorithm is shown to be minimal. Based on the modified preferred lists, a simple fault-tolerant mechanism is implemented. Each node is equipped with a backup queue which stores and updates the arriving/completing tasks at its most preferred node. Whenever a node becomes faulty, its most preferred node will treat the tasks in the backup queue as externally arriving tasks. Our simulation results show that this approach, despite its simplicity, can reduce the number of task losses dramatically compared to approaches without any fault-tolerant mechanism.
{"title":"Load Sharing In Hypercube Multicomputers In The Presence Of Node Failures","authors":"Yi-Chieh Chang, K. Shin","doi":"10.1109/DMCC.1990.556410","DOIUrl":"https://doi.org/10.1109/DMCC.1990.556410","url":null,"abstract":"This paper discusses and analyzes two load sharing (LS) issues: adjusting preferred lists and implementing a fault-tolerant mechanism in the presence of node failures. In an early paper, we have proposed to order the nodes in each node's proximity into a preferred list for the purpose of load sharing in distributed real-time systems. The preferred list of each node is constructed in such a way that each node will be selected as the kth preferred node by one and only one other node. Such lists are proven to allow the tasks to be evenly distributed in a system. However, the presence of faulty nodes will destroy the original structure of a preferred list if the faulty nodes are simply skipped in the preferred list. An algorithm is therefore proposed to modify each preferred list to retain its original features regardless of the number of faulty nodes in the system. The communication overhead introduced by this algorithm is shown to be minimal. Based on the modified preferred lists, a simple fault-tolerant mechanism is implemented. Each node is equipped with a backup queue which 'The work reported in this paper was supported in part by the Office of Naval Research under contract N0001485-K-0122, and the NSF under grant DMC-8721492. Any opinions, findings, and recommendations expressed in this publication are those of the authors and do not necessarily reflect the view of the funding agencies. stores and updates the arriving/completing tasks at its most preferrecd node. Whenever a node becomes faulty, its m.ost preferred node will treat the tasks in the baxkup queue as externally axriving tasks. Our simulation results show that this approach, despite of the simplicity, can reduce the number of task losses dramatically, as compared to the approaches without any faulttolerant mechanism.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"337 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115882207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}