Benefits of Weak Coherence for Distributed Shared Memory Systems
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633356
L. Borrmann, P. Istavrinos
This paper describes a new scheme for weakly coherent, distributed shared memory systems. It shows that for most applications the semantics of weak coherence are sufficient. After sketching the basic implementation schemes for weak coherence protocols, it presents their benefits, mainly an improved exploitation of parallelism. Not only is latency masking for write operations exploited, but techniques for accumulating update and invalidation messages are also introduced. First results of a prototype implementation are given.
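The accumulation technique mentioned above can be pictured as a local write buffer that is flushed in bulk at synchronization points. The following C sketch is illustrative only; the buffer size, names, and flush policy are assumptions, not the paper's protocol.

```c
/* Illustrative sketch (not the paper's protocol): under weak coherence,
 * writes between synchronization points may be buffered locally and
 * propagated as one combined update message at the next sync point. */
#include <stdio.h>

#define BUF_SLOTS 64

struct update { int addr; int value; };

static struct update pending[BUF_SLOTS];
static int npending = 0;

/* At a synchronization point, all accumulated updates are sent as a
 * single message, amortizing communication cost over many writes. */
static void flush_updates(void)
{
    if (npending > 0)
        printf("flushing %d accumulated updates in one message\n", npending);
    npending = 0;   /* a real protocol would transmit the buffer here */
}

/* A write only records the update locally; no message is sent yet,
 * which masks the write latency from the application. */
static void weak_write(int addr, int value)
{
    if (npending == BUF_SLOTS)
        flush_updates();                /* buffer full: flush early */
    pending[npending].addr  = addr;
    pending[npending].value = value;
    npending++;
}

int main(void)
{
    for (int i = 0; i < 100; i++)
        weak_write(i, i * i);           /* 100 writes ... */
    flush_updates();                    /* ... but only two messages */
    return 0;
}
```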
{"title":"Benefits of Weak Coherence for Distributed shared Memory Systems","authors":"L. Borrmann, P. Istavrinos","doi":"10.1109/DMCC.1991.633356","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633356","url":null,"abstract":"This paper describes a new scheme for weakly coherent, distributed shared memory systems. It shows that for most applications the semantics of weak coherence are sufficient. After sketching the bmic implementation schemes for weak coherence protocols it presents their benefits, mainly an improved exploitation of parallelism. Not only latency masking for write operations is exploited but also techniques like accumulating u,pdate and invalidation messages are introduced. First results ofa prototype implementation are given.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125804218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Highly Parallel Realization of Sparse Distributed Memory System
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633355
M. Linden, J. Saarinen, K. Kaski, P. Kanerva
A highly parallel realization of Kanerva's Sparse Distributed Memory has been developed using advanced structures. The system consists of a host computer, an address unit, and a memory unit. The address and memory units have been implemented with commercially available digital components on two functioning boards, and they perform the Hamming distance comparison and memory storage functions. In order to achieve an effective hardware realization, the units are designed for highly parallel processing. The host computer is used to edit, compile, and download the programs to be run in the units. The software environment has been implemented under the UNIX operating system, and a set of specific commands has been designed to support simulations. The system is intended for real-time applications. Performance estimates are also presented.
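For concreteness, the Hamming distance comparison that the address unit performs in hardware can be expressed in a few lines of C; the address width and activation radius below are illustrative assumptions, not the parameters of the boards described here.

```c
/* Minimal sketch of the Hamming-distance activation step in Kanerva's
 * Sparse Distributed Memory: a hard location responds when its address
 * lies within a fixed Hamming radius of the input address. */
#include <stdint.h>
#include <stdio.h>

#define N_WORDS 8      /* 8 x 32 bits = 256-bit addresses (illustrative) */
#define RADIUS  111    /* activation radius (illustrative) */

/* Hamming distance = population count of the bitwise XOR. */
static int hamming(const uint32_t *a, const uint32_t *b)
{
    int d = 0;
    for (int i = 0; i < N_WORDS; i++)
        for (uint32_t x = a[i] ^ b[i]; x; x >>= 1)
            d += (int)(x & 1u);
    return d;
}

int main(void)
{
    uint32_t hard_addr[N_WORDS] = { 0xFFFFFFFFu };  /* rest zero */
    uint32_t input[N_WORDS]     = { 0 };
    int d = hamming(hard_addr, input);              /* 32 differing bits */
    printf("distance %d -> %s\n", d, d <= RADIUS ? "active" : "inactive");
    return 0;
}
```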
{"title":"Highly Parallel Realization of Sparse Distributed Memory System","authors":"M. Linden, J. Saarinen, K. Kaski, P. Kanerva","doi":"10.1109/DMCC.1991.633355","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633355","url":null,"abstract":"A highly parallel realization of Kanerva 's Sparse Distributed Memory has been developed using advanced structures. The system consists of a host computer, address unit and memory unit. The address and memory units have been implemented with commercially available digital components to two functioning boards, and they perform the Hamming distance comparison and memory storage functions. In ordeT to achieve an effective hardware realization the units are designed for highly parallel processing. The host computer i s used to edit, compile, and down-load the programs to be run in the units. The software environment has been implemented under UNIX operating system, and the set of specific commands has been designed to support simulations. The system is intended for real-time applications. The performance estimations are also presented.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123621651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633212
T. Krucken, P. Liewer, R. Ferraro, V. Decyk
The two-dimensional electrostatic plasma particle-in-cell (PIC) code described in [1] has been upgraded to a 2D electromagnetic PIC code running on the Caltech/JPL Mark IIIfp and the Intel iPSC/860 parallel MIMD computers. The code solves the complete time-dependent Maxwell's equations, where the plasma responses, i.e., the charge and current density in the plasma, are evaluated by advancing in time the trajectories of ~10^6 particles in their self-consistent electromagnetic field. The field equations are solved in Fourier space. Parallelisation is achieved through domain decomposition in real and Fourier space. Results from a simulation showing a two-dimensional Alfvén wave filamentation instability are shown; these are the first simulations of this 2D Alfvén wave decay process.
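As a reference point for the per-particle work such a code parallelizes, the sketch below shows a single particle advance using the standard Boris leapfrog scheme in 2D with B along z. This is a generic textbook step, not the authors' code, and it assumes E and B have already been gathered to the particle position.

```c
/* Generic sketch of one electromagnetic PIC particle push (standard
 * Boris scheme), the per-particle work repeated for ~10^6 particles
 * each time step. Not the paper's implementation. */
#include <stdio.h>

struct particle { double x, y, vx, vy; };

/* Advance one particle by dt in fields E = (ex, ey), B = (0, 0, bz),
 * with charge-to-mass ratio qm. */
void boris_push(struct particle *p, double ex, double ey, double bz,
                double qm, double dt)
{
    /* Half acceleration by E. */
    double vmx = p->vx + 0.5 * qm * dt * ex;
    double vmy = p->vy + 0.5 * qm * dt * ey;

    /* Rotation about B (the Boris scheme preserves |v| in a pure B field). */
    double t   = 0.5 * qm * dt * bz;
    double s   = 2.0 * t / (1.0 + t * t);
    double vpx = vmx + vmy * t;   /* v' = v- + v- x t */
    double vpy = vmy - vmx * t;
    double vxr = vmx + vpy * s;   /* v+ = v- + v' x s */
    double vyr = vmy - vpx * s;

    /* Second half acceleration, then leapfrog position update. */
    p->vx = vxr + 0.5 * qm * dt * ex;
    p->vy = vyr + 0.5 * qm * dt * ey;
    p->x += p->vx * dt;
    p->y += p->vy * dt;
}

int main(void)
{
    struct particle p = { 0, 0, 1.0, 0 };
    boris_push(&p, 0.0, 0.0, 1.0, 1.0, 0.1);  /* pure B: speed preserved */
    printf("v = (%f, %f)\n", p.vx, p.vy);
    return 0;
}
```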
{"title":"A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers","authors":"T. Krucken, P. Liewer, R. Ferraro, V. Decyk","doi":"10.1109/DMCC.1991.633212","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633212","url":null,"abstract":"The two dimensional electrostatic plasma particle in cell (PIC) code described an [1] has been upgraded to a 2D electromagnetic PIC code running on the Caltech/JPL Mark IIIfp and the Intel iPSC/860 parallel MIMD computers. The code solves the complete time dependent Maxwell’s equations where the plasma responses, i.e., the charge and current density in the plasma, are evaluated by advancing in time the trajectories of ~ 10^6 particles in their self-consistent electromagnetic field. The field equations are solved in Fourier space. Parallelisation is achieved through domain decomposition in real and Fourier space. Results from a simulation showing a two-dimensional Alfen wave filamentation instability are shown; these are the first simulations of this 2D Alfen wave decay process.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129252468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
External Sorting on a Distributed Memory Machine
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633302
D. Ecklund
Sorting is a classic problem [5], which naturally lends itself to parallel processing. Many researchers have investigated memory-based parallel sorting [3], but only a few have investigated the problem of parallel external sorting [2, 4]. Existing algorithms employ local sorting of runs followed by pipelined merging of runs. The writing of the final merged result is a serial process performed by a single processor. This sequential bottleneck has a significant negative impact on the total sort time. It also does not make effective use of the concurrent I/O capabilities provided on a number of parallel machines. I have proposed and prototyped a two-phase parallel external sorting algorithm that removes the "final merge bottleneck" by partitioning sorted runs and utilizing multiple processors to build a merged run.
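The key idea can be made concrete with a small serial sketch: once splitter keys partition the key space, each processor independently merges only its key range from all sorted runs and writes its disjoint slice of the output. The code below is illustrative, not the prototype.

```c
/* Serial illustration of removing the "final merge bottleneck": one
 * call to merge_range = one processor's independent share of the
 * final merged output. */
#include <limits.h>
#include <stdio.h>

#define MAX_RUNS 16

/* Merge, from all sorted runs, only the keys in [lo, hi). */
static void merge_range(const int *runs[], const int len[], int nruns,
                        int lo, int hi)
{
    int pos[MAX_RUNS] = { 0 };
    for (int r = 0; r < nruns; r++)          /* skip keys below lo */
        while (pos[r] < len[r] && runs[r][pos[r]] < lo)
            pos[r]++;
    for (;;) {
        int best = -1, key = INT_MAX;
        for (int r = 0; r < nruns; r++)      /* smallest key still in range */
            if (pos[r] < len[r] && runs[r][pos[r]] < hi
                                && runs[r][pos[r]] < key) {
                best = r;
                key  = runs[r][pos[r]];
            }
        if (best < 0)
            break;                           /* this slice is exhausted */
        printf("%d ", key);                  /* stand-in for a disk write */
        pos[best]++;
    }
    printf("\n");
}

int main(void)
{
    const int a[] = { 1, 4, 9, 12 }, b[] = { 2, 3, 10, 15 };
    const int *runs[] = { a, b };
    const int len[]   = { 4, 4 };
    merge_range(runs, len, 2, 0, 8);   /* "processor 0" writes: 1 2 3 4 */
    merge_range(runs, len, 2, 8, 99);  /* "processor 1" writes: 9 10 12 15 */
    return 0;
}
```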
{"title":"ExterniaJ Sorting on a Distributed Memory Machine","authors":"D. Ecklund","doi":"10.1109/DMCC.1991.633302","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633302","url":null,"abstract":"Sorting is a classic problern[5], which naturally lends itself to parallel processing. Many researchers have investigated memory-based parallel sorting [3], but only a few researchers have inve,stigated the piroblem d parallel external sorting[2, 41. Existing algorithms employ local sorting of runs followedby pipelined merging of runs. The writing of the final merged result is a serial process performed by a single processor. This sequential bottleneck has a significant negative impact on the total sort time. It also does not make effective use of the concurrent I/O capabilities provided on ai number of parallel machines. I have proposed and prototyped a two phase parallel external sorting algorithm that removes the “final merge bottleneck” by partitioning sorted imns anid utilizing multiple processors to build a merged Iun.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115838235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Visualization Model For Massively Parallel Algorithms
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633346
R. Khanna, B. McMillin
A visualization model has been developed to analyse the performance of a massively parallel algorithm. Most visualization tools that have been developed so far for performance analysis are based generally on individual processor information and communication patterns. These tools, however, are inadequate for massively parallel computations: it is difficult to comprehend the visual information for many processors. The model, SMILI (Scientific visualization in Multicomputing for Interpretation of Large amounts of Information), addresses this problem by using abstract representations to attain a composite picture which gives better insight into the behavior of the algorithm. Chernoff faces have been selected to represent the multidimensional data because of their ability to portray multidimensional data in a very perceptible manner. SMILI has been used on an asynchronous massively parallel PDE (partial differential equation) solver that is based on the multigrid paradigm. The visualization tool helps in tuning the control parameters of the multigrid algorithm to get optimal results.
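The Chernoff-face encoding at the heart of the model amounts to normalizing each dimension of a data point and binding it to one facial feature. The sketch below illustrates only this mapping step; the feature assignment is arbitrary, drawing is omitted, and none of it is SMILI's actual code.

```c
/* Tiny illustration of a Chernoff-face encoding: each dimension of a
 * multidimensional data point drives one feature of a face glyph. */
#include <stdio.h>

struct face {           /* each field is a fraction [0,1] of its range */
    double eye_size, mouth_curve, face_width, brow_slant;
};

/* Normalize each metric against its observed min/max and map it to
 * one feature of the glyph. */
static struct face encode(const double v[4],
                          const double lo[4], const double hi[4])
{
    double n[4];
    for (int i = 0; i < 4; i++)
        n[i] = (v[i] - lo[i]) / (hi[i] - lo[i]);
    struct face f = { n[0], n[1], n[2], n[3] };
    return f;
}

int main(void)
{
    double lo[4] = { 0, 0, 0, 0 }, hi[4] = { 100, 1, 64, 10 };
    double metrics[4] = { 73.0, 0.42, 16.0, 3.5 }; /* e.g. per-node stats */
    struct face f = encode(metrics, lo, hi);
    printf("eye=%.2f mouth=%.2f width=%.2f brow=%.2f\n",
           f.eye_size, f.mouth_curve, f.face_width, f.brow_slant);
    return 0;
}
```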
{"title":"A Visualization Model For Massively Parallel Algorithms","authors":"R. Khanna, B. McMillin","doi":"10.1109/DMCC.1991.633346","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633346","url":null,"abstract":"A visualization model has been deireloped to analyse the performance of a massively parallel algorithm. Most visualization tools that have beten developed so far for performance analysis are based generally on individual processor information and commltrnication patterns. These tools, however, are inadequate ,for massively parallel computations. It is difSlcult to comprehend the visual information for many processors. The model, SMIW (Scientific visualization in Multicomputing for Interpretation of Large amounts of Injformation), addresses this problem by using abstract rqpresentations to attain a composite picture which gives better insight to the behavior of the algorithm. Chernoffs Faces have been selected to represent the multidimensional data because of their abiliry to portray multidimensional data in a very perceptible manner. SMILS has been used on an asynchronous massively parallel PDE (partial direrential equation) solver that is based on the multigrid paradigm. The visualization tool helps in tuning the control parameters of the multigrid algorithm to get optimal results.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"59 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130834832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ProSolver-SES Library, a Skyline Solver for the iPSC/860
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633168
E. Kushner, E. Castro-Leon, M. L. Barton
A direct equation solver that addresses very large (out-of-core) linear systems has been developed for the iPSC/860. Routines included within the ProSolver-SES Library can factor and solve any matrix for which pivoting is unnecessary. The software is designed to solve sparse matrices in which the non-zero pattern can be described by a skyline or profile. Separate routines exist to support applications that generate symmetric or non-symmetric coefficient matrices. High performance has been achieved through the use of a dot product routine coded in i860 assembly language. In addition, disk I/O has been optimized to ensure performance on very large applications. For problems that are small enough to fit in memory, the ProSolver-SES Library achieves approximately 15 MFLOPS per processor. On large problems with significant I/O (10 000 x 10 000), current performance varies from 8 to 15 MFLOPS per processor.
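A skyline (profile) format of the kind the library targets can be sketched as follows: each column is stored densely from its first nonzero row down to the diagonal, so only entries under the "skyline" consume memory. The structure and accessor below are illustrative assumptions, not the library's internal layout.

```c
/* Sketch of skyline (profile) storage for the upper triangle of a
 * matrix: column c keeps rows first[c] .. c packed contiguously. */
#include <stdio.h>

struct skyline {
    int           n;      /* matrix order */
    const int    *first;  /* first stored row of each column */
    const int    *start;  /* offset of each column in val[] */
    const double *val;    /* packed column entries, diagonal last */
};

/* Fetch A(r, c) for r <= c; entries above the skyline are zero. */
static double sky_get(const struct skyline *A, int r, int c)
{
    if (r < A->first[c])
        return 0.0;
    return A->val[A->start[c] + (r - A->first[c])];
}

int main(void)
{
    /* 3x3 upper triangle: col 0 = [2], col 1 = [1, 2], col 2 = [2],
       i.e. A = [2 1 0; . 2 0; . . 2] stored in 4 doubles, not 6.   */
    int    first[] = { 0, 0, 2 };
    int    start[] = { 0, 1, 3 };
    double val[]   = { 2, 1, 2, 2 };
    struct skyline A = { 3, first, start, val };
    printf("A(0,1) = %g, A(0,2) = %g\n",
           sky_get(&A, 0, 1), sky_get(&A, 0, 2)); /* prints 1 and 0 */
    return 0;
}
```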
{"title":"The ProSolver-SESm Library, a Skyline Solver for the iPSC/860","authors":"E. Kushner, E. Castro-Leon, M. L. Barton","doi":"10.1109/DMCC.1991.633168","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633168","url":null,"abstract":"A direct equation :rolver that addresses very large (out-of-core), linear systems has been developed for the iPSCl860. Routines that are included within the Prosolver-SES Library can factor and solve any matrix for which pivoting i s unnecessary. The xoftware i s designed to solve sparse matrices in which the non-zero pattern can be described by a skyline or profile. Separate routines exist to support applications that generate symmetric or non-symmetric coeflcient matrices. High performance has been achieved through the ,use of a dot product routine coded in is60 assembly language. In addition disk IIO has been optimized to ensure performance on very large applications. For problems that are small enough to fit in memory, the Prosolver-SES Library achieves approximately I5 MFLOPS p e r processor. On large problems with signijtcant I t 0 (10 000 x 10 000). current performance varies from 8 to 15 MFLOPS per processor.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124283556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High Performance Parallel File Objects
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633362
Andrew Grimshaw, Jeff Prem
High performance parallel computers are expected to solve problems involving very large data sets, often far larger than can fit in primary memory. If I/O is not performed intelligently, then the wait for I/O can become a serious bottleneck, limiting the gains from improved processor technology. This paper introduces ELFS (ExtensibLe File Systems). ELFS is a parallel, asynchronous I/O system designed to attack the I/O bottleneck. It combines recent technological advances in three areas: object-oriented systems design, latency-obscuring compiler technology, and parallel disk arrays attached to parallel architectures. We present the ELFS class pfo (parallel file object), a parallel 2D-matrix class. Pfos allow the user to: 1) specify the access pattern, e.g., row-wise, column-wise, or by sub-blocks; 2) partition the pfo into sub-pfos defined by subsets of the original file structure, and specify where the new sub-pfo should be located; and 3) access the file in an asynchronous and pipelined manner. Preliminary performance results are presented.
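The asynchronous, pipelined access of point 3 is essentially double buffering: start the next read before processing the block that just arrived. The sketch below illustrates the pattern with POSIX AIO as a stand-in; it is not the ELFS interface, and the file name is hypothetical. On older Linux systems it links with -lrt.

```c
/* Double-buffered, pipelined file reading: while block k is being
 * read asynchronously, block k-1 is processed, overlapping I/O with
 * computation. Illustrative only. */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096

int main(void)
{
    int fd = open("matrix.dat", O_RDONLY);    /* hypothetical data file */
    if (fd < 0) { perror("open"); return 1; }

    static char buf[2][BLK];
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_nbytes = BLK;
    cb.aio_buf    = buf[0];
    cb.aio_offset = 0;
    aio_read(&cb);                            /* start reading block 0 */

    int cur = 0;
    for (off_t off = BLK; ; off += BLK) {
        const struct aiocb *list[1] = { &cb };
        aio_suspend(list, 1, NULL);           /* wait for current block */
        ssize_t got = aio_return(&cb);
        if (got <= 0)
            break;                            /* EOF or error */

        int done = cur;                       /* this block is ready    */
        cur ^= 1;
        cb.aio_buf    = buf[cur];             /* start the next read... */
        cb.aio_offset = off;
        aio_read(&cb);

        /* ...while processing the block that just completed. */
        printf("processing %ld bytes from buffer %d\n", (long)got, done);
    }
    close(fd);
    return 0;
}
```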
{"title":"High Performance Parallel File Objects","authors":"Andrew, Grimshaw, Jeff Rem","doi":"10.1109/DMCC.1991.633362","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633362","url":null,"abstract":"High performance parallel computers are expected to solve problems involving very large data sets, often far larger than can fir in primary memory. If It0 is not performed intelligently, then the wait for I10 can become a serious bottleneck, limiting the gains from improved processor technology. This paper introduces ELFS (ExtensibLe File Systems). ELFS is a parallel, asynchronous It0 system designed to attack the I10 bottleneck. It combines recent technological advances in three areas: objectoriented systems design, latency obscuring compiler technology, and parallel disk arrays attached to parallel architectures. We present the ELFS class pfo (parallel file object), a parallel 2D-matrix class. Pfo's allow the user to: I ) specify the access pattern, e.g., row-wise, column-wise, or by sub-blocks; 2 ) partition the p f o into sub-pfos defined by subsets of the original file structure, and specify where the new sub-pfo should be located; and 3) access the file in an asynchronous and pipelined manner. Preliminary performance results are presented.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114320489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
nCUBE's Parallel I/O with Unix Compatibility
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633142
E. Debenedictis, P. Madams
This paper presents a parallel I/O facility based on an extension of Unix. This facility, both scalable and transparently integrated, is part of the upcoming release 3 of nCUBE's system software. With the addition of scalability for I/O as well as computing, distributed memory machines become balanced between the two functions, suiting them for a wider range of applications than their traditional domain of computation-intensive tasks. The basis of the I/O facility is a system-level data structure called a mapping function. A mapping function describes how data from the parts of a parallel program or parallel I/O device are combined to form a single I/O stream. Combining mapping functions from senders and receivers allows the system to use an optimal communications strategy. Finally, these facilities are added as extensions to Unix. For programs with a single processor, an exact Unix environment is provided. For parallel programs, the Unix environment is extended in a natural way to accommodate parallel I/O.
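One way to picture a mapping function is as a rule that, for each position in the single logical I/O stream, names the node and local offset that supply it. The block-cyclic rule below is a deliberately simple illustration under that assumption; the real nCUBE structure is more general.

```c
/* Illustrative mapping function: a block-cyclic rule that locates,
 * for any byte of the combined I/O stream, the node and local offset
 * holding it. Not nCUBE's actual data structure. */
#include <stdio.h>

struct map { int nnodes; int blocksize; };

/* Global stream offset -> (node, local offset). */
static void map_locate(const struct map *m, long global,
                       int *node, long *local)
{
    long blk = global / m->blocksize;   /* which block of the stream */
    long rem = global % m->blocksize;   /* position within the block */
    *node  = (int)(blk % m->nnodes);    /* blocks rotate across nodes */
    *local = (blk / m->nnodes) * m->blocksize + rem;
}

int main(void)
{
    struct map m = { 4, 512 };          /* 4 nodes, 512-byte blocks */
    int node; long local;
    map_locate(&m, 2055, &node, &local);
    printf("stream byte 2055 comes from node %d, local byte %ld\n",
           node, local);                /* node 0, local byte 519 */
    return 0;
}
```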
{"title":"nCUBE's Parallel I/O with Unix Compatibility","authors":"E. Debenedictis, P. Madams","doi":"10.1109/DMCC.1991.633142","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633142","url":null,"abstract":"This paper presents a parallel 1/0 facility based on an extension of Unix. This facility, both scalable and transparently integrated, is part of the upcoming release 3 of nCUBEs system software. With the addition of scalability for 1/0 as well as computing, distributed memory machines become balanced between the two functions, suiting them for a wider applications range than their traditional domain of computation-intensive tasks. The basis of the 1/0 facility is a system-level data structure called a mapping function. A mapping function describes how data from the parts of a parallel program or parallel 1/0 device are combined to form a single 1/0 stream. Combining mapping functions from senders and receivers allows the system to me an optimal communications strategy. Finally, these facilities are added as extensions to Unix. For programs with a single processor, an exact Unix environment is provided. For parallel programs, the Unix environment is extended in a natural way to accommodate parallel I/O.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122779259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Massively Parallel Heuristic Search for Approximate Optimization Problems
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633159
A. Mahanti, C. J. Daniels, S. Ghosh, M. Evett, A. Pal
Most admissible search algorithms fail to solve real-life problems because of their exponential time and storage requirements. Therefore, to quickly obtain near-optimal solutions, the use of approximate algorithms and inadmissible heuristics is of practical interest. The use of parallel and distributed algorithms [1, 6, 8, 11] further reduces search complexity. In this paper we present empirical results on a massively parallel search algorithm using a Connection Machine CM-2. Our algorithm, PBDA*, is based on the idea of staged search [9, 10]. Its execution time is directly proportional to the depth of search, and solution quality is scalable with the number of processors. We tested it on the 15-puzzle problem using both admissible and inadmissible heuristics. The best results gave an average relative error of 1.66% and 66% optimal solutions.
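For reference, the standard admissible heuristic for the 15-puzzle is Manhattan distance, shown below; this is the textbook function, not necessarily the exact variant the authors evaluated.

```c
/* Manhattan distance for the 15-puzzle: sum over tiles of the grid
 * distance from the current cell to the goal cell. Admissible because
 * each move changes one tile's distance by at most one. */
#include <stdio.h>
#include <stdlib.h>

/* board[i] holds the tile at cell i of the 4x4 grid; 0 is the blank. */
static int manhattan(const int board[16])
{
    int h = 0;
    for (int i = 0; i < 16; i++) {
        int t = board[i];
        if (t == 0)
            continue;                 /* the blank does not count */
        int goal = t - 1;             /* tile t belongs at cell t-1 */
        h += abs(i / 4 - goal / 4) + abs(i % 4 - goal % 4);
    }
    return h;
}

int main(void)
{
    int b[16] = { 1, 2,  3,  4,  5,  6,  7,  8,
                  9, 10, 11, 12, 13, 15, 14, 0 };
    printf("h = %d\n", manhattan(b)); /* tiles 14 and 15 swapped -> h = 2 */
    return 0;
}
```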
{"title":"Massively Parallel Heuristic Search for Approximate Optimization Problems","authors":"A. Mahanti, C. J. Daniels, S. Ghosh, M. Evett, A. Pal","doi":"10.1109/DMCC.1991.633159","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633159","url":null,"abstract":"Most admissible search algorithms fail to solve reallife problems because of their exponential time and storage requirements. Therefore, to quickljy obtain near-optimal solutions, the use of approximute algorithms and inadmissible heuristics are of practical interest. The use of parallel and distributed ahgorithms [l, 6, 8, 111 further reduces search complexity. I n this paper we present empirical results on a massively parallel search algorithm using a Connection .Machine CM-2. Our algorithm, PBDA', is based on the idea of staged search [9, lo]. Its execution time is directly proportional t o the depth of search, and solution quality is scalable with the number of processors. W e tested it on the 1Bpuzzle problem using both admissible and inadmissible heuristics. The best results gave an average relative error of 1.66% and 66% optimal solutions.","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126473331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Assign Parallel Program Generator
Pub Date: 1991-04-28 | DOI: 10.1109/DMCC.1991.633117
D. O'Hallaron
ASSIGN is a tool for building large-scale applications, in particular signal processing applications, on distributed-memory multicomputers. The first target machine is iWarp, a multicomputer system developed jointly by Intel Corporation and Carnegie Mellon University. This paper gives a high-level introduction to ASSIGN.
{"title":"The Assign Parallel Program Generator","authors":"D. O'Hallaron","doi":"10.1109/DMCC.1991.633117","DOIUrl":"https://doi.org/10.1109/DMCC.1991.633117","url":null,"abstract":"ASSIGN is a toolfor building large-scale applications, in particular signal processing applications, on distributedmemory multicomputers. The jrst target machine is iWarp, a multicomputer system developed jointly by Intel Corporation and Carnegie Mellon University. This paper gives a high-level introduction to ASSIGN .","PeriodicalId":313314,"journal":{"name":"The Sixth Distributed Memory Computing Conference, 1991. Proceedings","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132171512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}