InfiniBand continues to grow in importance in the High Performance Computing world. This talk discusses the impact of DDR (i.e., 20 Gb/s) switches and HCAs on the creation of low-latency, high-bandwidth InfiniBand fabrics. When combined with low-latency HCAs, such as those from QLogic, the fabrics discussed can yield as much as a 40% reduction in fabric latency, improving the performance of fine-grained parallel applications. They also make it possible to create three-hop, low-latency fabrics that provide excellent performance and can be used with clusters of as many as 1009 nodes. There are two approaches to using DDR fabrics to improve latency. The first uses a combination of one-, two- and three-hop fabrics in what we call a FasTree topology to build small clusters (32 to 96 nodes). FasTrees not only have lower latency but also require fewer switch components than fat trees. This does not mean that they have smaller bisection bandwidths, as their links run at twice the speed of an SDR fabric. One feature that distinguishes a FasTree from other fabrics is that it contains no spines. The second fabric, which we call a ThinTree, uses complex single-hop spines to link together different fabric sub-domains. Any node in a ThinTree is at most 3 hops away from any other node. There are, however, some compromises required to make it possible to link together up to 1008 nodes without exceeding 3 hops. These compromises result in sub-domains whose intra-domain bandwidth is full CBB while their inter-domain bandwidth typically runs around 40% of CBB. However, because of the 1.8 GB/sec bandwidth of the DDR fabrics, all the connections between any two nodes in a ThinTree fabric are adequate for virtually any HPC application, and most others as well. The characteristics of both these topologies are discussed in the talk.
{"title":"Topologies for improved InfiniBand latency","authors":"Stephen Fried","doi":"10.1145/1188455.1188757","DOIUrl":"https://doi.org/10.1145/1188455.1188757","url":null,"abstract":"InfiniBand continues to become more and more important in High Performance Computing world. This talk discusses the impact of DDR (i.e., 20 GigE) switches and HCA's on the creation of low latency, high bandwidth InfiniBand fabrics. When combined with low latency HCA's, such as those from Qlogic, the fabrics discussed can make as much as a 40% reduction in fabric latency, improving the performance of fine grain parallel applications. They also make it possible to create 3 hop low latency fabrics that provide excellent performance that can be used with clusters that have as many as 1009 nodes.There are two approaches to using DDR fabrics to improving latency. The first uses a combination of one, two and three hop fabrics and uses what we call a FasTree topology to create fabrics which can be used to create small clusters (32 to 96 nodes). FasTree's not only have lower latency, but require fewer switch components than fat trees. This does not mean that they have smaller bi-sectional bandwidths, as their links run at twice the speed of an SDR fabric. One of the features of a FasTree, that distinguishes it from other fabrics, is that it does not contain spines. The second fabric, which we call a ThinTree, uses complex single hop spines to link together different fabric sub-domains. Any node in a ThinTree is at most 3 hops away from any other node. There are, however, some compromises required to make it possible to link together up to 1008 nodes without exceeding 3 hops. These compromises result in sub-domains whose intra-domain bandwidth is full CBB while their inter-domain bandwidth typically runs around 40% of CBB. However, because of the 1.8 GB/sec bandwidth of the DDR fabrics, all the connections between any two nodes in a ThinTree fabric are adequate for virtually any HPC application and most others as well. The characteristics of both these topologies are discussed in the talk.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125446083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A presentation of the Tera-10 system, the number 1 supercomputer in Europe (and number 5 in the world according to the TOP500® ranking of June 2006), designed and installed by Bull for CEA, France's Atomic Energy Authority. This presentation will cover the different technologies that are at the heart of Tera-10, the overall architecture of the system, as well as the issues that had to be addressed by the implementation team.
{"title":"The tera-10 system: implementing the number 1 supercomputer in Europe","authors":"Jean-Louis Lahaie","doi":"10.1145/1188455.1188760","DOIUrl":"https://doi.org/10.1145/1188455.1188760","url":null,"abstract":"A presentation of the Tera-10 system, the number 1 supercomputer in Europe (and number 5 in the world according to the TOP500® ranking of June 2006), designed and installed by Bull for CEA, France's Atomic Energy Authority. This presentation will cover the different technologies that are at the heart of Tera-10, the overall architecture of the system, as well as the issues that had to be addressed by the implementation team.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122552924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Remote inferface control within an access grid environment","authors":"John W. Langkals","doi":"10.1145/1188455.1188787","DOIUrl":"https://doi.org/10.1145/1188455.1188787","url":null,"abstract":"Underdevelopment","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126594944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Bruce, Richard Chamberlain, M. Devlin, S. Marshall
Until relatively recently, users of FPGA-based computers have needed electronic-design skills to implement high-performance computing (HPC) algorithms. With the advent of high-level languages for FPGAs it is possible for non-experts in FPGA design to implement algorithms by describing them in a high-level syntax. A natural progression from developing high-level languages is to develop low-level libraries that support them. DIME-C is a high-level language that takes a subset of ANSI C as its input and outputs auto-generated hardware description language (HDL) and pre-synthesised netlists. Within DIME-C, the authors have implemented a math library composed of single-precision, floating-point, elementary functions such as the natural exponential and logarithm. Complex, fully-pipelined algorithms can be described in ANSI-compatible C and implemented on FPGAs, delivering orders-of-magnitude speed-ups over microprocessor implementations. Work is ongoing to expand the library. The poster will detail project motivations and direction, speed-up and resource-use measurements, C-code examples and multi-FPGA examples.
{"title":"Implementing algorithms on FPGAs using high-level languages and low-level libraries","authors":"R. Bruce, Richard Chamberlain, M. Devlin, S. Marshall","doi":"10.1145/1188455.1188614","DOIUrl":"https://doi.org/10.1145/1188455.1188614","url":null,"abstract":"Until relatively recently, users of FPGA-based computers have needed electronic-design skills to implement high-performance computing (HPC) algorithms. With the advent of high-level languages for FPGAs it is possible for non-experts in FPGA design to implement algorithms by describing them in a high-level syntax. A natural progression from developing high-level languages is to develop low-level libraries that support them.DIME-C is a high-level language that takes a subset of ANSI C as its input and outputs auto-generated hardware description language (HDL) and pre-synthesised netlists. Within DIME-C, the authors have implemented a math library composed of single-precision, floating-point, elementary functions such as the natural exponential and logarithm. Complex, fully-pipelined algorithms can be described in ANSI-compatible C and implemented on FPGAs, delivering orders of magnitude speed-up over microprocessor implementations. Work is ongoing, expanding the library.The poster will detail project motivations and direction, speedup and resource-use measurements, C-code examples and multi-fpga examples.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114112937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent improvements in processor performance have been accompanied by increased chip complexity and power consumption, resulting in increased heat dissipation. This has resulted in higher cooling costs and lower reliability. In this paper, we focus on power-aware high-performance scientific computing, and in particular the Barnes-Hut (BH) code that is used for N-body problems. We show how low-power modes of the CPU and caches, and hardware optimizations such as a load-miss predictor and data prefetchers, enable BH to operate in lower-power configurations without performance degradation. On our optimized processor, power is reduced by 57% and energy is reduced by 58% with no performance penalty, using simulations with SimpleScalar and Wattch. Consequently, the energy efficiency of the processor increases by a factor of more than two when compared to the base architecture.
{"title":"Toward a power efficient computer architecture for Barnes-Hut N-body simulations","authors":"K. Malkowski, P. Raghavan, M. J. Irwin","doi":"10.1145/1188455.1188607","DOIUrl":"https://doi.org/10.1145/1188455.1188607","url":null,"abstract":"Recent improvements in processor performance have been accompanied by increased chip complexity and power consumption, resulting in increased heat dissipation. This has resulted in higher cooling costs and lower reliability. In this paper, we focus on power-aware high performance scientific computing and in particular the Barnes-Hut (BH) code that is used for N-body problems. We show how low power modes of the CPU and caches, and hardware optimizations such as a load miss predictor and data prefetchers enable BH to operate at lower power configurations with out performance degradation. On our optimized processor, power is reduced by 57% and energy is reduced by 58% with no performance penalty using simulations with SimpleScalar and Wattch. Consequently, the energy efficiency of the processor increases by a factor of more than two when compared to the base architecture.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114144560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Dynamic Data Driven Applications Systems (DDDAS) concept entails capabilities where application simulations can dynamically accept and respond to field data and measurements, and/or can control such measurements. This synergistic and symbiotic feedback control loop between simulations and measurements goes beyond traditional control-systems approaches and advances both applications and measurement methods, beneficially impacting science and engineering fields as well as manufacturing, commerce, transportation, hazard prediction/management, medicine, etc. DDDAS environments extend the current computational grids. The multi-agency DDDAS Program Solicitation (www.cise.nsf.gov/dddas) systematically fosters the relevant research areas. NSF, NOAA and NIH, the NSF/OISE and SBIR Offices, and the EU-IST and e-Sciences Programs are cooperating sponsors. This session will consist of a panel of experts, including awardees of DDDAS projects and representatives from funding agencies, and will provide a forum to engage the broader community in open discussion for expanding the opportunities and impact of DDDAS.
{"title":"Dynamic data-driven applications systems","authors":"F. Darema, M. Rotea","doi":"10.1145/1188455.1188458","DOIUrl":"https://doi.org/10.1145/1188455.1188458","url":null,"abstract":"The Dynamic Data Driven Applications Systems (DDDAS) concept entails capabilities where application simulations can dynamically accept and respond to field-data and measurements, and/or can control such measurements. This synergistic and symbiotic feedback control-loop between simulations and measurements goes beyond the traditional control systems approaches, and advances applications and measurement approaches, beneficially impacting science and engineering fields, as well as manufacturing, commerce, transportation, hazard prediction/management, medicine, etc. DDDAS environments extend the current computational grids. The multi-agency DDDAS Program Solicitation (www.cise.nsf.gov/dddas) fosters systematically the relevant research areas. NSF, NOAA and NIH, the NSF/OISE and SBIR Offices, and the EU-IST and e-Sciences Programs are cooperating sponsors. This session will consist of a panel of experts, including awardees of DDDAS projects and representatives from funding agencies, and will provide a forum to engage the broader community in open discussion for expanding the opportunities and impact of DDDAS.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121180049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Massive stars die in stellar explosions known as core collapse supernovae. Such supernovae are a dominant source of elements in the Universe and, thus, an important link in our chain of origin from the Big Bang to the present day. Understanding how they occur will require three-dimensional, general relativistic, radiation-magnetohydrodynamics simulations that model the stellar core's multifrequency, multiangle neutrino (radiation) transport, fluid instabilities and flow, rotation, magnetic fields, and strong gravitational field. Such simulations will require petascale platforms, which in turn require scalable solution algorithms for the underlying integro-partial differential equations, and a commensurate infrastructure for data management, networking, and visualization that will enable scientific discovery by a geographically distributed team. I will present the current state of the art and discuss near- and longer-term efforts. The ongoing rapid increase in supercomputer capability will allow us to address this Grand Challenge in earnest, in all of its complexity, for the first time.
{"title":"Understanding our cosmic origin through petascale computing","authors":"A. Mezzacappa","doi":"10.1145/1188455.1188510","DOIUrl":"https://doi.org/10.1145/1188455.1188510","url":null,"abstract":"Massive stars die in stellar explosions known as core collapse supernovae. Such supernovae are a dominant source of elements in the Universe and, thus, an important link in our chain of origin from the Big Bang to the present day. Understanding how they occur will require three-dimensional, general relativistic, radiation-magnetohydrodynamics simulations that model the stellar core multifrequency and multiangle neutrino (radiation) transport, fluid instabilities and flow, rotation, magnetic field, and strong gravitational field. Such simulations will require petascale platforms, in turn requiring scalable solution algorithms for the underlying integro-partial differential equations, and a commensurate infrastructure for data management, networking, and visualization that will enable scientific discovery by a geographically distributed team. I will present the current state of the art and discuss near- and longer-term efforts. The ongoing rapid increase in supercomputer capability will allow us to address this Grand Challenge in earnest, in all of its complexity, for the first time.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125317824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monitoring tools have evolved greatly, but when misconfigured they can fail to deliver intelligent, non-intrusive, just-in-time alerts. Non-intrusive collection windows arise during boot-up, during idle system time, before major workload events, and after those events finish. Batch schedulers allow health checking during these opportune times. Most just-in-time alerts arrive via system logs and out-of-band queries that can then trigger appropriate actions. However, abusive out-of-band queries may interrupt normal operational activities. Some vendor and open-source implementations have been heavyweight watchdogs, exacting a brutal cost on computation as systems scale to thousands of nodes. Configuring tools to query intelligently during these opportunities and running only the necessary daemons helps to meet monitoring goals. Such tools and daemons include HP's hpasm, Dell's OMSA, supermon, lm_sensors, nagios, ganglia, logsurfer/syslog-ng, and TORQUE health checks. Share your monitoring stories and learn how the triggers we have implemented scale to systems of 4000+ nodes.
{"title":"Monitoring trix","authors":"Christopher D. Maestas","doi":"10.1145/1188455.1188488","DOIUrl":"https://doi.org/10.1145/1188455.1188488","url":null,"abstract":"Monitoring tools have evolved greatly, but when mis-configured they can forget how to do intelligent, non-intrusive and accomplish just-in-time alerts. Non-intrusive collection windows arise during bootup, idle system time, before major workload events and after these events finish. Batch schedulers allow health checking during these opportune times. Most just-in-time alerts arrive via system logs and out-of-band queries that can then trigger appropriate actions. However, abusive out-of-band queries may interrupt normal operational activities.Some vendor and open implementations have been heavyweight in watch-dogging at a brutal cost on computation as systems start scaling to thousands of nodes. Configuring tools to query intelligently during certain opportunities and running only necessary daemons helps to meet monitoring goals. These tools and daemons can include HP's hpasm, Dell's OMSA, supermon, lm_sensors, nagios, ganglia, logsurfer/syslog-ng, torque health checks. Share your monitoring stories and learn how triggers implemented scale to 4000+ node systems.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125095256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today's HPC environments are increasingly complex in order to achieve the highest performance. Hardware platforms introduce features like out-of-order execution, multi-level caches, multi-core processors, non-uniform memory access, etc. Application software combines OpenMP, MPI, optimized libraries, and various types of compiler optimization to exploit the available performance. To reach a reasonable percentage of the theoretical peak performance, three fundamental steps need to be accomplished. First, correctness must be guaranteed, especially during the course of optimization. Second, the actual performance achieved needs to be determined; in particular, the contributions and limitations of all the sub-systems involved (CPU, memory, network, I/O) have to be identified. Third, actual optimization can only be successful with the previously obtained knowledge. These steps are by no means trivial. There are sophisticated tools beyond simple profiling to support the HPC user. The tutorial introduces a variety of such tools: it shows how they play together and how they scale to long-running, massively parallel cases.
{"title":"Program analysis tools for massively parallel applications: how to achieve highest performance","authors":"A. Knüpfer, D. Kranzlmüller, B. Mohr, W. Nagel","doi":"10.1145/1188455.1188687","DOIUrl":"https://doi.org/10.1145/1188455.1188687","url":null,"abstract":"Today's HPC environments are increasingly complex in order to achieve highest performance. Hardware platforms introduce features like out-of-order execution, multi-level caches, multi-cores, non-uniform memory access etc. Application software combines OpenMP, MPI, optimized libraries and various types of compiler optimization to exploit potential performance.To reach a reasonable percentage of the theoretical peak performance, three fundamental steps need to be accomplished. First, correctness must be guaranteed especially during the course of optimization. Second, the actual performance achieved needs to be determined. In particular the contributions/limitations of all sub-systems involved (CPU, memory, network, I/O) have to be identified. Third, actual optimization can only be successful with the previously obtained knowledge.Those steps are by no means trivial. There are sophisticated tools beyond simple profiling to support the HPC user. The tutorial introduces a variety of such tools: it shows how they play together and how they scale with long-running massively parallel cases.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122512929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stephen C. Simms, M. Davy, B. Hammond, Matthew R. Link, C. Stewart, R. Bramley, Beth Plale, Dennis Gannon, M. Baik, S. Teige, J. Huffman, Rick McMullen, Douglas A. Balog, Gregory G. Pike
Indiana University provides powerful compute, storage, and network resources to a diverse local and national research community every day. IU's facilities have been used to support data-intensive applications ranging from digital humanities to computational biology.For this year's bandwidth challenge, several IU researchers will conduct experiments from the exhibit floor utilizing the resources that University Information Technology Services currently provides.Using IU's newly constructed 535 TB Data Capacitor and an additional component installed on the exhibit floor, we will use Lustre across the wide area network to simultaneously facilitate dynamic weather modeling, protein analysis, instrument data capture, and the production, storage, and analysis of simulation data.
{"title":"All in a day's work: advancing data-intensive research with the data capacitor","authors":"Stephen C. Simms, M. Davy, B. Hammond, Matthew R. Link, C. Stewart, R. Bramley, Beth Plale, Dennis Gannon, M. Baik, S. Teige, J. Huffman, Rick McMullen, Douglas A. Balog, Gregory G. Pike","doi":"10.1145/1188455.1188711","DOIUrl":"https://doi.org/10.1145/1188455.1188711","url":null,"abstract":"Indiana University provides powerful compute, storage, and network resources to a diverse local and national research community every day. IU's facilities have been used to support data-intensive applications ranging from digital humanities to computational biology.For this year's bandwidth challenge, several IU researchers will conduct experiments from the exhibit floor utilizing the resources that University Information Technology Services currently provides.Using IU's newly constructed 535 TB Data Capacitor and an additional component installed on the exhibit floor, we will use Lustre across the wide area network to simultaneously facilitate dynamic weather modeling, protein analysis, instrument data capture, and the production, storage, and analysis of simulation data.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134028262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}