Partitionable services: A framework for seamlessly adapting distributed applications to heterogeneous environments
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029908
A. Ivan, J. Harman, M. Allen, V. Karamcheti
Several recently proposed infrastructures permit client applications to interact with distributed network-accessible services by simply "plugging in" to a substrate that provides essential functionality, such as naming, discovery, and multi-protocol binding. However, much work remains before the interaction can be considered truly seamless in the sense of adapting to the characteristics of the heterogeneous environments in which clients and services operate. This paper describes a novel approach for addressing this shortcoming: the partitionable services framework, which enables services to be flexibly assembled from multiple components, and facilitates transparent migration and replication of these components at locations closer to the client while still appearing as a single monolithic service. The framework consists of three pieces: (1) declarative specification of services in terms of constituent components; (2) run-time support for dynamic component deployment; and (3) planning policies, which steer the deployment to accommodate underlying environment characteristics. We demonstrate the salient features of the framework and highlight its usability and performance benefits with a case study involving a security-sensitive mail service.
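To make the three pieces concrete, the sketch below shows what a declarative, component-based service specification and a trivial placement step could look like. The class names, fields, and the one-line "planner" are hypothetical stand-ins, not the framework's actual specification language or planning policies.

```python
# Minimal sketch of a declarative, component-based service specification.
# All names and fields are hypothetical illustrations of the idea only.
from dataclasses import dataclass, field


@dataclass
class Component:
    name: str
    movable: bool = True          # may be migrated/replicated toward the client
    requires: list = field(default_factory=list)  # names of components it depends on


@dataclass
class ServiceSpec:
    name: str
    components: list


def plan_deployment(spec, client_site, server_site):
    """Assign each component to a site: movable components are pulled toward
    the client, fixed ones stay with the service provider."""
    return {c.name: (client_site if c.movable else server_site)
            for c in spec.components}


mail = ServiceSpec("secure-mail", [
    Component("crypto-proxy", movable=True, requires=["mailstore"]),
    Component("mailstore", movable=False),
])
print(plan_deployment(mail, "client.edu", "provider.org"))
# {'crypto-proxy': 'client.edu', 'mailstore': 'provider.org'}
```

In the paper's setting the planning policies would weigh environment characteristics (bandwidth, trust, load) rather than a single movable flag; the sketch only shows where such a policy plugs in.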
{"title":"Partitionable services: A framework for seamlessly adapting distributed applications to heterogeneous environments","authors":"A. Ivan, J. Harman, M. Allen, V. Karamcheti","doi":"10.1109/HPDC.2002.1029908","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029908","url":null,"abstract":"Several recently proposed infrastructures permit client applications to interact with distributed network-accessible services by simply \"plugging in\" into a substrate that provides essential functionality, such as naming, discovery, and multi-protocol binding. However much work remains before the interaction can be considered truly seamless in the sense of adapting to the characteristics of the heterogeneous environments in which clients and services operate. This paper describes a novel approach for addressing this shortcoming: the partitionable services framework, which enables services to be flexibly assembled from multiple components, and facilitates transparent migration and replication of these components at locations closer to the client while still appearing as a single monolithic service. The framework consists of three pieces: (1) declarative specification of services in terms of constituent components; (2) run-time support for dynamic component deployment; and (3) planning policies, which steer the deployment to accomodate underlying environment characteristics. We demonstrate the salient features of the framework and highlight its usability and performance benefits with a case study involving a security-sensitive mail service.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130028036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive online data compression
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029938
E. Jeannot, Björn Knutsson, M. Björkman
Quickly transmitting large datasets in the context of distributed computing on wide area networks can be achieved by compressing data before transmission. However, such an approach is not efficient when dealing with higher-speed networks. Indeed, the time to compress a large file and send it is greater than the time to send the uncompressed file. In this paper we explore and enhance an algorithm that allows us to overlap communication with compression and to automatically adapt the compression effort to the currently available network and processor resources.
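As a rough illustration of the adaptation idea (not the algorithm studied in the paper), the sketch below compresses each chunk only while the estimated compress-and-send time beats sending raw, and nudges the zlib level up or down accordingly; the send callback and bandwidth estimate are assumed inputs.

```python
# Per-chunk adaptive compression sketch: compress only while the estimated
# time to compress-and-send beats sending raw, and back off the compression
# level when the CPU cannot keep up with the link.
import time
import zlib


def send_adaptive(chunks, send, bandwidth_bps, level=6):
    """chunks: iterable of bytes; send: callable taking (payload, compressed);
    bandwidth_bps: current estimate of network throughput in bytes/s."""
    for chunk in chunks:
        t0 = time.perf_counter()
        packed = zlib.compress(chunk, level) if level > 0 else chunk
        compress_time = time.perf_counter() - t0

        t_raw = len(chunk) / bandwidth_bps
        t_packed = compress_time + len(packed) / bandwidth_bps

        if t_packed < t_raw:
            send(packed, compressed=True)
            level = min(level + 1, 9)      # CPU is keeping up: try harder
        else:
            send(chunk, compressed=False)
            level = max(level - 1, 0)      # link is fast: compress less (or not at all)


# Toy usage with a no-op sender and a 10 MB/s bandwidth estimate.
send_adaptive([b"x" * 65536] * 4, lambda p, compressed: None, bandwidth_bps=10e6)
```

In the paper's setting the bandwidth and processor estimates would themselves be measured online rather than passed in as constants.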
{"title":"Adaptive online data compression","authors":"E. Jeannot, Björn Knutsson, M. Björkman","doi":"10.1109/HPDC.2002.1029938","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029938","url":null,"abstract":"Quickly transmitting large datasets in the context of distributed computing on wide area networks can be achieved by compressing data before transmission, However such an approach is not efficient when dealing with higher speed networks. Indeed, the time to compress a large file and to send it is greater than the time to send the uncompressed file. In this paper we explore and enhance an algorithm that allows us to overlap communications with compression and to automatically adapt the compression effort to currently available network and processor resources.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115972849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UNICORE-Globus interoperability: getting the best of both worlds
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029952
M. Rambadt, P. Wieder
Summary form only given. This work describes a software prototype developed at Research Center Jülich to demonstrate the interoperability between UNICORE (Uniform Interface to Computing Resources) and Globus without changes to either system. By combining UNICORE's workflow-oriented approach to job submission with Globus, grid users can gain seamless access to a wide range of Globus-enabled systems. We define the following scenario of a job submission from UNICORE to Globus: the user prepares the job via UNICORE's graphical user interface, chooses a Globus site where the job is to be computed, and submits it to UNICORE's target system interface (TSI). This is the entity that normally interfaces with the local batch system; it is enhanced to communicate with Globus. The TSI translates the job description from the UNICORE-specific abstract job object (AJO) into the GRAM Resource Specification Language (RSL) and submits it to the GRAM Gatekeeper. Standard Globus mechanisms are used to monitor the status of the job and transfer the output back to the TSI.
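The translation step can be pictured as below: a heavily simplified, hypothetical job dictionary standing in for the AJO is rendered as a GRAM RSL string. Real AJOs are far richer Java object graphs; only the basic RSL attribute syntax in the output is standard GRAM RSL.

```python
# Sketch of the AJO -> RSL translation idea. The input dictionary is a made-up
# stand-in for UNICORE's abstract job object; the output follows GRAM's basic
# "&(attribute=value)..." RSL syntax.
def ajo_to_rsl(job):
    attrs = {
        "executable": job["executable"],
        "arguments": " ".join(job.get("arguments", [])),
        "count": str(job.get("processors", 1)),
        "stdout": job.get("stdout", "job.out"),
        "stderr": job.get("stderr", "job.err"),
    }
    return "&" + "".join(f'({k}="{v}")' for k, v in attrs.items() if v)


job = {"executable": "/bin/date", "arguments": ["-u"], "processors": 1}
print(ajo_to_rsl(job))
# &(executable="/bin/date")(arguments="-u")(count="1")(stdout="job.out")(stderr="job.err")
```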
{"title":"UNICORE-Globus interoperability: getting the best of both worlds","authors":"M. Rambadt, P. Wieder","doi":"10.1109/HPDC.2002.1029952","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029952","url":null,"abstract":"Summary form only given. This work describes a software prototype developed at Research Center Julich to demonstrate the interoperability between UNICORE (Uniform Interface to Computer Resources) and Globus without changes to any of the systems. By combining UNICORE's workflow oriented approach to job submission with Globus, grid users can gain seamless access to a wide number of Globus enabled systems. We define the following scenario of a job submission from UNICORE to Globus: the user prepares the job via UNICORE's graphical user interface, chooses a Globus site where the job is to be computed and submits it to UNICORE's target system interface (TSI). This is the entity normally interfacing with the local batch system. It is enhanced to communicate with Globus. The TSI translates the job description from the UNICORE specific abstract job object (AJO) into the GRAM Resource Specification Language (RSL) and submits it to the GRAM Gatekeeper. Standard Globus mechanisms are used to monitor the status of the job and transfer the output back to the TSI.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126794212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoS-based resource discovery in intermittently available environments
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029903
Yun Huang, N. Venkatasubramanian
In this paper, we address the problem of resource discovery in a grid-based multimedia environment, where the resource providers, i.e., servers, are intermittently available. Using a graph-theoretic approach, we define and formulate various policies for QoS-based resource discovery with intermittently available servers that can meet a variety of user needs. We evaluate the performance of these policies under various time-map scenarios and placement strategies. Our performance results illustrate the added benefits obtained by adding flexibility to the scheduling process.
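One of the simpler policies can be pictured as follows: admit a server only if one of its advertised availability windows (its time map) covers the requested interval and its capacity meets the QoS demand. The data layout and the specific rule are illustrative; the paper formulates and compares several such policies.

```python
# Toy discovery policy over intermittently available servers. Each server
# advertises availability windows (start, end) and a capacity; a request asks
# for an interval and a QoS demand. Layout and rule are illustrative only.
def discover(servers, start, end, demand):
    candidates = []
    for s in servers:
        covers = any(a <= start and end <= b for (a, b) in s["windows"])
        if covers and s["capacity"] >= demand:
            candidates.append(s["name"])
    return candidates


servers = [
    {"name": "s1", "windows": [(0, 100)], "capacity": 8},
    {"name": "s2", "windows": [(50, 80)], "capacity": 16},
]
print(discover(servers, start=10, end=60, demand=4))   # ['s1']
```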
{"title":"QoS-based resource discovery in intermittently available environments","authors":"Yun Huang, N. Venkatasubramanian","doi":"10.1109/HPDC.2002.1029903","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029903","url":null,"abstract":"In this paper, we address the problem of resource discovery in a grid based multimedia environment, where the resources providers, i.e. servers, are intermittently available. Given a graph theoretic approach, we define and formulate various policies for QoS-based resource discovery with intermittently available servers that can meet a variety of user needs. We evaluate the performance of these policies under various time-map scenarios and placement strategies. Our performance results illustrate the added benefits obtained by adding flexibility to the scheduling process.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133535361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GridMapper: a tool for visualizing the behavior of large-scale distributed systems
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029917
W. Allcock, J. Bester, J. Bresnahan, Ian T Foster, Jarek Gawor, J. Insley, Joseph M. Link, M. Papka
Grid applications can combine the use of computation, storage, network, and other resources. These resources are often geographically distributed, adding to application complexity and thus the difficulty of understanding application performance. We present GridMapper, a tool for monitoring and visualizing the behavior of such distributed systems. GridMapper builds on basic mechanisms for registering, discovering, and accessing performance information sources, as well as for mapping from domain names to physical locations. The visualization system itself then supports the automatic layout of distributed sets of such sources and animation of their activities. We use a set of examples to illustrate how the system can provide valuable insights into the behavior and performance of a range of different applications.
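The domain-name-to-location step that the layout relies on can be pictured with a toy lookup like the one below; the coordinate table and transfer record are invented for the example, and the real tool obtains its sources through its own registration and discovery mechanisms.

```python
# Toy illustration of mapping monitored hosts to physical coordinates before
# laying them out on a map. The site table and transfer data are hypothetical.
SITE_COORDS = {           # site domain -> (latitude, longitude)
    "anl.gov": (41.7, -87.9),
    "isi.edu": (33.9, -118.4),
}


def locate(hostname):
    domain = ".".join(hostname.split(".")[-2:])
    return SITE_COORDS.get(domain)       # None if the site is unknown


transfers = [("ftp1.anl.gov", "node3.isi.edu", 42.0)]   # (src, dst, MB/s)
for src, dst, rate in transfers:
    print(locate(src), "->", locate(dst), f"{rate} MB/s")
```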
{"title":"GridMapper: a tool for visualizing the behavior of large-scale distributed systems","authors":"W. Allcock, J. Bester, J. Bresnahan, Ian T Foster, Jarek Gawor, J. Insley, Joseph M. Link, M. Papka","doi":"10.1109/HPDC.2002.1029917","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029917","url":null,"abstract":"Grid applications can combine the use of computation, storage, network, and other resources. These resources are often geographically distributed, adding to application complexity and thus the difficulty of understanding application performance. We present GridMapper, a tool for monitoring and visualizing the behavior of such distributed systems. GridMapper builds on basic mechanisms for registering, discovering, and accessing performance information sources, as well as for mapping from domain names to physical locations. The visualization system itself then supports the automatic layout of distributed sets of such sources and animation of their activities. We use a set of examples to illustrate how the system can provide valuable insights into the behavior and performance of a range of different applications.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131118420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An enterprise-based grid resource management system
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029906
Q. Snell, K. Tew, J. Ekstrom, M. Clement
As the Internet began its exponential growth into a global information environment, software was often unreliable, slow, and had difficulty interoperating with other systems. Supercomputing node counts also continue to follow high growth trends. Supercomputer and grid resource management software must mature into a reliable computational platform in much the same way that web services matured for the Internet. DOGMA The Next Generation (DOGMA-NG) improves on current resource management approaches by using tested off-the-shelf enterprise technologies to build a robust, scalable, and extensible resource management platform. Distributed web service technologies constitute the core of DOGMA-NG's design and provide fault tolerance and scalability. DOGMA-NG's use of open-standard web technologies and efficient management algorithms promises to reduce management time and accommodate the growing size of future supercomputers. The use of web technologies also provides the opportunity for a new parallel programming paradigm, enterprise web services parallel programming, which also benefits from the scalable, robust component architecture.
{"title":"An enterprise-based grid resource management system","authors":"Q. Snell, K. Tew, J. Ekstrom, M. Clement","doi":"10.1109/HPDC.2002.1029906","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029906","url":null,"abstract":"As the Internet began its exponential growth into a global information environment, software was often unreliable, slow and had difficulty in interoperating with other systems. Supercomputing node counts also continue to follow high growth trends. Supercomputer and grid resource management software must mature into a reliable computational platform in much the same way that web services matured for the Internet. DOGMA The Next Generation (DOGMA-NG) improves on current resource management approaches by using tested off-the-shelf enterprise technologies to build a robust, scalable, and extensible resource management platform. Distributed web service technologies constitute the core of DOGMA-NG's design and provide fault tolerance and scalability. DOGMA-NG's use of open standard web technologies and efficient management algorithms promises to reduce management time and accommodate the growing size of future supercomputers. The use of web technologies also provides the opportunity for anew parallel programming paradigm, enterprise web services parallel programming, that also gains benefit from the scalable, robust component architecture.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116663077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Location-transparent naming in grid computing using Legion-G
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029948
M. Humphrey, S. Arnold, G. Wasson
Globus is a powerful toolkit but lacks location transparency in its naming system, due to a reliance on URLs. In practical terms, this means that a Grid user (or software running on behalf of the user) must know precisely where Grid entities are. The problem is that hardware reconfiguration, file system reorganization, and changes in organizational structure can often result in dangling links. At the University of Virginia, we are designing and implementing a comprehensive project that combines the best aspects of Globus and Legion into Legion-G - roughly an "applications-level" interface from Legion to Globus, whereby Legion "runs on" key Grid functionality of Globus such as GSI. Among the capabilities already supported in Legion, and thus delivered to the Globus user, are: end-user tools for transparent remote execution and parameter-space studies; support for dynamic, transparent remote instantiation of transient Grid services, with integrated scheduling support; fine-grained access control for Grid services; and the Legion programming model, which supports arbitrary, asynchronous, data-flow-style, secure Grid computations. This poster describes the Legion-G support for location-transparent naming in Grid computing and illustrates its value in the context of Globus MPI computations that access LegionFS, a location-transparent, Grid-enabled distributed file system.
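The indirection that location-transparent naming introduces can be sketched as a binding service: clients hold a stable logical name and resolve it at use time, so a migration only updates the binding rather than breaking every stored URL. This is an illustration of the concept, not Legion's actual naming scheme.

```python
# Sketch of name-to-location indirection: clients keep a stable logical name;
# only the binding service learns about migrations. Illustrative only.
class BindingService:
    def __init__(self):
        self._bindings = {}

    def register(self, logical_name, url):
        self._bindings[logical_name] = url

    def resolve(self, logical_name):
        return self._bindings[logical_name]     # raises KeyError if unbound


names = BindingService()
names.register("/home/alice/dataset", "gsiftp://hostA.example.org/data/d1")

# The object is migrated; only the binding changes, not the client's name.
names.register("/home/alice/dataset", "gsiftp://hostB.example.org/data/d1")
print(names.resolve("/home/alice/dataset"))
```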
{"title":"Location-transparent naming in grid computing using Legion-G","authors":"M. Humphrey, S. Arnold, G. Wasson","doi":"10.1109/HPDC.2002.1029948","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029948","url":null,"abstract":"Globus is a powerful toolkit but lacks location transparency in its naming system, due to a reliance on URLs. In practical terms, this means that a Grid user (or software running on behalf of the user) must know precisely where Grid entities are. The problem is that hardware reconfiguration, file system reorganization, and changes in organizational structure can often result in dangling links. At the University of Virginia, we are designing and implementing a comprehensive project that combines the best aspects of Globus and Legion into Legion-G - roughly an \"applications-level\" interface from Legion to Globus, whereby Legion \"runs on\" key Grid functionality of Globus such as GSI. Among the capabilities already supported in Legion, and thus will be delivered to the Globus user, are: end-user tools for transparent remote execution and parameter-space studies; support for dynamic, transparent remote instantiation of transient Grid services, with integrated scheduling support; fine-grained access control for Grid services; and the Legion programming model which supports arbitrary, asynchronous, data-flow-style, secure Grid computations. This poster describes the Legion-G support for location-transparent naming in Grid Computing and illustrates its value in the context of Globus MPI computations that accesses LegionFS which is a location-transparent, Grid-enabled distributed file system.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116709229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The GridLab grid application toolkit
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029941
Gabrielle Allen, Kelly Davis, Thomas Dramlitsch, T. Goodale, I. Kelley, Gerd Lanfermann, Jason Novotny, T. Radke, Kashif Rasul, Michael Russell, E. Seidel, Oliver Wehrens
We present a synopsis of the Grid Application Toolkit, under development in the EU GridLab project, along with some of the new application scenarios which it will enable.
{"title":"The GridLab grid application toolkit","authors":"Gabrielle Allen, Kelly Davis, Thomas Dramlitsch, T. Goodale, I. Kelley, Gerd Lanfermann, Jason Novotny, T. Radke, Kashif Rasul, Michael Russell, E. Seidel, Oliver Wehrens","doi":"10.1109/HPDC.2002.1029941","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029941","url":null,"abstract":"We present a synopsis of the Grid Application Toolkit, under development in the EU GridLab project, along with some of the new application scenarios which it will enable.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116732785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multigrain parallelism for eigenvalue computations on networks of clusters
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029912
James R. McCombs, A. Stathopoulos
Clusters of workstations have become a cost-effective means of performing scientific computations. However, large network latencies, resource sharing, and heterogeneity found in networks of clusters and Grids can impede the performance of applications not specifically tailored for use in such environments. A typical example is the traditional fine grain implementations of Krylov-like iterative methods, a central component in many scientific applications. To exploit the potential of these environments, advances in networking technology must be complemented by advances in parallel algorithmic design. In this paper, we present an algorithmic technique that increases the granularity of parallel block iterative methods by inducing additional work during the preconditioning (inexact solution) phase of the iteration. During this phase, each vector in the block is preconditioned by a different subgroup of processors, yielding a much coarser granularity. The rest of the method comprises a small portion of the total time and is still implemented in fine grain. We call this combination of fine and coarse grain parallelism multigrain. We apply this idea to the block Jacobi-Davidson eigensolver, and present experimental data that shows the significant reduction of latency effects on networks of clusters of roughly equal capacity and size. We conclude with a discussion on how multigrain can be applied dynamically based on runtime network performance monitoring.
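The processor regrouping behind the coarse-grain phase can be sketched as an MPI communicator split, assuming mpi4py and at least as many processes as block vectors: each block vector gets its own subgroup for preconditioning, while the fine-grain phases keep using the global communicator. This illustrates the grouping only, not the Jacobi-Davidson eigensolver itself.

```python
# Sketch of the multigrain regrouping: during preconditioning, the P processes
# are split into k contiguous subgroups (one per block vector), which also
# tends to keep a subgroup within one cluster. Illustrative only.
from mpi4py import MPI                     # assumes an MPI environment

comm = MPI.COMM_WORLD
P = comm.Get_size()
rank = comm.Get_rank()
k = 4                                      # block size; assumes P >= k

color = rank * k // P                      # contiguous subgroup for this rank
subcomm = comm.Split(color=color, key=rank)

# Fine-grain phases (orthogonalization, Rayleigh-Ritz) continue to use `comm`;
# the coarse-grain preconditioning of block vector `color` uses `subcomm`.
print(f"rank {rank}/{P}: preconditions vector {color} "
      f"with a subgroup of size {subcomm.Get_size()}")
```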
{"title":"Multigrain parallelism for eigenvalue computations on networks of clusters","authors":"James R. McCombs, A. Stathopoulos","doi":"10.1109/HPDC.2002.1029912","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029912","url":null,"abstract":"Clusters of workstations have become a cost-effective means of performing scientific computations. However, large network latencies, resource sharing, and heterogeneity found in networks of clusters and Grids can impede the performance of applications not specifically tailored for use in such environments. A typical example is the traditional fine grain implementations of Krylov-like iterative methods, a central component in many scientific applications. To exploit the potential of these environments, advances in networking technology must be complemented by advances in parallel algorithmic design. In this paper, we present an algorithmic technique that increases the granularity of parallel block iterative methods by inducing additional work during the preconditioning (inexact solution) phase of the iteration. During this phase, each vector in the block is preconditioned by a different subgroup of processors, yielding a much coarser granularity. The rest of the method comprises a small portion of the total time and is still implemented in fine grain. We call this combination of fine and coarse grain parallelism multigrain. We apply this idea to the block Jacobi-Davidson eigensolver, and present experimental data that shows the significant reduction of latency effects on networks of clusters of roughly equal capacity and size. We conclude with a discussion on how multigrain can be applied dynamically based on runtime network performance monitoring.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129468639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive timeout discovery using the Network Weather Service
Pub Date: 2002-07-24 | DOI: 10.1109/HPDC.2002.1029901
Matthew S. Allen, R. Wolski, J. Plank
In this paper we present a novel methodology for improving the performance and dependability of application-level messaging in Grid systems. Based on the Network Weather Service, our system uses nonparametric statistical forecasts of request-response times to automatically determine message timeouts. By choosing a timeout based on predicted network performance, the methodology improves application and Grid service performance, as extraneous and overly long timeouts are avoided. We describe the technique and the additional execution and programming overhead it introduces, and demonstrate its effectiveness using a wide-area test application.
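One simple nonparametric stand-in for the forecasting step is an empirical quantile of recent request-response times, padded by a safety factor; the Network Weather Service applies its own family of forecasters, so the rule below only shows where such a forecast plugs into timeout selection.

```python
# Sketch of timeout selection from a history of request-response times using a
# padded empirical quantile. The quantile rule is a stand-in for the NWS
# forecasters; parameters are illustrative.
from collections import deque


class AdaptiveTimeout:
    def __init__(self, history=100, quantile=0.95, pad=1.5, fallback=10.0):
        self.samples = deque(maxlen=history)
        self.quantile = quantile
        self.pad = pad            # safety factor over the forecast
        self.fallback = fallback  # seconds, used until samples exist

    def record(self, response_time):
        self.samples.append(response_time)

    def timeout(self):
        if not self.samples:
            return self.fallback
        ordered = sorted(self.samples)
        idx = min(int(self.quantile * len(ordered)), len(ordered) - 1)
        return ordered[idx] * self.pad


t = AdaptiveTimeout()
for rtt in (0.20, 0.25, 0.22, 1.10, 0.24):
    t.record(rtt)
print(round(t.timeout(), 2))   # 1.65  (95th-percentile sample 1.10 x 1.5 pad)
```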
{"title":"Adaptive timeout discovery using the Network Weather Service","authors":"Matthew S. Allen, R. Wolski, J. Plank","doi":"10.1109/HPDC.2002.1029901","DOIUrl":"https://doi.org/10.1109/HPDC.2002.1029901","url":null,"abstract":"In this paper we present a novel methodology for improving the performance and dependability of application-level messaging in Grid systems. Based on the Network Weather Service, our system uses nonparametric statistical forecasts of request-response times to automatically determine message timeouts. By choosing a timeout based on predicted network performance, the methodology improves application and Grid service performance as extraneous and overly-long timeouts are avoided. We describe the technique, the additional execution and programming overhead it introduces, and demonstrate the effectiveness using a wide-area test application.","PeriodicalId":279053,"journal":{"name":"Proceedings 11th IEEE International Symposium on High Performance Distributed Computing","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116413078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}