Exact knowledge of the scheduling policy used at every resource in a dynamic heterogeneous environment, such as a cloud or a grid deploying a variety of different resources, is unlikely to be available. This paper describes techniques for determining whether or not a set of advance reservation requests can meet their deadlines when the details of the scheduling policy deployed at the resource are unknown. A discussion of the techniques, including a simulation-based performance analysis, is presented.
{"title":"The “Any-Schedulability” Criterion for Providing QoS Guarantees through Advance Reservation Requests","authors":"S. Majumdar","doi":"10.1109/CCGRID.2009.70","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.70","url":null,"abstract":"Exact knowledge of the scheduling policy used at every resource on a dynamic heterogeneous environment such as a cloud and a grid deploying a variety of different resources is unlikely to be available. This paper describes techniques for determining whether or not a set of advance reservation requests can meet their deadlines when the details of the scheduling policy deployed at the resource is unknown. A discussion of the techniques including a simulation-based performance analysis is presented.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125067398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Whenever a resource allocation fails even though enough free capacity is available, fragmentation is easily identified as the cause. How fragmentation can be quantified in a system requiring continuous allocations, such as time schedules or memory, has hardly been analyzed, however. A Grid environment using advance reservation even combines two dimensions: time and resources. In this paper, a new way to measure the fragmentation of a system in one dimension is proposed. This measure is then extended to also incorporate the second dimension. Extensive simulations showed that the proposed fragmentation measure is a good indicator of the state of the system.
{"title":"Measuring Fragmentation of Two-Dimensional Resources Applied to Advance Reservation Grid Scheduling","authors":"J. Gehr, Jörg Schneider","doi":"10.1109/CCGRID.2009.81","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.81","url":null,"abstract":"Whenever a resource allocation fails although enough free capacity being available, fragmentation is easily spotted as cause. But how the fragmentation in a system requiring continuous allocations like time schedules or memory can be quantified is hardly analyzed. A Grid environment using advance reservation even combines two-dimensions: time and resource dimension. In this paper a new way to measure the fragmentation of a system in one dimension is proposed. This measure is then extended to incorporate also the second dimension. Extensive simulations showed that the proposed fragmentation measure is a good indicator of the state of the system.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123440541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing has emerged as a new technology that provides large amounts of computing and data storage capacity to its users with a promise of increased scalability, high availability, and reduced administration and maintenance costs. As the use of cloud computing environments increases, it becomes crucial to understand their performance. It is therefore of great importance to assess the performance of computing clouds in terms of various metrics, such as the overhead of acquiring and releasing virtual computing resources and other virtualization and network communication overheads. To address these issues, we have designed and implemented C-Meter, a portable, extensible, and easy-to-use framework for generating and submitting test workloads to computing clouds. In this paper, we first state the requirements for frameworks that assess the performance of computing clouds. Then, we present the architecture of the C-Meter framework and discuss several cloud resource management alternatives. Finally, we present our early experiences with C-Meter on Amazon EC2. We show how C-Meter can be used for assessing the overhead of acquiring and releasing virtual computing resources, for comparing different configurations, and for evaluating different scheduling algorithms.
{"title":"C-Meter: A Framework for Performance Analysis of Computing Clouds","authors":"N. Yigitbasi, A. Iosup, D. Epema, S. Ostermann","doi":"10.1109/CCGRID.2009.40","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.40","url":null,"abstract":"Cloud computing has emerged as a new technology that provides large amounts of computing and data storage capacity to its users with a promise of increased scalability, high availability, and reduced administration and maintenance costs. As the use of cloud computing environments increases, it becomes crucial to understand the performance of these environments. So, it is of great importance to assess the performance of computing clouds in terms of various metrics, such as the overhead of acquiring and releasing the virtual computing resources, and other virtualization and network communications overheads. To address these issues, we have designed and implemented C-Meter, which is a portable, extensible, and easy-to-use framework for generating and submitting test workloads to computing clouds. In this paper, first we state the requirements for frameworks to assess the performance of computing clouds. Then, we present the architecture of the C-Meter framework and discuss several cloud resource management alternatives. Finally, we present ourearly experiences with C-Meter in Amazon EC2. We show how C-Meter can be used for assessing the overhead of acquiring and releasing the virtual computing resources, for comparing different configurations, and for evaluating different scheduling algorithms.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124472809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Workflow management systems are widely accepted and used in wide area network environments, especially in e-Science application scenarios, to coordinate the operation of different functional components and to provide more powerful functions. The error-prone nature of the wide area network environment makes the fault-tolerance requirements of workflow management increasingly urgent. In this paper, we propose Cesar-FD, a stateful fault detection mechanism that builds up state related to the runtime and external environments of the workflow management system by aggregating multiple messages and provides more accurate notifications asynchronously. We demonstrate the use of this mechanism in the Drug Discovery Grid environment through two use cases. We also show that it can be used to detect faulty situations more accurately.
{"title":"Cesar-FD: An Effective Stateful Fault Detection Mechanism in Drug Discovery Grid","authors":"Yongjian Wang, Yinan Ren, Ting-Wen Chen, Yuanqiang Huang, Zhongzhi Luan, Zhongxin Wu, D. Qian","doi":"10.1109/CCGRID.2009.28","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.28","url":null,"abstract":"Workflow management system is widely accepted and used in the wide area network environment, especially in the e-Science application scenarios, to coordinate the operation of different functional components and to provide more powerful functions. The error-prone nature of the wide area network environment makes the fault-tolerance requirements of workflow management become more and more urgent. In this paper, we propose Cesar-FD, a stateful fault detection mechanism, which builds up states related to the runtime and external environments of workflow management system by aggregating multiple messages and provides more accurate notifications asynchronously. We demonstrate the use of this mechanism in the Drug Discovery Grid environment by two use cases. We also show that it can be used to detect faulty situations more accurately.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121682333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a transient performance study of the adaptability of an application layer multicast protocol, namely Adaptive Overlay Multicast (AOM). The study focuses on how the application layer multicast tree structure adapts to network dynamics and faults by using efficient tree management and adaptation algorithms, thus reducing the adverse effects of network faults on application performance. We present extensive simulation studies as well as studies using real Internet data. The results show that AOM is highly adaptive to network dynamics while incurring low overhead.
{"title":"Transient Analysis of an Overlay Multicast Protocol","authors":"Xiaobing Hou, S. Wu","doi":"10.1109/CCGRID.2009.27","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.27","url":null,"abstract":"This paper presents the transient performancestudy on the adaptability of an application layer multicastprotocol, namely Adaptive Overlay Multicast (AOM). The studyfocuses on how the application layer multicast tree structure adapts to network dynamics and faults by using efficient tree management and adaptation algorithms, thus reducing the adverse effects of network faults on the application performance. We present extensive simulation studies as well as studies using real Internet data The results show that AOM is highly adaptive to network dynamics while incurring low overhead.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115864684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grid infrastructures are in operation around the world, federating an impressive collection of computational resources and a wide variety of application software. In this context, it is important to establish advanced software discovery services that can help end-users locate software components suitable to their needs. In this paper, we present the design, architecture, and implementation of an open-source keyword-based paradigm for the search of software resources in Grid infrastructures, called Minersoft. A key goal of Minersoft is to automatically annotate all software resources with keyword-rich metadata. Using advanced Information Retrieval techniques, we locate software resources with respect to users' queries. Experiments were conducted on EGEE, one of the largest Grid production services currently in operation. Results showed that Minersoft successfully crawled 12.3 million valid files (620 GB in size) and sustained high crawling rates at most sites.
{"title":"Harvesting Large-Scale Grids for Software Resources","authors":"Asterios Katsifodimos, G. Pallis, M. Dikaiakos","doi":"10.1109/CCGRID.2009.51","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.51","url":null,"abstract":"Grid infrastructures are in operation around the world, federating an impressive collection of computational resources and a wide variety of application software. In this context, it is important to establish advanced software discovery services that could help end-users locate software components suitable to their needs. In this paper, we present the design, architecture and implementation of an open-source keyword-based paradigm for the search of software resources in Grid infrastructures, called Minersoft. A key goal of Minersoft is to annotate automatically all the software resources with keyword-rich metadata. Using advanced Information Retrieval techniques, we locate software resources with respect to users queries. Experiments were conducted in EGEE, one of the largest Grid production services currently in operation. Results showed that Minersoft successfully crawled 12.3 million valid files (620 GB size) and sustained, in most sites, high crawling rates.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130534934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This talk will (modestly) attempt to address the challenges and opportunities ahead of us for making parallel programming as efficient, as productive, and as effective as possible, on both multi-cores and large-scale infrastructures such as Grids and Clouds. We draw on our experience in simplifying the programming of applications that are distributed on Local Area Networks (LANs), on clusters of workstations, on Grids, and, of course, on Clouds. We will discuss a specific approach, Network On Ship, to cope seamlessly with both distributed and shared-memory multi-core machines. The point will be illustrated with ProActive, an Open Source library for parallel, distributed, and concurrent computing, allowing us to showcase interactive and graphical GUIs and tools. Benchmarks on platforms such as Grid 5000, together with standardization efforts and collaboration with Chinese partners, will also be reported.
{"title":"Challenges and Opportunities on Parallel/Distributed Programming for Large-scale: From Multi-core to Clouds","authors":"D. Caromel","doi":"10.1109/CCGRID.2009.98","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.98","url":null,"abstract":"This talk will (modestly) attend to address challenges and opportunities ahead of us for making parallel programming as efficient, as productive, as effective as possible, on both Multi-cores and large-scale infrastructures such as Grids and Clouds. We draw on our experience at simplifying the programming of applications that are distributed on Local Area Network (LAN), on cluster of workstations, or GRIDs, and of course, Clouds. We will discuss a specific approach, Network On Ship, to cope seamlessly with both distributed and shared-memory multi-core machines. The point will be illustrated with ProActive an Open Source library for parallel, distributed, and concurrent computing, allowing to showcase interactive and graphical GUI and tools. Benchmarks on platforms such as Grid 5000, together with standardization and collaboration with Chinese partners will also be reported.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115841542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Science & Technology Facilities Council is home to international Facilities such as the ISIS Neutron Spallation Source ≪http://www.isis.rl.ac.uk≫, Central Laser Facility ≪http://www.clf.rl.ac.uk/≫ and Diamond Light Source ≪http://www.diamond.ac.uk/default.htm≫, the National Grid Service ≪http://www.grid-support.ac.uk/≫ including national super computers ≪http://www.cse.scitech.ac.uk≫, Tier1 data service for CERN particle physics experiment, the British Atmospheric data Centre and the British Oceanographic Data Centre at the Space Science and Technology department. Together these Facilities generate several Terabytes of data per month which needs to be handled, catalogued and provided access to. In addition, the scientists within STFC departments also develop complex simulations and undertake data analysis for their own experiments. Facilities also have strong ongoing collaborations with UK academic and commercial users through their involvement with Collaborative Computational Programme, generating very large simulation datasets. There is thus the need to support high resolution data analysis using distributed compute, data and visualization resources. At the same time, these requirements offer the computational and visualization scientists within STFC unique opportunities to advocate the take up of advanced visualization techniques and toolkits in distributed high performance, high resolution hardware environment. It gives an opportunity to understand the requirements and usefulness of distributed visualization. Given this seemingly advantageous position, the STFC vizNET ≪http://www.viznet.ac.uk/≫ partners have been actively pursuing visualization awareness activities and services aimed at application holders of various scientific disciplines. These activities include holding workshops, hands-on tutorials, show case demonstrations and the setting up of hardware based visualization services with technical support. This report provides details of these activities, the outcomes, the status and some suggestions as to the way forward.
{"title":"Supporting Distributed Visualization Services for High Performance Science and Engineering Applications A Service Provider Perspective","authors":"L. Sastry, R. Fowler, S. Nagella, J. Churchill","doi":"10.1109/CCGRID.2009.94","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.94","url":null,"abstract":"The Science & Technology Facilities Council is home to international Facilities such as the ISIS Neutron Spallation Source ≪http://www.isis.rl.ac.uk≫, Central Laser Facility ≪http://www.clf.rl.ac.uk/≫ and Diamond Light Source ≪http://www.diamond.ac.uk/default.htm≫, the National Grid Service ≪http://www.grid-support.ac.uk/≫ including national super computers ≪http://www.cse.scitech.ac.uk≫, Tier1 data service for CERN particle physics experiment, the British Atmospheric data Centre and the British Oceanographic Data Centre at the Space Science and Technology department. Together these Facilities generate several Terabytes of data per month which needs to be handled, catalogued and provided access to. In addition, the scientists within STFC departments also develop complex simulations and undertake data analysis for their own experiments. Facilities also have strong ongoing collaborations with UK academic and commercial users through their involvement with Collaborative Computational Programme, generating very large simulation datasets. There is thus the need to support high resolution data analysis using distributed compute, data and visualization resources. At the same time, these requirements offer the computational and visualization scientists within STFC unique opportunities to advocate the take up of advanced visualization techniques and toolkits in distributed high performance, high resolution hardware environment. It gives an opportunity to understand the requirements and usefulness of distributed visualization. Given this seemingly advantageous position, the STFC vizNET ≪http://www.viznet.ac.uk/≫ partners have been actively pursuing visualization awareness activities and services aimed at application holders of various scientific disciplines. These activities include holding workshops, hands-on tutorials, show case demonstrations and the setting up of hardware based visualization services with technical support. This report provides details of these activities, the outcomes, the status and some suggestions as to the way forward.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124973855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today, many peer-to-peer (P2P) applications are widely used on the Internet. File sharing, in particular, is a popular P2P application that has, at least partially, replaced the centralized file sharing infrastructure. However, there are still a number of legacy applications that rely on a centralized infrastructure rather than a decentralized approach. In this paper, we present a generic framework for decentralizing legacy applications. Even though we focus especially on Voice over IP (VoIP), email, and web applications, we believe that our framework could also be utilized with other legacy applications. A notable feature of our framework is that it does not require any changes to the legacy applications themselves.
{"title":"Framework for Decentralizing Legacy Applications","authors":"J. Hautakorpi, G. Camarillo, David López","doi":"10.1109/CCGRID.2009.75","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.75","url":null,"abstract":"Today many Peer-to-peer (P2P) applications are widely used on the Internet. Especially file sharing is a popular P2P application that has, at least partially, replaced the centralized file sharing infrastructure. However, there are still a number of legacy applications that utilize a centralized infrastructure as opposed to a decentralized approach. In this paper we present a generic framework for decentralizing legacy applications. Even though we focus especially on Voice over IP (VoIP), email, and web applications, we believe that our framework could also be utilized with other legacy applications. A notable feature of our framework is that it does not require any changes to legacy applications.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122756914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Range query in peer-to-peer networks based on a Distributed Hash Table (DHT) is still an open problem. The traditional approach uses order-preserving hash functions to create value indexes that are placed and stored on the corresponding peers to support range queries. This approach, however, suffers from high index maintenance costs. To avoid this issue, a scalable blind search method over DHTs, recursive partition search (RPS), can be used. However, RPS easily incurs high network overhead as the network size grows. In this paper, a learning-aware RPS (LARPS) is therefore proposed to overcome the disadvantages of the two approaches mentioned above. Extensive experiments show that LARPS is a scalable and robust approach for range queries, especially when a) the query range is wide, b) the requested resources follow a Zipf distribution, and c) the number of required resources is small.
{"title":"Range Query Using Learning-Aware RPS in DHT-Based Peer-to-Peer Networks","authors":"Ze Deng, D. Feng, Ke Zhou, Zhan Shi, Chao Luo","doi":"10.1109/CCGRID.2009.25","DOIUrl":"https://doi.org/10.1109/CCGRID.2009.25","url":null,"abstract":"Range query in Peer-to-Peer networks based on Distributed Hash Table (DHT) is still an open problem. The traditional way uses order-preserving hashing functions to create value indexes that are placed and stored on the corresponding peers to support range query. The way, however, suffers from high index maintenance costs. To avoid the issue, a scalable blind search method over DHTs - recursive partition search (RPS) can be used. But, RPS still easily incurs high network overhead as network size grows. Thus, in this paper, a learning-aware RPS (LARPS) is proposed to overcome the disadvantages of two approaches above mentioned. Extensive experiments show LARPS is a scalable and robust approach for range query, especially in the following cases: a) query range is wide, b) the requested resources follow Zipf distribution, and c) the number of required resources is small.","PeriodicalId":118263,"journal":{"name":"2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125156573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}