For applications such as 3D seismic migration, improving I/O performance within a cluster computing system is critically important. Seismic data processing applications are I/O intensive: a large 3D data volume cannot be held entirely in memory, so the input data files must be divided into many fine-grained chunks. Intermediate results are written out at various stages during execution, and final results are written out by the master process. This paper describes a novel approach to optimizing the parallel I/O data access strategy and load balancing for this program model. The optimization, based on an application-defined API, reduces the number of I/O operations and the amount of communication compared with the original model. This is done by forming groups of threads with "group roots" that read input data (determined by an index retrieved from the master process) and then send it to their group members; in the original model, each process/thread reads the whole input data and outputs its own results. Loads are also balanced through online dynamic scheduling of the access requests that process the migration data. In performance tests, the improvement over the original model often exceeds 60%.
{"title":"A Task-Pool Parallel I/O Paradigm for an I/O Intensive Application","authors":"Jianjiang Li, Lin Yan, Zhe Gao, D. Hei","doi":"10.1109/ISPA.2009.20","DOIUrl":"https://doi.org/10.1109/ISPA.2009.20","url":null,"abstract":"In regards to applications like 3D seismic migration, it is quite important to improve the I/O performance within an cluster computing system. Such seismic data processing applications are the I/O intensive applications. For example, large 3D data volume cannot be hold totally in computer memories. Therefore the input data files have to be divided into many fine-grained chunks. Intermediate results are written out at various stages during the execution, and final results are written out by the master process. This paper describes a novel manner for optimizing the parallel I/O data access strategy and load balancing for the above-mentioned particular program model. The optimization, based on the application defined API, reduces the number of I/O operations and communication (as compared to the original model). This is done by forming groups of threads with \"group roots\", so to speak, that read input data (determined by an index retrieved from the master process) and then send it to their group members. In the original model, each process/thread reads the whole input data and outputs its own results. Moreover the loads are balanced, for the on-line dynamic scheduling of access request to process the migration data. 
Finally, in the actual performance test, the improvement of performance is often more than 60% by comparison with the original model.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122467916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
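The group-root idea above can be sketched with a small I/O-operation count model. This is a minimal illustration, not the authors' implementation; the function names and the uniform-group assumption are mine.

```python
# Sketch of the group-root strategy: instead of every worker reading each
# chunk from disk itself, one "group root" per group reads it and forwards
# copies to its members, trading disk reads for intra-group sends.

def io_ops_original(num_workers, num_chunks):
    # Original model: every worker reads every needed chunk itself.
    return num_workers * num_chunks

def io_ops_grouped(num_workers, num_chunks, group_size):
    # Grouped model: one read per group per chunk, plus one send per
    # non-root member per chunk.
    num_groups = -(-num_workers // group_size)  # ceiling division
    reads = num_groups * num_chunks
    sends = (num_workers - num_groups) * num_chunks
    return reads, sends

if __name__ == "__main__":
    reads, sends = io_ops_grouped(num_workers=32, num_chunks=100, group_size=8)
    print(reads, sends)  # 400 reads instead of 3200, at the cost of 2800 sends
```

With 32 workers in groups of 8, disk reads fall by the group-size factor, which is where the reduced I/O-operation count in the abstract comes from.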
A motif is an overrepresented pattern in a biological sequence, and motif finding is an important problem in bioinformatics. Because of the high computational complexity of motif finding, ever more computational capability is required as available biological data, such as gene transcription data, grows rapidly. Among the many motif finding algorithms, Gibbs sampling is an effective method for finding long motifs. In this paper we present an improved Gibbs sampling method on graphics processing units (GPUs) to accelerate motif finding. Experimental data show that, compared with traditional CPU programs, our GPU program provides an effective and low-cost solution to the motif finding problem, especially for long motifs.
{"title":"A Parallel Gibbs Sampling Algorithm for Motif Finding on GPU","authors":"Linbin Yu, Yun Xu","doi":"10.1109/ISPA.2009.88","DOIUrl":"https://doi.org/10.1109/ISPA.2009.88","url":null,"abstract":"Motif is overrepresented pattern in biological sequence and Motif finding is an important problem in bioinformatics. Due to high computational complexity of motif finding, more and more computational capabilities are required as the rapid growth of available biological data, such as gene transcription data. Among many motif finding algorithms, Gibbs sampling is an effective method for long motif finding. In this paper we present an improved Gibbs sampling method on graphics processing units (GPU) to accelerate motif finding. Experimental data support that, compared to traditional programs on CPU, our program running on GPU provides an effective and low-cost solution for motif finding problem, especially for long Motif finding.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126988543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuri Nishikawa, M. Koibuchi, Masato Yoshimi, Akihiro Shitara, K. Miura, H. Amano
ClearSpeed's CSX600, which consists of 96 Processing Elements (PEs), employs a one-dimensional array topology for simple SIMD processing. To clearly show the performance factors and practical issues of NoCs in an existing modern many-core SIMD system, this paper measures and analyzes the CSX600's NoCs, called Swazzle and ClearConnect. Evaluation and analysis show that sending and receiving overheads are the major factors limiting effective network bandwidth. We found that (1) the number of PEs used, (2) the size of the transferred data, and (3) the data alignment of the shared memory are the three main factors in making the best use of bandwidth. In addition, we estimated the best- and worst-case latencies of data transfers in parallel applications.
{"title":"Performance Analysis of ClearSpeed's CSX600 Interconnects","authors":"Yuri Nishikawa, M. Koibuchi, Masato Yoshimi, Akihiro Shitara, K. Miura, H. Amano","doi":"10.1109/ISPA.2009.102","DOIUrl":"https://doi.org/10.1109/ISPA.2009.102","url":null,"abstract":"ClearSpeed's CSX600 that consists of 96 Processing Elements (PEs) employs a one-dimensional array topology for a simple SIMD processing. To clearly show the performance factors and practical issues of NoCs in an existing modern many-core SIMD system, this paper measures and analyzes NoCs of CSX600 called Swazzle and ClearConnect. Evaluation and analysis results show that the sending and receiving overheads are the major limitation factors to the effective network bandwidth. We found that (1) the number of used PEs, (2) the size of transferred data, and (3) data alignment of a shared memory are three main points to make the best use of bandwidth. In addition, we estimated the best- and worst-case latencies of data transfers in parallel applications.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131193361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents P-Cache, a prioritized caching service for storage servers that serve multiple concurrently accessing applications with diverse access patterns and unequal importance. Given the replacement algorithm and the application access patterns, the end performance of each individual application in a shared cache is determined by its allocated cache resources. P-Cache therefore adopts a dynamic partitioning approach to explicitly divide cache resources among applications, and uses a global cache allocation policy to make adaptive allocations that guarantee the preset relative caching priorities among competing applications. We have implemented P-Cache in Linux kernel 2.6.18 as a pseudo device driver and measured its performance using synthetic benchmarks and real-life workloads. The experimental results show that the prioritized caching service provided by P-Cache can be used not only to support application priorities but also to improve overall storage system performance. Its runtime overhead is also smaller than that of the Linux page cache.
{"title":"P-Cache: Providing Prioritized Caching Service for Storage System","authors":"Xiaoxuan Meng, Chengxiang Si, Wenwu Na, Lu Xu","doi":"10.1109/ISPA.2009.40","DOIUrl":"https://doi.org/10.1109/ISPA.2009.40","url":null,"abstract":"P-Cache to provide prioritized caching service for storage server which is used to serve multiple concurrently accessing applications with diverse access patterns and unequal importance. Given the replacement algorithm and the application access patterns, the end performance of each individual application in a shared cache is actually determined by its allocated cache resource. So, P-Cache adopts a dynamic partitioning approach to explicitly divide cache resource among applications and utilizes a global cache allocation policy to make adaptive cache allocations to guarantee the preset relative caching priority among competing applications. We have implemented P-Cache in Linux kernel 2.6.18 as a pseudo device driver and measured its performance using synthetic benchmark and real-life workloads. The experiment results show that the prioritized caching service provided by P-Cache can not only be used to support application priority but can also be utilized to improve the overall storage system performance. Its runtime overhead is also smaller compared with Linux page cache.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125942667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
He Huang, Lei Liu, Nan Yuan, Wei Lin, Fenglong Song, Junchao Zhang, Dongrui Fan
Efficient support for cache coherence is extremely important in the design and implementation of many-core processors. In this paper, we propose a synchronization-based coherence (SBC) protocol to efficiently support cache coherence for shared-memory many-core architectures. The unique feature of our scheme is that it does not use a directory at all. Inspired by the scope consistency memory model, our protocol maintains coherence at synchronization points. Within a critical section, processor cores record write-sets (the lines written in the critical section) using a Bloom filter. When a core releases the lock, the write-set is transferred to a synchronization manager. When another core acquires the same lock, it obtains the write-set from the synchronization manager and invalidates stale data in its local cache. Experimental results show that SBC outperforms a directory-based protocol by an average of 5% in execution time across a suite of scientific applications.
{"title":"A Synchronization-Based Alternative to Directory Protocol","authors":"He Huang, Lei Liu, Nan Yuan, Wei Lin, Fenglong Song, Junchao Zhang, Dongrui Fan","doi":"10.1109/ISPA.2009.25","DOIUrl":"https://doi.org/10.1109/ISPA.2009.25","url":null,"abstract":"The efficient support of cache coherence is extremely important to design and implement many-core processors. In this paper, we propose a synchronization-based coherence (SBC) protocol to efficiently support cache coherence for shared memory many-core architectures. The unique feature of our scheme is that it doesn’t use directory at all. Inspired by scope consistency memory model, our protocol maintains coherence at synchronization point. Within critical section, processor cores record write-sets (which lines have been written in critical section) with bloom-filter function. When the core releases the lock, the write-set is transferred to a synchronization manager. When another core acquires the same lock, it gets the write-set from the synchronization manager and invalidates stale data in its local cache. Experimental results show that the SBC outperforms by averages of 5% in execution time across a suite of scientific applications. 
At the mean time, the SBC is more cost-effective comparing to directory-based protocol that requires large amount of hardware resource and huge design verification effort.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"37 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114013021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
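The Bloom-filter write-set mechanism described above can be sketched as follows. This is an illustrative model in software, not the paper's hardware design; the class names, hash construction, and parameters are my assumptions.

```python
import hashlib

class WriteSet:
    """Bloom-filter write-set: cores record written cache-line addresses
    compactly; on lock hand-off the acquirer invalidates any local line
    that *may* be in the releaser's write-set. False positives cause
    harmless extra invalidations, never missed ones."""

    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.field = 0  # the bit field, as a Python integer

    def _positions(self, line_addr):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{line_addr}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def record(self, line_addr):
        for pos in self._positions(line_addr):
            self.field |= 1 << pos

    def may_contain(self, line_addr):
        return all(self.field >> pos & 1 for pos in self._positions(line_addr))

def invalidate_stale(local_cache, writeset):
    """On lock acquire: drop every local line possibly written by the releaser."""
    return {a: v for a, v in local_cache.items()
            if not writeset.may_contain(a)}
```

Because the filter only over-approximates the write-set, coherence is preserved at the cost of occasional unnecessary invalidations, which is why no directory is needed.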
Yongbin Zhou, Junchao Zhang, Shuai Zhang, Nan Yuan, Dongrui Fan
To date, most many-core prototypes employ tiled topologies connected through on-chip networks. The throughput and latency of the on-chip networks often become the bottleneck to achieving peak performance, especially for communication-intensive applications. Most studies focus only on the on-chip networks themselves, such as routing algorithms or router micro-architecture, to improve these metrics. The salient aspect of our approach is that we provide a data management framework that implements highly efficient on-chip traffic across the whole many-core system. The major contributions of this paper are: (1) a novel tiled many-core architecture that supports software-controlled on-chip data storage and movement management; (2) the identification of asynchronous bulk data transfer as an effective mechanism for tolerating the latency of 2-D mesh on-chip networks. Finally, we evaluate a 1-D FFT algorithm on the framework; its performance reaches 47.6 Gflops with 24.8% computational efficiency.
{"title":"Data Management: The Spirit to Pursuit Peak Performance on Many-Core Processor","authors":"Yongbin Zhou, Junchao Zhang, Shuai Zhang, Nan Yuan, Dongrui Fan","doi":"10.1109/ISPA.2009.22","DOIUrl":"https://doi.org/10.1109/ISPA.2009.22","url":null,"abstract":"to date, most of many-core prototypes employ tiled topologies connected through on-chip networks. The throughput and latency of the on-chip networks usually become to the bottleneck to achieve peak performance especially for communication intensive applications. Most of studies are focus on on-chip networks only, such as routing algorithms or router micro-architecture, to improve the above metrics. The salient aspect of our approach is that we provide a data management framework to implement high efficient on-chip traffic based on overall many-core system. The major contributions of this paper include that: (1) providing a novel tiled many-core architecture which supports software controlled on-chip data storage and movement management; (2) identifying that the asynchronous bulk data transfer mechanism is an effective method to tolerant the latency of 2-D mesh on-chip networks. At last, we evaluate the 1-D FFT algorithm on the framework and the performance achieves 47.6 Gflops with 24.8% computation efficiency.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128737491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtualization has become an active research area in recent years, and virtualization technology brings convenience to the management of computing resources. Together with the development of networks and network computing, virtualization finds ever more application scenarios, and cloud computing likewise builds on virtualization technology. As the technology develops, it faces security problems such as rootkit attacks and malicious tampering: malicious programs can be planted into the system and booted at any time in the virtualized system. There is little theoretical research on booting a trusted virtualized system. We propose an active trusted model that provides a theoretical basis not only for analyzing the state of a virtualized system but also for designing trusted virtual machine applications. TBoot is a project for booting a trusted virtual machine; we use our model to show theoretically that TBoot can boot a trusted virtual machine.
{"title":"An Active Trusted Model for Virtual Machine Systems","authors":"Wentao Qu, Minglu Li, Chuliang Weng","doi":"10.1109/ISPA.2009.68","DOIUrl":"https://doi.org/10.1109/ISPA.2009.68","url":null,"abstract":"Virtualization is a new area for research in recent years, and virtualization technology can bring convenience to the management of computing resources. Together with the development of the network and the network computing, it gives the virtualization technology more scenarios. The cloud computing technology uses the virtualization technology as while. With the development of the technology, it meets some security problems, such as rootkit attacks and malignant tampers. Malicious programs can plug into the system, and be booted at the any time of the virtualized system. There is little theoretical research on booting a trusted virtualized system. We propose an active trusted model in order to give a theoretical model for not only analyzing the state of a virtualized system, but also helping to design trusted virtual machine application. TBoot is a project to boot a trusted virtual machine. We use our model to illustrate that TBoot can boot a trusted virtual machine theoretically.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133707470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Junguo Li, Gang Huang, Xingrun Chen, Franck Chauvel, Hong Mei
Dynamic reconfiguration support in application servers is a solution to the demand for flexible and adaptive component-based applications. However, when an application is reconfigured, its fault-tolerance mechanism should be reconfigured as well. This is one of the crucial problems that must be solved before a fault-tolerant application can be dynamically reconfigured at runtime. This paper proposes a fault-tolerant sandbox to support reconfigurable fault-tolerance mechanisms on application servers. We present how the sandbox integrates multiple error detection and recovery mechanisms, and how these mechanisms are reconfigured at runtime, especially coordinated recovery mechanisms. We implement a prototype and perform a set of controlled experiments to demonstrate the sandbox's capabilities.
{"title":"Supporting Reconfigurable Fault Tolerance on Application Servers","authors":"Junguo Li, Gang Huang, Xingrun Chen, Franck Chauvel, Hong Mei","doi":"10.1109/ISPA.2009.57","DOIUrl":"https://doi.org/10.1109/ISPA.2009.57","url":null,"abstract":"Dynamic reconfiguration support in application servers is a solution to meet the demands for flexible and adaptive component-based applications. However, when an application is reconfigured, its fault-tolerant mechanism should be reconfigured either. This is one of the crucial problems we have to solve before a fault-tolerant application is dynamically reconfigured at runtime. This paper proposes a fault-tolerant sandbox to support the reconfigurable fault-tolerant mechanisms on application servers. We present how the sandbox integrates multiple error detection and recovery mechanisms, and how to reconfigure these mechanisms at runtime, especially for coordinated recovery mechanisms. We implement a prototype and perform a set of controlled experiments to demonstrate the sandbox’s capabilities.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122661217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In an information era of ubiquitous computing, computing resources are susceptible to attack when the security of network systems is not guaranteed. It is necessary and desirable for a network system to employ powerful safeguards against diverse vulnerabilities. In this paper, we present reverse analysis and vulnerability detection for network system software (RAVDNSS), a novel approach that uses reverse analysis and vulnerability detection technologies to address security problems in critical network systems. The adaptive reverse analysis we propose uncovers potential vulnerabilities that might be abused by unauthorized and unlawful parties. A new vulnerability detection model is designed to provide safety precautions by detecting vulnerabilities and monitoring program behaviors. Our investigation aims to improve the ability to guard network systems against malicious attacks. The proposed schemes demonstrate that our approach can effectively perform security detection and management of network system software.
{"title":"Reverse Analysis and Vulnerability Detection for Network System Software","authors":"Wei Pan, Weihua Li","doi":"10.1109/ISPA.2009.73","DOIUrl":"https://doi.org/10.1109/ISPA.2009.73","url":null,"abstract":"In information era advocating ubiquitous computing, computing resources are susceptible to attack without guaranteeing security of network system. It is necessary and desirable for network system to employ powerful safeguard to protect itself against diversified vulnerabilities. In this paper, we present reverse analysis and vulnerability detection for network system software (RAVDNSS), a novel approach which uses reverse analysis and vulnerability detection technologies to deal with security problems on critical network system. Adaptive reverse analysis we propose is used to dig out potential vulnerabilities, which might be abused by unauthorized and unlawful communities. A new vulnerability detection model is designed to provide safety precautions through detecting vulnerabilities and monitoring program behaviors. Our investigation aims to improve the ability to guard network system against malicious attacks. The proposed schemes demonstrate that our approach can effectively perform security detection and management of network system software.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117205350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents the concepts of self, nonself, antibody, antigen, and vaccine in an in-depth-defense system for network security, and proposes an architecture for in-depth defense against network intrusion based on immune principles. Intrusion information gathered from the currently monitored network is encapsulated and sent to neighboring networks as a bacterin (vaccine), so that the neighboring networks can use it to predict danger. A communication agent cooperating with a response agent achieves an active defense framework. The experimental results show that the new model not only realizes an active prevention method but also improves the ability of intrusion detection and prevention compared with traditional passive intrusion prevention systems.
{"title":"A Method of In-Depth-Defense for Network Security Based on Immunity Principles","authors":"Yaping Jiang, Jianhua Zhou, Yong Gan, Zengyu Cai","doi":"10.1109/ISPA.2009.65","DOIUrl":"https://doi.org/10.1109/ISPA.2009.65","url":null,"abstract":"The concepts of self, nonself, antibody, antigen and vaccine in in-Depth-Defense system for Network Security was presented in this paper, the architecture of in-Depth Defense for network intrusion and detection based on immune principle is proposed. The intrusion information gotten from current monitored network is encapsulated and sent to the neighbor network as bacterin; therefore the neighbor network can make use of the bacterin and predict the danger of network. We can use communicate agent cooperated with response agent to achieve active defense formwork. The experimental results show that the new model not only actualizes an active prevention method but also improves the ability of intrusion detection and prevention than that of the traditional passive intrusion prevention systems","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121113128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}