Title: On the performance of the POSIX I/O interface to PVFS
Authors: M. Vilayannur, R. Ross, P. Carns, R. Thakur, A. Sivasubramaniam, M. Kandemir
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271463

The ever-increasing performance gap between CPU/memory technologies and the I/O subsystem (disks, I/O buses) in modern workstations has exacerbated the I/O bottlenecks inherent in applications that access large disk-resident data sets. A common technique for alleviating these bottlenecks on clusters of workstations is the use of parallel file systems. One such system is the Parallel Virtual File System (PVFS), a freely available tool for achieving high-performance I/O on Linux-based clusters. Here, we describe the performance and scalability of the UNIX I/O interface to PVFS. We present experimental results using Bonnie++, a commonly used benchmark of file system throughput; a synthetic parallel I/O application that measures aggregate read and write bandwidths; and a synthetic benchmark that measures the time taken to untar the Linux kernel source tree, capturing the performance of a large number of small-file operations. We obtained aggregate read and write bandwidths as high as 550 MB/s with a Myrinet-based network and 160 MB/s with Fast Ethernet.
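The paper's synthetic application measures aggregate bandwidth through the ordinary POSIX interface. A minimal single-client sketch of that kind of measurement is below; the file path, transfer sizes, and function name are illustrative, not taken from the paper, and a real benchmark would use files far larger than any client-side cache.

```python
# Sketch: time large sequential writes and reads through the POSIX
# interface and report MB/s (sizes here are deliberately small).
import os
import tempfile
import time

def measure_bandwidth(path, total_mb=8, chunk_mb=1):
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    # Write phase: stream the file out, then force it to stable storage
    # so the timing includes the actual I/O, not just the page cache.
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    write_bw = total_mb / (time.perf_counter() - t0)

    # Read phase: stream the file back in fixed-size chunks.
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    read_bw = total_mb / (time.perf_counter() - t0)
    os.remove(path)
    return write_bw, read_bw

probe = os.path.join(tempfile.gettempdir(), "bw_probe.bin")
w, r = measure_bandwidth(probe)
print(f"write {w:.0f} MB/s, read {r:.0f} MB/s")
```

In the paper's setting, many such clients run concurrently against PVFS and their individual bandwidths are summed into the aggregate figures quoted above.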
Title: Cooperative software multithreading to enhance utilization of embedded processors for network applications
Authors: C. Albrecht, Rainer Hagenau, Andreas C. Döring
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271459

Multithreading is an effective way to improve the utilization of processor cores in embedded products for networking infrastructures. To make such improvements accessible to processor cores without hardware support for multithreading, we present a concept for efficient software multithreading based on compiler post-pass optimization of the application code. Our approach reduces the overhead of cooperative multithreading context switches at compile time using standard compiler techniques such as context-insensitive analysis. Additionally, register usage is rearranged to shrink the context-switch code by exploiting multiple-load/store instructions. A performance-model analysis shows the benefit of our approach and supports the use of software multithreading to improve processor utilization. We present results from an implementation for the PowerPC ISA (instruction set architecture) using the code of a real network application (iSCSI). We were able to reduce the expected run time of a context switch to as little as 38% of the original.
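The paper's contribution is compiler-generated context-switch code for PowerPC, but the cooperative model it optimizes can be illustrated at a high level with generators: each "thread" runs until it voluntarily yields, so only the state live at the yield point needs saving, which is the analogue of the reduced register set the post-pass computes. This sketch is illustrative only; all names are invented here.

```python
# A minimal cooperative round-robin scheduler: threads are generators,
# and every `yield` is an explicit context-switch point.
from collections import deque

def worker(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield  # cooperative context switch

def run(threads):
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)           # resume until the next yield
            ready.append(t)   # still runnable: back of the queue
        except StopIteration:
            pass              # thread finished

log = []
run([worker("a", 2, log), worker("b", 2, log)])
print(log)  # ['a:0', 'b:0', 'a:1', 'b:1']
```

Because switches happen only at known program points, the scheduler (or, in the paper, the generated switch code) never has to preserve state that is dead at those points.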
Title: The multikey Web cache simulator: a platform for designing proxy cache management techniques
Authors: L. Cárdenas, J. Sahuquillo, A. Pont, J. A. Gil
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271471

Proxy caches have become an important mechanism for reducing latencies. Efficient management techniques that exploit the inherent characteristics of Web objects are essential for good proxy-cache performance. An important segment of the replacement algorithms applied today are multikey algorithms, which use several keys, or object characteristics, to decide which object or objects must be replaced. Most current simulators do not support this feature. In this paper we propose a proxy-cache simulation platform for evaluating multikey Web-object management techniques and algorithms. The platform is coded in a modular way, which allows new algorithms or policy proposals to be implemented easily and robustly. In addition to classical performance metrics such as hit ratio and byte hit ratio, the proposed framework also reports the response time perceived by users.
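The abstract does not specify any particular policy, but the "multikey" idea can be sketched as a replacement decision that scores each cached object on several characteristics at once, say recency, size, and fetch cost, and evicts the lowest-scoring object. The key set, weights, and all names below are hypothetical, chosen only to illustrate what a pluggable multikey policy looks like.

```python
# Hedged sketch of a multikey eviction decision: combine several object
# characteristics into one score and evict the worst-scoring object.
from dataclasses import dataclass

@dataclass
class WebObject:
    url: str
    size: int           # bytes
    last_access: float  # logical clock value of the last hit
    fetch_cost: float   # e.g. measured download time in seconds

def victim(cache, now):
    # Lower score = better eviction candidate: an object that is cheap
    # to refetch per byte and has not been used for a long time.
    def score(obj):
        return obj.fetch_cost / obj.size - (now - obj.last_access) * 1e-6
    return min(cache, key=score)

cache = [
    WebObject("/a", 10_000, 5.0, 0.2),  # large, cheap per byte, stale
    WebObject("/b", 500, 9.0, 0.2),     # small, costly per byte, fresh
]
print(victim(cache, now=10.0).url)  # '/a'
```

In the simulator described by the paper, such a scoring function would be one pluggable module among many, compared against others on hit ratio, byte hit ratio, and user-perceived response time.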
Title: Performance evaluation on grids: directions, issues, and open problems
Authors: Z. Németh, G. Gombás, Z. Balaton
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271458

Grids are semantically different from other distributed systems. Therefore performance analysis, like any other technique, requires careful reconsideration. We analyse the fundamental differences between grids and other systems and point out the special requirements they raise for performance analysis. The main aim is to survey the specific problems, the possible directions, and the existing solutions. As an example, we introduce a monitoring system that is able to meet these requirements.
Title: Workflow principles applied to multi-solution analysis of dependable distributed systems
Authors: Francesco Moscato, N. Mazzocca, V. Vittorini
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271438

Real-world dependable distributed systems are often heterogeneous, not only in their physical composition but also from a modeling and analysis perspective. Indeed, different components may be modeled using the most suitable formalism, and multisolution strategies may be applied to analyze the resulting multiformalism model, since no single solution method is adequate for all submodels. We present the architecture of an extensible multiformalism framework for the modeling and design of dependable distributed systems. We show that the process needed to solve and analyze a model expressed through different formalisms can be described as if it were a business process and executed by means of a workflow engine. We apply the proposed technique to a fault-tolerant remote SCADA (supervisory control and data acquisition) system.
Title: Parallelization of time series forecasting model
Authors: J. Górriz, C. Puntonet, M. Salmerón, R. Martín-Clemente
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271434

We present a parallel neural network (a cross-over prediction model) for statistical time-series learning, implemented in PVM (Parallel Virtual Machine) and MPI (Message Passing Interface) to reduce computation time. Parallelization is achieved in two ways: autoregressive parameters are updated via a genetic algorithm, and the overall prediction function is evaluated by a parallel neural network. PVM allows a heterogeneous collection of networked Unix computers to be viewed by our program as a single parallel computer. We describe different parallel-processor architectures and discuss their computing models.
Title: Optimization techniques for irregular and pointer-based programs
Authors: R. Asenjo, F. Corbera, E. Gutiérrez, M. Navarro, O. Plata, E. Zapata
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271420

Current compilers show inefficiencies when optimizing complex applications, both in analyzing dependences and in exploiting critical performance factors such as data locality and instruction/thread parallelism. Complex applications usually involve irregular and/or dynamic (pointer-based) computational and data structures. By irregular we mean applications that arrange data as multidimensional arrays and issue memory references through array indirections. Pointer-based applications, on the other hand, organize data as pointer-based structures (lists, trees, etc.) and issue memory references through pointers. We discuss optimization/parallelization and program-analysis techniques we have developed to instruct a compiler to generate efficient object code for important classes of irregular and pointer-based applications. These techniques are embodied in a methodology that proceeds in three stages: program structure recognition, data analysis, and program optimization/parallelization based on code/data transformations.
Title: An approach to massively distributed aggregate computing on peer-to-peer networks
Authors: Márk Jelasity, W. Kowalczyk, M. Steen
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271446

The emergence of the Internet as a computing platform increases the demand for new classes of algorithms that combine massively distributed processing with complete decentralization. Moreover, these algorithms should be able to execute in an environment that is heterogeneous, changes almost continuously, and consists of millions of nodes. One important class of algorithms for such environments is aggregate computing: computing aggregates of attributes such as extremal values, mean, and variance. These algorithms typically find application in distributed data mining and systems management. We present novel, massively scalable, fully decentralized algorithms for computing aggregates, and substantiate our scalability claims through simulations and theoretical analysis.
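The abstract gives no pseudocode, but a standard fully decentralized scheme in this family is push-pull averaging gossip: each round, every node averages its value with a randomly chosen peer's, and all values converge to the global mean while the total "mass" is conserved. The sketch below simulates that on a single machine and assumes uniform random peer sampling; it is an illustration of the general technique, not the paper's specific algorithms.

```python
# Simulated push-pull averaging gossip: after enough rounds every node
# holds (approximately) the mean of the initial values.
import random

def gossip_average(values, rounds=50, seed=1):
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)  # random peer (uniform sampling assumed)
            # Pairwise average conserves the sum, so the global mean
            # is invariant while the variance shrinks every round.
            v[i] = v[j] = (v[i] + v[j]) / 2
    return v

vals = gossip_average([0.0, 10.0, 20.0, 30.0])
print(round(sum(vals) / len(vals), 6))  # 15.0
```

Because every step is a purely local pairwise exchange, the scheme needs no coordinator and tolerates the churn and heterogeneity the abstract describes; other aggregates (min, max, variance) follow from the same exchange pattern with different update rules.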
Title: The effect of the degree of multistage interconnection networks on their performance: the case of delta and over-sized delta networks
Authors: A. C. Aljundi, J. Dekeyser
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271430

Interconnection network performance is a key factor in constructing parallel computers. Today's technology makes it possible to build and use crossbars of sizes up to 128. Crossbars can serve as switching elements (SEs) in parallel-architecture interconnection systems such as multistage interconnection networks (MINs). A MIN is usually defined, among other properties, by its topology, and one of the factors defining the topology is its degree: the size of the SEs of which it is composed. We study the influence of the degree on the performance of two classes of MINs: the well-known delta networks and a subclass of this family called over-sized delta networks. This study will serve as a basis for future work evaluating MINs as an interconnection medium for symmetric multiprocessors.
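The degree trade-off can be made concrete with the standard structure of a delta network: an N-port network built from d x d SEs needs ceil(log_d N) stages of N/d SEs each, so a larger degree means fewer, bigger switching elements. This is textbook delta-network structure, not a cost model from the paper.

```python
# Stage count and total SE count of an N-port delta network built
# from degree-d (d x d) switching elements.
def delta_network(n_ports, degree):
    # Integer computation of ceil(log_d(n_ports)) without floating point.
    stages, reach = 0, 1
    while reach < n_ports:
        reach *= degree
        stages += 1
    ses_per_stage = n_ports // degree
    return stages, stages * ses_per_stage

for d in (2, 4, 8):
    print(d, delta_network(64, d))
# 2 (6, 192)
# 4 (3, 48)
# 8 (2, 16)
```

Fewer stages reduce per-packet latency, but each larger SE is internally more complex, which is exactly the performance trade-off the paper studies empirically.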
Title: Improving cache locality with blocked array layouts
Authors: Evangelia Athanasaki, N. Koziris
Published in: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004. Proceedings.
Pub Date: 2004-03-08 | DOI: 10.1109/EMPDP.2004.1271460

Minimizing cache misses is one of the most important factors in reducing the average latency of memory accesses. Tiled codes modify the instruction stream to exploit cache locality for array accesses. Here, we further reduce cache misses by restructuring the memory layout of the multidimensional arrays accessed by tiled code. In our method, array elements are stored in a blocked fashion, exactly as they are swept by the tiled instruction stream. We present a straightforward way to translate multidimensional array indices into the blocked memory layout using simple binary-mask operations. Indices for such layouts are calculated with the algebra of dilated integers, similarly to Morton-order indexing. Experimental results using matrix multiplication and LU decomposition on arrays of various sizes show that execution time improves greatly when tiled code is combined with tiled array layouts and binary-mask-based index-translation functions. Simulations with the SimpleScalar tool verify that the enhanced performance is due to a considerable reduction in total cache misses.
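The dilated-integer idea the abstract mentions can be shown concretely for the Morton-order case: each array index has its bits spread apart ("dilated"), and the row and column dilations are OR-ed together to give the element's offset in the blocked layout. The helper names and bit widths below are illustrative; the paper's own masks and layouts may differ.

```python
# Sketch of binary-mask index translation into a blocked (Morton-order)
# layout for a square 2^n x 2^n array.
def dilate(x, bits=16):
    """Spread the bits of x apart: 0b101 -> 0b010001 (a dilated integer)."""
    r = 0
    for i in range(bits):
        r |= ((x >> i) & 1) << (2 * i)
    return r

def morton_index(row, col):
    """Interleave row and column bits; recursively, every 2^k x 2^k
    block of the array becomes contiguous in memory."""
    return (dilate(row) << 1) | dilate(col)

# The four elements of the top-left 2x2 block map to consecutive offsets:
print([morton_index(r, c) for r in range(2) for c in range(2)])  # [0, 1, 2, 3]
```

In a tiled loop nest, incrementing a dilated index can be done with a handful of mask-and-add operations rather than recomputing the interleave, which is why the translation is cheap enough to pay off in the matrix-multiplication and LU kernels the paper measures.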