Load-balancing methods for parallel and distributed constraint solving
Carl Christian Rolf, K. Kuchcinski
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663786
Program parallelization and distribution become increasingly important as new multi-core architectures and cheaper cluster technology provide ways to improve performance. Using declarative languages, such as constraint programming, can make the transition to parallelism easier for the programmer. In this paper, we address parallel and distributed search in constraint programming (CP) by proposing several load-balancing methods, and we show how these methods improve the execution-time scalability of constraint programs. Scalability is the greatest challenge of parallelism, and it is a particular issue in constraint programming, where load balancing is difficult. We address this problem by proposing CP-specific load-balancing methods and evaluating them on a cluster using benchmark problems. Our experimental results show that the methods perform differently depending on the type of problem and the type of search, giving the programmer the opportunity to optimize performance for a particular problem.
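The abstract does not spell out the load-balancing methods themselves; as a rough illustration of the general idea behind dynamically balancing a CP search tree across workers, here is a minimal single-threaded work-stealing sketch. The `explore` function, the depth-limited binary tree, and the stealing policy are all invented for illustration and are not the authors' actual methods.

```python
import random
from collections import deque

def explore(depth, n_workers=4, seed=1):
    """Toy work-stealing search. Each work item is the remaining depth
    of a binary subtree; a depth-0 item is a leaf (a full assignment).
    Workers pop locally in LIFO order (depth-first) and, when idle,
    steal the oldest (largest) subtree from a busy victim."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(n_workers)]
    queues[0].append(depth)          # root of the search tree
    leaves = 0
    while any(queues):
        for w in range(n_workers):
            if not queues[w]:
                victims = [v for v in range(n_workers) if queues[v]]
                if not victims:
                    continue
                # Steal from the front: the oldest item is the biggest
                # chunk of remaining work, so stealing it is cheapest
                # per unit of work transferred.
                queues[w].append(queues[rng.choice(victims)].popleft())
            d = queues[w].pop()
            if d == 0:
                leaves += 1
            else:
                queues[w].append(d - 1)  # left branch
                queues[w].append(d - 1)  # right branch
    return leaves
```

However the work is distributed, a depth-d tree yields 2^d leaves, so the sketch lets one check that stealing redistributes work without losing or duplicating any.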
A novel model for synthesizing parallel I/O workloads in scientific applications
D. Feng, Qiang Zou, Hong Jiang, Yifeng Zhu
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663778
One of the challenging issues in evaluating the performance of parallel storage systems through synthetic-trace-driven simulation is accurately characterizing the I/O demands of data-intensive scientific applications. This paper analyzes several I/O traces collected from different distributed systems and concludes that correlations in parallel I/O inter-arrival times are inconsistent: some workloads show little correlation while others show evident and abundant correlations. Conventional Poisson or Markov arrival processes are therefore inappropriate for modeling I/O arrivals in some applications. Instead, a new and generic model based on the alpha-stable process is proposed and validated to accurately model parallel I/O burstiness in workloads with both weak and strong correlations. This model can be used to generate reliable synthetic I/O sequences in simulation studies. Experimental results show that the model captures the complex I/O behavior of real storage systems more accurately and faithfully than conventional models, particularly the burstiness characteristics of parallel I/O workloads.
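The paper's fitted model is more elaborate than the abstract reveals, but the heavy-tailed ingredient it names can be sketched with the standard Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates. Taking absolute values to get non-negative inter-arrival times, and the choice of alpha and scale, are illustrative assumptions, not the authors' procedure.

```python
import math
import random

def symmetric_alpha_stable(alpha, rng):
    """One symmetric alpha-stable draw (beta = 0, unit scale) via the
    Chambers-Mallows-Stuck method; alpha = 2 reduces to Gaussian,
    alpha = 1 to Cauchy, and smaller alpha means heavier tails."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)  # Cauchy special case
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def synthetic_interarrivals(n, alpha=1.5, scale=1.0, seed=42):
    """Generate n non-negative synthetic inter-arrival times; the
    heavy tail produces the occasional huge gap between bursts that
    Poisson models cannot reproduce."""
    rng = random.Random(seed)
    return [scale * abs(symmetric_alpha_stable(alpha, rng))
            for _ in range(n)]
```

Lowering `alpha` toward 1 makes the synthetic trace burstier, which is the knob such a model offers over a Poisson process with a single rate parameter.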
A comparison of search heuristics for empirical code optimization
Keith Seymour, Haihang You, J. Dongarra
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663803
This paper describes the application of various search techniques to the problem of automatic empirical code optimization. The search process is a critical aspect of auto-tuning systems because the large size of the search space and the cost of evaluating candidate implementations make it infeasible to find the true optimum by brute force. We evaluate the effectiveness of Nelder-Mead Simplex, Genetic Algorithms, Simulated Annealing, Particle Swarm Optimization, Orthogonal search, and Random search in terms of the performance of the best candidate found under varying time limits.
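The simplest of the heuristics compared, random search under a fixed evaluation budget, can be sketched as follows. The `simulated_runtime` cost function (a smooth bowl with its optimum at tile size 48) is a made-up stand-in for actually timing a candidate kernel, which is the expensive step that makes exhaustive search infeasible.

```python
import random

def simulated_runtime(tile):
    # Hypothetical stand-in for compiling and timing one candidate:
    # a smooth bowl whose minimum cost of 1.0 sits at tile = 48.
    return (tile - 48) ** 2 + 1.0

def random_search(space, budget, seed=0):
    """Evaluate `budget` random candidates and keep the best; with a
    256-point space and a budget of 40, only ~16% of candidates are
    ever timed."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(budget):
        cand = rng.choice(space)
        cost = simulated_runtime(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

space = list(range(1, 257))
best, cost = random_search(space, budget=40)
```

The paper's comparison is essentially about which heuristic spends such a budget most profitably; random search is the baseline the smarter methods must beat.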
A HyperTransport-based personal parallel computer
Xiaojun Yang, Fei Chen, Hailiang Cheng, Ninghui Sun
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663763
Instead of using only commodity components, this paper presents an approach to building a personal parallel computer on top of a non-coherent HyperTransport (HT) fabric. Its advantage is both lower cost and higher performance compared with the existing method. An HT switch is designed and implemented to interconnect a set of AMD Opteron processors into an in-a-box cluster. Evaluation experiments on our prototype system show that this approach delivers better performance.
Environmental-aware optimization of MPI checkpointing intervals
H. Jitsumoto, Toshio Endo, S. Matsuoka
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663790
Fault tolerance is now essential for HPC systems running long applications at massive and growing scale. Although checkpointing with rollback recovery is a popular technique, automated checkpointing is becoming troublesome in real systems because of the extremely large aggregate application memory. Automated optimization of the checkpoint interval is therefore essential, but the optimal point depends on hardware failure rates and I/O bandwidth. Our new model and algorithm, an extension of Vaidya's model, solve this problem by taking such parameters into account. A prototype implementation on our fault-tolerant MPI framework ABARIS showed approximately 5.5% improvement over statically user-determined intervals.
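The trade-off the paper optimizes has a classic first-order form: Young's approximation of the optimal checkpoint interval, sqrt(2 * C * MTBF), where C is the checkpoint cost and MTBF the mean time between failures. The paper's own model (an extension of Vaidya's) folds in more parameters such as measured I/O bandwidth, so the sketch below shows only the shape of the trade-off, not the authors' algorithm.

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation: checkpoint too often and you pay the
    checkpoint cost needlessly; too rarely and each failure loses a
    long stretch of work. The interval balancing the two grows with
    the square root of both the checkpoint cost and the MTBF."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# E.g., a 60 s checkpoint on a machine with a 24 h MTBF:
tau = young_interval(60.0, 24 * 3600.0)   # roughly 3220 s, i.e. ~54 min
```

The formula makes the paper's point concrete: since the optimum depends on failure rate and on how fast the checkpoint can be written, any statically user-determined interval is wrong whenever either parameter drifts.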
Are nonblocking networks really needed for high-end-computing workloads?
N. Desai, P. Balaji, P. Sadayappan, Mohammad Islam
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663766
High-speed interconnects are frequently used to provide scalable communication on increasingly large high-end computing systems. Often these networks are nonblocking: independent paths exist between all pairs of nodes, allowing simultaneous communication with zero network contention. This performance, however, comes at a heavy cost, as the number of components needed (and hence the cost) increases superlinearly with the number of nodes in the system. In this paper, we study the behavior of real and synthetic supercomputer workloads to understand the impact of the network's nonblocking capability on overall performance. Starting from a fully nonblocking network, we assess the worst-case performance degradation caused by removing interstage communication links, which introduces oversubscription and hence potential blocking in the communication network. We also study the impact of several factors on this behavior, including system workloads, multicore processors, and switch crossbar sizes. Our observations show that a significant reduction in the number of interstage links can be tolerated on all of the workloads analyzed, causing less than 5% overall loss of performance.
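To see why removing interstage links is tempting, one can count them in a simple two-level folded-Clos fabric. The topology parameters below (leaf radix, half the ports facing hosts) are illustrative assumptions, not the paper's experimental configuration.

```python
def interstage_links(nodes, leaf_radix, oversubscription=1):
    """Leaf-to-spine link count in a two-level folded-Clos fabric.
    A fully nonblocking build (oversubscription = 1) devotes half of
    each leaf switch's ports to uplinks; an r:1 oversubscribed build
    keeps only 1/r of those uplinks -- the 'removed interstage links'
    the paper evaluates."""
    down = leaf_radix // 2                 # host-facing ports per leaf
    leaves = -(-nodes // down)             # ceiling division
    uplinks_per_leaf = down // oversubscription
    return leaves * uplinks_per_leaf
```

For 512 nodes on 32-port leaves, a nonblocking fabric needs 512 interstage links, while 2:1 oversubscription halves that to 256; the paper's finding is that real workloads pay under 5% for such savings.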
Active storage using object-based devices
Tina Miriam John, Anuradharthi Thiruvenkata Ramani, J. Chandy
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663810
The increasing performance and decreasing cost of processors and memory are causing system intelligence to move from the CPU to peripherals such as disk drives. Storage system designers are using this surplus computation capability to perform more complex processing and optimizations directly inside the storage devices. So far, such optimizations have been performed only at low levels of the storage protocol. Another factor to consider is that current trends in storage density, mechanics, and electronics are eliminating the bottleneck in moving data off the media, putting pressure on interconnects and host processors to move data more efficiently. Previous work on active storage has taken advantage of the extra processing power on individual disk drives to run application-level code. Moving portions of an application's processing to run directly on disk drives can dramatically reduce data traffic and take advantage of the parallel storage already present in large systems today. This paper demonstrates active storage on an iSCSI OSD standards-based object-oriented framework.
Performance models for dynamic tuning of parallel applications on Computational Grids
Genaro Costa, Josep Jorba, A. Sikora, T. Margalef, E. Luque
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663798
Performance is a main concern in parallel application development. Dynamic tuning is a technique that adjusts application parameters at run time to improve performance; it requires collecting measurements, analyzing application behavior using a performance model, and carrying out tuning actions. Computational Grids are prone to dynamic changes in their characteristics during application execution, so dynamic tuning tools are indispensable for reaching the expected performance in those environments. A particular source of performance bottlenecks is load distribution in master/worker applications. This paper addresses the performance modeling of such applications on Computational Grids from the perspective of dynamic tuning. We infer that grain size and the number of workers are the critical parameters for reducing execution time while raising the efficiency of resource usage, and we propose a heuristic to dynamically tune both. Simulation results for a matrix multiplication application in a heterogeneous Grid environment are presented.
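Why the number of workers has an interior optimum at all can be shown with a deliberately simple cost model: parallel compute time shrinks with more workers, while per-worker management cost (dispatch, connections, result merging) grows linearly. The model and its parameters are illustrative assumptions, not the paper's performance model.

```python
def modeled_time(n_tasks, task_time, per_worker_overhead, workers):
    """Hypothetical master/worker cost model: compute divides across
    workers, but each extra worker adds a fixed management cost at
    the master."""
    return (n_tasks / workers) * task_time + workers * per_worker_overhead

def best_workers(n_tasks, task_time, per_worker_overhead, max_workers):
    """Pick the worker count minimizing the model by exhaustive scan.
    The continuous optimum is sqrt(n_tasks * task_time / overhead),
    so adding workers past that point makes the run slower."""
    return min(range(1, max_workers + 1),
               key=lambda w: modeled_time(n_tasks, task_time,
                                          per_worker_overhead, w))
```

For 10000 tasks of 10 ms with 1 s of overhead per worker, the model's optimum is 10 workers; on a Grid, where `task_time` and overheads drift at run time, the optimum drifts with them, which is the case for tuning these parameters dynamically.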
Towards an understanding of the performance of MPI-IO in Lustre file systems
Jeremy S. Logan, P. Dickens
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663791
Lustre is becoming an increasingly important file system for large-scale computing clusters. The problem, however, is that many data-intensive applications use MPI-IO for their I/O requirements, and MPI-IO performs poorly in a Lustre file system environment. While this poor performance has been well documented, its causes are not well understood. Our research suggests that the primary performance issues stem from the assumptions underpinning most of the parallel I/O optimizations implemented in MPI-IO, which do not appear to hold in a Lustre environment. Perhaps the most important assumption is that optimal performance is obtained by performing large, contiguous I/O operations. The research results presented in this poster show that this is often the worst approach to take in a Lustre file system; in fact, the best performance is often achieved when each process performs a series of smaller, non-contiguous I/O requests. We provide experimental results supporting these non-intuitive ideas, along with alternative approaches that significantly enhance the performance of MPI-IO in a Lustre file system.
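One way such smaller, non-contiguous requests can pay off in Lustre is stripe alignment: a file is striped round-robin across object storage targets (OSTs), so partitioning stripes by OST lets each process talk to a disjoint set of servers. The sketch below is an illustrative partitioning scheme, not necessarily the poster's approach; the parameter names (`stripe_size`, `stripe_count`) mirror Lustre's striping vocabulary.

```python
def extents_for_rank(file_size, stripe_size, stripe_count, rank, nprocs):
    """Assign each rank the stripes whose OST index maps to it, so
    every process issues smaller, non-contiguous, stripe-aligned
    requests to its own subset of servers instead of all processes
    hammering every OST with one large contiguous request."""
    extents = []
    n_stripes = -(-file_size // stripe_size)   # ceiling division
    for i in range(n_stripes):
        ost = i % stripe_count                 # round-robin placement
        if ost % nprocs == rank:
            start = i * stripe_size
            extents.append((start, min(start + stripe_size, file_size)))
    return extents
```

Across all ranks the extents tile the file exactly once, each aligned to a stripe boundary, which avoids the lock contention that a single large shared write can trigger on the servers.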
Supporting storage resources in Urgent Computing Environments
Jason Cope, H. Tufo
Pub Date: 2008-10-31. DOI: 10.1109/CLUSTR.2008.4663794
The special priority and urgent computing environment (SPRUCE) provides on-demand access to high-performance computing resources for time-critical applications. While SPRUCE supports computationally intensive applications, it does not fully support high-priority, data-intensive applications. To support data-intensive applications in urgent computing environments, we developed the urgent computing environment data resource manager (CEDAR). CEDAR provides storage resource provisioning capabilities that manage the availability and quality of service of the storage resources used by urgent computing applications. In this paper, we describe the architecture of CEDAR, illustrate how it integrates with urgent computing environments, and evaluate its capabilities in simulated urgent computing environments.