Pub Date: 2020-12-01 | DOI: 10.1109/hipc50609.2020.00012
HiPC 2020 Industry Sponsors
Pub Date: 2020-12-01 | DOI: 10.1109/HiPC50609.2020.00024
A Parallel and Scalable Framework for Insider Threat Detection
Abdoulaye Diop, N. Emad, Thierry Winter
In this article, we propose a novel method for insider threat detection. The method is based on a unite-and-conquer approach for combining ensemble learning techniques, which has the particularity of being intrinsically parallel. Furthermore, it exhibits multi-level parallelism, offers fault tolerance, and is well suited to heterogeneous architectures. To demonstrate the approach's efficacy, we present an insider threat detection use case on a parallel platform. The results of this experiment show the method's benefits in terms of improved classification AUC score and scalability.
Pub Date: 2020-12-01 | DOI: 10.1109/hipc50609.2020.00003
[Copyright notice]
Pub Date: 2020-12-01 | DOI: 10.1109/HiPC50609.2020.00027
Model Checking as a Service using Dynamic Resource Scaling
Surya Teja Palavalasa, Yuvraj Singh, Adhish Singla, Suresh Purini, Venkatesh Choppella
Model checking is now a standard technology for verifying large and complex systems. While there is a range of tools and techniques for verifying various properties of a system under consideration, in this work we restrict our attention to safety checking procedures that use explicit state-space generation. The hardware resources required by this approach depend on the model's complexity and the state transition graph it generates, which cannot be estimated a priori. For reasonably realistic models, the available main memory of even high-end servers may not be sufficient, so we must resort to distributed safety verification over a cluster of nodes. However, the problem of estimating the minimum number of cluster nodes needed for the verification procedure to complete successfully remains unsolved. In this paper, we propose a dynamically scalable model checker built on an actor-based architecture. With the proposed approach, an end user can invoke a model checker hosted on a cloud platform in push-button fashion. Our safety verification procedure automatically expands the cluster by requesting more virtual machines from the cloud provider, and the user pays only for the hardware resources rented for the duration of the verification. We refer to this as Model Checking as a Service. We approach the problem with an asynchronous safety checking algorithm in an actor framework. The actor-based approach allows resources to be scaled on demand and redistributes the workload transparently through state migration. We tested our approach by developing a distributed version of the SpinJA model checker using the Akka actor framework, and we conducted experiments on the Google Compute Engine (GCE) platform, scaling our resources automatically through the GCE API. On large models such as anderson.8 from the BEEM benchmark suite, our approach reduced the model checking cost in dollars by 8.6x while reducing the wall-clock time of the safety checking procedure by 5.5x.
Pub Date: 2020-12-01 | DOI: 10.1109/HiPC50609.2020.00019
SparsePipe: Parallel Deep Learning for 3D Point Clouds
Keke Zhai, Pan He, Tania Banerjee-Mishra, A. Rangarajan, S. Ranka
We propose SparsePipe, an efficient asynchronous parallelism approach for handling 3D point clouds with multi-GPU training. SparsePipe is built to support 3D sparse data such as point clouds. It does so by adopting generalized convolutions with a sparse tensor representation to build expressive high-dimensional convolutional neural networks. Compared to dense solutions, these models can process irregular point clouds efficiently without densely sliding over the entire space, significantly reducing memory requirements and allowing higher resolutions of the underlying 3D volumes for better performance. SparsePipe exploits intra-batch parallelism, which partitions the input data across multiple processors, and further improves training throughput with inter-batch pipelining that overlaps communication and computation. In addition, it partitions the model appropriately when the GPUs are heterogeneous, so that computation is load-balanced with reduced communication overhead. With experimental results on an eight-GPU platform, we show that SparsePipe parallelizes effectively and outperforms dense solutions on current point cloud benchmarks for both training and inference.
Pub Date: 2020-12-01 | DOI: 10.1109/hipc50609.2020.00006
Message from the Program Chairs
B. Uçar, G. Agrawal
Pub Date: 2020-12-01 | DOI: 10.1109/HiPC50609.2020.00018
Exploring Task Parallelism for the Multilevel Fast Multipole Algorithm
Michael P. Lingg, S. Hughey, Doga Dikbayir, B. Shanker, H. Aktulga
The Multi-Level Fast Multipole Algorithm (MLFMA), a variant of the fast multipole method (FMM) for problems with oscillatory potentials, significantly accelerates the solution of problems based on wave physics, such as those in electromagnetics and acoustics. Existing shared-memory parallel approaches for MLFMA have adopted the bulk synchronous parallel (BSP) model. While the BSP approach has served well so far, it is prone to significant thread synchronization overheads and, more importantly, fails to exploit communication/computation overlap opportunities because of MLFMA's complicated data dependencies. In this paper, we develop a task-parallel MLFMA implementation for shared-memory architectures and discuss optimizations to improve its performance. We then evaluate the new task-parallel implementation against a BSP implementation on a number of geometries. Our findings suggest that task parallelism is generally superior to the BSP model and, given its potential advantages over BSP in a hybrid parallel setting, a promising approach to addressing the scalability issues of MLFMA in large-scale computations.
Pub Date: 2020-12-01 | DOI: 10.1109/HiPC50609.2020.00039
PufferFish: NUMA-Aware Work-stealing Library using Elastic Tasks
Vivek Kumar
Due to the challenges of providing adequate memory access to the many cores on a single processor, multi-die and multi-socket multicore systems are becoming mainstream. These systems offer cache-coherent Non-Uniform Memory Access (NUMA) across several memory banks and cache hierarchies to increase memory capacity and bandwidth. Random work-stealing is a widely used technique for dynamic load balancing of tasks on multicore processors, but it scales poorly on such NUMA systems for memory-bound applications because of cache misses and remote memory access latency. The Hierarchical Place Tree (HPT) [1] is a popular approach to improving locality in a task-based parallel programming model, although it requires the programmer to map the dynamically unfolding tasks evenly over the NUMA system. Specifying data-affinity hints provides a more natural way to map tasks than an HPT, yet a scalable work-stealing implementation of this idea remains largely unexplored for modern NUMA systems. This paper presents PufferFish, a new async-finish parallel programming model and work-stealing runtime for NUMA systems that closely couples the data-affinity hints provided for an asynchronous task with the HPTs in the Habanero C/C++ library (HClib). PufferFish introduces Hierarchical Elastic Tasks (HETs), which improve locality by shrinking to run on a single worker inside a place or puffing up across multiple workers, depending on the work imbalance at a particular place in the HPT. We evaluate PufferFish on a set of widely used memory-bound benchmarks exhibiting regular and irregular execution graphs. On these benchmarks, PufferFish achieves geometric mean speedups of 1.5× and 1.9× over the HPT implementation in HClib and random work-stealing in CilkPlus, respectively, on a 32-core NUMA AMD EPYC processor.
Pub Date: 2020-12-01 | DOI: 10.1109/HiPC50609.2020.00033
Algorithms for Preemptive Co-scheduling of Kernels on GPUs
Lionel Eyraud-Dubois, C. Bentes
Modern GPUs allow concurrent kernel execution and preemption to improve hardware utilization and responsiveness. Currently, the decision on the simultaneous execution of kernels is made by the hardware, which can lead to unreasonable use of resources. In this work, we tackle the problem of co-scheduling kernels on GPUs in high-competition scenarios. We propose a novel graph-based preemptive co-scheduling algorithm focused on reducing the number of preemptions. We show that the optimal preemptive makespan can be computed by solving a linear program in polynomial time. Building on this solution, we propose a graph-theoretical model and an algorithm for constructing preemptive schedules that minimize the number of preemptions. We show, however, that finding the minimum number of preemptions among all preemptive solutions of optimal makespan is an NP-hard problem. In experiments on real-world GPU applications, our approach achieves the optimal makespan while preempting only 6 to 9% of the tasks.
Pub Date: 2020-12-01 | DOI: 10.1109/HiPC50609.2020.00017
Performance Optimization and Scalability Analysis of the MGB Hydrological Model
H. Freitas, C. Mendes, A. Ilic
Hydrological models are extensively used in applications such as water resources, climate change, land use, and forecast systems. The focus of this paper is performance optimization of the MGB hydrological model, which is widely employed to simulate water flows in large-scale watersheds. The optimization strategies we selected include AVX-512 vectorization, thread parallelism on multi-core CPUs (OpenMP), and data parallelism on many-core GPUs (CUDA). We conducted experiments with real-world input datasets on state-of-the-art HPC systems based on Intel Skylake CPUs and NVIDIA GPUs. In addition, a Roofline model characterization for these datasets confirmed performance improvements of up to 37.5x on the most time-consuming part of the code and 8.6x on the full MGB model. This work shows that careful optimizations are needed for hydrological models to achieve a significant fraction of the performance potential of modern processors.