Title: ExanaDBT: A Dynamic Compilation System for Transparent Polyhedral Optimizations at Runtime
Authors: Yukinori Sato, Tomoya Yuki, Toshio Endo
DOI: 10.1145/3075564.3077627
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

In this paper, we present ExanaDBT, a dynamic compilation system that transparently optimizes and parallelizes binaries at runtime based on the polyhedral model. Starting from hot-spot detection during execution, ExanaDBT dynamically estimates the gains from optimization, translates the target region into highly optimized code, and switches execution from the original code to the optimized version. To realize advanced loop-level optimizations beyond the trace or instruction level, ExanaDBT uses a polyhedral optimizer and performs loop transformations that deliver sustainable performance gains on systems with deep memory hierarchies. Toward successful optimization, we show that a simple conversion from the original binary to LLVM IR is not sufficient to represent the code in the polyhedral model, and we then investigate a feasible way to lift binaries into an IR amenable to polyhedral optimization. We implement and evaluate a proof-of-concept design of ExanaDBT. The evaluation confirms that ExanaDBT realizes dynamic optimization in a fully automated fashion, and that it speeds up execution over unoptimized serial code by 3.2x on average in single-threaded execution and by 11.9x with 16 threads.
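To make the kind of loop transformation a polyhedral optimizer performs concrete, the sketch below tiles a naive matrix-multiply loop nest for cache locality. This is an illustrative example of the transformation class, not ExanaDBT's actual generated code; the tile size is arbitrary.

```python
# Naive matrix multiply: the "hot spot" a dynamic optimizer would detect.
def matmul_naive(A, B, n):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# The same iteration domain, reordered into cache-friendly tiles --
# one of the classic transformations a polyhedral optimizer applies.
def matmul_tiled(A, B, n, tile=4):
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        for k in range(kk, min(kk + tile, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

Both variants compute identical results; only the traversal order of the iteration space changes, which is what makes such transformations safe to apply automatically when the polyhedral dependence analysis succeeds.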
Title: Evolution of Friendship: a case study of MobiClique
Authors: Jooyoung Lee, Konstantin Lopatin, Rasheed Hussain, Waqas Nawaz
DOI: 10.1145/3075564.3075595
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

Understanding the evolution of relationships among users through generic interactions is the key driving force behind this study. We model the evolution of friendship in the MobiClique social network using observations of interactions among users. MobiClique is a mobile ad-hoc network setting in which Bluetooth-enabled mobile devices communicate directly with each other as they meet opportunistically. We first apply existing topological methods to predict future friendship in MobiClique and then compare the results with the proposed interaction-based method. Our approach combines four types of user activity information to measure the similarity between users at any specific time. We also define a temporal accuracy evaluation metric and show that interaction data with temporal information is a good indicator for predicting temporal social ties. The experimental evaluation suggests that the well-known static topological metrics do not perform well in the ad-hoc network scenario. The results suggest that to accurately predict the evolution of friendship, or the topology of the network, it is necessary to utilise some interaction information.
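The abstract does not give the authors' exact similarity formulation, but a minimal sketch of an interaction-based similarity score of this general shape might combine several interaction types with a weighted set overlap. The interaction-type names and equal weights below are illustrative assumptions, not the paper's definition.

```python
# Hypothetical interaction-based similarity: a weighted combination of
# Jaccard overlaps across several interaction types. Type names and
# weights are made up for illustration.
def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def interaction_similarity(u, v, weights=None):
    """u, v: dicts mapping interaction type -> set of peers/items seen."""
    types = ["messages", "contacts", "forwards", "proximity"]
    weights = weights or {t: 0.25 for t in types}
    return sum(weights[t] * jaccard(u.get(t, ()), v.get(t, ())) for t in types)
```

Evaluating such a score over sliding time windows is one natural way to obtain the temporal predictions the paper compares against static topological metrics.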
Title: Selective off-loading to Memory: Task Partitioning and Mapping for PIM-enabled Heterogeneous Systems
Authors: Dawen Xu, Yi Liao, Ying Wang, Huawei Li, Xiaowei Li
DOI: 10.1145/3075564.3075584
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

Processing-in-Memory (PIM) is returning as a promising solution to the memory-wall problem as computing systems step into the big-data era. Researchers have continually proposed PIM architectures that combine novel memory devices or 3D integration technology, but a universal task scheduling method for these new heterogeneous platforms is still lacking. In this paper, we propose a formalized model to quantify the performance and energy of a PIM+CPU heterogeneous parallel system. In addition, we are the first to build a task partitioning and mapping framework that exploits different PIM engines. In this framework, an application is divided into subtasks that are mapped onto appropriate execution units by the proposed PIM-oriented Earliest-Finish-Time (PEFT) algorithm to maximize the performance gains brought by PIM. Experimental evaluations show that our PIM-aware framework significantly improves system performance compared to conventional processor architectures.
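The core of an earliest-finish-time mapping can be sketched in a few lines: each subtask is placed on whichever unit (CPU or a PIM engine) would complete it soonest. This is a generic EFT sketch over independent tasks with made-up costs; the paper's PEFT algorithm additionally models task dependencies and PIM-specific costs.

```python
# Generic earliest-finish-time mapping over independent subtasks.
# tasks: list of {unit_name: exec_time}; units: list of unit names.
def eft_schedule(tasks, units):
    ready = {u: 0.0 for u in units}   # time at which each unit frees up
    assignment = []
    for costs in tasks:
        # Pick the unit with the earliest finish time for this task.
        best = min(units, key=lambda u: ready[u] + costs[u])
        ready[best] += costs[best]
        assignment.append(best)
    return assignment, max(ready.values())
```

For example, a task that is 4x faster on a PIM engine than on the CPU is off-loaded to memory, while CPU-friendly tasks stay put, which is the "selective off-loading" the title refers to.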
Title: Instruction level energy model for the Adapteva Epiphany multi-core processor
Authors: Gabriel Ortiz, L. Svensson, Erik Alveflo, P. Larsson-Edefors
DOI: 10.1145/3075564.3078892
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

Processor energy models let developers estimate the power consumption of software applications without a hardware implementation or additional measurement setups. Such energy models can also be used for energy-aware compiler optimization. This paper presents a measurement-based instruction-level energy characterization of the Adapteva Epiphany processor, a 16-core shared-memory architecture connected by a 2D network-on-chip. Based on a number of microbenchmarks, the instruction-level characterization was used to build an energy model that covers essential Epiphany instructions such as remote memory loads and stores. To validate the model, an FFT application was developed; the validation showed that the energy estimated by the model is within 0.4% of the measured energy.
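The basic shape of an instruction-level energy model is a weighted sum: total energy is each instruction class's dynamic count times its characterized per-instruction cost. The coefficients below are placeholder values, not the paper's measured Epiphany numbers; remote accesses are given a higher cost only to illustrate that crossing the network-on-chip is more expensive.

```python
# Hypothetical nJ-per-instruction coefficients (NOT the paper's values).
ENERGY_NJ = {
    "ialu": 0.05,
    "fmadd": 0.12,
    "load_local": 0.20,
    "load_remote": 0.90,   # remote loads traverse the 2D NoC
    "store_remote": 0.75,
}

def estimate_energy_nj(instr_counts):
    """instr_counts: dict mapping instruction class -> dynamic count."""
    return sum(ENERGY_NJ[op] * n for op, n in instr_counts.items())
```

Fitting the coefficients from microbenchmarks that isolate each instruction class, and then summing over a real workload's instruction mix, is what makes such a model usable without any per-application measurement setup.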
Title: Optimal On-Line Computation of Stack Distances for MIN and OPT
Authors: G. Bilardi, K. Ekanadham, P. Pattnaik
DOI: 10.1145/3075564.3075571
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

The replacement policies known as MIN and OPT are optimal for a two-level memory hierarchy. The computation of the cache content for these policies requires the off-line knowledge of the entire address trace. However, the stack distance of a given access, that is, the smallest capacity of a cache for which that access results in a hit, is independent of future accesses and can be computed on-line. Off-line and on-line algorithms to compute the stack distance in time O(V) per access have been known for several decades, where V denotes the number of distinct addresses within the trace. The off-line time bound was recently improved to O(√V log V). This paper introduces the Critical Stack Algorithm for the on-line computation of the stack distance of MIN and OPT, in time O(log V) per access. The result exploits a novel analysis of properties of OPT and data structures based on balanced binary trees. A corresponding Ω(log V) lower bound is derived by a reduction from element distinctness; this bound holds in a variety of models of computation and applies even to the off-line simulation of just one cache capacity.
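For readers new to the notion, the classic O(V)-per-access stack-distance computation is easy to show for LRU, where the stack order is simply recency. Note this is the textbook LRU version for concreteness; the paper's contribution is the much harder MIN/OPT case, solved in O(log V) per access by the Critical Stack Algorithm.

```python
# Classic O(V)-per-access LRU stack distances: the distance of an access
# is its depth in the recency stack (1 = most recently used). An access
# hits in any cache of capacity >= its stack distance.
def lru_stack_distances(trace):
    stack = []          # most-recent address kept at the end
    dists = []
    for addr in trace:
        if addr in stack:
            depth = len(stack) - stack.index(addr)
            stack.remove(addr)
        else:
            depth = float("inf")   # cold miss: never seen before
        stack.append(addr)
        dists.append(depth)
    return dists
```

On the trace a, b, c, a, b, b the second access to a has distance 3 (two distinct addresses intervened), so it hits only in caches of capacity at least 3. For MIN/OPT the stack order depends on subtle priority properties rather than recency, which is what makes the on-line O(log V) result nontrivial.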
Title: RAGuard: A Hardware Based Mechanism for Backward-Edge Control-Flow Integrity
Authors: Jun Zhang, Rui Hou, Junfeng Fan, KeKe Liu, Lixin Zhang, S. Mckee
DOI: 10.1145/3075564.3075570
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

Control-flow integrity (CFI) is considered a general and promising method to prevent code-reuse attacks, which use benign code sequences to realize arbitrary computation. Current approaches can efficiently protect control-flow transfers caused by indirect jumps and function calls (forward-edge CFI). However, they cannot effectively protect control flow caused by function returns (backward-edge CFI), because the set of return addresses of frequently called functions can be very large, which allows the backward-edge CFI to be bent. We address this problem by proposing a novel hardware-assisted mechanism, RAGuard, that binds a message authentication code (MAC) to each return address and enhances security via a physical unclonable function and a hardware hash function. The MACs are stored on the program stack alongside the return addresses, and RAGuard hardware automatically verifies the integrity of return addresses. Our experiments show that for a subset of the SPEC CPU2006 benchmarks, RAGuard incurs an average runtime overhead of 1.86% with no need for OS support.
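The MAC-tagged return address idea can be sketched in software as follows. This is only a model of the mechanism: the real RAGuard derives its key from a physical unclonable function and verifies tags in hardware on every return, whereas here a keyed hash from the standard library stands in for both.

```python
# Software model of MAC-protected return addresses. The HMAC key stands
# in for RAGuard's PUF-derived, hardware-held secret.
import hmac, hashlib

DEVICE_KEY = b"per-device-secret"   # placeholder for the PUF output

def mac_of(ret_addr):
    msg = ret_addr.to_bytes(8, "little")
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()[:8]

def push_return(stack, ret_addr):
    # On call: store the return address together with its MAC.
    stack.append((ret_addr, mac_of(ret_addr)))

def pop_return(stack):
    # On return: recompute and verify the MAC before transferring control.
    ret_addr, tag = stack.pop()
    if not hmac.compare_digest(tag, mac_of(ret_addr)):
        raise RuntimeError("return address tampered")
    return ret_addr
```

An attacker who overwrites a saved return address on the stack cannot forge the matching MAC without the device key, so the tampered return is detected before the control transfer, which is exactly what defeats return-oriented code-reuse.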
Title: The Future of Deep Learning: Challenges & Solutions
Author: M. Robins
DOI: 10.1145/3075564.3097267
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

Mark will begin with a brief overview of deep learning and what has led to its recent popularity. He will provide a few demonstrations and examples of deep learning applications based on recent work at Intel Nervana. He will explain some of the challenges to continued progress in deep learning, such as high compute requirements and lengthy training times, and will discuss some of the solutions (e.g., custom deep learning hardware) that Intel Nervana is developing to usher in a new era of even more powerful AI.
Title: Understanding the I/O Behavior of Desktop Applications in Virtualization
Authors: Yan Sui, Chun Yang, Xu Cheng
DOI: 10.1145/3075564.3076263
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

Input/Output (I/O) performance is very important when running desktop applications in virtualized environments. Previous research has focused on cold execution or installation of desktop applications, where the I/O requests are obvious; in many other scenarios, however, such as warm launch or web page browsing, I/O behaviors are less clear. In this paper, we analyze the I/O behavior of these desktop scenarios. Our analysis reveals several interesting I/O behaviors of desktop applications; for example, we show that many warm applications issue random read requests during their launch, which makes these applications storage-sensitive. We also find that the write requests from web page browsing generate considerable I/O pressure, even when the user only opens a simple news page and takes no further action. Our results have strong ramifications for the management of storage systems and the deployment of virtual machines in virtualized environments.
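The randomness finding rests on classifying requests in an I/O trace. A minimal sketch of such a classifier, assuming a trace of (offset, length) read requests, counts a request as random whenever it does not start where the previous one ended; the exact adjacency criterion the authors used may differ.

```python
# Classify reads in a block trace as sequential vs. random by offset
# adjacency: a read is sequential if it begins exactly where the
# previous read ended.
def random_read_ratio(reads):
    """reads: list of (offset, length) tuples in request order."""
    if len(reads) < 2:
        return 0.0
    random_count = 0
    prev_end = reads[0][0] + reads[0][1]
    for off, length in reads[1:]:
        if off != prev_end:
            random_count += 1
        prev_end = off + length
    return random_count / (len(reads) - 1)
```

A high ratio during a warm launch indicates the access pattern defeats readahead and favors low-latency storage, which is why such applications are storage-sensitive.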
Title: Large-Scale Plant Classification with Deep Neural Networks
Author: Ignacio Heredia
DOI: 10.1145/3075564.3075590
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

This paper discusses the potential of applying deep learning techniques to plant classification and their use for citizen science in large-scale biodiversity monitoring. We show that plant classification using near state-of-the-art convolutional network architectures like ResNet50 achieves significant improvements in accuracy compared to the most widespread plant classification application, on test sets composed of thousands of different species labels. We find that the predictions can be confidently used as a baseline classification in citizen-science communities like iNaturalist (or its Spanish fork, Natusfera), which in turn can share their data with biodiversity portals like GBIF.
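One way predictions become a usable "baseline classification" for a citizen-science community is confidence gating: accept the network's top species only when its softmax confidence clears a threshold, and otherwise defer to human identifiers. The gating rule and threshold below are illustrative assumptions, not a mechanism described in the paper.

```python
# Hypothetical confidence gate over a classifier's output logits.
import math

def softmax(logits):
    m = max(logits)               # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def baseline_label(logits, species, threshold=0.8):
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return species[best]
    return None   # too uncertain: leave for the community to identify
```

Returning None for low-confidence images keeps the automated baseline trustworthy, at the cost of leaving ambiguous observations to expert review.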
Title: Improving Error Resilience Analysis Methodology of Iterative Workloads for Approximate Computing
Authors: G. Gillani, A. Kokkeler
DOI: 10.1145/3075564.3078891
Published in: Proceedings of the Computing Frontiers Conference, 2017-05-15

Assessing the error resilience inherent in digital processing workloads provides application-specific insights toward approximate computing strategies for improving power efficiency and/or performance. With a case study of radio astronomy calibration, our contributions to improving error resilience analysis focus primarily on iterative methods that use a convergence criterion as the quality metric to terminate the iterative computation. We propose an adaptive statistical approximation model for high-level resilience analysis that makes it possible to divide a workload into exact and approximate iterations. This improves the existing error resilience analysis methodology by quantifying the number of approximate iterations (23% of the total iterations in our case study) in addition to the parameters used in state-of-the-art techniques. In this way, heterogeneous architectures comprising exact and inexact computing cores, as well as adaptive-accuracy architectures, can be exploited efficiently. Moreover, we demonstrate the importance of reconsidering the quality function for convergence-based iterative processes, as the original quality function (the convergence criterion) is not necessarily sufficient in the resilience analysis phase; in that case, an additional quality function has to be defined to assess the viability of the approximate techniques.
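The exact/approximate split can be illustrated on any convergent iteration. The sketch below uses Newton's method for a square root as a stand-in workload (not the paper's radio-astronomy calibration) and classifies every iteration after the residual drops below a coarse tolerance as a candidate for approximate execution; both tolerances are illustrative.

```python
# Stand-in iterative workload: Newton's method for sqrt(a), with each
# iteration classified as "exact" (far from convergence) or
# "approximable" (residual already below a coarse tolerance).
def newton_sqrt_phases(a, coarse_tol=1e-2, fine_tol=1e-12):
    x = a
    exact, approx = 0, 0
    while abs(x * x - a) > fine_tol:
        if abs(x * x - a) > coarse_tol:
            exact += 1     # early iterations: run on exact hardware
        else:
            approx += 1    # late iterations: tolerant of inexact cores
        x = 0.5 * (x + a / x)
    return x, exact, approx
```

The approximable fraction is exactly the quantity the methodology adds to existing resilience analyses; mapping those iterations onto inexact cores (or reduced-precision modes) is what the heterogeneous-architecture argument relies on.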