Modeling Communication Costs in Blade Servers
Qiuyun Wang, Benjamin C. Lee
ACM SIGOPS Oper. Syst. Rev., 2016. DOI: 10.1145/2883591.2883607

Datacenters demand big memory servers for big data. For blade servers, which disaggregate memory across multiple blades, we derive technology and architectural models to estimate communication delay and energy. These models permit new case studies in refusal scheduling to mitigate NUMA and improve the energy efficiency of data movement. Preliminary results show that our model helps researchers coordinate NUMA mitigation and queueing dynamics. We find that judiciously permitting NUMA reduces queueing time, benefiting throughput, latency, and energy efficiency for datacenter workloads like Spark. These findings highlight blade servers' strengths and opportunities when building distributed shared memory machines for data analytics.
Revisiting Hash Table Design for Phase Change Memory
Biplob K. Debnath, Alireza Haghdoost, Asim Kadav, Mohammed G. Khatib, C. Ungureanu
ACM SIGOPS Oper. Syst. Rev., 2016. DOI: 10.1145/2883591.2883597

Phase Change Memory (PCM) is emerging as an attractive alternative to Dynamic Random Access Memory (DRAM) for building data-intensive computing systems. PCM's read/write performance asymmetry makes it necessary to revisit the design of in-memory applications. In this paper, we focus on in-memory hash tables, a family of data structures with wide applicability. We evaluate several popular hash-table designs to understand their performance under PCM. We find that for write-heavy workloads the designs that achieve the best performance on PCM differ from the ones that are best for DRAM, and that designs achieving a high load factor also cause a high number of memory writes. Finally, we propose PFHT, a PCM-Friendly Hash Table: a cuckoo hashing variant that is tailored to PCM characteristics and offers a better trade-off between performance, the number of writes generated, and the expected load factor than any of the existing DRAM-based implementations.
Brazilian Symposium on Computing System Engineering
A. A. Fröhlich, L. Becker, George Lima, Stefan M. Petters, D. D. Silva, E. Barros
ACM SIGOPS Oper. Syst. Rev., 2016. DOI: 10.1145/2146382.2146393
Lightweight Capability Domains: Towards Decomposing the Linux Kernel
Charles Jacobsen, Muktesh Khole, Sarah Spall, Scotty Bauer, A. Burtsev
ACM SIGOPS Oper. Syst. Rev., 2016. DOI: 10.1145/2883591.2883601

Despite a number of radical changes in how computer systems are used, the design principles behind the very core of the systems stack--an operating system kernel--have remained unchanged for decades. We run monolithic kernels developed with a combination of an unsafe programming language, global sharing of data structures, opaque interfaces, and no explicit knowledge of kernel protocols. Today, the monolithic architecture of a kernel is the main factor undermining its security and, even worse, limiting its evolution towards a safer, more secure environment. Lack of isolation across kernel subsystems allows attackers to take control over the entire machine with a single kernel vulnerability. Furthermore, complex, semantically rich monolithic code with globally shared data structures and no explicit interfaces is not amenable to formal analysis and verification tools. Even after decades of work to make monolithic kernels more secure, over a hundred serious kernel vulnerabilities are still reported every year.

Modern kernels need decomposition as a practical means of confining the effects of individual attacks. Historically, decomposed kernels were prohibitively slow. Today, the complexity of a modern kernel prevents a trivial decomposition effort. We argue, however, that despite all odds modern kernels can be decomposed. Careful choice of communication abstractions and execution model, a general approach to decomposition, a path for incremental adoption, and automation through proper language tools can address the complexity of decomposition and the performance overheads of decomposed kernels. Our work on lightweight capability domains (LCDs) develops principles, mechanisms, and tools that enable incremental, practical decomposition of a modern operating system kernel.
5th Brazilian Symposium on Computing System Engineering
M. Oyamada, A. A. Fröhlich, L. Becker
ACM SIGOPS Oper. Syst. Rev., 2016. DOI: 10.1145/2903267.2903273

The Brazilian Symposium on Computing System Engineering (SBESC) is an initiative of the research community originally associated with three events: the Brazilian Workshop on Real-Time Systems, created in 1998; the Brazilian Workshop on Operating Systems, created in 2004; and the Brazilian Workshop on Embedded Systems, created in 2010. The identification of a strong synergy among these research areas, added to the fact that designing computing systems is an increasingly multidisciplinary task, motivated the workshops to move from their native conferences and form an independent symposium. From the beginning, the symposium has hosted the Brazilian Embedded Systems School. In 2013, the symposium incorporated another related research community, focused on topics in Critical Embedded Systems such as system safety and dependability. In the same year, it also started to host the Education Forum in Computing Engineering and the Embedded Systems Competition organized by Intel. This year, SBESC was colocated with the 5th IFIP International Embedded Systems Symposium.
Brazilian Symposium on Computing System Engineering
R. Barreto, R. Obelheiro, L. Becker
ACM SIGOPS Oper. Syst. Rev., 2016. DOI: 10.1145/2883591.2883593

The Brazilian Symposium on Computing System Engineering (SBESC) is an initiative of the research community originally associated with three events: the Brazilian Workshop on Real-Time Systems, created in 1998; the Brazilian Workshop on Operating Systems, created in 2004; and the Brazilian Workshop on Embedded Systems, created in 2010. The identification of a strong synergy among these research areas, added to the fact that designing computing systems is an increasingly multidisciplinary task, motivated the workshops to move from their native conferences and form an independent symposium. Last year, the symposium incorporated another related research community, focused on topics in Critical Embedded Systems such as system safety and dependability, and promoted an Education Forum. This year, besides the Education Forum, we also had an Industrial Track.
Sudden Drop in the Battery Level?: Understanding Smartphone State of Charge Anomaly
M. A. Hoque, S. Tarkoma
ACM SIGOPS Oper. Syst. Rev., 2016. DOI: 10.1145/2883591.2883606

Battery State of Charge (SOC) estimation is a fundamental component of today's smartphones that affects the internal processes and observable behavior of the devices. This article systematically investigates and analyzes the SOC estimation techniques used in smartphones. First, we discover that the voltage curve of a given smartphone implicitly captures the usable capacity of the battery while the device is charging. Second, we observe that today's SOC estimation techniques do not model battery capacity loss sufficiently to accurately capture the usable capacity. Finally, we report findings based on battery analytics from 2077 devices that validate the relationship between battery voltage and the usable capacity of a device. The presented results enable the development of more accurate battery gauges and metering solutions, resulting in better power-saving decisions, better recommendations for users, and, most importantly, a more reliable system.
A Fast and Slippery Slope for File Systems
Ricardo Santana, R. Rangaswami, Vasily Tarasov, Dean Hildebrand
ACM SIGOPS Oper. Syst. Rev., 2015. DOI: 10.1145/2819001.2819003

There is a vast number and variety of file systems currently available, each optimizing for an ever-growing number of storage devices and workloads. Users have an unprecedented, and somewhat overwhelming, number of data management options. At the same time, the fastest storage devices are only getting faster, and it is unclear how well existing file systems will adapt. Using emulation techniques, we evaluate five popular Linux file systems across a range of storage device latencies typical of low-end hard drives, the latest high-performance persistent memory block devices, and everything in between. Our findings are often surprising. Depending on the workload, we find that some file systems clearly scale much better with faster storage devices than others. Further, as storage device latency decreases, we find unexpected performance inversions across file systems. Finally, file system scalability in the higher device latency range is not representative of scalability in the lower, sub-millisecond, latency range. We then focus on Nilfs2 as an especially alarming example of unexpectedly poor scalability and present detailed instructions for identifying bottlenecks in the I/O stack.
Making Lock-free Data Structures Verifiable with Artificial Transactions
Xinhao Yuan, David Williams-King, Junfeng Yang, S. Sethumadhavan
ACM SIGOPS Oper. Syst. Rev., 2015. DOI: 10.1145/2883591.2883603

Among all classes of parallel programming abstractions, lock-free data structures are considered among the most scalable and efficient thanks to their fine-grained style of synchronization. However, they are also challenging for developers and tools to verify because of the huge number of possible interleavings that result from fine-grained synchronization.

This paper addresses this fundamental tension between the performance and the verifiability of lock-free data structure implementations. We present TXIT, a system that greatly reduces the set of possible interleavings by inserting transactions into the implementation of a lock-free data structure. We leverage hardware transactional memory support in Intel Haswell processors to enforce these artificial transactions. Evaluation on six popular lock-free data structure libraries shows that TXIT makes it easy to verify lock-free data structures while incurring acceptable runtime overhead. Further analysis shows that two inefficiencies in Haswell are the largest contributors to this overhead.
LADIS'14: 8th Workshop on Large-Scale Distributed Systems and Middleware
G. Chockler, F. Junqueira, R. Rodrigues, Ymir Vigfusson
ACM SIGOPS Oper. Syst. Rev., 2015. DOI: 10.1145/2723872.2723888

LADIS is an annual workshop focused on aspects of large-scale distributed systems and middleware. Since its inauguration in 2007, the primary goal has been to offer a program with a good balance between presentations from industry and academia. The attendance of the workshop also reflects this balance, with regular participants from top universities and from major companies offering Web-scale online services. The topics covered throughout the years have varied widely, spanning aspects of distributed systems such as fault tolerance, consistency, security, and performance. Furthermore, they have been presented in the broad context of storage systems, data centers, and online services such as Web search and social networks. Given the focus on fostering discussion between industry and academia, talks at LADIS commonly showcase ongoing research work and present lessons and experience from industry. This interplay has proven to be a great opportunity for a reality check: does a given research work have any chance of succeeding in practice? LADIS is certainly not an oracle, but sharing experiences during the workshop has helped many researchers better guide their ongoing research work.