Shih-Yao Dai, Fedor V. Yarochkin, S. Kuo, Ming-Wei Wu, Yennun Huang
To steal valuable data, hackers continuously research and develop new techniques for intruding into computer systems. Conversely, security researchers continuously analyze and track new malicious techniques in order to protect sensitive data. Many existing analyzers can help security researchers analyze and track new malicious techniques; however, they do not provide sufficient information for precise assessment and deep analysis. In this paper, we introduce a behavior-based malicious-software profiler, named the Holography platform, to provide security researchers with such information. The Holography platform analyzes virtualization hardware data, including CPU instructions, CPU registers, memory data, and disk data, to obtain high-level behavioral semantics for all running processes. These high-level behavioral semantics give security researchers sufficient information to precisely assess and deeply analyze new malicious techniques, such as malicious advertisement (malvertising) attacks.
"Malware Profiler Based on Innovative Behavior-Awareness Technique," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.53
In this paper, a comprehensive mathematical optimization model of planing-craft navigational performance reliability is established. A hierarchical parallel chaos-genetic algorithm, called P-CX-GA, is proposed, combining parallel computation, a genetic algorithm, and a new chaos algorithm. Extensive computational results show that P-CX-GA is reliable and efficient, providing a solid foundation for hull-form optimization design and evaluation analysis of high-speed ships.
Songlin Yang, Ning Yu, Feng Zhu, Huile Li, "One Optimization Method on the Navigation Performance Reliability of Planing Craft," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.43
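The abstract does not spell out P-CX-GA's internals. As a hedged illustration of the general idea behind a chaos-genetic algorithm, the sketch below seeds and mutates a plain elitist genetic algorithm with the logistic chaotic map; the `sphere` objective, population sizes, and all constants are invented for the example, not taken from the paper.

```python
import random

def logistic(x, mu=4.0):
    # Fully chaotic logistic map; spreads iterates across (0, 1).
    return mu * x * (1.0 - x)

def chaos_sequence(seed, n):
    xs, x = [], seed
    for _ in range(n):
        x = logistic(x)
        xs.append(x)
    return xs

def sphere(v):
    # Toy objective: global minimum 0 at the origin.
    return sum(c * c for c in v)

def chaos_ga(dim=2, pop=20, gens=60, lo=-5.0, hi=5.0, seed=0.123):
    # Chaotic initialization in place of uniform random sampling.
    stream = chaos_sequence(seed, pop * dim)
    P = [[lo + (hi - lo) * stream[i * dim + d] for d in range(dim)]
         for i in range(pop)]
    rng = random.Random(42)
    for _ in range(gens):
        P.sort(key=sphere)
        elite = P[: pop // 2]            # elitism: best half survives
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            # Chaotic mutation: perturb one gene with a logistic-map step.
            g = rng.randrange(dim)
            seed = logistic(seed)
            child[g] += (seed - 0.5) * (hi - lo) * 0.1
            children.append(child)
        P = elite + children
    return min(P, key=sphere)
```

The chaotic map replaces the pseudorandom generator at two points (initialization and mutation), which is the usual motivation for chaos-genetic hybrids: better coverage of the search space and escape from local optima.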
This paper proposes a unified modeling framework for Markov-type software reliability models (SRMs) using Markovian arrival processes (MAPs). A MAP is a point process whose inter-arrival times follow a phase-type distribution that incorporates the correlation between two successive arrivals. This paper presents MAP representations of Markov-type SRMs, called MAP-based SRMs. The framework provides generalized formulas for several reliability measures, such as the expected number of failures and the software reliability, that apply to all Markov-type SRMs. In addition, we discuss parameter estimation for MAP-based SRMs from grouped failure data and derive maximum likelihood estimates for all Markov-type SRMs. The resulting MAP-based SRM is a novel approach to unifying model-based software reliability evaluation using failure data.
H. Okamura, T. Dohi, "Unification of Software Reliability Models Using Markovian Arrival Processes," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.12
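One concrete quantity this construction yields is the long-run failure rate. A minimal sketch, assuming the standard (D0, D1) matrix representation of a MAP: the stationary distribution pi of the generator Q = D0 + D1 gives the arrival rate as pi D1 1 (this is textbook MAP theory, not code from the paper).

```python
import numpy as np

def map_arrival_rate(D0, D1):
    """Long-run arrival (failure) rate of a MAP with matrices (D0, D1).

    Q = D0 + D1 is the generator of the underlying CTMC; the stationary
    distribution pi solves pi @ Q = 0 with sum(pi) = 1, and the rate is
    pi @ D1 @ 1.
    """
    D0, D1 = np.asarray(D0, float), np.asarray(D1, float)
    Q = D0 + D1
    n = Q.shape[0]
    # Solve pi Q = 0 together with the normalization constraint.
    A = np.vstack([Q.T, np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pi @ D1.sum(axis=1))
```

A one-state MAP with D0 = [[-l]], D1 = [[l]] reduces to a Poisson process of rate l, which is a quick sanity check on the formula.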
Gabriel Salles-Loustau, R. Berthier, Etienne Collange, Bertrand Sobesto, M. Cukier
This paper describes an empirical research study to characterize attackers and attacks against targets of opportunity. A honeynet infrastructure leveraging three different honeypot configurations and an SSH-based authentication proxy was built and deployed for 167 days to attract and follow attackers over several weeks. A total of 211 attack sessions were recorded, and evidence was collected at each stage of the attack sequence, from discovery to intrusion and exploitation of rogue software. This study makes two important contributions: 1) we introduce a new approach to measure attacker skills, and 2) we leverage keystroke profile analysis to differentiate attackers beyond their IP address of origin.
"Characterizing Attackers and Attacks: An Empirical Study," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.29
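The abstract does not detail the keystroke analysis. As a hedged illustration of how inter-keystroke timing can separate attackers who share an IP address, here is a toy profile based only on the mean and spread of typing delays; all timestamps and helper names are invented for the example.

```python
from statistics import mean, stdev

def keystroke_profile(timestamps):
    """Summarize a session's typing rhythm by its inter-keystroke delays."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return (mean(gaps), stdev(gaps))

def profile_distance(p, q):
    # Euclidean distance in (mean, stdev) space; small means similar rhythm.
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Two sessions from the same IP: a fast, regular typist vs. a slow one.
fast = keystroke_profile([0.0, 0.1, 0.21, 0.30, 0.41, 0.50])
slow = keystroke_profile([0.0, 0.6, 1.1, 1.9, 2.4, 3.3])
```

Real keystroke-dynamics systems use richer features (digraph latencies, hold times), but even this two-number profile already distinguishes the two sessions above.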
Since the software fault-detection process is well modeled by a non-homogeneous Poisson process, it is of great interest to estimate the intensity function accurately from observed software-fault data. In earlier work, the same authors introduced wavelet-based techniques for this problem and found that the Haar wavelet transform performs very well in estimating the software intensity function. In this paper, we investigate the Haar-wavelet-transform-based approach from the viewpoint of multiscale analysis. More specifically, a Bayesian multiscale intensity estimation algorithm is employed. In a numerical study with real software-fault count data, we compare Bayesian multiscale intensity estimation with the existing non-Bayesian wavelet-based estimation, as well as with the conventional maximum likelihood and least squares estimation methods.
Xiao Xiao, T. Dohi, "Estimating Software Intensity Function via Multiscale Analysis and Its Application to Reliability Assessment," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.11
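The Haar transform at the core of this line of work is simple to state. The sketch below is not the authors' Bayesian estimator, just the orthonormal Haar analysis/synthesis pair with naive hard thresholding, to show how small detail coefficients can be suppressed to smooth a fault-count sequence.

```python
from math import sqrt

def haar_forward(x):
    """Full orthonormal Haar DWT of a length-2^k sequence."""
    x = list(x)
    out = []
    while len(x) > 1:
        approx = [(a + b) / sqrt(2) for a, b in zip(x[::2], x[1::2])]
        detail = [(a - b) / sqrt(2) for a, b in zip(x[::2], x[1::2])]
        out = detail + out          # coarser details end up at the front
        x = approx
    return x + out                  # [overall approximation, details...]

def haar_inverse(c):
    x, rest = c[:1], c[1:]
    while rest:
        n = len(x)
        detail, rest = rest[:n], rest[n:]
        x = [v for a, d in zip(x, detail)
               for v in ((a + d) / sqrt(2), (a - d) / sqrt(2))]
    return x

def denoise(counts, thresh):
    """Crude intensity estimate: zero out small detail coefficients."""
    c = haar_forward(counts)
    kept = [c[0]] + [d if abs(d) > thresh else 0.0 for d in c[1:]]
    return haar_inverse(kept)
```

Because the transform is orthonormal, `haar_inverse(haar_forward(x))` reconstructs `x` exactly, and a constant sequence passes through `denoise` unchanged (all its detail coefficients are zero).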
T. Fujiwara, M. Kimura, Yoshinobu Satoh, S. Yamada
In the functional safety standard IEC 61508, development methods and quantitative analytical methods are defined for building safety-related systems. However, for the software of safety-related systems only development methods are recommended; that is, the safety integrity level (SIL) for software is determined solely by the number of development methods applied in practical safety-related system development. This is not a reasonable way to evaluate the safety integrity level, because various risk factors should be taken into account. In this paper, we propose a method for calculating the safety integrity level for software. In particular, we base the calculation on software reliability growth models, which have been used for many years in large-scale system development.
"A Method of Calculating Safety Integrity Level for IEC 61508 Conformity Software," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.50
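As an illustration of how an SRGM could feed a SIL assessment, the sketch below combines the Goel-Okumoto model's failure intensity with the IEC 61508 high-demand-mode PFH bands. The choice of model, the parameter values, and the direct intensity-to-PFH mapping are assumptions made for the example, not the authors' method.

```python
from math import exp

def go_intensity(a, b, t):
    """Goel-Okumoto SRGM: m(t) = a(1 - e^{-bt}); this returns the
    failure intensity m'(t) = a*b*e^{-bt} after t hours of testing."""
    return a * b * exp(-b * t)

def sil_from_pfh(pfh):
    """Map a PFH value (dangerous failures per hour, high-demand mode)
    to the highest IEC 61508 SIL band it satisfies."""
    if pfh < 1e-8:
        return 4        # at or below the SIL 4 upper bound
    if pfh < 1e-7:
        return 3
    if pfh < 1e-6:
        return 2
    if pfh < 1e-5:
        return 1
    return 0            # does not meet any SIL band

# Hypothetical fit: a = 50 expected total faults, b = 0.002 per hour.
# More testing drives the residual intensity down and the SIL up.
```

For example, with these parameters the residual intensity after 5000 test hours (about 4.5e-6 per hour) only reaches SIL 1, while 8000 hours (about 1.1e-8 per hour) reaches SIL 3.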
Nowadays, highly dependable electronic devices are demanded by many safety-critical applications. Dependability attributes such as the reliability and availability/maintainability of a many-processor system-on-chip (MPSoC) should already be examined at the design phase. Design-for-dependability approaches, such as using available fault-free processor cores and introducing a dependability-manager infrastructural IP for self-test and evaluation, can greatly enhance the dependability of an MPSoC; this is further supported by subsequent software-based repair. Design choices such as test fault coverage and test and repair time are examined to optimize the dependability attributes. Existing infrastructures such as a network-on-chip (NoC) and tile wrappers are used to ensure that tests can be performed at application run-time. An example design following the proposed design-for-dependability approach is shown; the MPSoC has been fabricated, and measurement results have validated the proposed dependability approach.
Xiao Zhang, H. Kerkhoff, "A Dependability Solution for Homogeneous MPSoCs," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.16
In a number of areas, for example sensor networks and systems of systems, complex networks are used as part of applications that must be dependable and safe. A common feature of these networks is that they operate in a decentralised manner, are formed ad hoc, and are often built from individual nodes that were not originally developed for the situation in which they are used. In addition, the nodes and their environment exhibit different behaviours over time, and little is known during development about how they will interact. A key challenge is therefore to distinguish normal behaviour from abnormal behaviour, so that abnormal behaviour can be detected and prevented from affecting other parts of the system, where appropriate recovery can then be performed. In this paper we review the state of the art in bio-inspired approaches, discuss how they can be used for error detection as part of providing a safe, dependable sensor network, and then present and evaluate an efficient and effective approach to error detection.
M. Drozda, I. Bate, J. Timmis, "Bio-inspired Error Detection for Complex Systems," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.27
Horst Schirmeier, J. Neuhalfen, Ingo Korb, O. Spinczyk, M. Engel
Memory errors are a major source of reliability problems in current computers. Undetected errors may result in program termination, or, even worse, silent data corruption. Recent studies have shown that the frequency of permanent memory errors is an order of magnitude higher than previously assumed and regularly affects everyday operation. Often, neither additional circuitry to support hardware-based error detection nor downtime for performing hardware tests can be afforded. In the case of permanent memory errors, a system faces two challenges: detecting errors as early as possible and handling them while avoiding system downtime. To increase system reliability, we have developed RAMpage, an online memory testing infrastructure for commodity x86-64-based Linux servers, which is capable of efficiently detecting memory errors and which provides graceful degradation by withdrawing affected memory pages from further use. We describe the design and implementation of RAMpage and present results of an extensive qualitative as well as quantitative evaluation.
"RAMpage: Graceful Degradation Management for Memory Errors in Commodity Linux Servers," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.20
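RAMpage's actual test patterns are not described in the abstract. A minimal march-style test over a byte buffer, with an injectable read-back hook standing in for faulty hardware, conveys the basic idea of online testing followed by withdrawing bad pages; the `PAGE` size and helper names are illustrative only.

```python
PAGE = 4096  # bytes per simulated memory page

def march_test(mem, read_back=None):
    """Minimal march-style test: write then verify 0x00, then 0xFF.

    `read_back` lets a harness inject a fault (e.g. a stuck bit); it
    defaults to reading the buffer itself.  Returns the indices of
    pages that failed verification and should be withdrawn from use.
    """
    read_back = read_back or (lambda i: mem[i])
    bad = set()
    for pattern in (0x00, 0xFF):
        for i in range(len(mem)):
            mem[i] = pattern
        for i in range(len(mem)):
            if read_back(i) != pattern:
                bad.add(i // PAGE)
    return sorted(bad)
```

A real tester would use full march algorithms (e.g. March C-) to catch coupling faults; this sketch only detects stuck-at bits, but the withdraw-the-page response is the same.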
Checkpoint-recovery-based virtual machine (VM) replication is an emerging approach to providing VM installations with high availability, especially due to its inherent ability to handle symmetric multiprocessing (SMP) virtual machines, i.e., VMs with multiple virtual CPUs (vCPUs). However, it comes at the price of significant performance degradation for the application executed in the VM, because of the large amount of state that must be synchronized between the primary and backup machines. Previous research on improving VM replication performance focused primarily on decreasing the amount of data transferred over the network, while relying on a constant checkpoint frequency. Our goal is to investigate how, and to what extent, performance degradation can be mitigated by adjusting the checkpoint period dynamically. We provide a comprehensive analysis of various workloads from the perspective of VM replication, paying special attention to their behavior as the number of vCPUs in the system increases. We propose several heuristics for scheduling replication checkpoints in order to improve quality of service. Our algorithm adapts dynamically to the properties of the workload executed in the VM, such as changes in the number of dirtied memory pages and in network and disk I/O operations, as well as to the network bandwidth available for replication. We evaluate our scheduling algorithm over two network architectures: Gigabit Ethernet and InfiniBand, a high-performance interconnect fabric. We find that checkpoint scheduling has a great impact on the performance of replicated virtual machines, and show that replicated VMs with up to 16 vCPUs can attain performance close to native VM execution, not only over high-performance but also over commercial network architectures.
Balazs Gerofi, Y. Ishikawa, "Workload Adaptive Checkpoint Scheduling of Virtual Machine Replication," 2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing, 2011. doi:10.1109/PRDC.2011.32