OS-controlled cache predictability for real-time systems
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601360
J. Liedtke, Hermann Härtig, Michael Hohmuth
Cache-partitioning techniques have been invented to make modern processors with an extensive cache structure useful in real-time systems, where task switches disrupt cache working sets and hence make execution times unpredictable. This paper describes an OS-controlled, application-transparent cache-partitioning technique. The resulting partitions can be transparently assigned to tasks for their exclusive use. The major drawbacks found in other cache-partitioning techniques, namely waste of memory and additions to the critical performance path within CPUs, are avoided using memory-coloring techniques that do not require changes within the chips of modern CPUs or on the critical path for performance. A simple filter algorithm commonly used in real-time systems, a matrix-multiplication algorithm, and the interaction of both are analysed with regard to cache-induced worst-case penalties. Worst-case penalties are determined for different widely used cache architectures. Some insights regarding the impact of cache architectures on worst-case execution are described.
Scalable hardware priority queue architectures for high-speed packet switches
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601359
Sung-Whan Moon, Kang G. Shin, J. Rexford
In packet-switched networks, queueing of packets at the switches can result when multiple connections share the same physical link. To accommodate a large number of connections, a switch can employ link-scheduling algorithms to prioritize the transmission of the queued packets. Due to the high-speed links and small packet sizes, a hardware solution is needed for the priority queue in order to make the link schedulers effective. But for good performance, the switch should also support a large number of priority levels (P) and be able to buffer a large number of packets (N). So a hardware priority queue design must be both fast and scalable (with respect to N and P) in order to be implemented effectively. In this paper we first compare four existing hardware priority queue architectures, and identify scalability limitations on implementing these existing architectures for large N and P. Based on our findings, we propose two new priority queue architectures, and evaluate them using simulation results from Verilog HDL and Epoch implementations.
An efficient semaphore implementation scheme for small-memory embedded systems
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601325
K. M. Zuberi, K. Shin
In object-oriented programming, updates to the state variables of objects (by the methods of the object) have to be protected through semaphores to ensure mutual exclusion. Semaphore operations are invoked each time an object is accessed, and this represents significant run-time overhead. This is of special concern in cost-conscious, small-size embedded systems, such as those used in automotive applications, where costs must be kept to an absolute minimum. Object-oriented programming can be feasible in such applications only if the OS provides efficient, low-overhead semaphores. The authors present a new semaphore implementation scheme which saves one context switch per semaphore lock operation in most circumstances and gives performance improvements of 18-25% over traditional semaphore implementation schemes.
Exploiting redundancy for timeliness in TCP Boston
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601357
Azer Bestavros, Gitae Kim
In a recently completed study, we have unveiled a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. In this paper, we demonstrate the real-time features of TCP Boston that allow communication bandwidth to be traded off for timeliness. We start with an overview of the protocol, and analytically characterize the dynamic redundancy control features of TCP Boston. Next, we present detailed simulation results that show the superiority of TCP Boston compared to other adaptations of TCP/IP over ATMs. Namely, we show that it improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., throughput and percent of missed deadlines) and real-time application-centric metrics (e.g., response time and jitter).
SEW: a toolset for design and analysis of distributed real-time systems
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601345
S. Chatterjee, Kevin Bradley, Jose A. Madriz, James A. Colquist, J. Strosnider
The authors describe a toolset for performance-based design and analysis of distributed real-time systems. The toolset is based on their design methodology, denoted distributed pipeline scheduling, that provides a set of rules an engineer can follow to design near-optimal distributed real-time systems with fully predictable, end-to-end performance properties. The methodology provides (1) models for capturing the application, resource, and system design specifications; (2) an analysis algorithm and figures of merit for evaluating a system design; and (3) allocation and scheduling algorithms for navigating the design space to find a near-optimal solution that meets application timing requirements and optimizes a set of system objectives (e.g., minimizing the total monetary cost of the system and the number of resources used). The toolset, denoted the System Engineering Workbench (SEW), aids system engineers in designing, maintaining, and upgrading distributed real-time systems by encapsulating the complexities of the methodology, while exporting a graphical user interface that is intuitive and easy to learn. The toolset has been applied to the design of several sonar, medical, and multimedia systems that have end-to-end timing requirements.
QoS negotiation in real-time systems and its application to automated flight control
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601361
T. Abdelzaher, E. Atkins, K. Shin
We propose a model for quality-of-service (QoS) negotiation in building real-time services to meet both predictability and graceful degradation requirements. QoS negotiation is shown to (i) outperform conventional "binary" admission control schemes (which either guarantee the required QoS or reject the service request), and (ii) achieve higher application-perceived system utility. We incorporated the proposed QoS-negotiation model into an example real-time middleware service, called RTPOOL, which manages a distributed pool of shared computing resources (processors) to guarantee timeliness QoS for real-time applications. The efficacy and power of QoS negotiation are demonstrated for an automated flight control system implemented on a network of PCs running RTPOOL. This system is used to fly an F-16 fighter aircraft modeled using the Aerial Combat (ACM) F-16 Flight Simulator. Experimental results indicate that QoS negotiation, while maintaining real-time guarantees, enables graceful QoS degradation under conditions in which traditional schedulability analysis and admission control schemes fail.
Efficient run-time monitoring of timing constraints
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601363
A. Mok, Guangtian Liu
A real-time system operates under timing constraints which it may be unable to meet under some circumstances. The criticality of a timing constraint determines how a system is to react when a timing failure happens. For critical timing constraints, a timing failure should be detected as soon as possible. However, early detection of timing failures requires more resource usage which may be deemed excessive. While work in real-time system monitoring has progressed in recent years, the issue of tradeoff between detection latency and resource overhead has not been adequately considered. This paper presents an approach for monitoring timing constraints in real-time systems which is based on a simple and expressive specification method for defining the timing constraints to be monitored. Efficient algorithms are developed to catch violations of timing constraints at the earliest possible time. These algorithms have been implemented in a tool called JRTM (Java Run-time Timing-constraint Monitor) in the language Java. This tool can be used to specify and monitor timing constraints of Java applications.
Error propagation analysis of real-time data-intensive applications
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601354
Tei-Wei Kuo, D. Locke, Farn Wang
This paper proposes a methodology for high-level error propagation analysis of real-time data-intensive systems. A formal system in a C-style programming language is proposed to provide a research framework for various issues in real-time system design. A symbolic procedure is then presented to formally verify the amount of data error tolerable to a system.
Incremental rate monotonic scheduling for improved control system performance
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601346
P. Binns
The paper presents an algorithm, and its run-time performance, for scheduling periodic incremental and design-to-time processes. The algorithm is based on the slack stealer, which dynamically answers the question "how much execution time is available prior to a deadline?" when all periodic processes are scheduled using rate monotonic scheduling. An incremental process asks how much execution time is available after the baseline component has completed and prior to the execution of a process increment. A design-to-time process asks how much execution time is available before the process begins execution and selects a version which gives the greatest precision in the available time. For both incremental and design-to-time processes, a minimum amount of time is statically reserved so that an acceptable but suboptimal solution will always be calculated. The author identifies and proposes a solution for the practical problem of supporting criticalities when scheduling slack, and analyzes the run-time overheads of this algorithm. The analysis is applied to two real-world data sets. In certain cases, this algorithm is found to execute efficiently.
A real-time execution performance agent interface to parametrically controlled in-kernel pipelines
Pub Date: 1997-06-09 | DOI: 10.1109/RTTAS.1997.601355
Sam Siewert, M. Humphrey
This paper presents work in progress to build a confidence-based in-kernel pipeline execution performance interface to a fixed-priority deadline monotonic scheduler. The interface provides performance-controlled pipeline execution, allowing applications to specify expected execution times, negotiate desired deadline confidence, and configure and control pipelines. The confidence-based scheduling interface and in-kernel pipeline are being evaluated on an unoccupied air vehicle incorporating digital control, continuous media, and event-driven pipelines.