J. F. Tarillo, Nikolaos Mavrogiannakis, C. Lisbôa, C. Argyrides, L. Carro
Technology evolution provides ever-increasing transistor density in chips, lower power consumption, and higher performance. In this environment, the occurrence of multiple-bit upsets (MBUs) becomes a significant concern. Critical applications need high reliability, but traditional error mitigation techniques assume only the single-error model, and only a few techniques to correct MBUs at the algorithm level have been proposed. In this paper, a novel circuit-level technique to detect and correct multiple errors in memory is proposed. Since it is implemented at the circuit level, it is transparent to programmers. The technique is based on Decimal Hamming coding, and it is compared here to Reed-Solomon coding at the circuit level. Experimental results show that, for memory words wider than 16 bits, the proposed technique is faster and imposes lower area overhead than optimized RS, while mitigating errors affecting up to 25% of the memory word.
{"title":"Multiple Bit Error Detection and Correction in Memory","authors":"J. F. Tarillo, Nikolaos Mavrogiannakis, C. Lisbôa, C. Argyrides, L. Carro","doi":"10.1109/DSD.2010.64","DOIUrl":"https://doi.org/10.1109/DSD.2010.64","url":null,"abstract":"Technology evolution provides ever increasing density of transistors in chips, lower power consumption and higher performance. In this environment the occurrence of multiple-bit upsets (MBUs) becomes a significant concern. Critical applications need high reliability, but traditional error mitigation techniques assume only the single error model, and only a few techniques to correct MBUs at algorithm level have been proposed. In this paper, a novel circuit level technique to detect and correct multiple errors in memory is proposed. Since it is implemented at circuit level, it is transparent to programmers. This technique is based in the Decimal Hamming coding and here it is compared to Reed Solomon coding at circuit level. Experimental results show that for memory words wider than 16 bits, the proposed technique is faster and imposes lower area overhead than optimized RS, while mitigating errors affecting up to 25% of the memory word.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129041256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
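The paper's Decimal Hamming construction is not reproduced in the abstract. As background on how a memory ECC detects and corrects bit errors, here is a minimal sketch of classic single-error-correcting Hamming(7,4) coding — my illustration of the general idea, not the authors' multiple-error scheme:

```python
def hamming74_encode(d):
    # d: four data bits [d1, d2, d3, d4]; returns the 7-bit codeword
    # at positions 1..7 = p1, p2, d1, p3, d2, d3, d4
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # recompute the three parity checks; the syndrome is the 1-based
    # position of a single flipped bit (0 means no error detected)
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # covers positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # covers positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # covers positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]  # extract the data bits
```

A single-bit flip anywhere in the codeword is located and corrected by the syndrome; handling MBUs, as the paper does, requires a stronger code than this.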
In the Software Radio context, parametrization is becoming an important topic, especially for multi-standard designs. This paper capitalizes on the Common Operator technique to present a new common structure for the FFT and Viterbi algorithms. A key benefit of exhibiting common operators is the regular architecture they bring when implemented in a Common Operator Bank (COB). This regularity makes the architecture open to future function mapping and suited to accommodating silicon technology variability through dependable design. The global complexity impact is discussed in the paper.
{"title":"A Common Operator for FFT and Viterbi Algorithms","authors":"Malek Naoues, Laurent Alaus, D. Noguet","doi":"10.1109/DSD.2010.80","DOIUrl":"https://doi.org/10.1109/DSD.2010.80","url":null,"abstract":"In the Software Radio context, the parametrization is becoming an important topic especially when it comes to multi-standard designs. This paper capitalizes on the Common Operator technique to present a new common structure for the FFT and Viterbi algorithms. A key benefit of exhibiting common operators is the regular architecture it brings when implemented in a Common Operator Bank (COB). This regularity makes the architecture open to future function mapping and adapted to accommodated silicon technology variability through dependable design. Global complexity impact is discussed in the paper.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116365557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
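To see why a common operator for these two algorithms is plausible, compare their inner kernels: the radix-2 FFT butterfly and the Viterbi Add-Compare-Select both reduce to a small add/subtract-and-select datapath. A toy sketch of the two kernels (my illustration; the paper's actual shared structure is not reproduced here):

```python
def butterfly(a, b, w):
    # radix-2 DIT FFT butterfly: one (complex) multiply, one add, one subtract
    t = w * b
    return a + t, a - t

def acs(pm_a, pm_b, bm_a, bm_b):
    # Viterbi Add-Compare-Select: two adds and one comparison;
    # returns the surviving path metric and which branch won (0 or 1)
    sa, sb = pm_a + bm_a, pm_b + bm_b
    return min(sa, sb), 0 if sa <= sb else 1
```

Both kernels consume two operands, perform two additions (or an add/subtract pair), and produce results selected or combined by cheap glue logic — exactly the kind of overlap a Common Operator Bank exploits.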
Christian Bachmann, Andreas Genser, C. Steger, R. Weiss, J. Haid
With the advent of increasingly complex systems, traditional power estimation approaches are rendered infeasible by extensive simulation times. Hardware-accelerated power emulation techniques, which perform power estimation as a by-product of functional emulation, are a promising solution to this problem. However, little attention has so far been paid to devising a generic methodology capable of automatically enabling the power emulation of a given system under test. In this paper, we propose an automated power characterization and modeling methodology for high-level power emulation. Our methodology automatically extracts relevant model parameters from training-set data and generates a corresponding power model. Furthermore, we investigate the automation of the power model hardware implementation and its automated integration into the overall system’s HDL description. For a smart-card controller test system, the automatically created power model reduces the average estimation error from 11.78% to 4.71% as compared to a manually optimized one.
{"title":"Automated Power Characterization for Run-Time Power Emulation of SoC Designs","authors":"Christian Bachmann, Andreas Genser, C. Steger, R. Weiss, J. Haid","doi":"10.1109/DSD.2010.38","DOIUrl":"https://doi.org/10.1109/DSD.2010.38","url":null,"abstract":"With the advent of increasingly complex systems, the use of traditional power estimation approaches is rendered infeasible due to extensive simulation times. Hardware accelerated power emulation techniques, performing power estimation as a by-product of functional emulation, are a promising solution to this problem. However, only little attention has been awarded so far to the problem of devising a generic methodology capable of automatically enabling the power emulation of a given system-under-test. In this paper, we propose an automated power characterization and modeling methodology for high level power emulation. Our methodology automatically extracts relevant model parameters from training set data and generates an according power model. Furthermore, we investigate the automation of the power model hardware implementation and the automated integration into the overall system’s HDL description. For a smart card controller test-system the automatically created power model reduces the average estimation error from 11.78% to 4.71% as compared to a manually optimized one.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116696315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
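The core of such a characterization step is fitting a power model to training data. As a minimal stand-in for the paper's (unspecified) model-extraction flow, here is a one-parameter least-squares fit of power against an activity metric — the parameter names and the single-predictor form are my assumptions:

```python
def fit_linear_power_model(activity, power):
    # least-squares fit of: power ≈ base + coeff * activity,
    # where `activity` could be, e.g., per-cycle toggle counts from emulation
    n = len(activity)
    ma = sum(activity) / n
    mp = sum(power) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(activity, power))
    var = sum((a - ma) ** 2 for a in activity)
    coeff = cov / var          # slope: estimated energy cost per activity unit
    base = mp - coeff * ma     # intercept: static/leakage power
    return base, coeff
```

A real methodology like the paper's would regress over many model parameters (one per instrumented component) rather than a single activity signal, but the fitting principle is the same.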
In this paper we propose a new method of test pattern compression based on the design of a dedicated SAT-based ATPG (Automatic Test Pattern Generator). The compression method targets systems-on-chip (SoCs) equipped with the P1500 test standard; the RESPIN architecture can be used for test pattern decompression. The main idea is to find the best overlap of test patterns during test generation, unlike other methods, which efficiently overlap pre-generated test patterns. The proposed algorithm takes advantage of an implicit test representation as SAT problem instances. Test pattern compression results obtained for the standard ISCAS’85 and ISCAS’89 benchmark circuits are shown and compared with competing test compression methods.
{"title":"Test Patterns Compression Technique Based on a Dedicated SAT-Based ATPG","authors":"Jiri Balcarek, P. Fiser, Jan Schmidt","doi":"10.1109/DSD.2010.111","DOIUrl":"https://doi.org/10.1109/DSD.2010.111","url":null,"abstract":"In this paper we propose a new method of test patterns compression based on a design of a dedicated SAT-based ATPG (Automatic Test Pattern Generator). This compression method is targeted to systems on chip (SoCs)provided with the P1500 test standard. The RESPIN architecture can be used for test patterns decompression. The main idea is based on finding the best overlap of test patterns during the test generation, unlike other methods, which are based on efficient overlapping of pre-generated test patterns. The proposed algorithm takes advantage of an implicit test representation as SAT problem instances. The results of test patterns compression obtained for standard ISCAS’85 and ‘89benchmark circuits are shown and compared with competitive test compression methods.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116830410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data transmission in Networks-on-Chip (NoCs) is a serious problem due to crosstalk faults occurring on adjacent communication links. This paper proposes an efficient flow-control method to enhance the reliability of packet transmission in Networks-on-Chip. The method examines the opposite-direction transitions appearing between flits of a packet and reorders the flits accordingly. Flits are reordered within a fixed-size window to reduce: 1) the probability of crosstalk occurrence, and 2) the total power consumed for packet delivery. The proposed flow-control method is evaluated by a VHDL-based simulator under different window sizes and various channel widths. Simulation results enable NoC designers to trade off window size, reliability, and power consumption of packet delivery. The method is also compared with other crosstalk-tolerant methods in terms of reliability and power consumption. Comparison results confirm that the method is a cost-efficient solution to the crosstalk problem.
{"title":"An Efficient Method to Reliable Data Transmission in Network-on-Chips","authors":"A. Patooghy, H. Tabkhi, S. Miremadi","doi":"10.1109/DSD.2010.23","DOIUrl":"https://doi.org/10.1109/DSD.2010.23","url":null,"abstract":"Data transmission in Network-on-Chips (NoCs) is a serious problem due to cross talk faults happening in adjacent communication links. This paper proposes an efficient flow-control method to enhance the reliability of packet transmission in Network-on-Chips. The method investigates the opposite direction transitions appearing between flits of a packet to reorder the flits in the packet. Flits are reordered in a fixed-size window to reduce: 1) the probability of cross talk occurrence, and 2) the total power consumed for packet delivery. The proposed flow-control method is evaluated by a VHDL-based simulator under different window sizes and various channel widths. Simulation results enable NoC designers to make a trade-off between window size, reliability and power consumption of packet delivery. This method is also compared with other cross talk tolerant methods in terms of reliability and power consumption. Comparison results confirm that the method is a cost efficient solution to overcome the cross talk problem.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115253120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
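The worst crosstalk case is two adjacent wires switching in opposite directions at the same time. A toy model of the reordering idea — count such opposite transitions between consecutive flits and pick the window ordering that minimizes them (my sketch; the paper's exact cost model and the receiver-side reordering logic are not reproduced):

```python
from itertools import permutations

def opposite_transitions(f1, f2, width):
    # count adjacent wire pairs where one wire rises while its neighbour falls
    # as the link goes from flit f1 to flit f2
    cnt = 0
    for i in range(width - 1):
        a0, a1 = (f1 >> i) & 1, (f2 >> i) & 1
        b0, b1 = (f1 >> (i + 1)) & 1, (f2 >> (i + 1)) & 1
        if a0 != a1 and b0 != b1 and a1 != b1:   # both toggle, opposite ways
            cnt += 1
    return cnt

def reorder_window(flits, width, prev):
    # exhaustively pick the ordering of a small window that minimizes the
    # total opposite-transition count (feasible only for small window sizes)
    best = min(permutations(flits), key=lambda order: sum(
        opposite_transitions(a, b, width)
        for a, b in zip((prev,) + order, order)))
    return list(best)
```

A real implementation must also tag the chosen permutation so the receiver can restore the original flit order, which is part of the flow-control cost the paper evaluates.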
Recent technology trends have made radiation-induced soft errors a growing threat to the reliability of microprocessors, a problem previously known only to the aerospace industry. The ability to handle higher soft-error rates in modern processor architectures is therefore essential to allow further technology scaling. This paper presents an efficient fault-tolerance method for pipelined processors using temporal redundancy. Instructions are executed twice at each pipeline stage, which allows the detection of transient faults. Once a fault is detected, execution is stopped immediately and recovery is performed implicitly within the pipeline stages. Due to this fast reaction, the fault is contained at its origin and no expensive rollback operation is required later on.
{"title":"Low Latency Recovery from Transient Faults for Pipelined Processor Architectures","authors":"M. Jeitler, J. Lechner","doi":"10.1109/DSD.2010.87","DOIUrl":"https://doi.org/10.1109/DSD.2010.87","url":null,"abstract":"Recent technology trends have made radiation-induced soft errors a growing threat to the reliability of microprocessors, a problem previously only known to the aerospace industry. Therefore, the ability to handle higher soft error rates in modern processor architectures is essential in order to allow further technology scaling. This paper presents an efficient fault-tolerance method for pipeline-based processors using temporal redundancy. Instructions are executed twice at each pipeline stage, which allows the detection of transient faults. Once a fault is detected the execution is stopped immediately and recovery is implicitly performed within the pipeline stages. Due to this fast reaction the fault is contained at its origin and no expensive rollback operation is required later on.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114331952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
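The execute-twice-and-compare principle can be sketched in a few lines: a transient fault shows up as a mismatch between the two executions, and since the fault is transient, simply re-running the stage recovers (the `FlakyAdder` fault injector below is my test harness, not part of the paper):

```python
def run_stage(stage_fn, operands, max_retries=3):
    # temporal redundancy: execute the stage twice; a mismatch signals a
    # transient fault, and the stage is immediately retried in place
    # (no architectural rollback needed)
    for _ in range(max_retries):
        r1 = stage_fn(*operands)
        r2 = stage_fn(*operands)
        if r1 == r2:
            return r1
    raise RuntimeError("fault persisted across retries")

class FlakyAdder:
    # injects a single transient bit-flip on the very first invocation
    def __init__(self):
        self.calls = 0
    def __call__(self, a, b):
        self.calls += 1
        return (a + b) ^ (1 if self.calls == 1 else 0)
```

In the paper this comparison happens in hardware at every pipeline stage, which is what keeps the recovery latency low.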
Asynchronous circuit implementations operating under strong constraints (DIMS, Direct Logic, some NCL gates, etc.) are attractive due to: 1) regularity; 2) combined implementation of the functional and completion-detection logic, which simplifies the design process; 3) circuit output latency that depends on the actual gate delays, which are of an unbounded nature; 4) the absence of additional synchronization chains (even of a local nature). However, the area and speed penalty is rather high. In contrast to the state-of-the-art approaches, where simple two-input gates (NAND, NOR, etc.) are used, we propose a synthesis method based on complex nodes, i.e., nodes implementing any function of an arbitrary number of inputs. Synchronous synthesis procedures may be freely adopted for this purpose. Numerous experiments on standard benchmarks were performed, and the efficiency of the proposed complex-gate-based method is clearly shown. DIMS- and Direct Logic-based asynchronous designs are considered in the paper.
{"title":"Area and Speed Oriented Implementations of Asynchronous Logic Operating under Strong Constraints","authors":"I. Lemberski, P. Fiser","doi":"10.1109/DSD.2010.82","DOIUrl":"https://doi.org/10.1109/DSD.2010.82","url":null,"abstract":"Asynchronous circuit implementations operating under strong constraints (DIMS, Direct Logic, some of NCL gates, etc.) are attractive due to: 1) regularity, 2) combined implementation of the functional and completion detection logics, what simplifies the design process, 3) circuit output latency is based on the actual gate delays of the unbounded nature, 4) absence of additional synchronization chains (even of a local nature). However, the area and speed penalty is rather high. In contrast to the state-of-the-art approaches, where simple (NAND, NOR, etc.) 2 input gates are used, we propose a synthesis method based on complex nodes, i.e., nodes implementing any function of an arbitrary number of inputs. Synchronous synthesis procedures may be freely adopted for this purpose. Numerous experiments on standard benchmarks were performed and the efficiency of the proposed complex gate based method is clearly shown. DIMS and Direct Logic based asynchronous designs are considered in the paper.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127344686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test technologies for integrated circuits have traditionally tried to maximise test data compression rates, because these are essential for keeping test time and costs low. However, power consumption during the test process is a problem that has been addressed only recently. Excessive power consumption may result in thermal stress and increased voltage drops within the circuit, which imply increased signal delays. As a result, even fully functional circuits may fail during delay testing. This paper therefore proposes a flexible concept that combines test pattern compression using a scan controller with a reduction of power consumption during the fast capture cycles of transition delay tests. Essentially, the concept consists of a greedy algorithm, which fills X-rich patterns with 0s or 1s step by step, and an event-driven logic and power-consumption simulator, which calculates the cost of each step. The implemented concept is applied to X-rich test sets of the ISCAS'89 and ITC'99 benchmarks and of OpenSparc cores. Results show a best case of 96 percent test data reduction combined with 32 percent less peak capture power. With this concept it is also possible to reduce the peak power for shift-in, launch, and shift-out cycles by over 50 percent.
{"title":"Test Data and Power Reductions for Transition Delay Tests for Massive-Parallel Scan Structures","authors":"R. Kothe, H. Vierhaus","doi":"10.1109/DSD.2010.89","DOIUrl":"https://doi.org/10.1109/DSD.2010.89","url":null,"abstract":"Test technologies for integrated circuits have traditionally tried to maximise test data compression rates, because these are essential for keeping test time and costs low. However, power consumption during the test process is a problem that has been addressed on recently. Excessive power consumption may result in thermal stress and increased voltage drops within the circuit, which implies increasing signal delays. Thereby even fully-functional circuits may fail during delay testing. Therefore, in this paper a flexible concept is proposed which combines test pattern compression using a scan controller concept and reduction of power consumption during the fast capture cycles of transition delay tests. Essentially, this concept consists of a Greedy algorithm, which fills X-rich pattern with 0s or 1s step-by-step, and an event-driven logic and power consumption simulator, which calculates the costs of these steps. The implemented concept is applied to X-rich test sets of ISCAS'89, ITC'99 benchmarks and OpenSparc cores. Results show a best case with 96 percent test data reduction combined with 32 percent less peak capture power. With this concept it is also possible to reduce the peak power for shift-in, launch and shift-out cycles by over 50 percent.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125489557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
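The greedy X-fill step can be sketched as follows: for each don't-care bit, try both values and keep whichever the cost function scores lower. The `toggle_cost` function below is a toy stand-in for the paper's event-driven power simulator (my assumption: it counts adjacent-bit transitions, treating unfilled X as 0):

```python
def toggle_cost(p):
    # toy stand-in for the event-driven power simulator:
    # count adjacent-bit transitions, treating still-unfilled 'X' as 0
    bits = [0 if b == 'X' else b for b in p]
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def greedy_fill(pattern, cost):
    # pattern: list of 0, 1, or 'X'; fill each X step by step with the
    # value (0 or 1) that yields the lower estimated switching cost
    p = list(pattern)
    for i, b in enumerate(p):
        if b == 'X':
            p[i] = 0
            c0 = cost(p)
            p[i] = 1
            c1 = cost(p)
            p[i] = 0 if c0 <= c1 else 1
    return p
```

In the paper the cost of each candidate fill comes from an actual logic and power simulation of the circuit under test, not from a pattern-local heuristic like this one.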
T. Lange, N. Harb, Haisheng Liu, S. Niar, R. B. Atitallah
Multiple Target Tracking (MTT) algorithms are widely used in various military and civilian applications, but their use in automotive safety has scarcely been investigated. When MTT algorithms are implemented in embedded systems, it is important to use the minimum required resources so that the entire DAS system (data acquisition, MTT, and alarm restitution) can be integrated on the same chip, reducing System-on-Chip (SoC) complexity and cost. This paper presents an efficient Driver Assistance System (DAS) based on an MTT application. To this end, we first identified the performance bottlenecks in the application, and a set of optimizations was applied to reduce the MTT algorithm’s complexity. Jointly tuning the hardware and the software optimized the final system and met the functional requirements. The result is a complete embedded MTT application running on an embedded system that fits in a contemporary medium-sized FPGA device.
{"title":"An Improved Automotive Multiple Target Tracking System Design","authors":"T. Lange, N. Harb, Haisheng Liu, S. Niar, R. B. Atitallah","doi":"10.1109/DSD.2010.54","DOIUrl":"https://doi.org/10.1109/DSD.2010.54","url":null,"abstract":"Multiple Target Tracking (MTT) algorithms are widely used in various military and civilian applications but its use in automotive safety has little been investigated. In MTT algorithms, implemented in embedded systems, it is important to use the minimum required resources to allow the entire DAS system to be integrated on the same chip (data acquisition, MTT and alarm restitution). This allows the reduction of the System on Chip (SoC) complexity and cost. This paper presents an efficient Driver Assistance System (DAS) based on MTT application. To do so, we first identified the performance bottlenecks in the application. In this application, a set of optimizations were applied to reduce the MTT algorithm’s complexity. Tuning in conjunction the hardware and the software yielded to optimize the final system and to meet the functional requirements. The result is a complete embedded MTT application running on an embedded system that fits in a contemporary medium sized FPGA device.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129986775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Instruction-set accelerator architectures have recently emerged as lightweight hardware coprocessors that transparently improve application performance. This paper investigates the effectiveness of adding hardware accelerators with respect to scaling, based on applications that exhibit data-level parallelism, such as image edge detection and fractal applications. Implementation results using reconfigurable technology show that, by utilizing a number of hardware coprocessor units, applications such as Sobel edge detection can achieve a speedup of more than 37×. Finally, architectural directions based on the developed case studies show that even better performance can be achieved when the overheads of communication, serialized data accesses, shared memory, and bus protocols are reduced.
{"title":"On Scaling Speedup with Coarse-Grain Coprocessor Accelerators on Reconfigurable Platforms","authors":"Georgios Kornaros, Antonios Motakis","doi":"10.1109/DSD.2010.79","DOIUrl":"https://doi.org/10.1109/DSD.2010.79","url":null,"abstract":"Instruction set accelerator architectures have emerged recently as light-weight hardware coprocessors, so as to transparently improve applications performance. This paper investigates the effectiveness of adding hardware accelerators as refers to scaling, based on applications that show data level parallelism such as image edge detection and fractal applications. The implementation results using reconfigurable technology show that, by utilizing a number of hardware coprocessor units, applications such as Sobel edge detection can achieve speedup more than 37×. Finally, architectural directions based on the developed case studies show that even better performance can be achieved when the overheads of communication, of serialized data accesses, shared memory and of bus protocols are reduced.","PeriodicalId":356885,"journal":{"name":"2010 13th Euromicro Conference on Digital System Design: Architectures, Methods and Tools","volume":"290 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132600319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
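Sobel edge detection is a natural accelerator target because every output pixel is computed independently from a 3×3 neighbourhood, so work distributes cleanly across coprocessor units. A reference sketch of the kernel (standard Sobel with |Gx|+|Gy| magnitude; not the paper's hardware implementation):

```python
def sobel_magnitude(img):
    # img: 2-D list of grey values; returns the gradient magnitude |Gx|+|Gy|
    # for interior pixels (borders left at 0). The per-pixel independence of
    # this loop body is what makes the kernel easy to parallelise in hardware.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out
```

Splitting the image into horizontal stripes, one per coprocessor, is the kind of partitioning that makes the reported multi-unit scaling possible, with the remaining limits coming from the communication and memory overheads the paper discusses.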