"A Low-Cost and High-Performance Embedded System Architecture and an Evaluation Methodology," Xiaokun Yang, J. Andrian. DOI: 10.1109/ISVLSI.2014.20
A reduced-interface, high-performance embedded system architecture (MSBUS) is proposed in this paper. Its control bus is low-cost and low-power, whereas its data bus targets high bandwidth and high speed. In addition, a Universal Verification Methodology (UVM)-based performance evaluation methodology is proposed to evaluate the hardware structures. To assess bus performance, DMA controllers for AHB, AXI, and MSBUS are implemented as a case study. The experimental results show that the MSBUS DMA uses the fewest hardware resources, reduces energy consumption to half that of AHB and AXI in block transfer mode, and achieves 3.3 times and 1.6 times the valid bandwidth of AHB and AXI, respectively. Moreover, the proposed evaluation methodology proves effective with sufficient accuracy.
"Mitigating NBTI Degradation on FinFET GPUs through Exploiting Device Heterogeneity," Ying Zhang, Sui Chen, Lu Peng, Shaoming Chen. DOI: 10.1109/ISVLSI.2014.21
Recent experimental studies reveal that the FinFET devices commercialized in recent years tend to suffer from more severe NBTI degradation than planar transistors, necessitating effective mitigation techniques for processors built with FinFETs to ensure durable operation. We propose to address this problem by exploiting device heterogeneity and leveraging the slower NBTI aging rate of planar devices. We focus on modern graphics processing units in this study due to their wide usage. We validate the effectiveness of the technique by applying it to the warp scheduler and demonstrate that NBTI degradation is considerably alleviated with only a slight performance overhead.
"5nm FinFET Standard Cell Library Optimization and Circuit Synthesis in Near- and Super-Threshold Voltage Regimes," Q. Xie, X. Lin, Yanzhi Wang, M. Dousti, A. Shafaei, Majid Ghasemi-Gol, Massoud Pedram. DOI: 10.1109/ISVLSI.2014.101
The FinFET device has been proposed as a promising substitute for the traditional bulk CMOS-based device at the nanoscale, owing to extraordinary properties such as improved channel controllability, a high ON/OFF current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. In addition, its near-ideal subthreshold behavior indicates the potential of FinFET circuits in the near-threshold supply voltage regime, which consumes an order of magnitude less energy than regular strong-inversion circuits operating in the super-threshold supply voltage regime. This paper presents a design flow for creating standard cells at the 5nm FinFET technology node, covering both near-threshold and super-threshold operation, and for building a Liberty-format standard cell library. Circuit synthesis results for various combinational and sequential circuits based on the 5nm FinFET standard cell library show up to 40X circuit speed improvement and three orders of magnitude energy reduction compared to 45nm bulk CMOS technology.
"Exploration of Magnetic RAM Based Memory Hierarchy for Multicore Architecture," S. Senni, L. Torres, G. Sassatelli, Anastasiia Butko, Bruno Mussard. DOI: 10.1109/ISVLSI.2014.29
Today's memory systems mainly integrate SRAM, DRAM, and FLASH technologies. SRAM and DRAM are generally used for cache and working memory, while FLASH is used for slower non-volatile storage. However, all of them face manufacturing constraints at the most advanced nodes, which compromises further scaling. Moreover, as memory systems grow, a significant portion of total system power is spent in the memories. Magnetic RAM (MRAM) is a very attractive alternative, simultaneously offering reasonable performance, power efficiency, high density, and non-volatility. While the MRAM manufacturing process is still under intensive investigation, the state of the art shows that this memory technology can be accessed in less than 5 ns with read/write dynamic energy not far from that of SRAM. Moreover, the non-volatility of MRAM can be exploited to reduce leakage current through instant on/off policies. This paper demonstrates how the current characteristics of MRAM can be used in the memory hierarchy of chip multiprocessors (CMPs). The goal is to highlight the benefit of using MRAM for cache memory in order to preserve overall application performance while saving static power.
"A Broadcast-Enabled Sensing System for Embedded Multi-core Processors," Jia Zhao, Shiting Lu, W. Burleson, R. Tessier. DOI: 10.1109/ISVLSI.2014.18
Contemporary multi-core architectures deployed in embedded systems are expected to function near the operational limits of temperature, voltage, and device wear-out. To date, most on-chip sensing systems have been designed to collect and use sensor information for these parameters locally. In this paper, a new sensing system that enhances multi-core dependability by supporting both local and global distribution of sensing data in embedded processors is considered. The benefit of the new sensing architecture is verified using the broadcast of microarchitectural parameter signatures, which can be used to identify impending voltage droops. Low-latency broadcasts are supported for a range of sensor data transfer rates. Up to a 9% performance improvement for a 16-core system is obtained using the distributed voltage droop sensor information (5.4% on average). The entire sensing system, including broadcasting resources, requires about 2.6% of the multi-core area.
"On Designing Robust Path-Delay Fault Testable Combinational Circuits Based on Functional Properties," Rupali Mitra, D. K. Das, B. Bhattacharya. DOI: 10.1109/ISVLSI.2014.81
Although path-delay faults (PDFs) have been studied extensively during the last three decades, designing combinational circuits that achieve low-overhead robust PDF testability still poses many challenges. In this paper, we revisit the problem of synthesizing a robustly PDF-testable combinational circuit based on certain new functional properties. Given the Boolean cubes of a function, we first design a two-level robust PDF-testable circuit by properly grouping the cubes using a few additional control lines. Next, we apply some testability-preserving algebraic factorization techniques to design multi-level circuits. The method readily extends to multi-output circuits as well. Experimental results establish that the proposed functional approach yields fully robust PDF-testable circuits with much lower overhead than earlier approaches.
"A Graph-Based 3D IC Partitioning Technique," Sabyasachee Banerjee, S. Majumder, B. Bhattacharya. DOI: 10.1109/ISVLSI.2014.82
Netlist partitioning is an important part of the physical design of 3D ICs. Each subcircuit corresponding to a partition is subsequently assigned to a suitable device layer in the design phase. This paper proposes a netlist partitioning technique that minimizes the number of inter-layer interconnections while maintaining area constraints. This, in turn, minimizes the area and cost associated with the Through-Silicon Vias (TSVs) needed in the design. The proposed method starts with a BFS-based initial solution and then improves it iteratively using a heuristic. Experimental results demonstrate that by reassigning some modules to other layers, our algorithm achieves up to 45% reduction in the number of TSVs on several benchmark circuits compared to earlier approaches. The resulting increase in floor area due to the movement of modules across layers is almost compensated by the decrease in TSV area. Thus, while satisfying the area constraints, the method reduces the number of TSVs as well as the IR drop and delay associated with the vias.
"A Transient-Enhanced Capacitorless LDO Regulator with Improved Error Amplifier," S. Alapati, P. SrihariRao, K. Prasad, S. Dixit. DOI: 10.1109/ISVLSI.2014.28
This paper presents a modified folded-cascode error amplifier for a low-dropout (LDO) regulator, together with a slew-rate enhancement circuit, to minimize compensation capacitance and improve transient response. The proposed error amplifier eliminates the trade-off between small and large slew rates that is imposed by the tail current in conventional error amplifier designs. The design is implemented in a standard UMC 0.18 µm CMOS process. Simulation results show that the LDO regulator draws a quiescent current of only 49.64 µA, with a total power consumption of 0.079 mW. It regulates the output voltage at 1.4 V from a 1.6–1.8 V supply. The overshoot/undershoot in the output voltage under extreme load transients is 220.7 mV/280.26 mV for a load current range of 0 to 100 mA. The line regulation is 1.244 mV/V at 1.8 V, and the load regulation is 40.6 mV/A. Because compensation capacitors are avoided, the circuit is well suited to chip-level power management units requiring high area efficiency.
"Towards Secure Analog Designs: A Secure Sense Amplifier Using Memristors," D. Hoe, Jeyavijayan Rajendran, R. Karri. DOI: 10.1109/ISVLSI.2014.50
In this work, we propose a key-based locking/unlocking mechanism for sense amplifiers, an integral part of many analog systems. We leverage process variations to make the circuit functional only when the correct key is entered. An incorrect key results in a circuit breakdown, making the circuit inaccessible to an attacker. To enable this secure functionality, we leverage emerging technology devices, specifically memristors. The proposed secure sense amplifier can be used in important analog applications such as memories and sensors. We develop properties to ensure the security of the sense amplifier and validate them through simulation results at the 22 nm CMOS technology node, assuming HP memristor characteristics.
"A Low-Power Enhanced Bitmask-Dictionary Scheme for Test Data Compression," Vahid Janfaza, Payman Behnam, B. Forouzandeh, B. Alizadeh. DOI: 10.1109/ISVLSI.2014.103
Long test application time for a System on Chip (SoC) is a major problem in digital design testing, and it mostly originates from large test data volumes. High-volume test data not only increases the required ATE memory and bandwidth but also increases test time. Test compression reduces test data volume without any impact on fault coverage. This work proposes two novel and efficient test data compression schemes that combine slice partitioning with a multiple-dictionary bitmask approach and a slice bit-reordering method. These approaches are further combined with a low-power method to decrease power consumption without sacrificing compression efficiency. Experimental results show improvements in both compression efficiency and power consumption compared with existing works.