This tutorial will provide a bottom-up view of the changes in semiconductor memory design as we move into the nanometer regime. We begin by discussing the breakdown of scaling and the power problem. As innovation replaces classical scaling, we investigate the use of stress engineering to improve device-level performance. Technology challenges in lithography and interconnects are addressed. The consequences of innovation and scaling on RF/analog characteristics must also be considered. The scaling of memory presents yet another challenge. We proceed to discuss the modeling of these effects for the circuit designer, including discussion of the many new and traditional sources of variation. We describe how these are characterized, how they can be controlled by layout rules, and how the remaining variation can be described in the model to enable statistical timing and other advanced circuit techniques. At the circuit level we consider in detail embedded DRAM and SRAM design for both bulk and SOI. We discuss the benefits and challenges of advanced technologies, including methods for creating robust designs in the presence of manufacturing variation. We also discuss the design innovations required to utilize advanced technologies for overcoming the "memory wall", "power wall" and "ILP wall".
{"title":"Memory Design and Advanced Semiconductor Technology","authors":"D. Harame, S. Iyer, J. Watts, R. Joshi, J. Barth","doi":"10.1109/VLSI.2008.133","DOIUrl":"https://doi.org/10.1109/VLSI.2008.133","url":null,"abstract":"This tutorial will provide a bottom-up view of the changes in semiconductor memory design as we move into the nanometer regime. We begin by discussing the breakdown of scaling and the power problem. As innovation replaces classical scaling we investigate the use of stress engineering to improve device level performance. Technology challenges in lithography and interconnects are addressed. The consequences of innovation and scaling on RF/Analog characteristics must also be considered. The scaling of memory presents yet another challenge. We proceed to discuss the modeling of these effects for the circuit designer including discussion of the many new and traditional sources of variation. We describe how these are characterized how they can be controlled by layout rules and how the remaining variation can be describe in the model to enable Statistical Timing and other advanced circuit techniques. At the circuit level we consider in detail embedded DRAM and SRAM design for both bulk and SOI. We discuss the benefits and challenges of advanced technologies including methods for creating robust designs in the presence of manufacturing variation. We also discuss the design innovations required to utilize advanced technologies for overcoming the \"memory wall\", \"power wall\" and \"ILP wall\".","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115719344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a discrete-time time-domain vector fitting algorithm, called TD-VFz, for rational function macromodeling of port-to-port responses with discrete time-sampled data. The core routine involves a two-step pole refinement process based on a linear least-squares solve and an eigenvalue problem. Applications in the macromodeling of practical circuits demonstrate that TD-VFz exhibits fast computation, excellent accuracy, and robustness against noisy data. We also utilize a quasi-error bound unique to the discrete-time setting to facilitate the determination of the approximant model order.
{"title":"Efficient Linear Macromodeling via Discrete-Time Time-Domain Vector Fitting","authors":"Chi-Un Lei, N. Wong","doi":"10.1109/VLSI.2008.12","DOIUrl":"https://doi.org/10.1109/VLSI.2008.12","url":null,"abstract":"We present a discrete-time time-domain vector fitting algorithm, called TD-VFz, for rational function macromodeling of port-to-port responses with discrete time-sampled data. The core routine involves a two-step pole refinement process based on a linear least-squares solve and an eigenvalue problem. Applications in the macromodeling of practical circuits demonstrate that TD-VFz exhibits fast computation, excellent accuracy, and robustness against noisy data. We also utilize an quasi-error bound unique to the discrete-time setting to facilitate the determination of approximant model order.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116695699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This tutorial will describe in detail and demonstrate an ESL design flow for architectural exploration to determine low power designs. Increasingly, SoC design is driven by integrated mobile devices such as cell phones, music players and hand-held game consoles. These devices rely on standard algorithms such as H.264, 802.11n, or JPEG2000, which allow room for innovative implementations that can result in differentiated products. An ESL design flow that integrates application engine synthesis with an industry-leading RTL power estimation technology, such as Sequence Power Theater, enables a designer to explore multiple algorithms and architectures with different power profiles to determine the optimal algorithm-architecture combination in a very short period of time.
{"title":"Architecture Exploration for Low Power Design","authors":"V. Kathail, T. Miller","doi":"10.1109/VLSI.2008.132","DOIUrl":"https://doi.org/10.1109/VLSI.2008.132","url":null,"abstract":"This tutorial will describe in detail and demonstrate an ESL design flow for architectural exploration to determine low power designs. Increasingly SoC design is driven by integrated mobile devices such as cell phones, music players and hand-held game consoles. These devices rely on standard algorithms such as H.264, 802.1 In, or JPEG2000, which allow room for innovative implementations that can result in differentiated products. An ESL design-flow that integrates application engine synthesis with an industry-leading RTL power estimation technology, such as Sequence Power Theater, enables a designer to explore multiple algorithms and architectures with different power profiles to determine the optimal algorithm-architecture combination in a very short period of time.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124816165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Achieving accurate and speedy circuit sizing is a challenge in automated analog synthesis. System matrix model based estimators predict circuit performance accurately. In this paper we employ hashing in conjunction with matrix models for faster synthesis convergence. With hash tables some matrix element recomputations are avoided, thus improving synthesis time. Hashing is effectively performed by dividing matrix elements into classes and building class-wise hash tables. Hash tables are updated over several synthesis runs which further expedites convergence. Experimental results show that the proposed method can provide 4x-6x speedup over that offered by synthesis approaches employing macromodels but no hashing.
{"title":"On the Use of Hash Tables for Efficient Analog Circuit Synthesis","authors":"Almitra Pradhan, R. Vemuri","doi":"10.1109/VLSI.2008.35","DOIUrl":"https://doi.org/10.1109/VLSI.2008.35","url":null,"abstract":"Achieving accurate and speedy circuit sizing is a challenge in automated analog synthesis. System matrix model based estimators predict circuit performance accurately. In this paper we employ hashing in conjunction with matrix models for faster synthesis convergence. With hash tables some matrix element recomputations are avoided, thus improving synthesis time. Hashing is effectively performed by dividing matrix elements into classes and building class-wise hash tables. Hash tables are updated over several synthesis runs which further expedites convergence. Experimental results show that the proposed method can provide 4x-6x speedup over that offered by synthesis approaches employing macromodels but no hashing.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124824706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel dynamic voltage scaling (DVS) scheme that is specifically directed towards 3D graphics-intensive interactive game applications running on battery-operated portable devices. The key to this DVS scheme lies in parsing each game frame to estimate its rendering workload and then using such an estimate to scale the voltage/frequency of the underlying processor. The main novelty of this scheme stems from the fact that game frames offer a rich variety of "structural" information (e.g. number of brush and alias models, texture information and light maps) which can be exploited to estimate their processing workload. Although DVS has been extensively applied to video decoding applications, compressed video frames do not offer any information (beyond the frame types - I, B or P) that can be used in a similar manner to estimate their processing workload. As a result, DVS algorithms designed for video decoding mostly rely on control-theoretic feedback mechanisms, where the workload of a frame is predicted from the workloads of the previously-rendered frames. We show that compared to such history-based predictors, our proposed scheme performs significantly better for game applications. Our experimental results, based on the Quake II game engine running on Windows XP, show that for the same energy consumption our scheme results in more than 50% improvement in quality (measured in terms of number of frames meeting their deadlines) compared to history-based prediction schemes.
{"title":"Power Management of Interactive 3D Games Using Frame Structures","authors":"Yan Gu, S. Chakraborty","doi":"10.1109/VLSI.2008.102","DOIUrl":"https://doi.org/10.1109/VLSI.2008.102","url":null,"abstract":"We propose a novel dynamic voltage scaling (DVS) scheme that is specifically directed towards 3D graphics- intensive interactive game applications running on battery-operated portable devices. The key to this DVS scheme lies in parsing each game frame to estimate its rendering workload and then using such an estimate to scale the voltage/frequency of the underlying processor. The main novelty of this scheme stems from the fact that game frames offer a rich variety of \"structural\" information (e.g. number of brush and alias models, texture information and light maps) which can be exploited to estimate their processing workload. Although DVS has been extensively applied to video decoding applications, compressed video frames do not offer any information (beyond the frame types - I, B or P) that can be used in a similar manner to estimate their processing workload. As a result, DVS algorithms designed for video decoding mostly rely on control-theoretic feedback mechanisms, where the workload of a frame is predicted from the workloads of the previously-rendered frames. We show that compared to such history-based predictors, our proposed scheme performs significantly better for game applications. Our experimental results, based on the Quake II game engine running on Windows XP, show that for the same energy consumption our scheme results in more than 50% improvement in quality (measured in terms of number of frames meeting their deadlines) compared to history-based prediction schemes.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123763146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a new method for designing single error correcting Galois field multipliers over polynomial basis. The proposed method uses multiple parity prediction circuits to detect and correct logic errors and gives 100% fault coverage in both the functional unit and the parity prediction circuitry. Area, power and delay overheads for the proposed design technique are analyzed. It is found that, compared to the traditional triple modular redundancy (TMR) techniques for single error correction, the proposed technique is very cost-efficient.
{"title":"Single Error Correcting Finite Field Multipliers Over GF(2m)","authors":"J. Mathew, C. Argyrides, A. Jabir, H. Rahaman, D. Pradhan","doi":"10.1109/VLSI.2008.105","DOIUrl":"https://doi.org/10.1109/VLSI.2008.105","url":null,"abstract":"This paper presents a new method for designing single error correcting Galois field multipliers over polynomial basis. The proposed method uses multiple parity prediction circuits to detect and correct logic errors and gives 100% fault coverage both in the functional unit and the parity prediction circuitry. Area, power and delay overhead for the proposed design technique is analyzed. It is found that compared to the traditional triple modular redundancy (TMR) techniques for single error correction the proposed technique is very cost efficient.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115575356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two-pattern tests for delay faults in standard scan circuits can be of one of two types: skewed-load or broadside. Each type of test creates different conditions during test application due to the different way in which scan mode and functional mode are interleaved. Therefore, tests that are applicable both as skewed-load tests and as broadside tests are useful for comparing the two types of tests with respect to properties such as defect coverage or overtesting. In this work we investigate the possibility of generating tests that are applicable under both test application schemes. We refer to two-pattern tests that are applicable as both skewed-load and broadside tests as common-mode tests. We show that most benchmark circuits have sufficient numbers of common-mode tests to make them an interesting class of tests. Moreover, we show that the use of multiple scan chains increases the number of common-mode tests.
{"title":"On Common-Mode Skewed-Load and Broadside Tests","authors":"I. Pomeranz, S. Reddy, S. Kundu","doi":"10.1109/VLSI.2008.16","DOIUrl":"https://doi.org/10.1109/VLSI.2008.16","url":null,"abstract":"Two-pattern tests for delay faults in standard scan circuits can be of one of two types: skewed-load or broadside. Each type of tests creates different conditions during test application due to the different way in which scan mode and functional mode are interleaved. Therefore, tests that are applicable both as skewed-load tests and as broadside tests are useful for comparing the two types of tests with respect to properties such as defect coverage or overtesting. In this work we investigate the possibility of generating tests that are applicable under both test application schemes. We refer to two-pattern tests that are applicable as both skewed-load and broadside tests as common-mode tests. We show that most benchmark circuits have sufficient numbers of common-mode tests to make them an interesting class of tests. Moreover, we show that the use of multiple scan chains increases the number of common-mode tests.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128224250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Oversampling (or sigma-delta) analog-to-digital converters have now become routine aspects of high-performance signal processing, ranging from precision audio to RF transceivers. In this tutorial, we will present, in a systematic fashion, the basics and design aspects of delta-sigma data converters, along with a case study of a high-performance ADC designed for digital audio. The intended audience is analog/mixed-signal designers with limited prior exposure to oversampling converters, and graduate students. Anyone interested in designing, simulating and testing such converters should benefit greatly by attending this tutorial.
{"title":"Oversampling Analog-to-Digital Converter Design","authors":"S. Pavan, N. Krishnapura","doi":"10.1109/VLSI.2008.130","DOIUrl":"https://doi.org/10.1109/VLSI.2008.130","url":null,"abstract":"Summary form only given. Analog-to-digital converters (or sigma-delta) converters have now become routine aspects of high- performance signal processing, ranging from precision audio to RF transceivers. In this tutorial, we will present, in a systematic fashion, the basics and design aspects of delta-sigma data converters, along with a case study of a high performance ADC designed for digital audio. The intended audience is analog/mixed signal designers with limited prior exposure to over sampling converters and graduate students. Anyone interested in designing, simulating and testing such converters should benefit greatly by attending this tutorial.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114525473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Negative bias temperature instability (NBTI) has been identified as a major and critical reliability issue for PMOS devices in nano-scale designs. It manifests as a negative threshold voltage shift, thereby degrading the performance of the PMOS devices over the lifetime of a circuit. In order to determine the quantitative impact of this phenomenon, an accurate and tractable model is needed. In this paper we explore a novel and practical methodology for modeling NBTI degradation at the logic level for digital circuits. Its major contributions include i) a SPICE-level simulation to identify stress on PMOS devices under varying input conditions for various gate types and ii) a gate-level simulation methodology that is scalable and accurate for determining stress on large circuits. We validate the proposed logic-level simulation methodology by showing that it is accurate within 1% of the reference model. Contrary to many other papers in this area, our experimental results show that the overall delay degradation of large digital circuits due to NBTI is relatively small.
{"title":"NBTI Degradation: A Problem or a Scare?","authors":"K. Saluja, Shriram Vijayakumar, Warin Sootkaneung, Xaingning Yang","doi":"10.1109/VLSI.2008.43","DOIUrl":"https://doi.org/10.1109/VLSI.2008.43","url":null,"abstract":"Negative bias temperature instability (NBTI) has been identified as a major and critical reliability issue for PMOS devices in nano-scale designs. It manifests as a negative threshold voltage shift, thereby degrading the performance of the PMOS devices over the lifetime of a circuit. In order to determine the quantitative impact of this phenomenon an accurate and tractable model is needed. In this paper we explore a novel and practical methodology for modeling NBTI degradation at the logic level for digital circuits. Its major contributions include i) a SPICE level simulation to identify stress on PMOS devices under varying input conditions for various gate types and ii) a gate level simulation methodology that is scalable and accurate for determining stress on large circuits. We validate the proposed logic level simulation methodology by showing that it is accurate within 1% of the reference model. Contrary to many other papers in this area, our experimental results show that the overall delay degradation of large digital circuits due to NBTI is relatively small.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125709402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VLSI floorplanning in the gigascale era must deal with multiple objectives including wiring congestion, performance and reliability. Genetic algorithms lend themselves naturally to multi-objective optimization. In this paper, a multi-objective genetic algorithm is proposed for floorplanning that simultaneously minimizes area and total wirelength. The proposed genetic floorplanner is the first to use non-domination concepts to rank solutions. Two novel crossover operators are presented that build floorplans using good sub-floorplans. The efficiency of the proposed approach is illustrated by the 18% wirelength savings and 4.6% area savings obtained for the GSRC benchmarks and 26% wirelength savings for the MCNC benchmarks for a marginal 1.3% increase in area when compared to previous floorplanners that perform simultaneous area and wirelength minimization.
{"title":"An Elitist Non-Dominated Sorting Based Genetic Algorithm for Simultaneous Area and Wirelength Minimization in VLSI Floorplanning","authors":"Pradeep Fernando, S. Katkoori","doi":"10.1109/VLSI.2008.97","DOIUrl":"https://doi.org/10.1109/VLSI.2008.97","url":null,"abstract":"VLSI floor-planning in the gigascale era must deal with multiple objectives including wiring congestion, performance and reliability. Genetic algorithms lend themselves naturally to multi-objective optimization. In this paper, a multi-objective genetic algorithm is proposed for floorplanning that simultaneously minimizes area and total wirelength. The proposed genetic floorplanner is the first to use non-domination concepts to rank solutions. Two novel crossover operators are presented that build floorplans using good sub-floorplans. The efficiency of the proposed approach is illustrated by the 18% wirelength savings and 4.6% area savings obtained for the GSRC benchmarks and 26% wirelength savings for the MCNC benchmarks for a marginal 1.3% increase in area when compared to previous floorplanners that perform simultaneous area and wirelength minimization.","PeriodicalId":143886,"journal":{"name":"21st International Conference on VLSI Design (VLSID 2008)","volume":"92 S84","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132905854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}