Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5654154
Title: Mathematical yield estimation for two-dimensional-redundancy memory arrays
M. Chao, Ching-Yu Chin, Chen-Wei Lin
Defect repair has become a necessary step in enhancing the overall yield of memories, since manufacturing a memory that is defect-free as fabricated is difficult in current memory technologies. This paper presents a yield-estimation scheme that uses an induction-based approach to calculate the probability that all defects in a memory can be successfully repaired by a two-dimensional redundancy design. Unlike previous works, which rely on time-consuming simulation to estimate the expected yield, our yield-estimation scheme requires only scalable mathematical computation and achieves high accuracy with limited time and space complexity. The proposed scheme can also consider the impact of single defects, column defects, and row defects simultaneously. With its help, the most profitable redundancy configuration for large memory designs can be identified within a few seconds, whereas a conventional simulation approach may take several hours or even days.
{"title":"Mathematical yield estimation for two-dimensional-redundancy memory arrays","authors":"M. Chao, Ching-Yu Chin, Chen-Wei Lin","doi":"10.1109/ICCAD.2010.5654154","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5654154","url":null,"abstract":"Defect repair has become a necessary process to enhance the overall yield for memories since manufacturing a natural good memory is difficult in current memory technologies. This paper presents an yield-estimation scheme, which utilizes an induction-based approach to calculate the probability that all defects in a memory can be successfully repaired by a two-dimensional redundancy design. Unlike previous works, which rely on a time-consuming simulation to estimate the expected yield, our yield-estimation scheme only requires scalable mathematical computation and can achieve a high accuracy with limited time and space complexity. Also, the proposed estimation scheme can consider the impact of single defects, column defects, and row defects simultaneously. With the help of the proposed yield-estimation scheme, we can effectively identify the most profitable redundancy configuration for large memory designs within few seconds while it may take several hours or even days by using conventional simulation approach.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130185985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5654169
Title: An energy and power-aware approach to high-level synthesis of asynchronous systems
John Hansen, Montek Singh
In this paper we explore the problem of scheduling and allocation for asynchronous systems under latency, area, energy, and power constraints, and present exact methods for minimizing an implementation's latency, area, or energy. The approach builds on the branch-and-bound strategy developed in [1], but explores a much richer solution space by incorporating many-to-many mappings of operations to functional units, as well as energy and power considerations, into the search. Unlike many recent solutions that adapt synchronous methods to the asynchronous realm, our approach specifically targets the asynchronous domain; as a result, the solver's complexity and performance are independent of the discretization of time. We illustrate the effectiveness of this approach on 36 test cases spanning small and large input specifications; results are produced in 60 seconds or less for each example.
{"title":"An energy and power-aware approach to high-level synthesis of asynchronous systems","authors":"John Hansen, Montek Singh","doi":"10.1109/ICCAD.2010.5654169","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5654169","url":null,"abstract":"In this paper we explore the problem of scheduling and allocation for asynchronous systems under latency, area, energy, and power constraints, and present exact methods for minimizing an implementation for either latency, area, or energy. This approach utilizes the the branch-and-bound strategy developed in [1], but targets a much more robust solution space by incorporating many-to-many mappings of operations to function units and energy and power considerations into the search space. Unlike many recent solutions that adapt synchronous methods to the asynchronous realm, our approach specifically targets the asynchronous domain. As a result, our solver's complexity and performance are independent of the discretization of time. We illustrate the effectiveness of this approach by running 36 different test cases on small and large input specifications; results are produced in 60 seconds or less for each example.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134308577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5653851
Title: 3POr — Parallel projection based parameterized order reduction for multi-dimensional linear models
J. Villena, L. M. Silveira
This paper introduces a parallel, projection-based model-order-reduction framework for parameterized linear systems that targets both distributed- and shared-memory architectures. The methodology is based on a sampling scheme followed by a projection to build the reduced model, and it exploits the parallel nature of the sampling methods to improve the efficiency of basis generation. The sample-selection scheme uses the residue as a proxy for the model error in order to improve automation and maximize the effectiveness of the sampling step. This yields an automatic and reliable methodology able to handle large systems that depend on frequency and on multiple parameters. The framework can be used on shared- and distributed-memory architectures separately or in combination, handles different system representations and models of different characteristics, and is highly scalable; the parallelization is very effective, as demonstrated on a variety of industrial benchmarks, with superlinear speed-ups in certain cases. The methodology thus makes it possible to tackle large, complex models that depend on multiple parameters in an automatic fashion.
{"title":"3POr — Parallel projection based parameterized order reduction for multi-dimensional linear models","authors":"J. Villena, L. M. Silveira","doi":"10.1109/ICCAD.2010.5653851","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5653851","url":null,"abstract":"This paper introduces a distributed and shared memory parallel projection based model order reduction framework for parameterized linear systems. The proposed methodology is based on a sampling scheme followed by a projection to build the reduced model. It exploits the parallel nature of the sampling methods to improve the efficiency of the basis generation. The sample selection scheme uses the residue as a proxy for the model error in order to improve automation and maximize the effectiveness of the sampling step. This yields an automatic and reliable methodology, able to handle large systems depending on the frequency and multiple parameters. The framework can be used in shared and distributed memory architectures separately or in conjunction. It is able to deal with different system representations and models of different characteristics, is highly scalable and the parallelization is very effective, as will be demonstrated on a variety of industrial benchmarks, with super linear speed-ups in certain cases. The methodology provides the potential to tackle large and complex models, depending on multiple parameters in an automatic fashion.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131243446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5654113
Title: In-place decomposition for robustness in FPGA
Ju-Yueh Lee, Zhe Feng, Lei He
The programmable logic block (PLB) in a modern FPGA features a built-in carry chain (or adder) and a decomposable LUT, where such an LUT may be decomposed into two or more smaller LUTs. Leveraging decomposable LUTs and underutilized carry chains, we propose to decompose a logic function in a PLB into two subfunctions and to combine the subfunctions via a carry chain, making the circuit more robust against single-event upsets (SEUs). Such a decomposition can be implemented using the decomposable LUT and carry chain in the original PLB without changing the PLB-level placement and routing. It is therefore an in-place decomposition (IPD) with no area or timing overhead at the PLB level, and it offers ideal design closure between logic and physical synthesis. For the 10 largest combinational MCNC benchmark circuits, with a conservative 20% utilization rate for the carry chain, IPD improves MTTF (mean time to failure) by 1.43x and 2.70x, respectively, for PLBs similar to those in Xilinx Virtex-5 and Altera Stratix-IV.
{"title":"In-place decomposition for robustness in FPGA","authors":"Ju-Yueh Lee, Zhe Feng, Lei He","doi":"10.1109/ICCAD.2010.5654113","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5654113","url":null,"abstract":"The programmable logic block (PLB) in a modern FPGA features a built-in carry chain (or adder) and a decomposable LUT, where such an LUT may be decomposed into two or more smaller LUTs. Leveraging decomposable LUTs and underutilized carry chains, we propose to decompose a logic function in a PLB into two subfunctions and to combine the subfunctions via a carry chain to make the circuit more robust against single-event upsets(SEUs). Note that such decomposition can be implemented using the decomposable LUT and carry chain in the original PLB without changing the PLB-level placement and routing. Therefore, it is an in-place decomposition (IPD) with no area and timing overhead at the PLB level and has an ideal design closure between logic and physical syntheses. For 10 largest combinational MCNC benchmark circuits with a conservative 20% utilization rate for carry chain, IPD improves MTTF (mean time to failure) by 1.43 and 2.70 times respectively, for PLBs similar to those in Xilinx Virtex-5 and Altera Stratix-IV.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114709255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5654155
Title: Post-placement power optimization with multi-bit flip-flops
Yao-Tsung Chang, Chih-Cheng Hsu, Mark Po-Hung Lin, Yu-Wen Tsai, Sheng-Fong Chen
Power optimization is one of the most important design objectives in modern nanometer IC design. Recent studies have shown the effectiveness of applying multi-bit flip-flops to reduce the power consumption of the clock network. However, previous works applied multi-bit flip-flops at earlier design stages, where it is very difficult to trade off power, timing, and other design objectives. This paper presents a novel power-optimization method that incrementally applies more multi-bit flip-flops at the post-placement stage to gain additional clock-power savings while respecting placement-density and timing-slack constraints and simultaneously minimizing interconnect wirelength. Experimental results on industrial benchmark circuits show that our approach is effective and efficient and can be seamlessly integrated into a modern design flow.
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5653616
Title: HW/SW co-design of parallel systems
Enno Wein
Multicore architectures have become ubiquitous in recent years. Yet traditional serial programming techniques cannot exploit their potential, because they do not express task dependencies explicitly, which makes them unsuitable for any system that can execute tasks in parallel. We present a methodology that enables designers to express function, communication, and platform aspects explicitly and separately. The approach allows all aspects of a system to be explored without building virtual prototypes or platform-dependent code. Bottleneck analysis and resolution then leads to a well-matched hardware/software partitioning as a basis for subsequent HW and SW design.
{"title":"HW/SW co-design of parallel systems","authors":"Enno Wein","doi":"10.1109/ICCAD.2010.5653616","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5653616","url":null,"abstract":"Multicore architectures have become ubiquitous in the recent years. Yet, traditional serial programming techniques cannot exploit their potential because they do not express the dependencies of the tasks clearly rendering them unsuitable for any system which can execute tasks in parallel. We present a methodology which enables designers to explicitly and separately express function, communication and platform aspects. The approach allows to explore all aspects of a system without even building virtual prototypes or platform-dependent code. A bottleneck analysis and resolution leads to a well matched hardware/software partitioning as a basis for subsequent HW and SW design.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125865161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5654139
Title: Credit Borrow and Repay: Sharing DRAM with minimum latency and bandwidth guarantees
Zefu Dai, Mark Jarvin, Jianwen Zhu
Multi-port memory controllers (MPMCs) play an important role in systems-on-chip by coordinating accesses from different subsystems to shared DRAMs. The main challenge of MPMC design is to optimize quality of service by simultaneously satisfying different, and often competing, requirements, including bandwidth and latency. While previous works have attempted to address this challenge, the proposed solutions are heuristic and often cannot provide bandwidth and/or latency guarantees. In this paper, we propose a new technique called Credit-Borrow-and-Repay (CBR) that augments a dynamic scheduling algorithm drawn from the networking community, improving it to achieve minimum latency while preserving minimum-bandwidth guarantees. Our experiments show that on typical multimedia workloads, the cache response latency can be improved by as much as 2.5x.
{"title":"Credit Borrow and Repay: Sharing DRAM with minimum latency and bandwidth guarantees","authors":"Zefu Dai, Mark Jarvin, Jianwen Zhu","doi":"10.1109/ICCAD.2010.5654139","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5654139","url":null,"abstract":"Multi-port memory controllers (MPMC) play an important role in system-on-chips by coordinating accesses from different subsystems to shared DRAMs. The main challenge of MPMC design is optimize quality-of-service by simultaneously satisfying different-and often competing-requirements, including bandwidth and latency. While previous works have attempted to address the challenge, the proposed solutions are heuristic and often cannot provide bandwidth and/or latency guarantees. In this paper, we propose a new technique called Credit-Borrow-and-Repay (CBR) that augments a dynamic scheduling algorithm drawn from the networking community, improving it to achieve minimum latency while preserving minimum bandwidth guarantees. Our experiments show that on typical multimedia workloads, the cache response latency can be improved as much as 2.5X.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129444540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5654090
Title: Design space exploration and performance evaluation at Electronic System Level for NoC-based MPSoC
Sören Sonntag, Francisco Gilabert Villamón
System-on-chip (SoC) design has become a common technique in the integrated-circuits industry, as it offers many advantages in cost and performance efficiency. SoCs are increasingly complex, heterogeneous, highly integrated systems comprising processors, caches, hardware accelerators, memories, peripherals, and interconnects. Modern SoCs deploy not only simple buses but also crossbars and networks-on-chip (NoCs) to connect dozens or even hundreds of modules. However, the complexity of these interconnects makes their performance difficult to evaluate, which is a potential design risk. To address this challenge, early design-space exploration is required to find appropriate system architectures among many candidates. An appropriate interconnect architecture is a fundamental outcome of these evaluations, since its latency and throughput characteristics affect the performance of all attached modules in the SoC. In this paper we show how to perform early design-space exploration using our Electronic System Level (ESL) performance-evaluation framework SystemQ. We use a heterogeneous multiprocessor SoC that features a complex NoC as its central interconnect, and based on this example we show the importance of proper abstraction in keeping simulation effort manageable.
{"title":"Design space exploration and performance evaluation at Electronic System Level for NoC-based MPSoC","authors":"Sören Sonntag, Francisco Gilabert Villamón","doi":"10.1109/ICCAD.2010.5654090","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5654090","url":null,"abstract":"System-on-Chip (SoC) has become a common design technique in the integrated circuits industry as it offers many advantages in terms of cost and performance efficiency. SoCs are increasingly complex and heterogeneous systems that are highly integrated comprising processors, caches, hardware accelerators, memories, peripherals and interconnects. Modern SoCs deploy not only simple buses but also crossbars and Networks-on-Chip (NoC) to connect dozens or even hundreds of modules. However, it is difficult to evaluate the performance of these interconnects because of their complexity. This is a potential design risk. In order to address this challenge, early design space exploration is required to find appropriate system architectures out of many candidate architectures. An appropriate interconnect architecture is a fundamental outcome of these evaluations since its latency and throughput characteristics affect the performance of all attached modules in the SoC. In this paper we show how to perform early design space exploration using our Electronic System Level (ESL) performance evaluation framework SystemQ. We use a heterogeneous MultiProcessor SoC that features a complex NoC as a central interconnect. Based on this example we show the importance of proper abstraction in order to keep simulation efforts manageable.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130094050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5653787
Title: Design of large area electronics with organic transistors
M. Takamiya, K. Ishida, T. Sekitani, T. Someya, T. Sakurai
Organic electronics is attracting considerable attention for large-area pervasive electronics applications, because organic transistors can be fabricated using printing technologies on arbitrary substrates, enabling both high-throughput and low-cost production. In this paper, examples of large-area electronics built with organic transistors, including a wireless power-transmission sheet and a communication sheet, are presented. Challenges for future large-area electronics are also described.
{"title":"Design of large area electronics with organic transistors","authors":"M. Takamiya, K. Ishida, T. Sekitani, T. Someya, T. Sakurai","doi":"10.1109/ICCAD.2010.5653787","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5653787","url":null,"abstract":"Organic electronics is attracting a lot of attention for large-area pervasive electronics applications, because organic transistors can be fabricated using printing technologies on arbitrary substrates and this enables both high-throughput and low-cost production. In this paper, some examples of the large area electronics with the organic transistors including a wireless power transmission sheet and a communication sheet are presented. Challenges for future large area electronics are also described.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130839693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-11-07 | DOI: 10.1109/ICCAD.2010.5653921
Title: Characterization of the worst-case current waveform excitations in general RLC-model power grid analysis
N. Evmorfopoulos, M. Rammou, G. Stamoulis, J. Moondanos
Validating the robustness of power distribution in modern IC design is a crucial but very difficult problem, due to the vast number of possible working modes and the high operating frequencies, which necessitate modeling the power grid as a general RLC network. In this paper we provide a characterization of the worst-case current waveform excitations that produce the maximum voltage drop among all possible working modes of the IC. In addition, we give a practical methodology for estimating these worst-case excitations from a sample of the excitation space acquired via plain circuit simulation. In the course of characterizing the worst-case excitations, we also establish that the voltage-drop function for RLC grid models has nonnegative coefficients, which had previously been an open problem.
{"title":"Characterization of the worst-case current waveform excitations in general RLC-model power grid analysis","authors":"N. Evmorfopoulos, M. Rammou, G. Stamoulis, J. Moondanos","doi":"10.1109/ICCAD.2010.5653921","DOIUrl":"https://doi.org/10.1109/ICCAD.2010.5653921","url":null,"abstract":"Validating the robustness of power distribution in modern IC design is a crucial but very difficult problem, due to the vast number of possible working modes and the high operating frequencies which necessitate the modeling of power grid as a general RLC network. In this paper we provide a characterization of the worst-case current waveform excitations that produce the maximum voltage drop among all possible working modes of the IC. In addition, we give a practical methodology to estimate these worst-case excitations on the basis of a sample of the excitation space acquired via plain circuit simulation. In the course of characterizing the worst-case excitations we also establish that the voltage drop function for RLC grid models has nonnegative coefficients, which has been an open problem so far.","PeriodicalId":344703,"journal":{"name":"2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131016464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}