Interconnect parasitic extraction in the digital IC design methodology
M. Kamon, S. McCormick, K. Sheperd
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810653 | Pages: 223-230
Accurate interconnect analysis has become essential not only for post-layout verification but also for synthesis. This tutorial explores interconnect analysis and extraction methodology on three levels: coarse extraction to guide synthesis, detailed extraction for full-chip analysis, and full 3D analysis for critical nets. We will also describe the electrical issues caused by parasitics and how they have been, and will be, influenced by changing technology. The importance of model order reduction will be described, as well as synthesis-stage methodologies for avoiding parasitic problems.
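As a concrete illustration of the first level (coarse extraction to guide synthesis), the sketch below computes lumped wire resistance and capacitance from simple geometric formulas and an Elmore-style RC delay. The technology numbers, the parallel-plate-plus-fringe capacitance model, and the single lumped pi segment are illustrative assumptions, not values or models taken from the tutorial.

```python
# Coarse (pre-layout) interconnect parasitic estimate: lumped R, C, and an
# Elmore-style delay for one wire driven by a gate with output resistance
# r_drv into a load c_load.  All technology numbers are illustrative.

def wire_resistance(length_um, width_um, sheet_res_ohm_sq=0.08):
    """Lumped resistance from sheet resistance (ohms per square)."""
    return sheet_res_ohm_sq * length_um / width_um

def wire_capacitance(length_um, width_um,
                     area_cap_af_um2=40.0, fringe_cap_af_um=50.0):
    """Lumped capacitance: parallel-plate (area) term plus fringe term, in fF."""
    area = area_cap_af_um2 * length_um * width_um      # aF
    fringe = fringe_cap_af_um * 2.0 * length_um        # aF, both edges
    return (area + fringe) * 1e-3                      # fF

def elmore_delay_ps(r_drv_ohm, length_um, width_um, c_load_ff):
    """0.69*RC delay with the wire modeled as a single lumped pi segment."""
    r_w = wire_resistance(length_um, width_um)
    c_w = wire_capacitance(length_um, width_um)
    # Driver resistance sees all capacitance; wire resistance sees the far
    # half of the wire capacitance plus the load.
    tau_fs = r_drv_ohm * (c_w + c_load_ff) + r_w * (c_w / 2.0 + c_load_ff)
    return 0.69 * tau_fs * 1e-3   # ohm * fF = fs, converted to ps

if __name__ == "__main__":
    print("%.1f ps" % elmore_delay_ps(r_drv_ohm=1000.0, length_um=500.0,
                                      width_um=0.5, c_load_ff=20.0))
```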
{"title":"Interconnect parasitic extraction in the digital IC design methodology","authors":"M. Kamon, S. McCormick, K. Sheperd","doi":"10.1109/ICCAD.1999.810653","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810653","url":null,"abstract":"Accurate interconnect analysis has become essential not only for post-layout verification but also for synthesis. This tutorial explores interconnect analysis and extraction methodology on three levels: coarse extraction to guide synthesis, detailed extraction for full-chip analysis, and full 3D analysis for critical nets. We will also describe the electrical issues caused by parasitics and how they have, and will be, influenced by changing technology. The importance of model order reduction will be described as well as methodologies at the synthesis stage for avoiding parasitic problems.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"1 1","pages":"223-230"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79942624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attractor-repeller approach for global placement
H. Etawil, S. Areibi, A. Vannelli
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810613 | Pages: 20-24
Traditionally, analytic placement has used linear or quadratic wirelength objective functions. Minimizing either objective pulls cells that share common signals (nets) together, and the result is a placement with a great deal of overlap among the cells. To reduce cell overlap, the traditional methodology iterates between global optimization and repartitioning of the placement area. In this work, we add new attractive and repulsive forces to the formulation so that overlap among cells is diminished without repartitioning the placement area. The superiority of our approach stems from the fact that our new formulations are convex and no hard constraints are required. A preliminary version of the new placement method is tested on a set of MCNC benchmarks; on average, it achieves reductions of 3.96% in wirelength and 7.6% in CPU time compared to TimberWolf v7.0 in hierarchical mode.
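To make the attractor-repeller idea concrete, here is a minimal sketch of a quadratic ("attractor") wirelength objective augmented with a smooth pairwise repulsion term, minimized by plain gradient descent on a 1-D toy. The Gaussian repeller, all constants, and the pinned reference cell are illustrative assumptions; unlike the paper's formulation, this toy objective is not convex.

```python
import math
import random

# Toy 1-D attractor-repeller placement.  Nets pull connected cells together
# (quadratic attractor wirelength); a smooth pairwise penalty, largest when
# two cells coincide, pushes them apart (the repeller), so overlap is
# discouraged without repartitioning the placement area.

CELLS = 6
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]   # 2-pin nets
SPACING = 1.0      # target center-to-center distance
ALPHA = 0.5        # weight of the repulsive term

def gradient(x):
    g = [0.0] * CELLS
    for i, j in NETS:                       # attractor: sum of (x_i - x_j)^2
        d = x[i] - x[j]
        g[i] += 2.0 * d
        g[j] -= 2.0 * d
    for i in range(CELLS):                  # repeller: ALPHA * exp(-(d/S)^2)
        for j in range(i + 1, CELLS):
            d = x[i] - x[j]
            r = ALPHA * math.exp(-(d / SPACING) ** 2) * (-2.0 * d / SPACING ** 2)
            g[i] += r
            g[j] -= r
    return g

def place(steps=3000, lr=0.02):
    random.seed(0)
    x = [random.uniform(0.0, 1.0) for _ in range(CELLS)]
    for _ in range(steps):
        g = gradient(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
        x[0] = 0.0                          # pin one cell to anchor the layout
    return x

if __name__ == "__main__":
    print([round(v, 2) for v in place()])
```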
{"title":"Attractor-repeller approach for global placement","authors":"H. Etawil, S. Areibi, A. Vannelli","doi":"10.1109/ICCAD.1999.810613","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810613","url":null,"abstract":"Traditionally, analytic placement has used linear or quadratic wirelength objective functions. Minimizing either formulation attracts cells sharing common signals (nets) together. The result is a placement with a great deal of overlap among the cells. To reduce cell overlap, the methodology iterates between global optimization and repartitioning of the placement area. In this work, we added new attractive and repulsive forces to the traditional formulation so that overlap among cells is diminished without repartitioning the placement area. The superiority of our approach stems from the fact that our new formulations are convex and no hard constraints are required. A preliminary version of the new placement method is tested using a set of MCNC benchmarks and, on average, the new method achieved 3.96% and 7.6% reduction in wirelength and CPU time compared to TimberWolf v7.0 in the hierarchical mode.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"90 1","pages":"20-24"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84532256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analytical macromodeling for high-level power estimation
G. Bernacchia, M. Papaefthymiou
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810662 | Pages: 280-283
This paper presents a new macromodeling technique for high-level power estimation. Our technique is based on a parameterizable analytical model that relies exclusively on statistical information of the circuit's primary inputs. During estimation, the statistics of the required metrics are extracted from the input stream, and a power estimate is obtained by evaluating a model function that has been characterized in advance. Our model yields power estimates within seconds, because it does not rely on the statistics of the circuit's primary outputs and, consequently, does not perform any simulation during estimation. Moreover, it achieves better accuracy than previous macromodeling approaches by taking into account both spatial and temporal correlations in the input stream. In experiments with the ISCAS-85 combinational circuits, the average absolute relative error of our power macromodeling technique was at most 1.8%. The worst-case error was at most 12.8%. For a ripple-carry adder family, in comparison with power estimates that were obtained using Spice, the average absolute and worst-case errors of our model's estimates were at most 5.1% and 19.8%, respectively. In addition to power dissipation, our macromodeling technique can be used to estimate the statistics of a circuit's primary outputs with very low average errors. It is thus suitable for power estimation in core-based systems with pre-characterized blocks. Once the metrics of the primary inputs are known, the power dissipation of the entire system can be estimated by simply propagating this information through the blocks using their corresponding model functions.
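The sketch below illustrates the general flow of an input-statistics-driven macromodel: extract signal probability, transition density, and a crude spatial-correlation measure from the input stream, then evaluate a model function that was characterized in advance. The metric set and the coefficients are hypothetical placeholders, not the paper's model function or characterization procedure.

```python
# Input-statistics power macromodel, illustrative only.  The metrics and the
# (made-up) fitted coefficients stand in for a model function characterized
# in advance against gate-level or Spice simulations.

def input_metrics(stream):
    """stream: list of equal-width bit-vectors given as '0'/'1' strings."""
    n_bits = len(stream[0])
    n_vec = len(stream)
    # Average signal probability across all input bits.
    p_avg = sum(int(v[b]) for v in stream for b in range(n_bits)) / (n_vec * n_bits)
    # Transition density: average toggles per bit per consecutive vector pair.
    toggles = sum(v1[b] != v2[b]
                  for v1, v2 in zip(stream, stream[1:])
                  for b in range(n_bits))
    density = toggles / ((n_vec - 1) * n_bits)
    # Crude spatial correlation: agreement rate of adjacent input bits.
    agree = sum(v[b] == v[b + 1] for v in stream for b in range(n_bits - 1))
    corr = agree / (n_vec * (n_bits - 1))
    return p_avg, density, corr

def power_estimate(stream, coeffs=(0.12, 0.85, 2.4, -0.3)):
    """Evaluate a pre-characterized (here: hypothetical) model function, in mW."""
    c0, c1, c2, c3 = coeffs
    p, d, s = input_metrics(stream)
    return c0 + c1 * p + c2 * d + c3 * s

if __name__ == "__main__":
    vectors = ["0000", "0101", "0111", "0010", "1010", "1011"]
    print("estimated power: %.3f mW" % power_estimate(vectors))
```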
{"title":"Analytical macromodeling for high-level power estimation","authors":"G. Bernacchia, M. Papaefthymiou","doi":"10.1109/ICCAD.1999.810662","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810662","url":null,"abstract":"This paper presents a new macromodeling technique for high-level power estimation. Our technique is based on a parameterizable analytical model that relies exclusively on statistical information of the circuit's primary inputs. During estimation, the statistics of the required metrics are extracted from the input stream, and a power estimate is obtained by evaluating a model function that has been characterized in advance. Our model yields power estimates within seconds, because it does not rely on the statistics of the circuit's primary outputs and, consequently, does not perform any simulation during estimation. Moreover, it achieves better accuracy than previous macromodeling approaches by taking into account both spatial and temporal correlations in the input stream. In experiments with the ISCAS-85 combinational circuits, the average absolute relative error of our power macromodeling technique was at most 1.8%. The worst-case error was at most 12.8%. For a ripple-carry adder family, in comparison with power estimates that were obtained using Spice, the average absolute and worst-case errors of our model's estimates were at most 5.1% and 19.8%, respectively. In addition to power dissipation, our macromodeling technique can be used to estimate the statistics of a circuit's primary outputs with very low average errors. It is thus suitable for power estimation in core-based systems with pre-characterized blocks. Once the metrics of the primary inputs are known, the power dissipation of the entire system can be estimated by simply propagating this information through the blocks using their corresponding model functions.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"25 1","pages":"280-283"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84980800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formulation of static circuit optimization with reduced size, degeneracy and redundancy by timing graph manipulation
C. Visweswariah, A. Conn
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810656 | Pages: 244-251
Static circuit optimization implies sizing of transistors and wires on a static timing basis, taking into account all paths through a circuit. Previous methods of formulating static circuit optimization produced problem statements that are very large and contain inherent redundancy and degeneracy. In this paper, a method of manipulating the timing formulation is presented that produces a dramatically more compact optimization problem and reduces redundancy and degeneracy. The circuit optimization is therefore more efficient and effective. Numerical results demonstrating these improvements are presented.
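As background for why the formulation matters, the toy below contrasts a path-based view of static timing (one constraint per source-to-sink path) with the common node-based view (one arrival-time variable per node and one constraint per edge) on a small DAG with fixed delays. The graph and delays are made up, and the paper's graph manipulations for removing redundancy and degeneracy are not reproduced here.

```python
# Toy timing graph.  A path-based formulation of "meet timing on all paths"
# needs one constraint per source-to-sink path (potentially exponential); the
# usual node-based alternative introduces an arrival-time variable per node
# and one constraint per edge: arrival(v) >= arrival(u) + delay(u, v).
EDGES = {
    ("in", "a"): 1.0, ("in", "b"): 2.0,
    ("a", "c"): 1.5, ("b", "c"): 0.5,
    ("a", "d"): 2.0, ("b", "d"): 1.0,
    ("c", "out"): 1.0, ("d", "out"): 2.5,
}
TOPO = ["in", "a", "b", "c", "d", "out"]   # a topological order of the DAG

def succ(u):
    return [(v, d) for (x, v), d in EDGES.items() if x == u]

def arrival_times(source="in"):
    """Worst-case (longest-path) arrival times via the edge recurrence."""
    at = {source: 0.0}
    for u in TOPO:
        for v, d in succ(u):
            at[v] = max(at.get(v, float("-inf")), at[u] + d)
    return at

def count_paths(u="in", sink="out"):
    return 1 if u == sink else sum(count_paths(v, sink) for v, _ in succ(u))

if __name__ == "__main__":
    print("edge constraints :", len(EDGES))
    print("paths to cover   :", count_paths())
    print("arrival at 'out' :", arrival_times()["out"])
```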
{"title":"Formulation of static circuit optimization with reduced size, degeneracy and redundancy by timing graph manipulation","authors":"C. Visweswariah, A. Conn","doi":"10.1109/ICCAD.1999.810656","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810656","url":null,"abstract":"Static circuit optimization implies sizing of transistors and wires on a static timing basis, taking into account all paths through a circuit. Previous methods of formulating static circuit optimization produced problem statements that are very large and contain inherent redundancy and degeneracy. In this paper, a method of manipulating the timing formulation is presented which produces a dramatically more compact optimization problem, and reduces redundancy and degeneracy. The circuit optimization is therefore more efficient and effective. Numerical results to demonstrate these improvements are presented.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"26 1","pages":"244-251"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83069147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The associative-skew clock routing problem
Yu Chen, A. Kahng, Gang Qu, A. Zelikovsky
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810643 | Pages: 168-172
We introduce the associative skew clock routing problem, which seeks a clock routing tree such that zero skew is preserved only within identified groups of sinks. The associative skew problem is easier to address within current EDA frameworks than useful-skew (skew-scheduling) approaches, and defines an interesting tradeoff between the traditional zero-skew clock routing problem (one sink group) and the Steiner minimum tree problem (n sink groups). We present a set of heuristic building blocks, including an efficient and optimal method of merging two zero-skew trees such that zero skew is preserved within the sink sets of each tree. Finally, we list a number of open issues for research and practical application.
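The merging step that such heuristics build on can be illustrated with the classic exact zero-skew tapping-point computation under the Elmore delay model (in the style of Tsay's zero-skew algorithm): given two zero-skew subtrees and a connecting wire, solve for the tap position that equalizes the two sides' delays. The unit resistance and capacitance values below are assumptions, and the out-of-range case is only flagged rather than handled by wire snaking.

```python
# Zero-skew merge of two subtrees under the Elmore delay model.
# Subtree i is summarized by (t_i, C_i): delay from its root to its
# (zero-skew) leaves and its total downstream capacitance.  The connecting
# wire of length L has resistance r*L and capacitance c*L.  We solve for the
# tapping point x in [0, 1] (fraction of the wire on subtree 1's side) at
# which both sides see the same delay.

def zero_skew_tap(t1, c1, t2, c2, length, r_per_unit=0.1, c_per_unit=0.2):
    rl = r_per_unit * length
    cl = c_per_unit * length
    x = (t2 - t1 + rl * (c2 + cl / 2.0)) / (rl * (c1 + c2 + cl))
    if not 0.0 <= x <= 1.0:
        # The subtrees are too unbalanced for a tap on this wire; in practice
        # the wire on the faster side is elongated ("snaked") instead.
        raise ValueError("tapping point off the wire (x=%.3f): snaking needed" % x)
    # Delay from the tapping point to either side's leaves (equal by construction).
    delay = rl * x * (c_per_unit * length * x / 2.0 + c1) + t1
    # Summary of the merged subtree as seen by the next merge up the tree.
    c_merged = c1 + c2 + cl
    return x, delay, c_merged

if __name__ == "__main__":
    x, delay, cap = zero_skew_tap(t1=5.0, c1=3.0, t2=8.0, c2=2.0, length=10.0)
    print("tap at x=%.3f, merged delay %.3f, merged cap %.3f" % (x, delay, cap))
```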
{"title":"The associative-skew clock routing problem","authors":"Yu Chen, A. Kahng, Gang Qu, A. Zelikovsky","doi":"10.1109/ICCAD.1999.810643","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810643","url":null,"abstract":"We introduce the associative skew clock routing problem, which seeks a clock routing tree such that zero skew is preserved only within identified groups of sinks. The associative skew problem is easier to address within current EDA frameworks than useful-skew (skew-scheduling) approaches, and defines an interesting tradeoff between the traditional zero-skew clock routing problem (one sink group) and the Steiner minimum tree problem (n sink groups). We present a set of heuristic building blocks, including an efficient and optimal method of merging two zero-skew trees such that zero skew is preserved within the sink sets of each tree. Finally, we list a number of open issues for research and practical application.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"19 1","pages":"168-172"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88107120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Techniques for improving the efficiency of sequential circuit test generation
X. Lin, I. Pomeranz, S. Reddy
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810639 | Pages: 147-151
New techniques are presented in this paper to improve the efficiency of a test generation procedure for synchronous sequential circuits. These techniques aid the test generation procedure by reducing the search space, carrying out non-chronological backtracking, and reusing test generation effort. They have been integrated into an existing sequential test generation system, MIX, to form a new system named MIX-PLUS. Experimental results for the ISCAS-89 and ADDENDUM-93 benchmark circuits demonstrate the effectiveness of these techniques in improving fault coverage and test generation efficiency.
{"title":"Techniques for improving the efficiency of sequential circuit test generation","authors":"X. Lin, I. Pomeranz, S. Reddy","doi":"10.1109/ICCAD.1999.810639","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810639","url":null,"abstract":"New techniques are presented in this paper to improve the efficiency of a test generation procedure for synchronous sequential circuits. These techniques aid the test generation procedure by reducing the search space, carrying out non-chronological backtracking, and reusing the test generation effort. They have been integrated into an existing sequential test generation system MIX to constitute a new system, named MIX-PLUS. The experimental results for the ISCAS-89 and ADDENDUM-93 benchmark circuits demonstrate the effectiveness of these techniques in improving the fault coverage and test generation efficiency.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"23 1","pages":"147-151"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77876373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Least fixpoint approximations for reachability analysis
In-Ho Moon, J. Kukula, T. Shiple, F. Somenzi
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810618 | Pages: 41-44
Knowledge of the reachable states of a sequential circuit can dramatically speed up optimization and model checking. However, since exact reachability analysis may be intractable, approximate techniques are often preferable. H. Cho et al. (1996) presented the machine-by-machine (MBM) and frame-by-frame (FBF) methods for approximate finite state machine (FSM) traversal. FBF produces tighter upper bounds than MBM; however, it usually takes much more time and may have convergence problems. In this paper, we show that there exists a class of methods, least fixpoint approximations, that compute the same results as RFBF ("reached FBF", one of the FBF methods). We show that one member of this class, which we call "least fixpoint MBM" (LMBM), is as efficient as MBM but provably more accurate. The trade-off that existed between MBM and RFBF is therefore eliminated. LMBM can compute RFBF-quality approximations for all the large ISCAS-89 benchmark circuits in a total of less than 9000 seconds.
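To show the machine-by-machine structure at toy scale, the sketch below traverses a partitioned FSM with explicit Python sets: each submachine's reachable set is over-approximated by letting the other submachine range over its current approximation during image computation, iterating to a fixpoint. The two interacting submachines are invented for illustration, and the paper's BDD-based LMBM computation is not modeled; the toy only demonstrates why the result is a (possibly loose) over-approximation of the exact reachable set.

```python
from itertools import product

# Two interacting submachines A and B, each with states 0..3.  The exact
# product machine reaches only some (a, b) pairs; the approximate traversal
# keeps one reachable set per submachine and lets the other submachine range
# over its current approximation when computing images (MBM-style).

def next_a(a, b):
    return (a + 1) % 4 if b == 0 else a

def next_b(a, b):
    return (b + 1) % 4 if a == 3 else b

def exact_reachable(init=(0, 0)):
    reached, frontier = {init}, {init}
    while frontier:
        frontier = {(next_a(a, b), next_b(a, b)) for a, b in frontier} - reached
        reached |= frontier
    return reached

def approx_reachable(init=(0, 0)):
    ra, rb = {init[0]}, {init[1]}
    changed = True
    while changed:
        changed = False
        new_a = {next_a(a, b) for a, b in product(ra, rb)} | ra
        new_b = {next_b(a, b) for a, b in product(ra, rb)} | rb
        if new_a != ra or new_b != rb:
            ra, rb, changed = new_a, new_b, True
    return set(product(ra, rb))   # the approximation is a Cartesian product

if __name__ == "__main__":
    exact, approx = exact_reachable(), approx_reachable()
    assert exact <= approx        # always an over-approximation
    print("exact:", len(exact), "approx:", len(approx))
```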
{"title":"Least fixpoint approximations for reachability analysis","authors":"In-Ho Moon, J. Kukula, T. Shiple, F. Somenzi","doi":"10.1109/ICCAD.1999.810618","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810618","url":null,"abstract":"The knowledge of the reachable states of a sequential circuit can dramatically speed up optimization and model checking. However, since exact reachability analysis may be intractable, approximate techniques are often preferable. H. Cho et al. (1996) presented the machine-by-machine (MBM) and frame-by-frame (FBF) methods to perform approximate finite state machine (FSM) traversal. FBF produces tighter upper bounds than MBM; however, it usually takes much more time and it may have convergence problems. In this paper, we show that there exists a class of methods-least fixpoint approximations-that compute the same results as RFBF (\"reached FBF\", one of the FBF methods). We show that one member of this class, which we call \"least fixpoint MBM\" (LMBM), is as efficient as MBM, but provably more accurate. Therefore, the trade-off that existed between MBM and RFBF has been eliminated. LMBM can compute RFBF-quality approximations for all the large ISCAS-89 benchmark circuits in a total of less than 9000 seconds.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"77 8","pages":"41-44"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72620686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards true crosstalk noise analysis
Pinhong Chen, K. Keutzer
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810637 | Pages: 132-137
Accurate noise analysis is currently of significant concern in high-performance design, and the number of signals susceptible to noise effects will certainly increase as process geometries shrink. Our approach uses a combination of temporal and functional information to eliminate false transition combinations and thereby overcome insufficiencies in static noise analysis. A similar idea arises in timing analysis, where functional and timing information is used to eliminate false paths. The goal of our work is to develop an algorithm, software tool, and noise analysis flow that provide an accurate and conservative approach to noise analysis. In particular, this paper proposes an approach to identifying a pair of vectors that exercises the maximum crosstalk noise.
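The temporal half of the problem can be illustrated with a small switching-window computation: an aggressor contributes coupling noise at an alignment time only if its window contains that time, so sweeping the window endpoints and summing the couplings of the "stabbed" windows gives an upper bound that already excludes temporally infeasible combinations. The windows and coupling values below are made up, and the functional (logic-based) filtering that the paper adds is not modeled.

```python
# Worst-case alignment of aggressor switching windows on a victim net.
# Each aggressor has a switching window [lo, hi] from timing analysis and a
# coupling-noise contribution; aggressors can only add up if some time t lies
# in all of their windows.

AGGRESSORS = [
    # (name, window_lo_ns, window_hi_ns, noise_contribution_mV)
    ("agg1", 0.0, 2.0, 120.0),
    ("agg2", 1.5, 4.0,  80.0),
    ("agg3", 3.5, 6.0, 150.0),
    ("agg4", 1.8, 3.8,  60.0),
]

def worst_alignment(aggressors):
    best_t, best_noise = None, 0.0
    # The maximum of a sum of interval indicators is attained at a window
    # endpoint, so it suffices to check those candidate times.
    candidates = ({lo for _, lo, _, _ in aggressors} |
                  {hi for _, _, hi, _ in aggressors})
    for t in sorted(candidates):
        noise = sum(n for _, lo, hi, n in aggressors if lo <= t <= hi)
        if noise > best_noise:
            best_t, best_noise = t, noise
    return best_t, best_noise

if __name__ == "__main__":
    t, noise = worst_alignment(AGGRESSORS)
    print("worst alignment at t=%.2f ns: %.0f mV" % (t, noise))
```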
{"title":"Towards true crosstalk noise analysis","authors":"Pinhong Chen, K. Keutzer","doi":"10.1109/ICCAD.1999.810637","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810637","url":null,"abstract":"Accurate noise analysis is currently of significant concern to high-performance designs, and the number of signals susceptible to noise effects will certainly increase in smaller process geometries. Our approach uses a combination of temporal and functional information to eliminate false transition combinations and thereby overcome insufficiencies in static noise analysis. A similar idea arises in timing analysis where functional and timing information is used to eliminate false paths. The goal of our work is to develop an algorithm, software tool, and noise analysis flow that provide an accurate and conservative approach to noise analysis. In particular, this paper proposes an approach to identifying a pair of vectors that exercises the maximum crosstalk noise.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"38 1","pages":"132-137"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78123868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved interconnect sharing by identity operation insertion
D. Herrmann, R. Ernst
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810699 | Pages: 489-492
The paper presents an approach to reducing interconnect cost by inserting identity operations in a control and data flow graph (CDFG). Unlike previous approaches, it is based on systematic pattern analysis and automated transformation selection. The cost function controlling transformation selection is derived from statistical experiments and optimized using practical benchmarks. The results show significantly reduced interconnect cost for most register architectures and application examples.
{"title":"Improved interconnect sharing by identity operation insertion","authors":"D. Herrmann, R. Ernst","doi":"10.1109/ICCAD.1999.810699","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810699","url":null,"abstract":"The paper presents an approach to reduce interconnect cost by insertion of identity operations in a control and data flow graph (CDFG). Other than previous approaches, it is based on systematic pattern analysis and automated transformation selection. The cost function controlling transformation selection is derived with statistical experiments and is optimized using practical benchmarks. The results show significantly reduced interconnect cost for most register architectures and application examples.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"39 1","pages":"489-492"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77397071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory binding for performance optimization of control-flow intensive behaviors
K. Khouri, G. Lakshminarayana, N. Jha
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810698 | Pages: 482-488
The paper presents a memory binding algorithm for behaviors characterized by conditionals and deeply-nested loops that access memory extensively through arrays. Unlike previous works, this algorithm examines the effects of branch probabilities and allocation constraints. First, we demonstrate, through examples, the importance of incorporating branch probability and allocation constraint information when searching for a performance-efficient memory binding. We also show the interdependence of these two factors and how varying one without considering the other may greatly affect the performance of the behavior. Second, we introduce a memory binding algorithm that can examine numerous bindings by employing an efficient performance estimation procedure. The estimation procedure exploits locality of execution, an inherent characteristic of the target behaviors, which lets it assess the global impact of different bindings under the given allocation constraints. We tested our algorithm on a number of benchmarks from the parallel computing domain. A series of experiments demonstrates the algorithm's ability to produce bindings that optimize performance, meet memory allocation constraints, and adapt to different resource constraints and branch probabilities. Results show that the algorithm requires 37% fewer memories with a performance loss of only 0.3% when compared to a parallel memory architecture. When compared to the best of a series of random memory bindings, the algorithm improves schedule performance by 21%.
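A minimal sketch of the probability-weighted, allocation-constrained view: each array's expected access count is obtained by weighting its accesses with the branch probabilities of the blocks that contain them, and arrays are then greedily bound to a fixed number of single-port memories so the heaviest expected traffic is spread out. The data, the greedy rule, and the single-port assumption are illustrative; the paper's estimation procedure and search are considerably more involved.

```python
# Branch-probability-weighted memory binding under an allocation constraint.
# Each array lists (basic-block branch probability, accesses in that block);
# expected accesses weight the binding decision.  Arrays bound to the same
# single-port memory serialize, so each array (heaviest first) goes on the
# currently least-loaded memory -- an LPT-style greedy heuristic.

N_MEMORIES = 2   # allocation constraint

ARRAYS = {
    # name: [(branch_probability, accesses_per_iteration), ...]
    "A": [(0.9, 40), (0.1, 5)],
    "B": [(0.9, 10)],
    "C": [(0.5, 30), (0.5, 30)],
    "D": [(0.1, 80)],
}

def expected_accesses(profile):
    return sum(p * n for p, n in profile)

def bind(arrays, n_mem):
    load = [0.0] * n_mem                 # expected accesses per memory
    binding = {}
    order = sorted(arrays, key=lambda a: expected_accesses(arrays[a]), reverse=True)
    for name in order:
        m = min(range(n_mem), key=lambda i: load[i])
        binding[name] = m
        load[m] += expected_accesses(arrays[name])
    return binding, load

if __name__ == "__main__":
    binding, load = bind(ARRAYS, N_MEMORIES)
    print("binding:", binding)
    print("expected accesses per memory:", [round(x, 1) for x in load])
    # Under the single-port model, max(load) approximates the memory-limited
    # schedule length; rerun with a larger N_MEMORIES to see the tradeoff.
```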
{"title":"Memory binding for performance optimization of control-flow intensive behaviors","authors":"K. Khouri, G. Lakshminarayana, N. Jha","doi":"10.1109/ICCAD.1999.810698","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810698","url":null,"abstract":"The paper presents a memory binding algorithm for behaviors that are characterized by the presence of conditionals and deeply-nested loops that access memory extensively through arrays. Unlike previous works, this algorithm examines the effects of branch probabilities and allocation constraints. First, we demonstrate through examples, the importance of incorporating branch probabilities and allocation constraint information when searching for a performance-efficient memory binding. We also show the interdependence of these two factors and how varying one without considering the other may greatly affect the performance of the behavior. Second, we introduce a memory binding algorithm that has the ability to examine numerous bindings by employing an efficient performance estimation procedure. The estimation procedure exploits locality of execution, which is an inherent characteristic of target behaviors. This enables the performance estimation technique to look at the global impact of the different bindings, given the allocation constraints. We tested our algorithm using a number of benchmarks from the parallel computing domain. A series of experiments demonstrates the algorithm's ability to produce bindings that optimize performance, meet memory allocation constraints, and adapt to different resource constraints and branch probabilities. Results show that the algorithm requires 37% fewer memories with a performance loss of only 0.3% when compared to a parallel memory architecture. When compared to the best of a series of random memory bindings, the algorithm improves schedule performance by 21%.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"1996 1","pages":"482-488"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75667080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}