A Design Method for Programmable Two-Variable Discrete Function Generators Using Spline and Bilinear Interpolations
S. Nakano, Yoichi Wakaba, Shinobu Nagayama, S. Wakabayashi (doi:10.1109/DSD.2011.94)
This paper presents a design method for programmable two-variable discrete (real-valued) function generators based on piecewise polynomial approximation. To approximate a given discrete function by polynomials efficiently, we propose a hybrid approximation method that uses both spline and bilinear interpolations. The proposed method can significantly reduce the memory size needed to implement a two-variable discrete function by accepting a small approximation error, and it can therefore be used to explore the design space while taking the trade-off between memory size and approximation error into account. Experimental results show that the proposed design method reduces memory size by 75% without losing circuit speed when only 1% error is accepted, and that the circuits designed with the method achieve about 650 times greater throughput than their software counterparts. Such compact and fast function generators can be synthesized automatically with the proposed design method.
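
The bilinear half of such a hybrid scheme can be pictured with a small sketch: given a two-variable function sampled on a uniform grid, a value between grid points is reconstructed from the four surrounding samples. This is only a generic illustration of bilinear interpolation under an assumed uniform grid, not the paper's hybrid spline/bilinear generator; all names are ours.

```python
# Generic bilinear interpolation over a uniform 2-D sample grid.
# Illustration of the bilinear-interpolation building block only, not the
# paper's hybrid spline/bilinear method; uniform grid and names are assumed.

def bilinear(table, x, y, x0, y0, dx, dy):
    """Interpolate f(x, y) from samples table[i][j] = f(x0 + i*dx, y0 + j*dy)."""
    i = int((x - x0) // dx)
    j = int((y - y0) // dy)
    i = max(0, min(i, len(table) - 2))        # clamp to the sampled domain
    j = max(0, min(j, len(table[0]) - 2))
    tx = (x - (x0 + i * dx)) / dx             # fractional position in cell, 0..1
    ty = (y - (y0 + j * dy)) / dy
    f00, f10 = table[i][j],     table[i + 1][j]
    f01, f11 = table[i][j + 1], table[i + 1][j + 1]
    # Blend the four surrounding samples.
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty     + f11 * tx * ty)
```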

SoC and Board Modeling for Processor-Centric Board Testing
A. Tsertov, R. Ubar, A. Jutman, S. Devadze (doi:10.1109/DSD.2011.79)
Many contemporary electronic systems are based on Systems-on-Chip (SoCs), such as micro-controllers or signal processors, that communicate with many peripheral devices on the system board and beyond. While SoC test was a topic of very high interest during the last decade, test beyond the SoC has received little attention since the introduction of Boundary Scan (BS) 30 years ago. It is no surprise that the limited capabilities of BS with respect to modern challenges such as dynamic (timing-accurate), at-speed and high-speed testing, as well as in-system programming, create considerable trouble for test engineers in production environments. In this paper, we point out particular challenges in testing the system infrastructure beyond the SoCs and propose a general modeling methodology for test automation on microprocessor SoC-based system boards. The new, so-called "Lego-style" test automation methodology forms a complementary solution to traditional boundary scan. Together, they provide extended fault coverage that targets shorts, opens and stuck-at faults as well as dynamic faults (e.g., delay and transition faults). The "Lego-style" model drastically reduces the labour effort once the library of model components has been created.

Compatibility Study of Compile-Time Optimizations for Power and Reliability
G. Nazarian, C. Strydis, G. Gaydadjiev (doi:10.1109/DSD.2011.108)
Historically, compiler optimizations have been used mainly to improve embedded-system performance. However, for a wide range of today's power-restricted, battery-operated embedded devices, power consumption has become a crucial problem that modern compilers must address. Biomedical implants are a good example of such embedded systems. In addition to power, such devices must also satisfy high reliability levels. Therefore, performance, power and reliability optimizations should all be considered when designing and programming implantable systems. Various software optimizations, e.g. during compilation, can provide the means to achieve this goal. Additionally, the system can be configured to trade off between the above three factors based on the specific application requirements. In this paper we categorize previous work on compiler optimizations for low power and fault tolerance. Our study considers differences in instruction count, memory overhead, fault coverage and hardware modifications. Finally, the compatibility of methods from the two optimization classes is assessed; five compatible pairs that can be combined with few or no limitations have been identified.

Modular Fault Injector for Multiple Fault Dependability and Security Evaluations
J. Grinschgl, Armin Krieg, C. Steger, R. Weiss, H. Bock, J. Haid (doi:10.1109/DSD.2011.76)
The increasing level of integration and the decreasing size of circuit elements lead to greater probabilities of operational faults. More sensitive electronic devices are also more prone to external influences such as energizing radiation. Moreover, natural causes of faults are not the only concern of today's chip designers: smart cards in particular are exposed to complex attacks in which an adversary tries to extract knowledge from a secured system by putting it into an undefined state. These problems make it increasingly necessary to test a new design for fault robustness. Several previous publications propose single-bit injection platforms, but the limited impact of such campaigns may not provide wide fault-attack coverage. This paper first introduces a new in-system fault injection strategy for automatic test pattern injection. Secondly, an approach is presented that abstracts the internal fault injection structures into a more generic high-level view. This abstraction supports the separation of design and test engineering tasks and enables the emulation of physical attacks at circuit level. The controller's generalized interface allows the developed controller to be reused on different systems that share the same bus system. The high level of abstraction is combined with the advantage of high-performance autonomous emulation on high-end FPGA platforms.
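
As a rough illustration of the multi-bit fault injection the abstract argues for (as opposed to single-bit campaigns), the sketch below XORs a randomly chosen fault mask into a register value. It is a generic software model of the principle only; the paper's saboteur structures, controller and bus interface are not reproduced here, and all names are assumptions.

```python
# Minimal sketch of multi-bit fault injection by XOR-ing a fault mask into a
# register value -- an illustration of the general principle, not the paper's
# fault injector design; all names are ours.
import random

def inject_fault(value, width, num_bits=1, rng=random):
    """Flip num_bits randomly chosen bits of a width-bit register value."""
    positions = rng.sample(range(width), num_bits)
    mask = 0
    for p in positions:
        mask |= 1 << p
    return value ^ mask, mask

# Example: inject a double-bit fault into an 8-bit register.
faulty, mask = inject_fault(0b10110011, width=8, num_bits=2)
```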

HMMER Performance Model for Multicore Architectures
S. Isaza, Ernst Houtgast, G. Gaydadjiev (doi:10.1109/DSD.2011.111)
Exponential growth in biological sequence data, combined with the computationally intensive nature of bioinformatics applications, results in a continuously rising demand for processing power. In this paper we propose a performance model that captures the behavior and performance scalability of HMMER, a bioinformatics application that identifies similarities between protein sequences and a protein family model. With our analytical model, the optimal master-worker ratio for a given user scenario can be estimated. The model is evaluated and found to be accurate, with less than 2% error. We applied the model to a widely used heterogeneous multicore, the Cell BE, using the PPE as master and the SPEs as workers. Experimental results show that, for the current parallelization strategy, the I/O speed at which the database is read from disk and the pre-processing of the inputs are the two most limiting factors on the Cell BE.
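
The flavor of such an analytical model can be conveyed with a toy master-worker throughput formula: throughput grows linearly with the number of workers until the single master (dispatch, pre-processing, I/O) saturates. The sketch below is our own generic illustration, not the HMMER model or the equations from the paper.

```python
# Toy master-worker scaling model: one master spends t_master seconds per work
# unit (dispatch + pre-processing), each worker spends t_worker seconds per
# unit. Generic illustration only; not the paper's actual model.

def throughput(n_workers, t_master, t_worker):
    """Work units per second with n_workers workers and one master."""
    return min(n_workers / t_worker,   # compute-bound region
               1.0 / t_master)         # master/I-O-bound region

def optimal_workers(t_master, t_worker):
    """Smallest worker count at which the master saturates."""
    return max(1, round(t_worker / t_master))

# Example: 0.2 ms dispatch per sequence, 1.6 ms compute -> ~8 workers suffice.
print(optimal_workers(t_master=0.0002, t_worker=0.0016))
```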

On the Cascade Implementation of Multiple-Output Sparse Logic Functions
V. Dvorák, P. Mikusek (doi:10.1109/DSD.2011.8)
The representation of multiple-output logic functions by Multi-Terminal Binary Decision Diagrams (MTBDDs) is studied for the useful class of sparse logic functions, specified by their number of true minterms. This paper derives upper bounds on the MTBDD width, which determines the size of the look-up tables (LUTs) needed for hardware realization of these functions in FPGA logic synthesis. The obtained bounds generalize similar known bounds for single-output logic functions. Finally, a procedure for finding the optimum mapping of an MTBDD to a LUT cascade is presented and illustrated on a set of benchmarks.
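
For readers unfamiliar with LUT cascades, the sketch below shows the evaluation structure such a mapping produces: each stage LUT consumes the previous stage's intermediate "rail" bits plus the next group of input variables. The stage contents, which in the paper would be derived from the MTBDD, are left abstract here; the code is illustrative only.

```python
# Sketch of evaluating a multiple-output function with a LUT cascade: each
# stage consumes the previous stage's "rail" value plus the next group of
# input bits. Stage contents are abstract (they would come from an MTBDD
# decomposition); illustrative only.

def eval_lut_cascade(stages, input_groups):
    """stages[k] maps (rails, x_group) -> rails; the last stage's result is
    the multiple-output function value."""
    rails = 0                       # initial rail value
    for lut, x in zip(stages, input_groups):
        rails = lut[(rails, x)]     # one LUT lookup per stage
    return rails

# Width-bound intuition: if an MTBDD level has at most W distinct
# sub-functions, ceil(log2(W)) rail bits suffice between adjacent stages.
```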

LastZ: An Ultra Optimized 3D Networks-on-Chip Architecture
A. Rahmani, P. Liljeberg, J. Plosila, H. Tenhunen (doi:10.1109/DSD.2011.26)
3D IC technology enables NoC architectures to offer greater device integration and shorter inter-layer interconnects. Early 3D NoC architectures, such as the Symmetric 3D Mesh NoC, could not exploit the negligible inter-layer distance of 3D chips. To cope with this, the 3D NoC-Bus Hybrid architecture was proposed, a hybrid between a packet-switched network and a bus. This architecture provides both performance and area benefits, but it still suffers from a naive and straightforward hybridization of the NoC and bus media. In this paper, an ultra-optimized hybridization scheme is proposed to improve the performance, power consumption, area and thermal behavior of the 3D NoC-Bus Hybrid Mesh. The scheme relies on a rule called LastZ, which enables aggressive optimization of the inter-layer communication architecture. In addition, we present a wrapper that preserves backward compatibility of the proposed architecture with existing network interfaces. To estimate the efficiency of the proposed architecture, the system has been simulated using uniform, hotspot 10%, and Negative Exponential Distribution (NED) traffic patterns. Our extensive simulations demonstrate significant area, power, and performance improvements compared to a typical 3D NoC-Bus Hybrid Mesh architecture.
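
As a purely illustrative reading of a "Z-last" routing rule in a mesh-plus-vertical-bus topology, the sketch below routes the X and Y hops inside a layer first and takes the inter-layer bus as the final, single hop. The paper's actual LastZ rule and router design are not specified in the abstract, so everything in this sketch is an assumption.

```python
# Hedged sketch of dimension-ordered routing in a 3D mesh-plus-bus NoC where
# the vertical (Z) hop is taken last over the inter-layer bus. Our reading of
# what a "Z-last" rule could look like; the paper's algorithm may differ.

def next_hop(cur, dst):
    """cur and dst are (x, y, z) router coordinates."""
    x, y, z = cur
    dx, dy, dz = dst
    if x != dx:                      # route X first within the layer
        return (x + (1 if dx > x else -1), y, z)
    if y != dy:                      # then Y within the layer
        return (x, y + (1 if dy > y else -1), z)
    if z != dz:                      # finally Z: one bus transaction reaches
        return (x, y, dz)            # the destination layer directly
    return cur                       # arrived
```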

HDL-Mutation Based Simulation Data Generation by Propagation Guided Search
Tao Xie, W. Müller, Florian Letombe (doi:10.1109/DSD.2011.83)
HDL-mutation-based fault injection and analysis is considered an important coverage metric for measuring the quality of design simulation processes [20, 3, 1, 2]. In this work we address the problem of automatically generating simulation data that targets HDL mutation faults. We follow a search-based approach and eliminate the need for the symbolic execution and mathematical constraint solving required by existing work. An objective cost function is defined on the test input space and guides the search for fault-detecting test data. This is done by first mapping the simulation traces under a test onto a control and data flow graph extracted from the design. The progress of fault detection can then be measured quantitatively on this graph as the cost value; by minimizing this cost we approach the target test data. The effectiveness of the cost function is investigated with an example neighborhood search scheme. A case study with a floating-point arithmetic IP design shows that the cost function effectively guides the search procedure towards a fault-detecting test. The cost calculation time, i.e. the search overhead, was also observed to be minor compared to the actual design simulation time.
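
The neighborhood-search component can be pictured with a generic hill-climbing loop that repeatedly moves to the cheapest neighboring test and restarts randomly when stuck; a cost of zero would mean the mutant is detected. The cost function itself, which in the paper scores propagation progress on the control/data flow graph, is abstracted into a callable here; the names and the restart policy are ours, not the paper's.

```python
# Generic neighborhood (hill-climbing) search minimizing a cost function over
# test inputs. cost() would run a simulation and score fault-propagation
# progress; here it is an abstract callable. Illustration only.
import random

def neighborhood_search(initial, neighbors, cost, max_iters=1000):
    """Return the lowest-cost test found; cost 0 means the mutant is detected."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    for _ in range(max_iters):
        if best_cost == 0:
            break                                  # fault-detecting test found
        cand = min(neighbors(current), key=cost)   # cheapest neighbouring test
        cand_cost = cost(cand)
        if cand_cost < current_cost:
            current, current_cost = cand, cand_cost
        else:                                      # local minimum: random restart
            current = random.choice(list(neighbors(current)))
            current_cost = cost(current)
        if current_cost < best_cost:
            best, best_cost = current, current_cost
    return best, best_cost
```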

Rapid and Accurate Leakage Power Estimation for Nano-CMOS Circuits
M. Bryk, L. Józwiak, W. Kuzmicz (doi:10.1109/DSD.2011.92)
This paper addresses the crucial problem of static power reduction for circuits implemented in nano-CMOS technologies. Solving it requires accurate and rapid power estimation, but the known power simulators are not both accurate and fast. The paper proposes and discusses a new, rapid and very accurate leakage power estimation method and the related simulator. The maximum estimation error of the simulator is within 5%, the average error is only 0.57%, and run-times are in the range of seconds, whereas HSPICE runs for hours or days on the same circuits.
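
Fast, table-based leakage estimation of the general kind hinted at here can be sketched as a weighted sum of pre-characterized, state-dependent gate leakages. The paper's actual models and calibration are not described in the abstract, so the code below is a generic illustration only, with hypothetical data structures.

```python
# Toy state-dependent leakage estimate: sum over all gates of the characterized
# leakage for each input state, weighted by how often that state occurs.
# Generic illustration of table-based estimation, not the paper's method.

def leakage_estimate(gates, state_probs, leakage_table):
    """gates: list of (gate_type, gate_id); state_probs[gate_id][state] is the
    probability of an input state; leakage_table[gate_type][state] is the
    characterized leakage current (A) for that state."""
    total = 0.0
    for gate_type, gate_id in gates:
        for state, prob in state_probs[gate_id].items():
            total += prob * leakage_table[gate_type][state]
    return total   # expected static current in amperes
```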

Architectures for Fast Modular Multiplication
Ahmet Aris, S. Yalcin, G. Saldamli (doi:10.1109/DSD.2011.60)
Modular multiplication is the key ingredient needed to realize most public-key cryptographic primitives. In a modular setting, multiplications are carried out in two steps: a usual integer multiplication followed by a reduction step. Progress in either step naturally improves modular multiplication, but it is not generally possible to interleave the best algorithms for the two stages. In this study, we propose architectures for the recently proposed method of interleaving the Karatsuba-Ofman multiplier with bipartite modular reduction at the uppermost layer of the Karatsuba-Ofman recursion. We obtain a high-performance modular multiplication architecture by combining a fast multiplication method with a parallel reduction method.
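
For reference, one level of the Karatsuba-Ofman recursion (the multiplier half of the interleaved scheme) computes a double-width product from three half-width multiplications; the bipartite reduction half is not shown. The sketch below is a plain software illustration of the classic identity, not the proposed hardware architecture.

```python
# One level of Karatsuba-Ofman multiplication: a 2n-bit product from three
# n-bit multiplications instead of four. Software illustration only; the
# bipartite modular reduction step is omitted.

def karatsuba_top_level(a, b, n):
    """Multiply a and b (each < 2**(2*n)) by splitting them into n-bit halves."""
    mask = (1 << n) - 1
    a_lo, a_hi = a & mask, a >> n
    b_lo, b_hi = b & mask, b >> n
    lo = a_lo * b_lo                               # low-half product
    hi = a_hi * b_hi                               # high-half product
    mid = (a_lo + a_hi) * (b_lo + b_hi) - lo - hi  # cross terms via one multiply
    return lo + (mid << n) + (hi << (2 * n))

assert karatsuba_top_level(123456789, 987654321, 16) == 123456789 * 987654321
```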