MODD for CF: a representation for fast evaluation of multiple-output functions
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431237
T. Rajaprabhu, Ashutosh Kumar Singh, A. Jabir, D. Pradhan
Recently, a mathematical framework was presented that bridges the gap between bit-level BDD representations and word-level representations such as BMDs and TEDs. Here we present an approach demonstrating that these diagrams admit fast evaluation of multiple-output circuits. The representation is based on the characteristic function, which provides both faster evaluation and a compact representation. Average path length is used as the metric for evaluation time. Results on benchmark circuits show fewer nodes and faster evaluation times than the binary representation.
{"title":"MODD for CF: a representation for fast evaluation of multiple-output functions","authors":"T. Rajaprabhu, Ashutosh Kumar Singh, A. Jabir, D. Pradhan","doi":"10.1109/HLDVT.2004.1431237","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431237","url":null,"abstract":"Recently a mathematical framework was presented that bridges the gap between bit level BDD representation and word level representations such as BMD and TED. Here we present an approach that demonstrates that these diagrams admit fast evaluation of circuits for multiple outputs. The representation is based on characteristic function which provides faster evaluation time as well as compact representation. The average path length is used as a metric for evaluation time. The results obtained for benchmark circuits shows lesser number of nodes and faster evaluation time compared to binary representation.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127928821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validation of the dependability of CAN-based networked systems
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431262
Fulvio Corno, J. P. Acle, M. Ramasso, M. Reorda, M. Violante
Validating networked systems is mandatory to guarantee the dependability levels that international standards impose in many safety-critical applications. In this paper we present an environment for studying how soft errors affecting the memory elements of network nodes in CAN-based systems may alter the dynamic behavior of a car. Experimental evidence of the approach's effectiveness is reported on a case study.
{"title":"Validation of the dependability of CAN-based networked systems","authors":"Fulvio Corno, J. P. Acle, M. Ramasso, M. Reorda, M. Violante","doi":"10.1109/HLDVT.2004.1431262","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431262","url":null,"abstract":"The validation of networked systems is mandatory to guarantee the dependability levels that international standards impose in many safety-critical applications. In this paper we present an environment to study how soft errors affecting the memory elements of network nodes in CAN-based systems may alter the dynamic behavior of a car. The experimental evidence of the effectiveness of the approach is reported on a case study.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121881429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mutation-based validation of high-level microprocessor implementations
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431242
J. Campos, H. Al-Asaad
In this paper we present a preliminary method for validating a high-level microprocessor implementation by generating a test sequence, for a collection of abstract design error models, that can be used to compare the responses of the implementation against the specification. We first introduce a general description of the abstract mutation-based design error models, which can be tailored to span any coverage measure for microprocessor validation. We then present a clustering-and-partitioning technique that makes concurrent simulation of a large set of design errors efficient and allows statistical data to be gathered on the distribution of design errors across the design space. Finally, we present a method for using this statistical information effectively to guide ATPG efforts.
{"title":"Mutation-based validation of high-level microprocessor implementations","authors":"J. Campos, H. Al-Asaad","doi":"10.1109/HLDVT.2004.1431242","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431242","url":null,"abstract":"In this paper we present a preliminary method of validating a high-level microprocessor implementation by generating a test sequence for a collection of abstract design error models that can be used to compare the responses of the implementation against the specification. We first introduce a general description of the abstract mutation-based design error models that can be tailored to span any coverage measure for microprocessor validation. Then we present the clustering-and-partitioning technique that single-handedly makes the concurrent design error simulation of a large set of design errors efficient and allows for the acquisition of statistical data on the distribution of design errors across the design space. We finally present a method of effectively using this statistical information to guide the ATPG efforts.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130704675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ATPG based functional test for data paths: application to a floating point unit
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431230
I. Bayraktaroglu, M. d'Abreu
We describe the application of an ATPG-based functional test methodology, tailored towards data paths, to a floating-point unit. The methodology employs the processor's instruction set to control the inputs and observe the outputs of the data path, and uses an ATPG tool to generate test patterns. The test patterns are then converted to instruction sequences and applied as a functional test. This methodology provides high at-speed coverage without the performance and area overhead of traditional structural testing. While we target stuck-at faults in this work, the methodology is applicable to other fault models, including delay faults.
{"title":"ATPG based functional test for data paths: application to a floating point unit","authors":"I. Bayraktaroglu, M. d'Abreu","doi":"10.1109/HLDVT.2004.1431230","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431230","url":null,"abstract":"Application of an ATPG based functional test methodology that is tailored towards data paths to a floating point unit is described. The methodology employs the instruction set of the processor to control the inputs and to observe the outputs of the data path and utilizes an ATPG tool to generate test patterns. The test patterns are then converted to instruction sequences and applied as a functional test. This methodology provides high at-speed coverage without the performance and area overhead of the traditional structural testing. While we target stuck-at faults in this work, the methodology is applicable to other faults models, including delay faults.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114854371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Driving the intelligent testbench: are we there yet?
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431271
H. Foster
Position - What is needed today is the ability to manage the various verification processes in an intelligent fashion, which requires:
- partitioning the system-level verification problem into a targeted, optimal lower-level solution;
- managing the bookkeeping and interaction between the partitioned verification blocks;
- defining (and then measuring) various metrics that represent some notion of progress or completeness.
The intelligent testbench merges dynamic, formal, and mixed-signal verification with advanced coverage feedback techniques. The benefit of using the intelligent testbench in the verification flow is to reduce many of the manual steps that verification engineers currently perform, particularly those related to partitioning the design into portions ideally targeted for the various tools and to coverage analysis.
{"title":"Driving the intelligent testbanch: are we there yet?","authors":"H. Foster","doi":"10.1109/HLDVT.2004.1431271","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431271","url":null,"abstract":"Position - What is needed today is the ability to manage the various verification processes in an intelligent fashion, which requires: Partition the system-level verification problem info a targeted optimal lower-level solution Manage the bookkeeping and interaction between the partitioned verification blocks Define (and then measure) various metrics that represent some notion of progress or completeness The intelligent testbench merges dynamic, formal, and mixed signal verification with advanced coverage feedback techniques. The benefit of using the intelligent testbench in the verification flow is to reduce many of the manual steps that verification engineers currently perform, particularly those related to partitioning the design info portions ideally targeted for various tools and coverage analysis.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134647521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variable ordering for Taylor expansion diagrams
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431235
D. Gomez-Prado, Q. Ren, S. Askar, M. Ciesielski, E. Boutillon
This paper presents an algorithm for variable ordering in Taylor Expansion Diagrams (TEDs). We first prove that the function implemented by a TED is independent of the order of its variables, and then that swapping two adjacent variables in a TED is a local operation similar to the corresponding swap in BDDs. These two properties allow us to construct an algorithm that swaps variables locally without affecting the rest of the TED. The proposed algorithm can be used for dynamic reordering schemes such as sifting or window permutation. We also propose a static ordering that helps reduce the permutation space and speeds up the search for an optimal variable order for TEDs.
{"title":"Variable ordering for taylor expansion diagrams","authors":"D. Gomez-Prado, Q. Ren, S. Askar, M. Ciesielski, E. Boutillon","doi":"10.1109/HLDVT.2004.1431235","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431235","url":null,"abstract":"This paper presents an algorithm for variable ordering for Taylor Expansion Diagrams (TEDs). First we prove that the function implemented by the TED is independent of the order of its variables, and then that swapping of two adjacent variables in a TED is a local permutation similar to that in BDD. These two properties allow us to construct an algorithm to swap variables locally without affecting the entire TED. The proposed algorithm can be used to perform dynamic reordering, such as sifting or window permutation. We also propose a static ordering that can help reduce the permutation space and speed up the search of an optimal variable order for TEDs.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129698223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting hypergraph partitioning for efficient Boolean satisfiability
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431257
V. Durairaj, P. Kalla
This paper presents hypergraph-partitioning-based constraint decomposition procedures to guide Boolean satisfiability search. Variable-constraint relationships are modeled on a hypergraph, and partitioning-based techniques are employed to decompose the constraints. The decomposition is then analyzed to solve the CNF-SAT problem efficiently. The contributions of this research are two-fold: 1) a constraint decomposition technique based on hypergraph partitioning; 2) a constraint resolution method based on this decomposition. Preliminary experiments show that our approach is fast and scalable and can significantly increase the performance of the SAT engine, often by orders of magnitude.
{"title":"Exploiting hypergraph partitioning for efficient Boolean satisfiability","authors":"V. Durairaj, P. Kalla","doi":"10.1109/HLDVT.2004.1431257","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431257","url":null,"abstract":"This paper presents hypergraph partitioning based constraint decomposition procedures to guide Boolean satisfiability search. Variable-constraint relationships are modeled on a hypergraph and partitioning based techniques are employed to decompose the constraints. Subsequently, the decomposition is analyzed to solve the CNF-SAT problem efficiently. The contributions of this research are two-fold: 1) to engineer a constraint decomposition technique using hypergraph partitioning; 2) to engineer a constraint resolution method based on this decomposition. Preliminary experiments show that our approach is fast, scalable and can significantly increase the performance (often orders of magnitude) of the SAT engine.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115023917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic analysis of constraint-variable dependencies to guide SAT diagnosis
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431256
V. Durairaj, P. Kalla
An important aspect of the Boolean satisfiability problem is deriving an ordering of variables such that branching on that order results in a faster, more efficient search. Contemporary techniques employ either variable-activity or clause-connectivity based heuristics, but not both, to guide the search. This paper advocates simultaneous analysis of variable activity and clause connectivity to derive an order for SAT search. Preliminary results demonstrate that the variable order derived by our approach can significantly expedite the search. As the search proceeds, the clause database is updated with added conflict clauses, so the variable activity and connectivity information changes dynamically. Our technique analyzes this information and recomputes the variable order whenever the search is restarted. Preliminary experiments show that such a dynamic analysis of constraint-variable relationships significantly improves the performance of SAT solvers. The technique is very fast: the analysis time is negligible (milliseconds) even for instances with a large number of variables and constraints. This paper presents preliminary experiments, analyzes the results, and comments on future research directions.
{"title":"Dynamic analysis of constraint-variable dependencies to guide SAT diagnosis","authors":"V. Durairaj, P. Kalla","doi":"10.1109/HLDVT.2004.1431256","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431256","url":null,"abstract":"An important aspect of the Boolean satisfiability problem is to derive an ordering of variables such that branching on that order results in a faster, more efficient search. Contemporary techniques employ either variable-activity or clause-connectivity based heuristics, but not both, to guide the search. This paper advocates for simultaneous analysis of variable-activity and clause-connectivity to derive an order for SAT search. Preliminary results demonstrate that the variable order derived by our approach can significantly expedite the search. As the search proceeds, clause database is updated due to added conflict clauses. Therefore, the variable activity and connectivity information changes dynamically. Our technique analyzes this information and recomputes the variable order whenever the search is restarted. Preliminary experiments show that such a dynamic analysis of constraint-variable relationships significantly improves the performance of the SAT solvers. Our technique is very fast and this analysis time is a negligible (in milliseconds) even for instances that contain a large number of variables and constraints. This paper presents preliminary experiments, analyzes the results and comments upon future research directions.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129433871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CNF formula simplification using implication reasoning
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431255
Rajat Arora, M. Hsiao
We propose a novel preprocessing technique that significantly simplifies a CNF instance, such that the resulting formula is easier for any SAT solver to solve. The core of this simplification is a suite of lemmas and theorems derived from nontrivial Boolean reasoning. These theorems let us deduce powerful unary and binary clauses that aid in identifying necessary assignments, equivalent signals, complementary signals, and other implication relationships among the CNF variables. These nontrivial clauses, when added to the original CNF database, simplify the CNF formula. Experimental results show that the CNF formula simplification obtained with our tool outperforms that of the recent preprocessors HyPre [F. Bacchus et al., 2003] and NIVER [S. Subbarayan et al., 2004]. Considerable savings in computation time are also obtained when the simplified CNF formula is given to the SAT solver.
{"title":"CNF formula simplification using implication reasoning","authors":"Rajat Arora, M. Hsiao","doi":"10.1109/HLDVT.2004.1431255","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431255","url":null,"abstract":"We propose a novel preprocessing technique that helps to significantly simplify a CNF instance, such that the resulting formula is easier for any SAT-solver to solve. The core of this simplification centers on a suite of lemmas and theorems derived from nontrivial Boolean reasoning. These theorems help us to deduce powerful unary and binary clauses which aid in the identification of necessary assignments, equivalent signals, complementary signals and other implication relationships among the CNF variables. The nontrivial clauses, when added to the original CNF database, subsequently simplify the CNF formula. We illustrate through experimental results that the CNF formula simplification obtained using our tool outperforms the simplification obtained using the recent preprocessors namely Hypre [F. Bacchus et al., (2003)] and NIVER [S. Subbarayan et al. (2004)]. Also, considerable savings in computation time are obtained when the simplified CNF formula is given to the SAT-solver for processing.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129680449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Instruction level test methodology for CPU core software-based self-testing
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431227
S. Shamshiri, H. Esmaeilzadeh, Z. Navabi
TIS (S. Shamshiri et al., 2004) is an instruction-level methodology for CPU core self-testing that enhances the instruction set of a CPU with test instructions. Since a test instruction behaves like a NOP from the program's point of view, NOP instructions can be replaced with test instructions, so online testing can be done with no performance penalty. TIS tests different parts of the CPU and detects stuck-at faults, and the method can be employed in offline and online testing of all kinds of processors. A hardware-oriented implementation of TIS, which tests only the combinational units of the processor, was proposed previously (S. Shamshiri et al., 2004). The contributions of this paper are, first, a software-based approach that reduces the hardware overhead to a reasonable size and, second, testing of the sequential parts of the processor in addition to the combinational parts. Both the hardware- and software-oriented approaches are implemented on a pipelined CPU core and their area overheads are compared. To demonstrate the appropriateness of the TIS test technique, several programs are executed and fault coverage results are presented.
{"title":"Instruction level test methodology for CPU core software-based self-testing","authors":"S. Shamshiri, H. Esmaeilzadeh, Z. Navabi","doi":"10.1109/HLDVT.2004.1431227","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431227","url":null,"abstract":"TIS (S. Shamshiri et al., 2004) is an instruction level methodology for CPU core self-testing that enhances the instruction set of a CPU with test instructions. Since the functionality of test instructions is the same as the NOP instruction, NOP instructions can be replaced with test instructions so that online testing can be done with no performance penalty. TIS tests different parts of the CPU and detects stuck-at faults. This method can be employed in offline and online testing of all kinds of processors. Hardware-oriented implementation of TIS was proposed previously (S. Shamshiri et al., 2004) that tests just the combinational units of the processor. Contributions of this paper are first, a software-based approach that reduces the hardware overhead to a reasonable size and second, testing the sequential parts of the processor besides the combinational parts. Both hardware and software oriented approaches are implemented on a pipelined CPU core and their area overheads are compared. To demonstrate the appropriateness of the TIS test technique, several programs are executed and fault coverage results are presented.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122635073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}