Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431237
T. Rajaprabhu, Ashutosh Kumar Singh, A. Jabir, D. Pradhan
Recently, a mathematical framework was presented that bridges the gap between bit-level BDD representations and word-level representations such as BMDs and TEDs. Here we present an approach demonstrating that these diagrams admit fast evaluation of circuits with multiple outputs. The representation is based on the characteristic function, which provides both faster evaluation and a compact representation. The average path length is used as the metric for evaluation time. Results obtained for benchmark circuits show fewer nodes and faster evaluation times compared to the binary representation.
{"title":"MODD for CF: a representation for fast evaluation of multiple-output functions","authors":"T. Rajaprabhu, Ashutosh Kumar Singh, A. Jabir, D. Pradhan","doi":"10.1109/HLDVT.2004.1431237","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431237","url":null,"abstract":"Recently, a mathematical framework was presented that bridges the gap between bit-level BDD representations and word-level representations such as BMDs and TEDs. Here we present an approach demonstrating that these diagrams admit fast evaluation of circuits with multiple outputs. The representation is based on the characteristic function, which provides both faster evaluation and a compact representation. The average path length is used as the metric for evaluation time. Results obtained for benchmark circuits show fewer nodes and faster evaluation times compared to the binary representation.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127928821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431262
Fulvio Corno, J. P. Acle, M. Ramasso, M. Reorda, M. Violante
The validation of networked systems is mandatory to guarantee the dependability levels that international standards impose in many safety-critical applications. In this paper we present an environment to study how soft errors affecting the memory elements of network nodes in CAN-based systems may alter the dynamic behavior of a car. Experimental evidence of the effectiveness of the approach is reported on a case study.
{"title":"Validation of the dependability of CAN-based networked systems","authors":"Fulvio Corno, J. P. Acle, M. Ramasso, M. Reorda, M. Violante","doi":"10.1109/HLDVT.2004.1431262","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431262","url":null,"abstract":"The validation of networked systems is mandatory to guarantee the dependability levels that international standards impose in many safety-critical applications. In this paper we present an environment to study how soft errors affecting the memory elements of network nodes in CAN-based systems may alter the dynamic behavior of a car. The experimental evidence of the effectiveness of the approach is reported on a case study.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121881429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
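The soft errors this abstract studies are typically modeled as single-event upsets: one bit in a node's memory flips. A minimal fault-injection primitive for such an environment might look like the following sketch (the function name and memory model are illustrative assumptions, not the paper's tool):

```python
import random

def inject_soft_error(memory: bytearray, rng=random):
    """Flip one random bit in a node's memory image, modeling a
    single-event upset. Returns the (byte, bit) location so the
    experiment can correlate the fault with observed behavior."""
    byte = rng.randrange(len(memory))
    bit = rng.randrange(8)
    memory[byte] ^= 1 << bit
    return byte, bit

# Inject one upset into a 16-byte memory image of a CAN node model.
mem = bytearray(16)
byte, bit = inject_soft_error(mem)
assert mem[byte] == 1 << bit  # exactly one bit now differs
```

A campaign then replays the same workload with and without the injected upset and compares the resulting network traffic or vehicle dynamics.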
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431242
J. Campos, H. Al-Asaad
In this paper we present a preliminary method for validating a high-level microprocessor implementation by generating a test sequence for a collection of abstract design error models; the sequence can be used to compare the responses of the implementation against the specification. We first introduce a general description of the abstract mutation-based design error models, which can be tailored to span any coverage measure for microprocessor validation. We then present a clustering-and-partitioning technique that makes the concurrent simulation of a large set of design errors efficient and allows the acquisition of statistical data on the distribution of design errors across the design space. Finally, we present a method of effectively using this statistical information to guide the ATPG effort.
{"title":"Mutation-based validation of high-level microprocessor implementations","authors":"J. Campos, H. Al-Asaad","doi":"10.1109/HLDVT.2004.1431242","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431242","url":null,"abstract":"In this paper we present a preliminary method of validating a high-level microprocessor implementation by generating a test sequence for a collection of abstract design error models that can be used to compare the responses of the implementation against the specification. We first introduce a general description of the abstract mutation-based design error models that can be tailored to span any coverage measure for microprocessor validation. Then we present the clustering-and-partitioning technique that single-handedly makes the concurrent design error simulation of a large set of design errors efficient and allows for the acquisition of statistical data on the distribution of design errors across the design space. We finally present a method of effectively using this statistical information to guide the ATPG efforts.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130704675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
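The core idea of mutation-based validation — a design error is "detected" when some test vector makes the mutated implementation respond differently from the specification — can be sketched in a few lines. The toy ALU, the operator-swap mutant, and the function names below are illustrative assumptions, not the paper's error models:

```python
# Toy specification of an ALU operation, and a mutant implementing
# one abstract design error model: an operator swap ('+' -> '-').
def spec_alu(a, b):
    return a + b

def mutant_alu(a, b):  # hypothetical design error
    return a - b

def detects(test_seq, spec, impl):
    """A test sequence detects a design error iff some vector in it
    produces differing responses from specification and implementation."""
    return any(spec(*v) != impl(*v) for v in test_seq)

print(detects([(3, 0), (2, 2)], spec_alu, mutant_alu))  # True: (2,2) gives 4 vs 0
print(detects([(3, 0)], spec_alu, mutant_alu))          # False: 3 == 3, error escapes
```

Concurrent error simulation, as in the paper, evaluates many such mutants against one test sequence at once rather than one at a time.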
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431230
I. Bayraktaroglu, M. d'Abreu
The application of an ATPG-based functional test methodology, tailored towards data paths, to a floating-point unit is described. The methodology employs the instruction set of the processor to control the inputs and observe the outputs of the data path, and utilizes an ATPG tool to generate test patterns. The test patterns are then converted to instruction sequences and applied as a functional test. This methodology provides high at-speed coverage without the performance and area overhead of traditional structural testing. While we target stuck-at faults in this work, the methodology is applicable to other fault models, including delay faults.
{"title":"ATPG based functional test for data paths: application to a floating point unit","authors":"I. Bayraktaroglu, M. d'Abreu","doi":"10.1109/HLDVT.2004.1431230","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431230","url":null,"abstract":"The application of an ATPG-based functional test methodology, tailored towards data paths, to a floating-point unit is described. The methodology employs the instruction set of the processor to control the inputs and observe the outputs of the data path, and utilizes an ATPG tool to generate test patterns. The test patterns are then converted to instruction sequences and applied as a functional test. This methodology provides high at-speed coverage without the performance and area overhead of traditional structural testing. While we target stuck-at faults in this work, the methodology is applicable to other fault models, including delay faults.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114854371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431271
H. Foster
Position - What is needed today is the ability to manage the various verification processes in an intelligent fashion, which requires: (1) partitioning the system-level verification problem into a targeted, optimal lower-level solution; (2) managing the bookkeeping and interaction between the partitioned verification blocks; and (3) defining (and then measuring) various metrics that represent some notion of progress or completeness. The intelligent testbench merges dynamic, formal, and mixed-signal verification with advanced coverage feedback techniques. The benefit of using the intelligent testbench in the verification flow is to reduce many of the manual steps that verification engineers currently perform, particularly those related to partitioning the design into portions ideally targeted for various tools and for coverage analysis.
{"title":"Driving the intelligent testbench: are we there yet?","authors":"H. Foster","doi":"10.1109/HLDVT.2004.1431271","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431271","url":null,"abstract":"Position - What is needed today is the ability to manage the various verification processes in an intelligent fashion, which requires: (1) partitioning the system-level verification problem into a targeted, optimal lower-level solution; (2) managing the bookkeeping and interaction between the partitioned verification blocks; and (3) defining (and then measuring) various metrics that represent some notion of progress or completeness. The intelligent testbench merges dynamic, formal, and mixed-signal verification with advanced coverage feedback techniques. The benefit of using the intelligent testbench in the verification flow is to reduce many of the manual steps that verification engineers currently perform, particularly those related to partitioning the design into portions ideally targeted for various tools and for coverage analysis.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134647521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431235
D. Gomez-Prado, Q. Ren, S. Askar, M. Ciesielski, E. Boutillon
This paper presents an algorithm for variable ordering in Taylor Expansion Diagrams (TEDs). We first prove that the function implemented by a TED is independent of the order of its variables, and then that swapping two adjacent variables in a TED is a local permutation similar to that in BDDs. These two properties allow us to construct an algorithm that swaps variables locally without affecting the entire TED. The proposed algorithm can be used to perform dynamic reordering, such as sifting or window permutation. We also propose a static ordering that can help reduce the permutation space and speed up the search for an optimal variable order for TEDs.
{"title":"Variable ordering for taylor expansion diagrams","authors":"D. Gomez-Prado, Q. Ren, S. Askar, M. Ciesielski, E. Boutillon","doi":"10.1109/HLDVT.2004.1431235","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431235","url":null,"abstract":"This paper presents an algorithm for variable ordering for Taylor Expansion Diagrams (TEDs). First we prove that the function implemented by the TED is independent of the order of its variables, and then that swapping of two adjacent variables in a TED is a local permutation similar to that in BDD. These two properties allow us to construct an algorithm to swap variables locally without affecting the entire TED. The proposed algorithm can be used to perform dynamic reordering, such as sifting or window permutation. We also propose a static ordering that can help reduce the permutation space and speed up the search of an optimal variable order for TEDs.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129698223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
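The sifting-style dynamic reordering this abstract enables can be sketched generically: try one variable at every position (which sifting realizes through repeated adjacent swaps, the local operation the paper proves safe for TEDs) and keep the position minimizing diagram size. The cost callback below stands in for a real TED/BDD node count and is an illustrative assumption:

```python
def sift_variable(order, var, size):
    """Try `var` at every position in the ordering and keep the one
    minimizing size(order). `size` is a caller-supplied cost function,
    e.g. the node count of the diagram under that ordering."""
    rest = [v for v in order if v != var]
    best_pos, best_cost = 0, None
    for pos in range(len(rest) + 1):
        candidate = rest[:pos] + [var] + rest[pos:]
        cost = size(candidate)
        if best_cost is None or cost < best_cost:
            best_pos, best_cost = pos, cost
    return rest[:best_pos] + [var] + rest[best_pos:]

# Toy cost: pretend the diagram is smallest when 'a' comes first.
print(sift_variable(['b', 'c', 'a'], 'a', lambda o: o.index('a')))
# -> ['a', 'b', 'c']
```

Sifting each variable in turn, or restricting the sweep to a window of adjacent variables, gives the two dynamic reordering schemes the abstract names.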
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431257
V. Durairaj, P. Kalla
This paper presents hypergraph-partitioning-based constraint decomposition procedures to guide Boolean satisfiability search. Variable-constraint relationships are modeled on a hypergraph, and partitioning-based techniques are employed to decompose the constraints. The decomposition is then analyzed to solve the CNF-SAT problem efficiently. The contributions of this research are twofold: 1) a constraint decomposition technique using hypergraph partitioning; 2) a constraint resolution method based on this decomposition. Preliminary experiments show that our approach is fast and scalable and can significantly increase the performance (often by orders of magnitude) of the SAT engine.
{"title":"Exploiting hypergraph partitioning for efficient Boolean satisfiability","authors":"V. Durairaj, P. Kalla","doi":"10.1109/HLDVT.2004.1431257","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431257","url":null,"abstract":"This paper presents hypergraph partitioning based constraint decomposition procedures to guide Boolean satisfiability search. Variable-constraint relationships are modeled on a hypergraph and partitioning based techniques are employed to decompose the constraints. Subsequently, the decomposition is analyzed to solve the CNF-SAT problem efficiently. The contributions of this research are two-fold: 1) to engineer a constraint decomposition technique using hypergraph partitioning; 2) to engineer a constraint resolution method based on this decomposition. Preliminary experiments show that our approach is fast, scalable and can significantly increase the performance (often orders of magnitude) of the SAT engine.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115023917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
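The variable-constraint model the abstract describes is standard: each CNF variable is a hypergraph vertex and each clause is a hyperedge over the variables it mentions. A minimal sketch of that construction (DIMACS-style signed-integer literals assumed; the partitioning step itself, done by a tool such as hMETIS, is omitted):

```python
def cnf_to_hypergraph(clauses):
    """clauses: list of clauses, each a list of signed ints
    (DIMACS-style literals). Returns the incidence structure
    {variable: set of indices of clauses containing it}."""
    incidence = {}
    for edge_id, clause in enumerate(clauses):
        for lit in clause:
            incidence.setdefault(abs(lit), set()).add(edge_id)
    return incidence

cnf = [[1, -2], [2, 3], [-1, 3]]
h = cnf_to_hypergraph(cnf)
print(h[3])  # {1, 2}: variable 3 appears in clauses 1 and 2
```

Partitioning this hypergraph groups tightly connected clauses together, which is the decomposition the paper's search procedure exploits.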
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431256
V. Durairaj, P. Kalla
An important aspect of the Boolean satisfiability problem is to derive an ordering of variables such that branching in that order results in a faster, more efficient search. Contemporary techniques employ either variable-activity or clause-connectivity based heuristics, but not both, to guide the search. This paper advocates simultaneous analysis of variable activity and clause connectivity to derive an order for SAT search. Preliminary results demonstrate that the variable order derived by our approach can significantly expedite the search. As the search proceeds, the clause database is updated with added conflict clauses, so the variable activity and connectivity information changes dynamically. Our technique analyzes this information and recomputes the variable order whenever the search is restarted. Preliminary experiments show that such a dynamic analysis of constraint-variable relationships significantly improves the performance of SAT solvers. Our technique is very fast: the analysis time is negligible (milliseconds) even for instances that contain a large number of variables and constraints. This paper presents preliminary experiments, analyzes the results, and comments upon future research directions.
{"title":"Dynamic analysis of constraint-variable dependencies to guide SAT diagnosis","authors":"V. Durairaj, P. Kalla","doi":"10.1109/HLDVT.2004.1431256","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431256","url":null,"abstract":"An important aspect of the Boolean satisfiability problem is to derive an ordering of variables such that branching in that order results in a faster, more efficient search. Contemporary techniques employ either variable-activity or clause-connectivity based heuristics, but not both, to guide the search. This paper advocates simultaneous analysis of variable activity and clause connectivity to derive an order for SAT search. Preliminary results demonstrate that the variable order derived by our approach can significantly expedite the search. As the search proceeds, the clause database is updated with added conflict clauses, so the variable activity and connectivity information changes dynamically. Our technique analyzes this information and recomputes the variable order whenever the search is restarted. Preliminary experiments show that such a dynamic analysis of constraint-variable relationships significantly improves the performance of SAT solvers. Our technique is very fast: the analysis time is negligible (milliseconds) even for instances that contain a large number of variables and constraints. This paper presents preliminary experiments, analyzes the results, and comments upon future research directions.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129433871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
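Combining activity and connectivity into one ranking, as the abstract advocates, can be sketched as follows. The 50/50 weighting and the scoring formula here are arbitrary illustrations, not the paper's actual heuristic:

```python
from collections import Counter

def order_variables(clauses, activity):
    """Rank variables by a combined score: occurrence count in the
    current clause database (connectivity) blended with a solver-supplied
    activity value. Recomputed at each restart as conflict clauses are
    added, both components change dynamically."""
    connectivity = Counter(abs(lit) for clause in clauses for lit in clause)
    score = lambda v: 0.5 * connectivity[v] + 0.5 * activity.get(v, 0.0)
    return sorted(connectivity, key=score, reverse=True)

# Variable 1 is less connected than 2 and 3, but high activity
# (e.g. from recent conflicts) pushes it to the front of the order.
cnf = [[1, -2], [2, 3], [-1, 3], [2, -3]]
print(order_variables(cnf, activity={1: 5.0}))
```

A solver would branch on variables in this order until the next restart, then recompute the ranking against the grown clause database.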
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431255
Rajat Arora, M. Hsiao
We propose a novel preprocessing technique that significantly simplifies a CNF instance, such that the resulting formula is easier for any SAT solver to solve. The core of this simplification is a suite of lemmas and theorems derived from nontrivial Boolean reasoning. These theorems help us deduce powerful unary and binary clauses that aid in the identification of necessary assignments, equivalent signals, complementary signals, and other implication relationships among the CNF variables. The nontrivial clauses, when added to the original CNF database, subsequently simplify the CNF formula. Experimental results show that the CNF formula simplification obtained using our tool outperforms that of the recent preprocessors HyPre [F. Bacchus et al., 2003] and NIVER [S. Subbarayan et al., 2004]. Considerable savings in computation time are also obtained when the simplified CNF formula is given to the SAT solver for processing.
{"title":"CNF formula simplification using implication reasoning","authors":"Rajat Arora, M. Hsiao","doi":"10.1109/HLDVT.2004.1431255","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431255","url":null,"abstract":"We propose a novel preprocessing technique that helps to significantly simplify a CNF instance, such that the resulting formula is easier for any SAT-solver to solve. The core of this simplification centers on a suite of lemmas and theorems derived from nontrivial Boolean reasoning. These theorems help us to deduce powerful unary and binary clauses which aid in the identification of necessary assignments, equivalent signals, complementary signals and other implication relationships among the CNF variables. The nontrivial clauses, when added to the original CNF database, subsequently simplify the CNF formula. We illustrate through experimental results that the CNF formula simplification obtained using our tool outperforms the simplification obtained using the recent preprocessors namely Hypre [F. Bacchus et al., (2003)] and NIVER [S. Subbarayan et al. (2004)]. Also, considerable savings in computation time are obtained when the simplified CNF formula is given to the SAT-solver for processing.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129680449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
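The simplest form of the implication reasoning such preprocessors build on is the unit-clause rule: a clause whose literals are all false except one forces that remaining literal. A self-contained sketch (DIMACS-style literals; this is plain unit propagation, not the paper's full lemma suite):

```python
def propagate(clauses, assignment):
    """Repeatedly apply the unit-clause rule: a clause with all but one
    literal false forces the remaining literal. Returns the extended
    assignment {var: bool}, or None on a conflict (an empty clause)."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None               # every literal false: conflict
            if len(unassigned) == 1:      # unit clause: forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# Binary clauses encode implications: [-1, 2] is 1 -> 2, [-2, 3] is 2 -> 3.
cnf = [[-1, 2], [-2, 3]]
print(propagate(cnf, {1: True}))  # {1: True, 2: True, 3: True}
```

Deriving new unary and binary clauses, as the paper does, exposes more such forced chains before the SAT solver ever starts branching.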
Pub Date : 2004-11-10 DOI: 10.1109/HLDVT.2004.1431260
C. Ciordas, T. Basten, A. Radulescu, K. Goossens, J. V. Meerbergen
Networks on chip (NoCs) are a scalable interconnect solution for large-scale multiprocessor systems on chip (SoCs). However, little attention has so far been paid to monitoring and debugging support for NoC-based systems. We propose a generic online event-based NoC monitoring service, based on hardware probes attached to NoC components. The proposed monitoring service offers run-time observability of NoC behavior and supports system-level and application debugging. The defined service can be accessed and configured at run-time from any network interface port. We present a probe architecture for the monitoring service, together with its associated programming model and traffic management strategies. We demonstrate the feasibility of our approach via a prototype implementation for the AEthereal NoC. The additional monitoring traffic is low: a typical monitoring connection configuration for a NoC-based SoC application needs only 4.8 KB/s, orders of magnitude lower than the 2 GB/s per-link raw bandwidth offered by the AEthereal NoC.
{"title":"An event-based network-on-chip monitoring service","authors":"C. Ciordas, T. Basten, A. Radulescu, K. Goossens, J. V. Meerbergen","doi":"10.1109/HLDVT.2004.1431260","DOIUrl":"https://doi.org/10.1109/HLDVT.2004.1431260","url":null,"abstract":"Networks on chip (NoCs) are a scalable interconnect solution for large scale multiprocessor systems on chip (SoCs). However, little attention has been paid so far to the monitoring and debugging support for NoC-based systems. We propose a generic online event-based NoC monitoring service, based on hardware probes attached to NoC components. The proposed monitoring service offers run-time observability of NoC behavior and supports system-level and application debugging. The defined service can be accessed and configured at run-time from any network interface port. We present a probe architecture for the monitoring service, together with its associated programming model and traffic management strategies. We prove the feasibility of our approach via a prototype implementation for the AEthereal NoC. The additional monitoring traffic is low; typical monitoring connection configuration for a NoC-based SoC application needs only 4.8KB/s, which is 6 orders of magnitude lower than the 2GB/s per link raw bandwidth offered by the AEthereal NoC.","PeriodicalId":240214,"journal":{"name":"Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121329011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}