Re-Configuration of Sub-blocks for Effective Application of Time Domain Tests
Pub Date : 2007-04-16 | DOI: 10.1109/DATE.2007.364678
J. Anders, Shaji Krishnan, G. Gronthoud
AC sensitivities guide most analogue automatic test pattern generators (AATPGs) in determining the optimal frequencies of a sinusoidal test stimulus. The optimal frequencies thus determined normally lie in the close vicinity of the operating frequency of the circuit. Although these frequencies are justifiable by the principles of the circuit, they bring no added value to the ultimate goal of cheap alternatives (low-frequency test signals and cheaper measurement equipment) for analogue and RF tests. In this paper, we propose to re-configure the circuit blocks in such a way that the operating frequencies of the respective sub-blocks are shifted to lower, testable frequencies. We have validated our proposal on a sub-block of a satellite receiver circuit, lowering the test frequencies of the corresponding sub-blocks from 12 GHz to 4 MHz while attaining the same level of defect coverage.
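To gauge the magnitude of such a shift, consider an illustrative back-of-the-envelope calculation (not from the paper): for an LC-tuned sub-block with resonant frequency $f_0 = 1/(2\pi\sqrt{LC})$, moving the operating point from 12 GHz down to 4 MHz means

$$\frac{f_{\text{op}}}{f_{\text{test}}} = \frac{12\ \text{GHz}}{4\ \text{MHz}} = 3000, \qquad \frac{(LC)_{\text{test}}}{(LC)_{\text{op}}} = 3000^2 = 9\times 10^{6},$$

i.e., the effective LC product would have to grow by nearly seven orders of magnitude, a gap that plausibly requires changing the block's configuration rather than merely trimming component values.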
{"title":"Re-Configuration of Sub-blocks for Effective Application of Time Domain Tests","authors":"J. Anders, Shaji Krishnan, G. Gronthoud","doi":"10.1109/DATE.2007.364678","DOIUrl":"https://doi.org/10.1109/DATE.2007.364678","url":null,"abstract":"AC sensitivities guide most analogue automatic test pattern generator (AATPG) while determining the optimal frequencies of a sinusoidal test stimulus. The optimal frequencies thus determined, normally lie in the close vicinity of the operating frequency of the circuit. Although these frequencies are justifiable by the principles of the circuit, these test frequencies do not bring any added value to the ultimate goal of cheap alternatives (low frequency test signal and cheaper measurement equipment) for the analogue and RF tests. In this paper, we propose to re-configure the circuit blocks, in such a way that the operating frequencies of the respective sub-block are shifted to lower testable frequencies. We have validated our proposal on a sub-block of a satellite receiver circuit that resulted in lowering the test frequencies of the corresponding sub-blocks from 12 GHz to 4MHz, while attaining the same level of defect coverage","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"369 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133937749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trade-Off Design of Analog Circuits using Goal Attainment and "Wave Front" Sequential Quadratic Programming
Pub Date : 2007-04-16 | DOI: 10.1109/DATE.2007.364570
D. Mueller, H. Graeb, Ulf Schlichtmann
One of the main tasks in analog design is the sizing of the circuit parameters, such as transistor lengths and widths, in order to obtain optimal circuit performances, such as high gain or low power consumption. In most cases one performance can only be optimized at the cost of others; a sizing must therefore aim at an optimal trade-off between the important circuit performances. This paper presents a new deterministic method to calculate the complete range of performance trade-offs, the so-called Pareto-optimal front, of a given circuit topology. Known deterministic methods solve a set of constrained multi-objective optimization problems independently of each other. The presented method minimizes a set of goal attainment (GA) optimization problems simultaneously. In a parallel algorithm, the individual GA optimization processes compare and exchange their iterative solutions. This leads to a significant improvement in the efficiency and quality of analog trade-off design.
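For context, the standard goal-attainment formulation (the paper's exact variant may differ) casts each trade-off point as a scalar problem: given performance goals $f_i^*$ and weights $w_i \ge 0$,

$$\min_{x,\,\gamma}\ \gamma \quad \text{s.t.} \quad f_i(x) - w_i\,\gamma \le f_i^*,\ \ i = 1,\dots,m, \qquad c(x) \le 0,$$

where $x$ are the sizing parameters and $c(x)$ collects the design constraints. Sweeping the weight vector $w$ traces out the Pareto-optimal front, and solving the resulting GA problems simultaneously is what enables the solution exchange described above.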
{"title":"Trade-Off Design of Analog Circuits using Goal Attainment and \"Wave Front\" Sequential Quadratic Programming","authors":"D. Mueller, H. Graeb, Ulf Schlichtmann","doi":"10.1109/DATE.2007.364570","DOIUrl":"https://doi.org/10.1109/DATE.2007.364570","url":null,"abstract":"One of the main tasks in analog design is the sizing of the circuit parameters, such as transistor lengths and widths, in order to obtain optimal circuit performances, such as high gain or low power consumption. In most cases one performance can only be optimized at cost of others, therefore a sizing must aim at an optimal trade-off between the important circuit performances. This paper presents a new deterministic method to calculate the complete range of performance trade-offs, the so-called Pareto-optimal front, of a given circuit topology. Known deterministic methods solve a set of constrained multi-objective optimization problems independently of each other. The presented method minimizes a set of goal attainment (GA) optimization problems simultaneously. In a parallel algorithm, the individual GA optimization processes compare and exchange their iterative solutions. This leads to a significant improvement in the efficiency and quality of analog trade-off design","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131899318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework for System Reliability Analysis Considering Both System Error Tolerance and Component Test Quality
Pub Date : 2007-04-16 | DOI: 10.1145/1266366.1266714
Sung-Jui Pan, K. Cheng
The failure rate, the sources of failures and the test costs for nanometer devices are all increasing. Therefore, creating a reliable system-on-a-chip device requires designers to implement fault tolerance. However, while system-level fault tolerance could significantly relax the quality requirements of the system's building blocks, every fault-tolerant scheme works only under certain failure mechanisms and within a certain range of error probabilities. Also, designing a system with a high-failure-rate component could be very expensive, because the growth rate of the design complexity and the system overhead for fault tolerance could be significantly greater than the component failure rate. It is therefore desirable to understand the trade-offs between component test quality and system fault-tolerance capability for achieving the desired reliability under cost constraints. In this paper, the authors propose an analysis framework for system reliability considering (a) the test quality achieved by manufacturing testing, on-line self-checking, and off-line built-in self-test; (b) the fault-tolerance and spare schemes; and (c) the component defect and error probabilities. The authors demonstrate that, through proper redundancy configurations and low-cost testing to ensure a certain degree of component test quality, a system with low redundancy might achieve equal or higher reliability than one with high redundancy.
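A textbook instance of the quality/redundancy trade-off the authors quantify (illustrative, not from the paper): with triple modular redundancy and a perfect voter, module reliability $R$ gives system reliability

$$R_{\text{TMR}} = R^3 + 3R^2(1 - R) = 3R^2 - 2R^3,$$

so $R = 0.9$ yields $R_{\text{TMR}} = 0.972$, but $R = 0.4$ yields only $0.352 < 0.4$. Redundancy pays off only above a component-quality threshold ($R > 0.5$ in this example), which is precisely why component test quality and redundancy configuration must be co-optimized.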
{"title":"A Framework for System Reliability Analysis Considering Both System Error Tolerance and Component Test Quality","authors":"Sung-Jui Pan, K. Cheng","doi":"10.1145/1266366.1266714","DOIUrl":"https://doi.org/10.1145/1266366.1266714","url":null,"abstract":"The failure rate, the sources of failures and the test costs for nanometer devices are all increasing. Therefore, to create a reliable system-on-a-chip device requires designers to implement fault tolerance. However, while system-level fault tolerance could significantly relax the quality requirements of the system's building blocks, every fault-tolerant scheme only works under certain failure mechanisms and within a certain range of error probabilities. Also, designing a system with a high failure-rate component could be very expensive because the growth rate of the design complexity and the system overhead for fault tolerance could be significantly greater than the component failure rate. Therefore, it is desirable to understand the trade-offs between component test quality and system fault-tolerant capability for achieving the desired reliability under cost constraints. In this paper, the authors propose an analysis framework for system reliability considering (a) the test quality achieved by manufacturing testing, on-line self-checking, and off-line built-in self-test; (b) the fault-tolerant and spare schemes; and (c) the component defect and error probabilities. The authors demonstrate that, through proper redundancy configurations and low-cost testing to insure a certain degree of component test quality, a low-redundant system might achieve equal or higher reliability than a high-redundant system","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114444886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Technique for Characterization of Digital-to-Analog Converters in High-Speed Systems
Pub Date : 2007-04-16 | DOI: 10.1109/DATE.2007.364630
J. Savoj, A. Abbasfar, A. Amirkhany, B. Garlepp, M. Horowitz
In this paper, a new technique for the characterization of digital-to-analog converters (DACs) used in wideband applications is described. Unlike the standard narrowband approach, this technique employs least-squares estimation to characterize the DAC from dc to any target frequency. Characterization is performed using a random sequence with certain temporal and probabilistic characteristics suitable for the intended operating conditions. The technique provides a linear estimate of the system and decomposes the nonlinearity into higher-order harmonics and deterministic periodic noise. The technique can also be used to derive the impulse response of the converter, predict its operating bandwidth, and provide far more insight into its sources of distortion.
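A minimal sketch of the least-squares step on synthetic data (the tap count, record length, and FIR stand-in for the DAC are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_samples = 16, 4096          # assumed FIR model length and record length

# Random binary test sequence (stands in for the paper's tailored stimulus).
x = rng.choice([-1.0, 1.0], size=n_samples)

# Synthetic "measured" DAC output: a decaying FIR response plus noise.
h_true = np.exp(-np.arange(n_taps) / 3.0) * np.cos(np.arange(n_taps))
y = np.convolve(x, h_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

# Data matrix with X[n, k] = x[n - k]; drop early rows to avoid edge effects.
X = np.column_stack([np.roll(x, k) for k in range(n_taps)])
X, y = X[n_taps:], y[n_taps:]

# Least-squares estimate of the linear impulse response; the residual
# y - X @ h_est is what would be decomposed into harmonics and periodic noise.
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)
H = np.fft.rfft(h_est, 512)           # frequency response from dc upward
```

The magnitude of `H` then predicts the usable bandwidth, in the spirit of the dc-to-target-frequency characterization described above.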
{"title":"A New Technique for Characterization of Digital-to-Analog Converters in High-Speed Systems","authors":"J. Savoj, A. Abbasfar, A. Amirkhany, B. Garlepp, M. Horowitz","doi":"10.1109/DATE.2007.364630","DOIUrl":"https://doi.org/10.1109/DATE.2007.364630","url":null,"abstract":"In this paper, a new technique for characterization of digital-to-analog converters (DAC) used in wideband applications is described. Unlike the standard narrowband approach, this technique employs least square estimation to characterize the DAC from dc to any target frequency. Characterization is performed using a random sequence with certain temporal and probabilistic characteristics suitable for intended operating conditions. The technique provides a linear estimation of the system and decomposes nonlinearity into higher-order harmonics and deterministic periodic noise. The technique can also be used to derive the impulse response of the converter, predict its operating bandwidth, and provide far more insight into its sources of distortion","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115594419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating and Executing Multi-Exit Custom Instructions for an Adaptive Extensible Processor
Pub Date : 2007-04-16 | DOI: 10.1109/DATE.2007.364612
Hamid Noori, Farhad Mehdipour, K. Murakami, Koji Inoue, M. Goudarzi
To improve the performance of embedded processors, an effective technique is to collapse critical computation subgraphs into application-specific instruction set extensions and execute them on custom functional units. The drawbacks of this approach are the immense cost and long design time. To address these issues, an adaptive extensible processor was proposed in which custom instructions (CIs) are generated and added after chip fabrication. To support this feature, the custom functional units are replaced by a reconfigurable matrix of functional units capable of conditional execution. Unlike previously proposed CIs, these can include multiple exits. Experimental results show that multi-exit CIs enhance performance by 46% on average compared to CIs limited to one basic block. A maximum speedup of 2.89 over a 4-issue in-order RISC processor, and an average speedup of 1.66, were achieved on the MiBench benchmark suite.
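A hypothetical example of why multiple exits matter (illustrative only, not from the paper): the region below spans a branch, so a CI restricted to one basic block must stop before the `if`, whereas a multi-exit CI can collapse the whole body and simply signal which of its two exits was taken.

```python
def hot_region(a, b, c):
    t = (a + b) ^ c        # computation shared by both paths
    if t & 1:              # branch inside the collapsed region
        return t + 3       # would map to CI exit 1
    return t >> 1          # would map to CI exit 2
```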
{"title":"Generating and Executing Multi-Exit Custom Instructions for an Adaptive Extensible Processor","authors":"Hamid Noori, Farhad Mehdipour, K. Murakami, Koji Inoue, M. Goudarzi","doi":"10.1109/DATE.2007.364612","DOIUrl":"https://doi.org/10.1109/DATE.2007.364612","url":null,"abstract":"To improve the performance of embedded processors, an effective technique is collapsing critical computation subgraphs as application-specific instruction set extensions and executing them on custom functional units. The problems of this approach are immense cost and long time of designing. To address these issues, an adaptive extensible processor was proposed in which custom instructions (CIs) are generated and added after chip-fabrication. To support this feature, custom functional units are replaced by a reconfigurable matrix of functional units with the capability of conditional execution. Unlike previous proposed CIs, it can include multiple exits. Experimental results show that multi-exit CIs enhance the performance by 46% in average compared to CIs limited to one basic block. A maximum speedup of 2.89 compared to a 4-issue in-order RISC processor, and a speedup of 1.66 in average, was achieved on MiBench benchmark suite","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"606 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123323402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Double-Via-Driven Standard Cell Library Design
Pub Date : 2007-04-16 | DOI: 10.1145/1266366.1266627
Tsai-Ying Lin, Tsung-Han Lin, Hui-Hsiang Tung, Rung-Bin Lin
Double-via placement is important for increasing chip manufacturing yield, and commercial tools and recent work handle it well. However, they have only a limited capability of placing double vias between metal 1 and metal 2 (called via1). This limitation is caused by the way the standard cells are designed and cannot be resolved by developing better tools. This paper presents a double-via-driven standard cell library design approach to solving this problem. Compared to the results obtained using a commercial cell library, our library on average achieves a 78% reduction in dead vias and a 95% reduction in dead via1s, at the expense of an 11% increase in total via count. We achieve these results at (almost) no extra cost in total cell area and wire length.
{"title":"Double-Via-Driven Standard Cell Library Design","authors":"Tsai-Ying Lin, Tsung-Han Lin, Hui-Hsiang Tung, Rung-Bin Lin","doi":"10.1145/1266366.1266627","DOIUrl":"https://doi.org/10.1145/1266366.1266627","url":null,"abstract":"Double-via placement is important for increasing chip manufacturing yield. Commercial tools and recent work have done a great job for it. However, they are found with a limited capability of placing more double vias (called vial) between metal 1 and metal 2. Such a limitation is caused by the way we design the standard cells and can not be resolved by developing better tools. This paper presents a double-via-driven standard cell library design approach to solving this problem. Compared to the results obtained using a commercial cell library, our library on average achieves 78% reduction in dead vias and 95% reduction in dead vials at the expense of 11% increase in total via count. We achieve these results (almost) at no extra cost in total cell area and wire length","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123436955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Synthesis of Compressor Trees: Reevaluating Large Counters
Pub Date : 2007-04-16 | DOI: 10.1109/DATE.2007.364632
A. K. Verma, P. Ienne
Despite the progress of the last decades in electronic design automation, arithmetic circuits have received far less attention than other classes of digital circuits. Logic synthesisers, which play a fundamental role in design today, play only a minor role for most arithmetic circuits, performing some local optimisations but hardly improving the overall structure of arithmetic components. Architectural optimisations have often been studied manually, and ad-hoc techniques have been developed only for very common building blocks such as fast adders and multi-input adders. A notable case is multi-input addition, which is at the core of many circuits such as multipliers. The most common technique to implement multi-input addition is compressor trees, which are often composed of carry-save adders (based on (3:2) counters, i.e., full adders). A large body of literature exists on implementing compressor trees using large counters. However, these large counters have all been built from full and half adders recursively. In this paper we give some definite answers to issues related to the use of large counters. We present a general technique to implement large counters whose performance is much better than that of counters composed of full and half adders. We also show that it is not always useful to use larger optimised counters; sometimes a combination of counters of various sizes gives the best performance. Our results show a 15% improvement in critical path delay, and in some cases hardware area is also reduced by using our counters.
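A minimal functional model of the counters involved (bit-level behaviour only; not a gate-level or delay-accurate implementation, and the (7:3) case is just an illustrative choice):

```python
def counter_3_2(a, b, c):
    """(3:2) counter = full adder: 3 same-weight bits in, sum + carry out."""
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)   # majority function
    return s, carry

def counter_m_k(bits):
    """Generalised (m:k) counter: re-encode the number of ones among
    m same-weight bits in k = ceil(log2(m+1)) output bits (LSB first)."""
    m = len(bits)
    k = m.bit_length()
    count = sum(bits)
    return [(count >> i) & 1 for i in range(k)]

assert counter_3_2(1, 1, 0) == (0, 1)                    # 1+1+0 = 2 = 0b10
assert counter_m_k([1, 1, 1, 0, 1, 0, 1]) == [1, 0, 1]   # five ones = 0b101
```

The paper's point, in these terms, is that realising `counter_m_k` in hardware as a recursive tree of full and half adders is not the best one can do; direct implementations of the large counters shorten the critical path.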
{"title":"Automatic Synthesis of Compressor Trees: Reevaluating Large Counters","authors":"A. K. Verma, P. Ienne","doi":"10.1109/DATE.2007.364632","DOIUrl":"https://doi.org/10.1109/DATE.2007.364632","url":null,"abstract":"Despite the progress of the last decades in electronic design automation, arithmetic circuits have always received way less attention than other classes of digital circuits. Logic synthesisers, which play a fundamental role in design today, play a minor role on most arithmetic circuits, performing some local optimisations but hardly improving the overall structure of arithmetic components. Architectural optimisations have been often studied manually, and only in the case of very common building blocks such as fast adders and multi-input adders, ad-hoc techniques have been developed. A notable case is multi-input addition, which is the core of many circuits such as multipliers, etc. The most common technique to implement multi-input addition is using compressor trees, which are often composed of carry-save adders (based on (3 : 2) counters, i.e., full adders). A large body of literature exists to implement compressor trees using large counters. However, all the large counters were built by using full and half adders recursively. In this paper we give some definite answers to issues related to the use of large counters. We present a general technique to implement large counters whose performance is much better than the ones composed of full and half adders. Also we show that it is not always useful to use larger optimised counters and sometimes a combination of various size counters gives the best performance. Our results show 15% improvement in the critical path delay. In some cases even hardware area is reduced by using our counters","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121614435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Tiny and Efficient Wireless Ad-hoc Protocol for Low-cost Sensor Networks
Pub Date : 2007-04-16 | DOI: 10.1145/1266366.1266709
P. Gburzynski, B. Kaminska, W. Olesinski
The authors introduce a simple ad-hoc routing scheme that operates in the true spirit of ad-hoc networking, i.e., in a modeless fashion, without neighborhood discovery or explicit point-to-point forwarding, while offering a high (and tunable) degree of reliability, fault tolerance and robustness. Being aimed at truly tiny devices (e.g., with 1 KB of RAM), the scheme can automatically take advantage of extra memory resources to improve the quality of routes for critical nodes. In contrast to some popular low-cost solutions, like ZigBee, the approach involves a single node type and exhibits lower resource requirements. The presented scheme has been verified in an industrial deployment with stringent quality-of-service requirements.
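For flavour, a generic sketch of modeless, duplicate-suppressed rebroadcasting (an illustration of the general idea only, not the authors' protocol; the packet fields and cache size are assumptions):

```python
from collections import deque

class Node:
    def __init__(self, nid, cache_size=16):
        self.nid = nid
        self.seen = deque(maxlen=cache_size)  # tiny duplicate cache, RAM-bounded

    def handle(self, pkt, broadcast):
        """No neighbor tables, no routes: drop duplicates, else rebroadcast."""
        key = (pkt["src"], pkt["seq"])
        if key in self.seen:
            return                            # already forwarded once
        self.seen.append(key)
        if pkt["dst"] != self.nid and pkt["hops"] < pkt["ttl"]:
            broadcast(self.nid, dict(pkt, hops=pkt["hops"] + 1))
```

A larger `cache_size` improves suppression (and hence route quality) on nodes that can afford the memory, echoing the scheme's ability to exploit extra RAM for critical nodes.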
{"title":"A Tiny and Efficient Wireless Ad-hoc Protocol for Low-cost Sensor Networks","authors":"P. Gburzynski, B. Kaminska, W. Olesinski","doi":"10.1145/1266366.1266709","DOIUrl":"https://doi.org/10.1145/1266366.1266709","url":null,"abstract":"The authors introduce a simple ad-hoc routing scheme that operates in the true spirit of ad-hoc networking, i.e., in a modeless fashion, without neighborhood discovery or explicit point-to-point forwarding, while offering a high (and tunable) degree of reliability, fault-tolerance and robustness. Being aimed at truly tiny devices (e.g., with 1KB of RAM), the scheme can automatically take advantage of extra memory resources to improve the quality of routes for critical nodes. In contrast to some popular low-cost solutions, like ZigBeetrade the approach involves a single node type and exhibits lower resource requirements. The presented scheme has been verified in an industrial deployment with stringent quality of service requirements","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122570331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An FPGA Based All-Digital Transmitter with Radio Frequency Output for Software Defined Radio
Pub Date : 2007-04-16 | DOI: 10.1109/DATE.2007.364561
Zhuan Ye, J. Grosspietsch, G. Memik
This paper presents the architecture and implementation of an all-digital transmitter with radio-frequency output targeting an FPGA device. FPGA devices have been widely adopted in digital signal processing (DSP) and digital communication applications. They are typically well suited to the evolving technology of software-defined radio (SDR) due to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate-frequency (IF) functionality, so significant analog and RF components are still needed to fulfill the radio communication requirements. The all-digital transmitter presented in this paper synthesizes the RF signal directly in the digital domain, thereby eliminating the need for most of the analog and RF components. The all-digital transmitter consists of a QAM modulator and an RF pulse-width modulator (RFPWM). The binary output waveform from the RFPWM is centered at 800 MHz with a 64QAM signaling format. The entire transmitter is implemented on a Xilinx Virtex-II Pro device with an on-chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45 dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform to radio frequency, a significant step towards an ideal SDR.
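A toy model of the RF-PWM principle (illustrative; the oversampling ratio, envelope statistics, and pulse-shaping rule are assumptions, and the real design drives the MGT serializer rather than a sampled array): encoding the envelope in the pulse width and the phase in the pulse position makes the fundamental of the 1-bit waveform track the modulated carrier, since a centred pulse of fractional width $w$ has fundamental amplitude $(2/\pi)\sin(\pi w)$.

```python
import numpy as np

osr, n_cycles = 64, 200                 # samples per carrier cycle, cycle count
rng = np.random.default_rng(1)

# Stand-in baseband: one envelope/phase pair per carrier cycle.
amp = 0.2 + 0.8 * rng.random(n_cycles)  # normalised envelope in (0, 1]
phase = 2 * np.pi * rng.random(n_cycles)

t = np.arange(osr) / osr                # normalised time within one cycle
out = []
for a, ph in zip(amp, phase):
    width = np.arcsin(a) / np.pi        # invert (2/pi)*sin(pi*w) for amplitude a
    centre = (ph / (2 * np.pi)) % 1.0   # pulse position encodes the phase
    dist = np.abs(t - centre)
    dist = np.minimum(dist, 1.0 - dist) # circular distance to pulse centre
    out.append((dist < width / 2).astype(np.int8))
pwm = np.concatenate(out)               # 1-bit RF output waveform
```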
{"title":"An FPGA Based All-Digital Transmitter with Radio Frequency Output for Software Defined Radio","authors":"Zhuan Ye, J. Grosspietsch, G. Memik","doi":"10.1109/DATE.2007.364561","DOIUrl":"https://doi.org/10.1109/DATE.2007.364561","url":null,"abstract":"This paper presents the architecture and implementation of an all-digital transmitter with radio frequency output targeting an FPGA device. FPGA devices have been widely adopted in the applications of digital signal processing (DSP) and digital communication. They are typically well suited for the evolving technology of software defined radios (SDR) due to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate frequency (IF) functionalities. Therefore, significant analog and RF components are still needed to fulfill the radio communication requirements. The all-digital transmitter presented in this paper directly synthesizes RF signal in the digital domain, therefore eliminates the need for most of the analog and RF components. The all-digital transmitter consists of one QAM modulator and one RF pulse width modulator (RFPWM). The binary output waveform from RFPWM is centered at 800MHz with 64QAM signaling format. The entire transmitter is implemented using Xilinx Virtex2pro device with on chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform to radio frequency, therefore making a significant evolution towards an ideal SDR","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124305213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boosting the Role of Inductive Invariants in Model Checking
Pub Date : 2007-04-16 | DOI: 10.1145/1266366.1266654
G. Cabodi, Sergio Nocco, S. Quer
This paper focuses on inductive invariants in unbounded model checking to improve efficiency and scalability. First of all, it introduces optimized techniques to speed up the computation of inductive invariants, considering both equivalences and implications between pairs of nodes in the logic network. Secondly, it presents a very efficient dynamic procedure, based on an incremental SAT approach, to reduce the set of checked invariants. Finally, it shows how to effectively integrate inductive invariant computation with state-of-the-art model checking procedures. Experiments address different property verification aspects, and specifically consider cases where inductive invariants alone are not sufficient for the final proof.
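A hedged sketch of the candidate-filtering loop behind this kind of invariant computation (van Eijk-style; written against Z3's Python API, `pip install z3-solver`, whereas the paper uses its own incremental SAT engine; the toy transition relation and candidates are assumptions):

```python
from z3 import Bool, And, Not, Solver, sat, substitute

x, y = Bool('x'), Bool('y')          # current-state nodes
xn, yn = Bool('xn'), Bool('yn')      # next-state copies

# Toy transition relation: x toggles every cycle, y holds its value.
T = And(xn == Not(x), yn == y)

def prime(f):                        # rename current-state to next-state vars
    return substitute(f, (x, xn), (y, yn))

# Candidate invariants (e.g. constants/equivalences guessed from simulation).
cands = [x, y, x == y]

changed = True
while changed:                       # iterate to a fixpoint
    changed = False
    for c in list(cands):
        s = Solver()
        # Relative induction step: assume all surviving candidates in the
        # current state and ask whether c can fail in the next state.
        s.add(And(cands), T, Not(prime(c)))
        if s.check() == sat:         # counterexample to induction found
            cands.remove(c)
            changed = True

print(cands)   # -> [y]; the base-case check on initial states is omitted here
```

Here `x` and `x == y` are dropped because the toggle falsifies them one step later, while `y` survives as mutually inductive; the paper's dynamic procedure speeds up exactly this kind of repeated, incremental check.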
{"title":"Boosting the Role of Inductive Invariants in Model Checking","authors":"G. Cabodi, Sergio Nocco, S. Quer","doi":"10.1145/1266366.1266654","DOIUrl":"https://doi.org/10.1145/1266366.1266654","url":null,"abstract":"This paper focuses on inductive invariants in unbounded model checking to improve efficiency and scalability. First of all, it introduces optimized techniques to speedup the computation of inductive invariants, considering both equivalences and implications between pairs of nodes in the logic network. Secondly, it presents a very efficient dynamic procedure, based on an incremental SAT approach, to reduce the set of checked invariants. Finally, it shows how to effectively integrate inductive invariant computations with state-of-the-art model checking procedures. Experiments address different property verification aspects, and specifically consider cases where inductive invariants alone are not sufficient for the final proof","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125310434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}