A Future of Customizable Processors: Are We There Yet?
L. Pozzi, P. Paulin
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266630
Customizable processors are being used increasingly often in SoC designs. During the past few years, they have proven to be a good way to resolve the conflicting flexibility and performance requirements of embedded systems design. While their usefulness has been demonstrated in a wide range of products, a few challenges remain to be addressed: 1) Is extending a standard core template the right approach to customization, or is it preferable to design a fully customized core from scratch? 2) Is the automation offered by current toolchains, in particular the generation of complex instructions and their reuse, sufficient for what users would like to see? 3) And when we look at a future with increasing use of multi-processor SoCs, do we see a sea of identical customized processors, or a heterogeneous mix? We comment and elaborate here on these challenges and open questions.
{"title":"A Future of Customizable Processors: Are We There Yet?","authors":"L. Pozzi, P. Paulin","doi":"10.1145/1266366.1266630","DOIUrl":"https://doi.org/10.1145/1266366.1266630","url":null,"abstract":"Customizable processors are being used increasingly often in SoC designs. During the past few years, they have proven to be a good way to solve the conflicting flexibility and performance requirements of embedded systems design. While their usefulness has been demonstrated in a wide range of products, a few challenges remain to be addressed: 1) Is extending a standard core template the right way to customization, or is it preferable to design a fully customized core from scratch? 2) Is the automation offered by current toolchains, in particular generation of complex instructions and their reuse, enough for what users would like to see? 3) And when we look at the future with the increasing use of multi-processor SoCs, do we see a sea of identical customized processors, or a heterogeneous mix? We comment and elaborate here on these challenges and open questions","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116938950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Technique for Characterization of Digital-to-Analog Converters in High-Speed Systems
J. Savoj, A. Abbasfar, A. Amirkhany, B. Garlepp, M. Horowitz
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364630
In this paper, a new technique for the characterization of digital-to-analog converters (DACs) used in wideband applications is described. Unlike the standard narrowband approach, this technique employs least-squares estimation to characterize the DAC from dc to any target frequency. Characterization is performed using a random sequence with temporal and probabilistic characteristics suited to the intended operating conditions. The technique provides a linear estimate of the system and decomposes the nonlinearity into higher-order harmonics and deterministic periodic noise. The technique can also be used to derive the impulse response of the converter, predict its operating bandwidth, and provide far more insight into its sources of distortion.
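The essence of this approach can be illustrated with a small least-squares sketch in Python: fit an FIR model to input/output data and treat the residual as distortion. This is a generic illustration, not the authors' setup; the tap count, the stimulus, and the toy nonlinearity are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stimulus: a random sequence; the paper shapes its temporal and
    # probabilistic properties to match the intended operating conditions.
    x = rng.uniform(-1.0, 1.0, 4096)

    # Stand-in for the measured DAC output: an unknown linear response
    # plus a mild static nonlinearity (both invented for this sketch).
    h_true = np.array([0.05, 0.85, 0.20, -0.05])
    y = np.convolve(x, h_true)[: len(x)] + 0.02 * x**3

    # Least-squares fit of an n-tap FIR model y ~= X @ h.
    n_taps = 8
    X = np.zeros((len(x), n_taps))
    for k in range(n_taps):
        X[k:, k] = x[: len(x) - k]
    h_est, *_ = np.linalg.lstsq(X, y, rcond=None)

    # h_est approximates the impulse response (its FFT predicts the
    # bandwidth); the residual is the part the linear model cannot
    # explain, i.e., the distortion.
    residual = y - X @ h_est
    print(np.round(h_est, 3))
    print("distortion RMS:", np.sqrt(np.mean(residual**2)))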
{"title":"A New Technique for Characterization of Digital-to-Analog Converters in High-Speed Systems","authors":"J. Savoj, A. Abbasfar, A. Amirkhany, B. Garlepp, M. Horowitz","doi":"10.1109/DATE.2007.364630","DOIUrl":"https://doi.org/10.1109/DATE.2007.364630","url":null,"abstract":"In this paper, a new technique for characterization of digital-to-analog converters (DAC) used in wideband applications is described. Unlike the standard narrowband approach, this technique employs least square estimation to characterize the DAC from dc to any target frequency. Characterization is performed using a random sequence with certain temporal and probabilistic characteristics suitable for intended operating conditions. The technique provides a linear estimation of the system and decomposes nonlinearity into higher-order harmonics and deterministic periodic noise. The technique can also be used to derive the impulse response of the converter, predict its operating bandwidth, and provide far more insight into its sources of distortion","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115594419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Synthesis of Compressor Trees: Reevaluating Large Counters
A. K. Verma, P. Ienne
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364632
Despite the progress of the last decades in electronic design automation, arithmetic circuits have always received far less attention than other classes of digital circuits. Logic synthesisers, which play a fundamental role in design today, have little effect on most arithmetic circuits: they perform some local optimisations but hardly improve the overall structure of arithmetic components. Architectural optimisations have mostly been studied manually, and ad-hoc techniques have been developed only for very common building blocks such as fast adders and multi-input adders. A notable case is multi-input addition, which is at the core of many circuits such as multipliers. The most common way to implement multi-input addition is with compressor trees, which are often composed of carry-save adders (based on (3:2) counters, i.e., full adders). A large body of literature exists on implementing compressor trees using large counters; however, these large counters have invariably been built recursively from full and half adders. In this paper we give some definite answers to questions surrounding the use of large counters. We present a general technique for implementing large counters that perform much better than those composed of full and half adders. We also show that larger optimised counters are not always the best choice: sometimes a combination of counters of various sizes gives the best performance. Our results show a 15% improvement in critical-path delay, and in some cases our counters even reduce hardware area.
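For background, the baseline the paper improves upon can be sketched in a few lines of Python: reduce the bit columns of several operands with (3:2) counters (full adders) until at most two bits of each weight remain, then finish with a single carry-propagate addition. This models only the arithmetic; a real compressor tree applies the counters in parallel layers, and the example values are arbitrary.

    def multi_add(operands):
        """Sum integers the way a compressor tree does."""
        width = max(v.bit_length() for v in operands) + len(operands)
        cols = [[] for _ in range(width)]   # cols[i] holds bits of weight 2**i
        for v in operands:
            for i in range(v.bit_length()):
                cols[i].append((v >> i) & 1)
        for i in range(width - 1):
            col = cols[i]
            while len(col) >= 3:            # one (3:2) counter = full adder
                a, b, c = col.pop(), col.pop(), col.pop()
                col.append(a ^ b ^ c)                            # sum, weight 2**i
                cols[i + 1].append((a & b) | (a & c) | (b & c))  # carry, 2**(i+1)
        # At most two bits per column remain; a real design sums the two
        # resulting rows with one fast carry-propagate adder.
        row0 = sum(col[0] << i for i, col in enumerate(cols) if len(col) > 0)
        row1 = sum(col[1] << i for i, col in enumerate(cols) if len(col) > 1)
        return row0 + row1

    vals = [13, 7, 21, 9, 30, 2]
    assert multi_add(vals) == sum(vals)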
{"title":"Automatic Synthesis of Compressor Trees: Reevaluating Large Counters","authors":"A. K. Verma, P. Ienne","doi":"10.1109/DATE.2007.364632","DOIUrl":"https://doi.org/10.1109/DATE.2007.364632","url":null,"abstract":"Despite the progress of the last decades in electronic design automation, arithmetic circuits have always received way less attention than other classes of digital circuits. Logic synthesisers, which play a fundamental role in design today, play a minor role on most arithmetic circuits, performing some local optimisations but hardly improving the overall structure of arithmetic components. Architectural optimisations have been often studied manually, and only in the case of very common building blocks such as fast adders and multi-input adders, ad-hoc techniques have been developed. A notable case is multi-input addition, which is the core of many circuits such as multipliers, etc. The most common technique to implement multi-input addition is using compressor trees, which are often composed of carry-save adders (based on (3 : 2) counters, i.e., full adders). A large body of literature exists to implement compressor trees using large counters. However, all the large counters were built by using full and half adders recursively. In this paper we give some definite answers to issues related to the use of large counters. We present a general technique to implement large counters whose performance is much better than the ones composed of full and half adders. Also we show that it is not always useful to use larger optimised counters and sometimes a combination of various size counters gives the best performance. Our results show 15% improvement in the critical path delay. In some cases even hardware area is reduced by using our counters","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121614435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating and Executing Multi-Exit Custom Instructions for an Adaptive Extensible Processor
Hamid Noori, Farhad Mehdipour, K. Murakami, Koji Inoue, M. Goudarzi
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364612
An effective technique for improving the performance of embedded processors is to collapse critical computation subgraphs into application-specific instruction set extensions and execute them on custom functional units. The drawbacks of this approach are the high cost and long design time. To address these issues, an adaptive extensible processor was proposed in which custom instructions (CIs) are generated and added after chip fabrication. To support this feature, custom functional units are replaced by a reconfigurable matrix of functional units with the capability of conditional execution. Unlike previously proposed CIs, these can include multiple exits. Experimental results show that multi-exit CIs enhance performance by 46% on average compared to CIs limited to a single basic block. A maximum speedup of 2.89 and an average speedup of 1.66, compared to a 4-issue in-order RISC processor, were achieved on the MiBench benchmark suite.
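The payoff of collapsing a subgraph is that sequentially executed operations become a single operation whose latency is the dataflow critical path. A minimal sketch of this accounting, with an invented subgraph and assumed per-operation latencies (this is not the paper's generation algorithm):

    from functools import lru_cache

    # Invented dataflow subgraph: four independent ops feed two ops,
    # which feed one. latency[n] = software cycles for operation n.
    latency = {n: 1 for n in "abcdefg"}
    succ = {"a": ["e"], "b": ["e"], "c": ["f"], "d": ["f"],
            "e": ["g"], "f": ["g"], "g": []}

    # Software: operations issue sequentially, so their cycles add up.
    software_cycles = sum(latency.values())

    @lru_cache(maxsize=None)
    def longest_path(n):
        # Hardware latency of the collapsed CI: the latency-weighted
        # critical path through the dataflow graph.
        return latency[n] + max((longest_path(s) for s in succ[n]), default=0)

    hardware_cycles = max(longest_path(n) for n in latency)
    print(f"{software_cycles} -> {hardware_cycles} cycles, "
          f"speedup {software_cycles / hardware_cycles:.2f}")

Multi-exit CIs extend this idea past a single basic block by letting the collapsed region contain branches, i.e., several exit points.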
{"title":"Generating and Executing Multi-Exit Custom Instructions for an Adaptive Extensible Processor","authors":"Hamid Noori, Farhad Mehdipour, K. Murakami, Koji Inoue, M. Goudarzi","doi":"10.1109/DATE.2007.364612","DOIUrl":"https://doi.org/10.1109/DATE.2007.364612","url":null,"abstract":"To improve the performance of embedded processors, an effective technique is collapsing critical computation subgraphs as application-specific instruction set extensions and executing them on custom functional units. The problems of this approach are immense cost and long time of designing. To address these issues, an adaptive extensible processor was proposed in which custom instructions (CIs) are generated and added after chip-fabrication. To support this feature, custom functional units are replaced by a reconfigurable matrix of functional units with the capability of conditional execution. Unlike previous proposed CIs, it can include multiple exits. Experimental results show that multi-exit CIs enhance the performance by 46% in average compared to CIs limited to one basic block. A maximum speedup of 2.89 compared to a 4-issue in-order RISC processor, and a speedup of 1.66 in average, was achieved on MiBench benchmark suite","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"606 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123323402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Double-Via-Driven Standard Cell Library Design
Tsai-Ying Lin, Tsung-Han Lin, Hui-Hsiang Tung, Rung-Bin Lin
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266627
Double-via placement is important for increasing chip manufacturing yield, and commercial tools and recent work handle it well. However, they have limited capability for placing additional double vias (called via1s) between metal 1 and metal 2. This limitation is caused by the way the standard cells are designed and cannot be resolved by developing better tools. This paper presents a double-via-driven standard cell library design approach to solving this problem. Compared to the results obtained using a commercial cell library, our library on average achieves a 78% reduction in dead vias and a 95% reduction in dead via1s at the expense of an 11% increase in total via count. We achieve these results at almost no extra cost in total cell area and wire length.
{"title":"Double-Via-Driven Standard Cell Library Design","authors":"Tsai-Ying Lin, Tsung-Han Lin, Hui-Hsiang Tung, Rung-Bin Lin","doi":"10.1145/1266366.1266627","DOIUrl":"https://doi.org/10.1145/1266366.1266627","url":null,"abstract":"Double-via placement is important for increasing chip manufacturing yield. Commercial tools and recent work have done a great job for it. However, they are found with a limited capability of placing more double vias (called vial) between metal 1 and metal 2. Such a limitation is caused by the way we design the standard cells and can not be resolved by developing better tools. This paper presents a double-via-driven standard cell library design approach to solving this problem. Compared to the results obtained using a commercial cell library, our library on average achieves 78% reduction in dead vias and 95% reduction in dead vials at the expense of 11% increase in total via count. We achieve these results (almost) at no extra cost in total cell area and wire length","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123436955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An FPGA Based All-Digital Transmitter with Radio Frequency Output for Software Defined Radio
Zhuan Ye, J. Grosspietsch, G. Memik
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364561
This paper presents the architecture and implementation of an all-digital transmitter with radio-frequency output targeting an FPGA device. FPGA devices have been widely adopted for digital signal processing (DSP) and digital communication, and they are well suited to the evolving technology of software-defined radio (SDR) thanks to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate-frequency (IF) functionality, so significant analog and RF componentry is still needed to meet radio communication requirements. The all-digital transmitter presented in this paper synthesizes the RF signal directly in the digital domain, thereby eliminating the need for most analog and RF components. It consists of a QAM modulator and an RF pulse-width modulator (RFPWM). The binary output waveform of the RFPWM is centered at 800 MHz with a 64QAM signaling format. The entire transmitter is implemented on a Xilinx Virtex-II Pro device with an on-chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45 dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform up to radio frequency, a significant step towards an ideal SDR.
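The RFPWM idea can be sketched numerically: a QAM-modulated carrier is compared against a fast triangle wave, yielding a one-bit waveform whose pulse widths encode the RF signal. All rates, the comparator formulation, and the sanity metric below are illustrative assumptions; the paper's implementation serializes the bit stream through an on-chip MGT.

    import numpy as np

    fs = 16e9            # assumed 1-bit output rate (MGT-style serializer)
    fc = 800e6           # carrier frequency from the paper
    fsym = 20e6          # assumed symbol rate
    n = 1 << 15
    t = np.arange(n) / fs

    # Random 64QAM symbols on an 8x8 grid, held one symbol period each.
    rng = np.random.default_rng(1)
    levels = (np.arange(8) - 3.5) / 3.5
    idx = (t * fsym).astype(int)
    i_bb = rng.choice(levels, idx[-1] + 1)[idx]
    q_bb = rng.choice(levels, idx[-1] + 1)[idx]

    # Ideal modulated carrier, scaled into [-1, 1].
    rf = i_bb * np.cos(2 * np.pi * fc * t) - q_bb * np.sin(2 * np.pi * fc * t)
    rf /= np.max(np.abs(rf))

    # RFPWM as a comparator: the binary output's pulse widths track rf.
    f_tri = 3e9          # assumed triangle (reference) frequency
    tri = 2.0 * np.abs(2.0 * ((t * f_tri) % 1.0) - 1.0) - 1.0
    out = np.where(rf > tri, 1.0, -1.0)

    # Sanity check: the 1-bit stream concentrates energy around fc.
    spec = np.abs(np.fft.rfft(out)) ** 2
    freq = np.fft.rfftfreq(n, 1 / fs)
    band = np.abs(freq - fc) < 20e6
    print("fraction of sub-2GHz power within +/-20 MHz of fc:",
          spec[band].sum() / spec[freq < 2e9].sum())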
{"title":"An FPGA Based All-Digital Transmitter with Radio Frequency Output for Software Defined Radio","authors":"Zhuan Ye, J. Grosspietsch, G. Memik","doi":"10.1109/DATE.2007.364561","DOIUrl":"https://doi.org/10.1109/DATE.2007.364561","url":null,"abstract":"This paper presents the architecture and implementation of an all-digital transmitter with radio frequency output targeting an FPGA device. FPGA devices have been widely adopted in the applications of digital signal processing (DSP) and digital communication. They are typically well suited for the evolving technology of software defined radios (SDR) due to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate frequency (IF) functionalities. Therefore, significant analog and RF components are still needed to fulfill the radio communication requirements. The all-digital transmitter presented in this paper directly synthesizes RF signal in the digital domain, therefore eliminates the need for most of the analog and RF components. The all-digital transmitter consists of one QAM modulator and one RF pulse width modulator (RFPWM). The binary output waveform from RFPWM is centered at 800MHz with 64QAM signaling format. The entire transmitter is implemented using Xilinx Virtex2pro device with on chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform to radio frequency, therefore making a significant evolution towards an ideal SDR","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124305213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temperature Aware Task Scheduling in MPSoCs
A. Coskun, T. Simunic, K. Whisnant
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266730
In deep-submicron circuits, elevated temperatures have brought new challenges in reliability, timing, performance, cooling costs, and leakage power. Conventional thermal management techniques sacrifice performance to control thermal behavior, slowing down or turning off processors when a critical temperature threshold is exceeded. Moreover, studies have shown that, in addition to high temperatures, temporal and spatial variations in temperature impact system reliability. In this work, we explore the benefits of thermally aware task scheduling for multiprocessor systems-on-a-chip (MPSoCs). We design and evaluate OS-level dynamic scheduling policies with negligible performance overhead. We show that simple-to-implement policies that make decisions based on temperature measurements achieve better temporal and spatial thermal profiles than state-of-the-art schedulers. We also enhance reactive strategies such as dynamic thread migration with our scheduling policies. In this way, hot spots and temperature variations are reduced, and the performance cost is significantly lower.
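A minimal version of such a measurement-driven policy is to dispatch each ready task to the coolest core rather than balancing load alone. The sketch below uses an invented toy thermal model; it is not the paper's policy or simulator.

    AMBIENT, COOLING = 45.0, 0.1

    class Core:
        def __init__(self, cid):
            self.cid, self.temp, self.queue = cid, AMBIENT, 0

        def tick(self):
            # Toy thermal model: heat while busy, then relax toward
            # ambient at a rate proportional to the excess temperature.
            if self.queue:
                self.temp += 2.0
                self.queue -= 1
            self.temp -= COOLING * (self.temp - AMBIENT)

    def dispatch(cores):
        # Temperature-aware policy: send new work to the coolest core,
        # spreading heat spatially and flattening temporal variation.
        return min(cores, key=lambda c: c.temp)

    cores = [Core(i) for i in range(4)]
    for step in range(200):
        if step % 2 == 0:          # a new 3-tick task every other step
            dispatch(cores).queue += 3
        for c in cores:
            c.tick()

    print([round(c.temp, 1) for c in cores])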
{"title":"Temperature Aware Task Scheduling in MPSoCs","authors":"A. Coskun, T. Simunic, K. Whisnant","doi":"10.1145/1266366.1266730","DOIUrl":"https://doi.org/10.1145/1266366.1266730","url":null,"abstract":"In deep submicron circuits, elevation in temperatures has brought new challenges in reliability, timing, performance, cooling costs and leakage power. Conventional thermal management techniques sacrifice performance to control the thermal behavior by slowing down or turning off the processors when a critical temperature threshold is exceeded. Moreover, studies have shown that in addition to high temperatures, temporal and spatial variations in temperature impact system reliability. In this work, we explore the benefits of thermally aware task scheduling for multiprocessor systems-on-a-chip (MPSoC). We design and evaluate OS-level dynamic scheduling policies with negligible performance overhead. We show that, using simple to implement policies that make decisions based on temperature measurements, better temporal and spatial thermal profiles can be achieved in comparison to state-of-art schedulers. We also enhance reactive strategies such as dynamic thread migration with our scheduling policies. This way, hot spots and temperature variations are decreased, and the performance cost is significantly reduced","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128132990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reduction of Detected Acceptable Faults for Yield Improvement via Error-Tolerance
Tong-Yu Hsieh, Kuen-Jong Lee, M. Breuer
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266717
Error-tolerance is an innovative way to enhance the effective yield of IC products. A test methodology based on error-rate estimation was previously proposed to support error-tolerance: without violating the system error-rate constraint specified by the user, it identifies a set of faults that can be ignored during testing, leading to a significant improvement in yield. However, the patterns that detect all of the unacceptable faults usually also detect a large number of acceptable faults, degrading the achievable yield improvement. In this paper, the authors first provide a probabilistic analysis of this problem and show that a conventional ATPG procedure cannot adequately address it. The authors then present a novel test pattern selection procedure and an output masking technique. The selection process generates a test set that detects all unacceptable faults but as few acceptable faults as possible. The masking technique then examines the generated test patterns and identifies a list of output lines that can be masked (not observed) during testing so as to further avoid detecting acceptable faults. Experimental results show that with the proposed techniques only a small number of acceptable faults are still detected; in many cases the actual yield improvement approaches the optimal achievable value.
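The selection step can be viewed as a weighted set-cover problem: cover every unacceptable fault while penalizing patterns that also detect acceptable faults. Below is a greedy sketch along these lines, with invented fault-detection sets; the paper's actual ATPG-based procedure is not shown here.

    # pattern -> (unacceptable faults detected, acceptable faults detected)
    patterns = {
        "p1": ({"u1", "u2"}, {"a1", "a2", "a3"}),
        "p2": ({"u1"}, set()),
        "p3": ({"u2", "u3"}, {"a1"}),
        "p4": ({"u3"}, {"a2"}),
    }

    uncovered = {"u1", "u2", "u3"}   # every unacceptable fault must be hit
    selected, detected_acc = [], set()

    while uncovered:
        def gain(p):
            unacc, acc = patterns[p]
            # New unacceptable coverage per newly exposed acceptable fault.
            return len(unacc & uncovered) / (len(acc - detected_acc) + 0.5)

        best = max((p for p in patterns if p not in selected), key=gain)
        if not patterns[best][0] & uncovered:
            break                    # remaining faults are undetectable
        selected.append(best)
        uncovered -= patterns[best][0]
        detected_acc |= patterns[best][1]

    print(selected, detected_acc)    # -> ['p2', 'p3'] {'a1'}

Output masking would then go further, dropping observation of the output lines on which the surviving acceptable faults (here a1) manifest.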
{"title":"Reduction of Detected Acceptable Faults for Yield Improvement via Error-Tolerance","authors":"Tong-Yu Hsieh, Kuen-Jong Lee, M. Breuer","doi":"10.1145/1266366.1266717","DOIUrl":"https://doi.org/10.1145/1266366.1266717","url":null,"abstract":"Error-tolerance is an innovative way to enhance the effective yield of IC products. Previously a test methodology based on error-rate estimation to support error-tolerance was proposed. Without violating the system error-rate constraint specified by the user, this methodology identifies a set of faults that can be ignored during testing, thereby leading to a significant improvement in yield. However, usually the patterns detecting all of the unacceptable faults also detect a large number of acceptable faults, resulting in a degradation in achievable yield improvement. In this paper, the authors first provide a probabilistic analysis of this problem and show that a conventional ATPG procedure cannot adequately address this problem. The authors then present a novel test pattern selection procedure and an output masking technique to deal with this problem. The selection process generates a test set aimed to detect all unacceptable faults but as few acceptable faults as possible. The masking technique then examines the generated test patterns and identifies a list of output lines that can be masked (not observed) during testing so as to further avoid the detection of acceptable faults. Experimental results show that by employing the proposed techniques, only a small number of acceptable faults are still detected. In many cases the actual yield improvement approaches the optimal value that can be achieved","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129921104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design methods for Security and Trust
I. Verbauwhede, P. Schaumont
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364671
The design of ubiquitous and embedded computers focuses on cost factors such as area, power consumption, and performance, while security and trust properties are often an afterthought. Yet the purpose of ubiquitous electronics is to act and negotiate on their owner's behalf, and this makes trust a first-order concern. We outline a methodology for the design of secure and trusted embedded electronic systems, which builds on identifying the security-sensitive part of a system (the root-of-trust) and iteratively partitioning and protecting that root-of-trust across all levels of design abstraction, including protocols, software, hardware, and circuits. We review active research in the area of secure design methodologies.
{"title":"Design methods for Security and Trust","authors":"I. Verbauwhede, P. Schaumont","doi":"10.1109/DATE.2007.364671","DOIUrl":"https://doi.org/10.1109/DATE.2007.364671","url":null,"abstract":"The design of ubiquitous and embedded computers focuses on cost factors such as area, power-consumption, and performance. Security and trust properties, on the other hand, are often an afterthought. Yet the purpose of ubiquitous electronics is to act and negotiate on their owner s behalf, and this makes trust a first-order concern. We outline a methodology for the design of secure and trusted electronic embedded systems, which builds on identifying the secure-sensitive part of a system (the root-of-trust) and iteratively partitioning and protecting that root-of-trust over all levels of design abstraction. This includes protocols, software, hardware, and circuits. We review active research in the area of secure design methodologies","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"16 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129003507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boosting the Role of Inductive Invariants in Model Checking
G. Cabodi, Sergio Nocco, S. Quer
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266654
This paper focuses on inductive invariants in unbounded model checking as a means to improve efficiency and scalability. First, it introduces optimized techniques to speed up the computation of inductive invariants, considering both equivalences and implications between pairs of nodes in the logic network. Second, it presents a very efficient dynamic procedure, based on an incremental SAT approach, to reduce the set of checked invariants. Finally, it shows how to integrate inductive invariant computation effectively with state-of-the-art model checking procedures. The experiments address different aspects of property verification, specifically considering cases where inductive invariants alone are not sufficient for the final proof.
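The underlying fixed-point loop for computing inductive invariants (guess candidates, drop those that fail the inductive step, repeat) can be sketched on a toy state machine. Here exhaustive state enumeration stands in for the incremental SAT calls, and the transition function and candidate invariants are invented for illustration.

    from itertools import product

    # Toy 3-bit state machine (invented transition function).
    def step(s):
        a, b, c = s
        return (b, a, a and b)

    sig_x = lambda s: s[0] and s[2]
    sig_y = lambda s: s[1] and s[2]

    # Candidate invariants: a node equivalence and a constant.
    cands = [("x == y", lambda s: sig_x(s) == sig_y(s)),
             ("c == 0", lambda s: not s[2])]

    INIT = (False, False, False)

    def violations(cands):
        bad = {name for name, f in cands if not f(INIT)}       # base case
        for s in product([False, True], repeat=3):
            if all(f(s) for _, f in cands):                    # assume at step k
                t = step(s)
                bad |= {name for name, f in cands if not f(t)} # check step k+1
        return bad

    # Fixed point: drop failing candidates and retry until the remaining
    # set is mutually inductive (the SAT-based version does this
    # incrementally rather than by enumeration).
    while True:
        bad = violations(cands)
        if not bad:
            break
        cands = [(n, f) for n, f in cands if n not in bad]

    print([n for n, _ in cands])   # -> ['x == y']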
{"title":"Boosting the Role of Inductive Invariants in Model Checking","authors":"G. Cabodi, Sergio Nocco, S. Quer","doi":"10.1145/1266366.1266654","DOIUrl":"https://doi.org/10.1145/1266366.1266654","url":null,"abstract":"This paper focuses on inductive invariants in unbounded model checking to improve efficiency and scalability. First of all, it introduces optimized techniques to speedup the computation of inductive invariants, considering both equivalences and implications between pairs of nodes in the logic network. Secondly, it presents a very efficient dynamic procedure, based on an incremental SAT approach, to reduce the set of checked invariants. Finally, it shows how to effectively integrate inductive invariant computations with state-of-the-art model checking procedures. Experiments address different property verification aspects, and specifically consider cases where inductive invariants alone are not sufficient for the final proof","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125310434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}