G. Parthasarathy, Madhu K. Iyer, K. Cheng, Li-C. Wang
We present a novel hybrid finite-domain constraint solving engine for RTL circuits that automatically uses data-path abstraction. We describe how DPLL search can be modified with efficient finite-domain constraint propagation to improve communication between the interacting integer and Boolean domains. This enables an efficient combination of Boolean SAT and linear integer arithmetic solving techniques. We apply conflict-based learning on the variables at the boundary between control and data-path for additional performance benefits. Finally, the hybrid constraint solver is evaluated experimentally on a set of example circuits.
{"title":"An efficient finite-domain constraint solver for circuits","authors":"G. Parthasarathy, Madhu K. Iyer, K. Cheng, Li-C. Wang","doi":"10.1145/996566.996628","DOIUrl":"https://doi.org/10.1145/996566.996628","url":null,"abstract":"We present a novel hybrid finite-domain constraint solving engine for RTL circuits, that automatically uses data-path abstraction. We describe how DPLL search can be modified by using efficient finite-domain constraint propagation to improve communication between interacting integer and Boolean domains. This enables efficient combination of Boolean SAT and linear integer arithmetic solving techniques. We use conflict-based learning using the variables on the boundary of control and data-path for additional performance benefits. Finally, the hybrid constraint solver is experimentally analyzed using some example circuits.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129922385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Circuits built in nano-meter technologies are becoming increasingly vulnerable to signal interference from multiple noise sources as well as radiation-induced soft errors. One way to ensure reliable functioning of chips is to analyze and identify the spots in the circuit that are susceptible to such effects (called "soft spots" in this paper) and to make sure these soft spots are "hardened" so that they resist multiple noise effects and soft errors. In this paper, we present a scalable soft spot analysis methodology to study the vulnerability of digital ICs exposed to nano-meter noise and transient soft errors. First, we define "softness" as an important characteristic to gauge system vulnerability. Then several key factors affecting softness are examined. Finally, an efficient Automatic Soft Spot Analyzer (ASSA) is developed to obtain the softness distribution, which reflects the unbalanced noise-tolerance capability of different regions in a design. The proposed methodology provides guidelines for reducing severe nano-meter noise effects caused by aggressive design in the pre-manufacturing phase, and for selectively inserting on-line protection schemes to achieve higher robustness. The quality of the proposed soft-spot analysis technique is validated by HSPICE simulation, and its scalability is demonstrated on a commercial embedded processor.
{"title":"A scalable soft spot analysis methodology for compound noise effects in nano-meter circuits","authors":"Chong Zhao, Xiaoliang Bai, S. Dey","doi":"10.1145/996566.996804","DOIUrl":"https://doi.org/10.1145/996566.996804","url":null,"abstract":"Circuits using nano-meter technologies are becoming increasingly vulnerable to signal interference from multiple noise sources as well as radiation-induced soft errors. One way to ensure reliable functioning of chips is to be able to analyze and identify the spots in the circuit which are susceptible to such effects (called \"soft spots\" in this paper), and to make sure such soft spots are \"hardened\" so as to resist multiple noise effects and soft errors. In this paper, we present a scalable soft spot analysis methodology to study the vulnerability of digital ICs exposed to nano-meter noise and transient soft errors. First, we define \"softness\" as an important characteristic to gauge system vulnerability. Then several key factors affecting softness are examined. Finally an efficient Automatic Soft Spot Analyzer (ASSA) is developed to obtain the softness distribution which reflects the unbalanced noise-tolerant capability of different regions in a design. The proposed methodology provides guidelines to reduction of severe nano-meter noise effects caused by aggressive design in the pre-manufacturing phase, and guidelines to selective insertion of on-line protection schemes to achieve higher robustness. The quality of the proposed soft-spot analysis technique is validated by HSPICE simulation, and its scalability is demonstrated on a commercial embedded processor.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121527001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sang-Il Han, A. Baghdadi, M. Bonaciu, S. Chae, A. Jerraya
Massive data transfers encountered in emerging multimedia embedded applications require an architecture that can handle both a highly distributed memory structure and multiprocessor computation. The key issue that needs to be solved is then how to manage data transfers between large numbers of distributed memories. To overcome this issue, this paper proposes a scalable Distributed Memory Server (DMS) for multiprocessor SoC (MPSoC). The proposed DMS is composed of (1) high-performance and flexible memory service access points (MSAPs), which execute data transfers without intervention of the processing elements, (2) a data network, and (3) a control network. It can handle direct massive data transfers between the distributed memories of an MPSoC. The scalability and flexibility of the proposed DMS are illustrated through the implementation of an MPEG-4 video encoder for the QCIF and CIF formats. The experiments show clearly how the DMS can be adapted to accommodate different SoC configurations requiring various data transfer bandwidths. Synthesis results show that bandwidth can scale up to 28.8 GB/s.
{"title":"An efficient scalable and flexible data transfer architecture for multiprocessor SoC with massive distributed memory","authors":"Sang-Il Han, A. Baghdadi, M. Bonaciu, S. Chae, A. Jerraya","doi":"10.1145/996566.996636","DOIUrl":"https://doi.org/10.1145/996566.996636","url":null,"abstract":"Massive data transfer encountered in emerging multimedia embedded applications requires architecture allowing both highly distributed memory structure and multiprocessor computation to be handled. The key issue that needs to be solved is then how to manage data transfers between large numbers of distributed memories. To overcome this issue, our paper proposes a scalable Distributed Memory Server (DMS) for multiprocessor SoC (MPSoC). The proposed DMS is composed of: (1) high-performance and flexible memory service access points (MSAPs), which execute data transfers without intervention of the processing elements, (2) data network, and (3) control network. It can handle direct massive data transfer between the distributed memories of an MPSoC. The scalability and flexibility of the proposed DMS are illustrated through the implementation of an MPEG4 video encoder for QCIF and CIF formats. The experiments show clearly how DMS can be adapted to accommodate different SoC configurations requiring various data transfer bandwidths. Synthesis results show that bandwidth can scale up to 28.8 GB/sec.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127755636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seokwoo Lee, Shidhartha Das, V. Bertacco, T. Austin, D. Blaauw, T. Mudge
Architectural simulation has achieved a prominent role in the system design cycle by providing designers the ability to quickly examine a wide variety of design choices. However, the recent trend in system design toward architectures that react to circuit-level phenomena has outstripped the capabilities of traditional cycle-based architectural simulators. In this paper, we present an architectural simulator design that incorporates a circuit modeling capability, permitting architectural-level simulations that react to circuit characteristics (such as latency, energy, or current draw) on a cycle-by-cycle basis. While these additional capabilities slow simulation speed, we show that the careful application of circuit simulation optimizations and simulation sampling techniques permits high levels of detail with sufficient speed to examine entire workloads.
{"title":"Circuit-aware architectural simulation","authors":"Seokwoo Lee, Shidhartha Das, V. Bertacco, T. Austin, D. Blaauw, T. Mudge","doi":"10.1145/996566.996656","DOIUrl":"https://doi.org/10.1145/996566.996656","url":null,"abstract":"Architectural simulation has achieved a prominent role in the system design cycle by providing designers the ability to quickly examine a wide variety of design choices. However, the recent trend in system design toward architectures that react to circuit-level phenomena has outstripped the capabilities of traditional cycle-based architectural simulators. In this paper, we present an architectural simulator design that incorporates a circuit modeling capability, permitting architectural-level simulations that react to circuit characteristics (such as latency,energy,or current draw) on a cycle-by-cycle basis. While these additional capabilities slow simulation speed, we show that the careful application of circuit simulation optimizations and simulation sampling techniques permit high levels of detail with sufficient speed to examine entire workloads.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121551180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ability to control variations in the IC fabrication process is rapidly diminishing as feature sizes continue towards the sub-100 nm regime. As a result, there is increasing uncertainty in the performance of CMOS circuits. Accounting for the worst-case values of all parameters results in an unacceptably low timing yield. Design for Variability, which involves designing to achieve a given level of confidence in the performance of ICs, is fast becoming an indispensable part of IC design methodology. This paper describes a method to identify the paths in a circuit that are most responsible for the spread of timing performance. The method is based on defining a disutility function of the gate and path delays that includes both the means and variances of the delay random variables. Based on the moments of this disutility function, an algorithm is presented that selects a subset of paths (called undominated paths) as being most responsible for the variation in timing performance. Next, a statistical gate sizing algorithm is presented that minimizes the delay variability of the nodes on the selected paths, subject to constraints on the critical path delay and the area penalty. Monte Carlo simulations with ISCAS'85 benchmark circuits show that our statistical optimization approach results in significant improvements in timing yield over traditional deterministic sizing methods.
{"title":"A methodology to improve timing yield in the presence of process variations","authors":"Sreeja Raj, S. Vrudhula, Janet Roveda","doi":"10.1145/996566.996694","DOIUrl":"https://doi.org/10.1145/996566.996694","url":null,"abstract":"The ability to control the variations in IC fabrication process is rapidly diminishing as feature sizes continue towards the sub-100 nm regime. As a result, there is an increasing uncertainty in the performance of CMOS circuits. Accounting for the worst case values of all parameters will result in an unacceptably low timing yield. Design for Variability, which involves designing to achieve a given level of confidence in the performance of ICs, is fast becoming an indispensable part of IC design methodology. This paper describes a method to identify certain paths in the circuit that are responsible for the spread of timing performance. The method is based on defining a disutility function of the gate and path delays, which includes both the means and variances of the delay random variables. Based on the moments of this disutility function, an algorithm is presented which selects a subset of paths (called undominated paths) as being most responsible for the variation in timing performance. Next, a statistical gate sizing algorithm is presented, which is aimed at minimizing the delay variability of the nodes in the selected paths subject to constraints on the critical path delay and the area penalty. Monte-Carlo simulations with ISCAS '85 benchmark circuits show that our statistical optimization approach results in significant improvements in timing yield over traditional deterministic sizing methods.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133562724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We develop an approach to minimizing total power in a dual-Vdd and dual-Vth design. The algorithm runs in two distinct phases. The first phase relies on upsizing to create slack and maximizes low-Vdd assignments in a backward topological manner. The second phase proceeds in a forward topological fashion and both sizes gates and re-assigns them to high Vdd to enable significant static power savings through high-Vth assignment. The proposed algorithm is implemented and tested on a set of combinational benchmark circuits. A comparison with traditional CVS and dual-Vth/sizing algorithms demonstrates the advantage of the algorithm over a range of activity factors, including an average power reduction of 30% (50%) at high (nominal) primary input activities.
{"title":"Power minimization using simultaneous gate sizing, dual-Vdd and dual-Vth assignment","authors":"A. Srivastava, D. Sylvester, D. Blaauw","doi":"10.1145/996566.996777","DOIUrl":"https://doi.org/10.1145/996566.996777","url":null,"abstract":"We develop an approach to minimize total power in a dual-Vdd and dual-Vth design. The algorithm runs in two distinct phases. The first phase relies on upsizing to create slack and maximize low Vdd assignments in a backward topological manner. The second phase proceeds in a forward topological fashion and both sizes and re-assigns gates to high Vdd to enable significant static power savings through high Vth assignment. The proposed algorithm is implemented and tested on a set of combinational benchmark circuits. A comparison with traditional CVS and dual-Vth/sizing algorithms demonstrate the advantage of the algorithm over a range of activity factors, including an average power reduction of 30% (50%) at high (nominal) primary input activities.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"301 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131404403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CMOS technology scaling is pushing channel lengths below the wavelength of light. Parameter variation caused by sub-wavelength lithography will pose a major challenge for the design and reliability of future high-performance microprocessors in nanometer technologies. In this paper, we present the impact of these variations on processor functionality, predictability, and reliability. We propose design and CAD solutions for variation tolerance. We conclude the paper with soft error rate scaling trends and soft-error-tolerant circuits for reliability enhancement.
{"title":"Design and reliability challenges in nanometer technologies","authors":"S. Borkar, T. Karnik, V. De","doi":"10.1145/996566.996588","DOIUrl":"https://doi.org/10.1145/996566.996588","url":null,"abstract":"CMOS technology scaling is causing the channel lengths to be sub-wavelength of light. Parameter variation, caused by sub-wavelength lithography, will pose a major challenge for design and reliability of future high performance microprocessors in nanometer technologies. In this paper, we present the impact of these variations on processor functionality, Predictability and reliability. We propose design and CAD solutions for variation tolerance. We conclude this paper with sofi error rate scaling trends and sofl error tolerant circuits for reliabilitv enhancement.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131436504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design-for-testability (DFT) for synchronous sequential circuits allows the generation and application of tests that rely on non-functional operation of the circuit. This can result in unnecessary yield loss due to the detection of faults that do not affect normal circuit operation. Considering single stuck-at faults in full-scan circuits, a test vector consists of a primary input vector U and a state S. We say that the test vector consisting of U and S relies on non-functional operation if S is an unreachable state, i.e., a state that cannot be reached from all the circuit states. Our goal is to obtain test sets whose states S are reachable states. Given a test set C, the solution we explore is based on a simulation-based procedure that identifies reachable states which can replace unreachable states in C. No modifications are required to the test generation procedure, and no sequential test generation is needed. Our results demonstrate that the proposed procedure is able to produce test sets that detect many of the circuit faults that are detectable using scan, and practically all the sequentially irredundant faults, using test vectors with reachable states. The procedure is applicable to any type of scan-based test set, including test sets for delay faults.
{"title":"On the generation of scan-based test sets with reachable states for testing under functional operation conditions","authors":"I. Pomeranz","doi":"10.1145/996566.996813","DOIUrl":"https://doi.org/10.1145/996566.996813","url":null,"abstract":"Design-for-testability (DFT) for synchronous sequential circuits allows the generation and application of tests that rely on non-functional operation of the circuit. This can result in unnecessary yield loss due to the detection of faults that do not affect normal circuit operation. Considering single stuck-at faults in full-scan circuits, a test vector consists of a primary input vector U and a state S .We say that the test vector consisting of U and S relies on non-functional operation if S is an unreachable state, i.e., a state that cannot be reached from all the circuit states. Our goal is to obtain test sets with states S that are reachable states. Given a test set C, the solution we explore is based on a simulation-based procedure to identify reachable states that can replace unreachable states in C. No modifications are required to the test generation procedure and no sequential test generation is needed. Our results demonstrate that the proposed procedure is able to produce test sets that detect many of the circuit faults, which are detectable using scan, and practically all the sequentially irredundant faults, by using test vectors with reachable states. The procedure is applicable to any type of scan-based test set, including test sets for delay faults.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132355267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designers can create completely new processors with custom instruction set architectures (ISA), using various methods involving configurable logic. Configurable technologies also enable designers to enhance the basic ISA of standard processors or the ISA of a proprietary processor to execute at speed workloads for which the processor was not initially conceived. Contrary to some early beliefs, the idea behind creating a custom instruction is not to compress several existing ISA instructions into one cycle; it is to execute loops requiring hundreds or thousands of iterations faster than a single machine could, even if that machine were clocked at the top frequency afforded by state-of-the-art semiconductor speeds and temperature limitations. To achieve high performance, most configurable platforms execute loop iterations in parallel; operating on multiple data in one cycle can make up for engine frequency and power limitations. Aimed at implementations in ASIC technologies, configurable platforms can be defined as designer-created, mostly hardwired logic interfaced via ISA instruction enhancements. Re-configurable platforms were introduced only recently. Architectures employing FPGA-like structures instead of hardwired logic offer flexibility useful in addressing a broader range of applications and tracking evolving standards. The presentation surveys configurable and re-configurable structures, including fabrics of processors, evolving trends, and the impact of soft-hardware development tools. Fabrics of processors were initially aimed at very high performance tasks in communications. This type of architecture is also beginning to be employed in low-power applications, where it can offer a ratio of performance to power exceeding that of an implementation using one or more general-purpose processors. Several emerging fabric configurations will be described and compared: base cores using a processor element (PE) and private memory for instructions and data, PEs using local instruction memory and communicating data, PEs that can change processing capabilities depending on the function to be executed, heterogeneous PEs, and others. Software development tool issues have kept processor fabrics from being adopted by more designers: iterative optimal routing between PEs and assignment of functions have become additional burdens on the C/C++ programmer. None of the proposed products has acquired enough traction to justify acceptance as a standard architecture. The key to wider adoption of re-configurable engines will be found in the soft-hardware tools offered to the programmer: two types of soft-hardware tools will be described, one using program and explicit routing, the other employing hints that can generate program and routing.
{"title":"Trends in the use of re-configurable platforms","authors":"M. Baron","doi":"10.1145/996566.996685","DOIUrl":"https://doi.org/10.1145/996566.996685","url":null,"abstract":"Designers can create completely new processors with custom instruction set architectures (ISA), using various methods involving configurable logic. Configurable technologies also enable designers to enhance the basic ISA of standard processors or the ISA of a proprietary processor to execute at speed workloads for which the processor has not been initially conceived. Contrary to some early beliefs, the idea behind creating a custom instruction is not to compress several existing ISA instructions in one cycle; it is to execute loops requiring hundreds or thousands of iterations, faster than in a single machine, even if it were clocked at the top frequency afforded by state-of-the-art semiconductor speeds and temperature limitations.To achieve high performance, most configurable platforms execute loop iterations in parallel; operating on multiple data in one cycle can make up for engine frequency and power limitations. Aimed at implementations in ASIC technologies, configurable platforms can be defined as designer-created mostly hardwired logic interfaced via ISA instruction enhancements.Re-configurable platforms were introduced only recently. Architectures employing FPGA-like structures instead of hardwired logic offer flexibility useful in addressing a broader range of applications and tracking evolving standards. The presentation surveys configurable and re-configurable structures including fabrics of processors, evolving trends, and the impact of soft-hardware development tools.Fabrics of processors were initially aimed at very high performance tasks in communications. This type of architecture is also beginning to be employed in low power applications where it can offer a ratio of performance-to-power exceeding that of an implementation using one or more general-purpose processors. Several emerging fabric configurations will be described and compared: base cores using a processor element (PE) and private memory for instructions and data, PEs using local instructions' memory and communicating data, PEs that can change processing capabilities depending on the function to be executed, heterogeneous PEs and others. Software development tools' issues have kept processor fabrics from being adopted by more designers: iterative optimal routing between PEs and assignment of functions have become additional burdens on the C/C++ language programmer. None of the proposed products has acquired enough traction to justify acceptance as a standard architecture. The key to a wider adoption of re-configurable engines will be found in the soft-hardware tools offered to the programmer: two types of soft-hardware tools will be described, one using program and explicit routing, the other employing hints that can generate program and routing.","PeriodicalId":115059,"journal":{"name":"Proceedings. 
41st Design Automation Conference, 2004.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132745479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Random test generators are often used to create regression suites on the fly. Regression suites are commonly generated by choosing several specifications and generating a number of tests from each one, without reasoning about which specifications should be used and how many tests should be generated from each specification. This paper describes a technique for building high-quality random regression suites. The proposed technique uses information about the probability of each test specification covering each coverage task. This probability is used, in turn, to determine which test specifications should be included in the regression suite and how many tests should be generated from each specification. Experimental results show that this practical technique can be used to improve the quality, and reduce the cost, of regression suites. Moreover, it enables better-informed decisions regarding the size and distribution of the regression suites, and the risk involved.
{"title":"Probabilistic regression suites for functional verification","authors":"S. Fine, S. Ur, A. Ziv","doi":"10.1145/996566.996581","DOIUrl":"https://doi.org/10.1145/996566.996581","url":null,"abstract":"Random test generators are often used to create regression suites on-the-fly. Regression suites are commonly generated by choosing several specifications and generating a number of tests from each one, without reasoning which specification should he used and how many tests should he generated from each specification. This paper describes a technique for building high quality random regression suites. The proposed technique uses information about the probablity of each test specification covering each coverage task. This probability is used, in tun, to determine which test specifications should be included in the regression suite and how many tests should, be generated from each specification. Experimental results show that this practical technique can he used to improve the quality, and reduce the cost, of regression suites. Moreover, it enables better informed decisions regarding the size and distribution of the regression suites, and the risk involved.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132258548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}