Microarchitecture Floorplanning for Sub-threshold Leakage Reduction
H. Mogal, K. Bazargan
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266634
Lateral heat conduction between modules affects the temperature profile of a floorplan and, in turn, the leakage power of individual blocks, which is becoming an ever larger fraction of overall power consumption as fabrication technologies scale. By modeling temperature-dependent leakage power within a microarchitecture-aware floorplanning process, we propose a method that reduces sub-threshold leakage power. To that end, two leakage models are used: a transient formulation independent of any leakage power model, and a simpler formulation derived from an empirical leakage power model; both show good fidelity to detailed transient simulations. Our algorithm can reduce sub-threshold leakage by up to 15% with a minor degradation in performance, compared to a floorplanning process that does not model leakage. We also show the importance of modeling whitespace during floorplanning and its impact on leakage savings.
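As a rough illustration of how temperature-dependent leakage can enter a floorplanner's objective, the Python sketch below folds an exponential empirical leakage model into an annealing-style cost function. The model form, constants, and block data are assumptions for illustration, not the formulations used in the paper.

    import math

    T0 = 358.0          # reference temperature in kelvin (assumed)
    BETA = 0.02         # empirical leakage-temperature sensitivity (assumed)

    def leakage_power(p_leak_ref, temp_k):
        """Empirical sub-threshold leakage scaling: exponential in temperature."""
        return p_leak_ref * math.exp(BETA * (temp_k - T0))

    def floorplan_cost(blocks, alpha=1.0, gamma=0.5):
        """Weighted cost: an area term plus a temperature-aware leakage term.
        Each block carries 'area', 'p_leak_ref', and an estimated 'temp_k'
        coming from a (separate) thermal model of the current placement."""
        area = sum(b['area'] for b in blocks)
        leak = sum(leakage_power(b['p_leak_ref'], b['temp_k']) for b in blocks)
        return alpha * area + gamma * leak

    # toy usage: two blocks, one running hotter than the other
    blocks = [
        {'area': 4.0, 'p_leak_ref': 0.20, 'temp_k': 370.0},
        {'area': 2.0, 'p_leak_ref': 0.05, 'temp_k': 350.0},
    ]
    print(round(floorplan_cost(blocks), 3))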
{"title":"Microarchitecture Floorplanning for Sub-threshold Leakage Reduction","authors":"H. Mogal, K. Bazargan","doi":"10.1145/1266366.1266634","DOIUrl":"https://doi.org/10.1145/1266366.1266634","url":null,"abstract":"Lateral heat conduction between modules affects the temperature profile of a floorplan, affecting the leakage power of individual blocks which increasingly is becoming a larger fraction of the overall power consumption with scaling of fabrication technologies. By modeling temperature dependent leakage power within a micro architecture-aware floorplanning process, we propose a method that reduces sub-threshold leakage power. To that end, two leakage models are used: a transient formulation independent of any leakage power model and a simpler formulation derived from an empirical leakage power model, both showing good fidelity to detailed transient simulations. Our algorithm can reduce subthreshold leakage by up to 15% with a minor degradation in performance, compared to a floorplanning process that does not model leakage. We also show the importance of modeling whitespace during floorplanning and its impact on leakage savings","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129125722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing in the Year 2020
R. Galivanche, R. Kapur, A. Rubio
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364417
Testing today's several-hundred-million-transistor systems-on-chip, with analog and RF blocks, many processor cores, and tens of memories, is already a huge task. What will test technology look like in the year 2020, with hundreds of billions of transistors on a single chip? Can we get there with tweaks to today's technology? While the exact nature of the circuit styles, architectural innovations, and product innovations of 2020 is highly speculative at this point, we examine the impact of likely design and process technology trends on testing methods.
{"title":"Testing in the Year 2020","authors":"R. Galivanche, R. Kapur, A. Rubio","doi":"10.1109/DATE.2007.364417","DOIUrl":"https://doi.org/10.1109/DATE.2007.364417","url":null,"abstract":"Testing today of a several hundred million transistor system-on-chip with analog, RF blocks, many processor cores and tens of memories is a huge task. What the test technology be like in year 2020 with hundreds of billions of transistors on a single chip? Can we get there with tweaks to today's technology? While the exact nature of the circuit styles, architectural innovations and product innovations in year 2020 are highly speculative at this point, we examine the impact of likely design and process technology trends on testing methods","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130927424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compositional Specification of Behavioral Semantics
Kai Chen, J. Sztipanovits, S. Neema
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364408
An emerging common trend in model-based design of embedded software and systems is the adoption of domain-specific modeling languages (DSMLs). While abstract syntax metamodeling enables the rapid and inexpensive development of DSMLs, the specification of DSML semantics is still a hard problem. In previous work, we have developed methods and tools for the semantic anchoring of DSMLs. Semantic anchoring introduces a set of reusable "semantic units" that provide reference semantics for basic behavioral categories using the abstract state machine (ASM) framework. In this paper, we extend the semantic anchoring framework to heterogeneous behaviors by developing a method for the composition of semantic units. Semantic unit composition reduces the required effort from DSML designers and improves the quality of the specification. The proposed method is demonstrated through a case study.
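To give a flavor of what composing behavioral "semantic units" means, the toy sketch below models each unit as an ASM-like guarded update rule over a shared state and composes them synchronously. This is only a minimal illustration of the general idea; the units, state variables, and composition operator are invented for the example and are not the paper's framework.

    # Each "semantic unit" is a step function: state -> dict of updates (ASM-style).
    def timer_unit(state):
        # counts down while armed
        if state['armed'] and state['count'] > 0:
            return {'count': state['count'] - 1}
        return {}

    def controller_unit(state):
        # fires an event when the timer expires
        if state['armed'] and state['count'] == 0:
            return {'armed': False, 'fired': True}
        return {}

    def compose(*units):
        """Synchronous composition: collect all updates, then apply them at once."""
        def step(state):
            updates = {}
            for unit in units:
                updates.update(unit(state))
            new_state = dict(state)
            new_state.update(updates)
            return new_state
        return step

    machine = compose(timer_unit, controller_unit)
    state = {'armed': True, 'count': 2, 'fired': False}
    for _ in range(4):
        state = machine(state)
    print(state)   # {'armed': False, 'count': 0, 'fired': True}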
{"title":"Compositional Specification of Behavioral Semantics","authors":"Kai Chen, J. Sztipanovits, S. Neema","doi":"10.1109/DATE.2007.364408","DOIUrl":"https://doi.org/10.1109/DATE.2007.364408","url":null,"abstract":"An emerging common trend in model-based design of embedded software and systems is the adoption of domain-specific modeling languages (DSMLs). While abstract syntax metamodeling enables the rapid and inexpensive development of DSMLs, the specification of DSML semantics is still a hard problem. In previous work, we have developed methods and tools for the semantic anchoring of DSMLs. Semantic anchoring introduces a set of reusable \"semantic units\" that provide reference semantics for basic behavioral categories using the abstract state machine (ASM) framework. In this paper, we extend the semantic anchoring framework to heterogeneous behaviors by developing a method for the composition of semantic units. Semantic unit composition reduces the required effort from DSML designers and improves the quality of the specification. The proposed method is demonstrated through a case study","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"98 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130243792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Power Management in Energy Harvesting Systems
Clemens Moser, L. Thiele, D. Brunelli, L. Benini
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364689
Recently, there has been substantial interest in the design of systems that receive their energy from regenerative sources such as solar cells. In contrast to approaches that attempt to minimize power consumption, we are concerned with adapting application parameters such that maximal utility is obtained while respecting the limited and time-varying amount of available energy. Instead of solving the optimization problem on-line, which may be prohibitively complex in terms of running time and energy consumption, we propose a parameterized specification and the computation of a corresponding optimal on-line controller. The efficiency of the new approach is demonstrated by experimental results and measurements on a sensor node.
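A minimal sketch of the adaptation idea, assuming a simple greedy controller rather than the paper's precomputed optimal controller: pick the highest application rate that the stored plus forecast harvested energy can sustain over a planning horizon. All quantities and names below are hypothetical.

    def choose_rate(battery_j, harvest_forecast_j, horizon_s,
                    energy_per_task_j=0.5, rate_max_hz=10.0, reserve_j=20.0):
        """Pick the highest task rate that keeps the energy budget non-negative
        over the planning horizon. A greedy stand-in for an optimal controller."""
        budget = battery_j - reserve_j + harvest_forecast_j
        if budget <= 0:
            return 0.0
        rate = budget / (energy_per_task_j * horizon_s)
        return min(rate, rate_max_hz)

    # toy usage: 60 J stored, 30 J expected from the solar cell over the next 10 min
    print(choose_rate(battery_j=60.0, harvest_forecast_j=30.0, horizon_s=600.0))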
{"title":"Adaptive Power Management in Energy Harvesting Systems","authors":"Clemens Moser, L. Thiele, D. Brunelli, L. Benini","doi":"10.1109/DATE.2007.364689","DOIUrl":"https://doi.org/10.1109/DATE.2007.364689","url":null,"abstract":"Recently, there has been a substantial interest in the design of systems that receive their energy from regenerative sources such as solar cells. In contrast to approaches that attempt to minimize the power consumption we are concerned with adapting parameters of the application such that a maximal utility is obtained while respecting the limited and time-varying amount of available energy. Instead of solving the optimization problem on-line which may be prohibitively complex in terms of running time and energy consumption, we propose a parameterized specification and the computation of a corresponding optimal on-line controller. The efficiency of the new approach is demonstrated by experimental results and measurements on a sensor node","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126686916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Power Management under Uncertain Information
Hwisung Jung, Massoud Pedram
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364434
This paper tackles the problem of dynamic power management (DPM) in nanoscale CMOS design technologies, which are typically affected by increasing levels of process, voltage, and temperature (PVT) variations and fluctuations. This uncertainty significantly undermines the accuracy and effectiveness of traditional DPM approaches. More specifically, we propose a stochastic framework to improve the accuracy of decision making in power management while considering manufacturing-process and/or design-induced uncertainties. A key characteristic of the framework is that uncertainties are effectively captured by a partially observable semi-Markov decision process. As a result, the proposed framework brings the underlying probabilistic PVT effects to the forefront of power management policy determination. Experimental results with a RISC processor demonstrate the effectiveness of the technique and show that the proposed variability-aware power management technique ensures robust system-wide energy savings under probabilistic variations.
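The decision step under uncertainty can be illustrated with generic belief-state machinery: maintain a probability distribution over hidden PVT-dependent power states, update it with Bayes' rule from a noisy observation, and choose the action with the best expected reward. The states, observation model, and rewards below are invented for illustration, and the policy is a one-step greedy stand-in, not the authors' semi-Markov formulation.

    STATES = ['nominal', 'hot']          # hidden PVT-dependent power states (assumed)
    ACTIONS = ['run_fast', 'run_slow']

    # P(observation | state): observations are noisy temperature-sensor readings
    OBS_MODEL = {'nominal': {'low': 0.8, 'high': 0.2},
                 'hot':     {'low': 0.3, 'high': 0.7}}

    # expected reward (e.g. negative energy cost) for taking an action in a state
    REWARD = {('run_fast', 'nominal'): 5.0, ('run_fast', 'hot'): -4.0,
              ('run_slow', 'nominal'): 2.0, ('run_slow', 'hot'):  1.0}

    def update_belief(belief, obs):
        """Bayes update of the belief over hidden states given an observation."""
        posterior = {s: belief[s] * OBS_MODEL[s][obs] for s in STATES}
        norm = sum(posterior.values())
        return {s: p / norm for s, p in posterior.items()}

    def best_action(belief):
        """Greedy (one-step) policy: maximize expected immediate reward."""
        return max(ACTIONS,
                   key=lambda a: sum(belief[s] * REWARD[(a, s)] for s in STATES))

    belief = {'nominal': 0.5, 'hot': 0.5}
    belief = update_belief(belief, 'high')
    print(belief, best_action(belief))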
{"title":"Dynamic Power Management under Uncertain Information","authors":"Hwisung Jung, Massoud Pedram","doi":"10.1109/DATE.2007.364434","DOIUrl":"https://doi.org/10.1109/DATE.2007.364434","url":null,"abstract":"This paper tackles the problem of dynamic power management (DPM) in nanoscale CMOS design technologies that are typically affected by increasing levels of process, voltage, and temperature (PVT) variations and fluctuations. This uncertainty significantly undermines the accuracy and effectiveness of traditional DPM approaches. More specifically, a stochastic framework was propose to improve the accuracy of decision making in power management, while considering the manufacturing process and/or design induced uncertainties. A key characteristic of the framework is that uncertainties are effectively captured by a partially observable semi-Markov decision process. As a result, the proposed framework brings the underlying probabilistic PVT effects to the forefront of power management policy determination. Experimental results with a RISC processor demonstrate the effectiveness of the technique and show that the proposed variability-aware power management technique ensures robust system-wide energy savings under probabilistic variations","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126409510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Code Density Through Look-up Table Compression
Talal Bonny, J. Henkel
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364390
Code density is a major requirement in embedded system design, since it not only reduces the need for scarce memory resources but also implicitly improves further important design parameters such as power consumption and performance. In this paper we introduce a novel, efficient, hardware-supported approach that belongs to the group of statistical compression schemes, as it is based on canonical Huffman coding. In particular, our scheme is the first to also compress the necessary look-up tables, which can become significant in size if the application is large and/or high compression is desired. Our scheme optimizes the number of generated look-up tables to improve the compression ratio. On average, we achieve compression ratios as low as 49% (already including the overhead of the look-up tables). Moreover, our scheme is entirely orthogonal to approaches that take particularities of a certain instruction set architecture into account. We have conducted evaluations using a representative set of applications and have applied the scheme to three major embedded processor architectures, namely ARM, MIPS, and PowerPC.
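Canonical Huffman coding is what makes compact decoding tables possible in the first place, since the codes can be reconstructed from per-symbol code lengths alone. The sketch below shows the standard canonical code assignment; it illustrates the underlying coding scheme only, not the paper's table-compression method.

    def canonical_codes(code_lengths):
        """Assign canonical Huffman codes from per-symbol code lengths.
        Symbols are sorted by (length, symbol); codes are consecutive integers."""
        symbols = sorted(code_lengths, key=lambda s: (code_lengths[s], s))
        codes, code, prev_len = {}, 0, 0
        for sym in symbols:
            length = code_lengths[sym]
            code <<= (length - prev_len)      # pad with zeros when the length grows
            codes[sym] = format(code, '0{}b'.format(length))
            code += 1
            prev_len = length
        return codes

    # toy usage: lengths as a Huffman construction might produce them
    print(canonical_codes({'a': 1, 'b': 2, 'c': 3, 'd': 3}))
    # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}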
{"title":"Efficient Code Density Through Look-up Table Compression","authors":"Talal Bonny, J. Henkel","doi":"10.1109/DATE.2007.364390","DOIUrl":"https://doi.org/10.1109/DATE.2007.364390","url":null,"abstract":"Code density is a major requirement in embedded system design since it not only reduces the need for the scarce resource memory but also implicitly improves further important design parameters like power consumption and performance. Within this paper we introduce a novel and efficient hardware-supported approach that belongs to the group of statistical compression schemes as it is based on canonical Huffman coding. In particular, our scheme is the first to also compress the necessary Look-up Tables that can become significant in size if the application is large and/or high compression is desired. Our scheme optimizes the number of generated look-up tables to improve the compression ratio. In average, we achieve compression ratios as low as 49% (already including the overhead of the lookup tables). Thereby, our scheme is entirely orthogonal to approaches that take particularities of a certain instruction set architecture into account. We have conducted evaluations using a representative set of applications and have applied it to three major embedded processor architectures, namely ARM, MIPS and PowerPC","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122263495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minimum-Energy LDPC Decoder for Real-Time Mobile Application
Weihuang Wang, G. Choi
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364615
This paper presents a low-power real-time decoder that provides constant-time processing of each frame using dynamic voltage and frequency scaling. The design uses a known capacity-approaching low-density parity-check (LDPC) code to protect data over fading channels. Real-time applications require guaranteed data rates. While conventional schemes with a fixed number of decoding iterations are not energy efficient for mobile devices, the proposed heuristic scheme pre-analyzes each received data frame to estimate the maximum number of iterations necessary for frame convergence. The result is then used to dynamically adjust the decoder frequency. Energy use is reduced further by adjusting the supply voltage to the minimum necessary for the given frequency. The resulting design provides a judicious trade-off between power consumption and error level.
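A schematic of the scaling step, under assumed numbers: from the pre-analysis estimate of how many decoding iterations a frame needs, choose the lowest clock frequency that still meets the frame deadline, then the lowest supply voltage that supports that frequency. The operating-point table and cycle count are illustrative, not taken from the paper.

    # assumed operating points: (frequency_mhz, supply_voltage_v), lowest first
    OPERATING_POINTS = [(100, 0.8), (200, 0.9), (400, 1.0), (600, 1.1)]
    CYCLES_PER_ITERATION = 2000          # assumed decoder cycles per iteration

    def select_operating_point(est_iterations, frame_deadline_us):
        """Lowest-frequency point that finishes est_iterations before the deadline."""
        cycles_needed = est_iterations * CYCLES_PER_ITERATION
        required_mhz = cycles_needed / frame_deadline_us   # cycles per us == MHz
        for freq_mhz, vdd in OPERATING_POINTS:
            if freq_mhz >= required_mhz:
                return freq_mhz, vdd
        return OPERATING_POINTS[-1]       # saturate at the fastest point

    # toy usage: a frame estimated to need 15 iterations within a 100 us deadline
    print(select_operating_point(est_iterations=15, frame_deadline_us=100.0))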
{"title":"Minimum-Energy LDPC Decoder for Real-Time Mobile Application","authors":"Weihuang Wang, G. Choi","doi":"10.1109/DATE.2007.364615","DOIUrl":"https://doi.org/10.1109/DATE.2007.364615","url":null,"abstract":"This paper presents a low-power real-time decoder that provides constant-time processing of each frame using dynamic voltage and frequency scaling. The design uses known capacity-approaching low-density parity-check (LDPC) code to contain data over fading channels. Real-time applications require guaranteed data rates. While conventional fixed-number of decoding-iteration schemes are not energy efficient for mobile devices, the proposed heuristic scheme pre-analyzes each received data frame to estimate the maximum number of necessary iterations for frame convergence. The results are then used to dynamically adjust decoder frequency. Energy use is then reduced appropriately by adjusting power supply voltage to minimum necessary for the given frequency. The resulting design provides a judicious trade-off between power consumption and error level","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121148046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating Functional Coverage in Bounded Model Checking
Daniel Große, U. Kühne, R. Drechsler
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266620
Formal verification is an important issue in circuit and system design. In this context, bounded model checking (BMC) is one of the most successful techniques. But even if all specified properties can be verified, it is difficult to determine whether they cover the complete functional behavior of a design. We propose a pragmatic approach to estimate coverage in BMC. The approach can easily be integrated into a BMC tool with only minor changes. In our approach, a coverage property is generated for each important signal. If the considered properties do not describe the signal's entire behavior, the coverage property fails and a counter-example is generated. From the counter-example an uncovered scenario can be derived; in this way the approach also helps in design understanding. Our method is demonstrated on a RISC CPU. Based on the results we identified coverage gaps, were able to close all of them, and achieved 100% functional coverage.
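The coverage idea can be shown in miniature: a signal is covered if the properties force a unique value for it in every allowed scenario; otherwise a counter-example (an uncovered scenario) exists. The brute-force toy below stands in for the SAT-based coverage property a BMC tool would generate; the design and property are invented.

    from itertools import product

    def covered(signal_values, inputs, properties):
        """A signal is 'covered' if the properties force a unique value for it
        under every input assignment (brute force over a combinational toy)."""
        gaps = []
        for assignment in product([0, 1], repeat=len(inputs)):
            env = dict(zip(inputs, assignment))
            allowed = [v for v in signal_values
                       if all(p(env, v) for p in properties)]
            if len(allowed) > 1:                 # value not uniquely determined
                gaps.append((env, allowed))
        return gaps

    # toy design: signal o, inputs a and b; the single property only constrains
    # o when a is 1, so the behavior for a == 0 is an (intentional) coverage gap
    props = [lambda env, o: (env['a'] == 0) or (o == env['b'])]
    print(covered([0, 1], ['a', 'b'], props))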
{"title":"Estimating Functional Coverage in Bounded Model Checking","authors":"Daniel Große, U. Kühne, R. Drechsler","doi":"10.1145/1266366.1266620","DOIUrl":"https://doi.org/10.1145/1266366.1266620","url":null,"abstract":"Formal verification is an important issue in circuit and system design. In this context, bounded model checking (BMC) is one of the most successful techniques. But even if all specified properties can be verified, it is difficult to determine whether they cover the complete functional behavior of a design. We propose a pragmatic approach to estimate coverage in BMC. The approach can easily be integrated in a BMC tool with only minor changes. In our approach, a coverage property is generated for each important signal. If the considered properties do not describe the signal's entire behavior, the coverage property fails and a counter-example is generated. From the counter-example an uncovered scenario can be derived. In this way the approach also helps in design understanding. Our method is demonstrated on a RISC CPU. Based on the results we identified coverage gaps. We were able to close all of them and achieved 100% functional coverage","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116645889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using the Inter- and Intra-Switch Regularity in NoC Switch Testing
Mohammad Hosseinabady, Atefe Dalirsani, Z. Navabi
Pub Date: 2007-04-16 | DOI: 10.5555/1266366.1266443
This paper proposes an efficient methodology for testing switches in a network-on-chip (NoC) architecture. A switch in a NoC consists of a number of ports and a router. Using the intra-switch regularity among the ports of a switch and the inter-switch regularity among the routers of different switches, the proposed method decreases the test application time and test data volume of NoC testing. Using a test source to generate test vectors and scan-based testing, the methodology broadcasts test vectors through the minimum spanning tree of the NoC and tests its switches concurrently. In addition, possible faults are detected by comparing test results using inter- or intra-switch comparisons. The logic and memory parts of a switch are tested by appropriate logic and memory testing methods. Experimental results show lower test application time and test power consumption compared with other methods in the literature.
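A small sketch of the transport idea, assuming a toy 2x2 mesh: build a minimum spanning tree over the NoC topology (Prim's algorithm here) and broadcast each test vector along the tree edges so every switch receives it once. Topology, costs, and names are illustrative and unrelated to the paper's test architecture details.

    import heapq

    def minimum_spanning_tree(nodes, edges):
        """Prim's algorithm. edges maps an undirected pair (u, v) to its link cost."""
        adj = {n: [] for n in nodes}
        for (u, v), w in edges.items():
            adj[u].append((w, v))
            adj[v].append((w, u))
        start = nodes[0]
        visited, tree = {start}, []
        heap = [(w, start, v) for w, v in adj[start]]
        heapq.heapify(heap)
        while heap and len(visited) < len(nodes):
            w, u, v = heapq.heappop(heap)
            if v in visited:
                continue
            visited.add(v)
            tree.append((u, v))                      # broadcast link u -> v
            for w2, nxt in adj[v]:
                if nxt not in visited:
                    heapq.heappush(heap, (w2, v, nxt))
        return tree

    # toy 2x2 mesh of switches; test vectors are broadcast from s00 along the tree
    nodes = ['s00', 's01', 's10', 's11']
    links = {('s00', 's01'): 1, ('s00', 's10'): 1, ('s01', 's11'): 1, ('s10', 's11'): 1}
    print(minimum_spanning_tree(nodes, links))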
{"title":"Using the Inter- and Intra-Switch Regularity in NoC Switch Testing","authors":"Mohammad Hosseinabady, Atefe Dalirsani, Z. Navabi","doi":"10.5555/1266366.1266443","DOIUrl":"https://doi.org/10.5555/1266366.1266443","url":null,"abstract":"This paper proposes an efficient test methodology to test switches in a network-on-chip (NoC) architecture. A switch in a NoC consists of a number of ports and a router. Using the intra-switch regularity among ports of a switch and inter-switch regularity among routers of switches, the proposed method decreases the test application time and test data volume of NoC testing. Using a test source to generate test vectors and scan-based testing, this methodology broadcasts test vectors through the minimum spanning tree of the NoC and concurrently tests its switches. In addition, a possible fault is detected by comparing test results using inter- or intra- switch comparisons. The logic and memory parts of a switch are tested by appropriate memory and logic testing methods. Experimental results show less test application time and test power consumption, as compared with other methods in the literature","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122398473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonlinearity Analysis of Analog/RF Circuits Using Combined Multisine and Volterra Analysis
J. Borremans, L. D. Locht, P. Wambacq, Y. Rolain
Pub Date: 2007-04-16 | DOI: 10.5555/1266366.1266422
Modern integrated radio systems require highly linear analog/RF circuits. Two-tone simulations are commonly used to study a circuit's nonlinear behavior; very often, however, this approach offers limited insight. To gain insight into nonlinear behavior, we use a multisine analysis methodology to locate the main nonlinear components (e.g., transistors) for both weakly and strongly nonlinear behavior. Under weakly nonlinear conditions, selective Volterra analysis is used to further determine the most important nonlinearities of the main nonlinear components. As shown with the example of a 90 nm CMOS wideband low-noise amplifier, the insights obtained with this approach can be used to reduce nonlinear circuit behavior, in this case by 10 dB. The approach is valid for wideband and thus practical excitation signals, and is easily applicable to both simple and complex circuits.
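A numerical toy of the multisine idea: excite a static weak nonlinearity with a few tones placed on exact FFT bins and read off the bins where new (intermodulation) energy appears. The cubic coefficient and tone grid are arbitrary assumptions; a real analysis would of course target circuit-level simulations rather than a static model.

    import numpy as np

    fs, n = 1024.0, 1024                       # sample rate and record length
    t = np.arange(n) / fs
    tone_bins = [10, 13, 17]                   # excitation tones (integer FFT bins)
    x = sum(np.cos(2 * np.pi * k * t) for k in tone_bins)

    # toy weakly nonlinear "circuit": small cubic term added to the linear response
    y = x + 0.05 * x**3

    spectrum = np.abs(np.fft.rfft(y)) / n
    excited = set(tone_bins)
    distortion_bins = [k for k in range(1, n // 2)
                       if spectrum[k] > 1e-6 and k not in excited]
    print(distortion_bins)                     # intermodulation products of the tones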
{"title":"Nonlinearity Analysis of Analog/RF Circuits Using Combined Multisine and Volterra Analysis","authors":"J. Borremans, L. D. Locht, P. Wambacq, Y. Rolain","doi":"10.5555/1266366.1266422","DOIUrl":"https://doi.org/10.5555/1266366.1266422","url":null,"abstract":"Modern integrated radio systems require highly linear analog/RF circuits. Two-tone simulations are commonly used to study a circuit's nonlinear behavior. Very often, however, this approach suffers limited insight. To gain insight into nonlinear behavior, we use a multisine analysis methodology to locate the main nonlinear components (e.g. transistors) both for weakly and strongly nonlinear behavior. Under weakly nonlinear conditions, selective Volterra analysis is used to further determine the most important nonlinearities of the main nonlinear components. As shown with an example of a 90 nm CMOS wideband low-noise amplifier, the insights obtained with this approach can be used to reduce nonlinear circuit behavior, in this case with 10 dB. The approach is valid for wideband and thus practical excitation signals, and is easily applicable both to simple and complex circuits","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131248735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}