Title: Dynamic placement applications into Self Adaptive network on FPGA
Authors: P. Honzík, J. Kadlec
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783135

This work deals with self-adaptive reconfigurable systems based on FPGA technology. It builds on partial dynamic reconfiguration of FPGA devices and analyzes self-adaptive systems, their elements, and their features. The main part introduces placement algorithms and a Step Adaptive algorithm for improving mapping on a running network. The algorithms are tested on sets of test applications.
Title: Proof certificates and non-linear arithmetic constraints
Authors: Stefan Kupferschmid, B. Becker, Tino Teige, M. Fränzle
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783131

Symbolic methods in computer-aided verification rely heavily on constraint solvers. The correctness and reliability of these solvers are of vital importance in the analysis of safety-critical systems, e.g., in the automotive context. Satisfiability results of a solver can usually be checked by probing the computed solution. This is in general not the case for unsatisfiability results. In this paper, we propose a certification method for unsatisfiability results for mixed Boolean and non-linear arithmetic constraint formulae. Such formulae arise in the analysis of hybrid discrete/continuous systems. Furthermore, we test our approach by enhancing the iSAT constraint solver to generate unsatisfiability proofs and by implementing a tool that can efficiently validate such proofs. Finally, experimental results showing the effectiveness of our techniques are given.
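The idea of certifying an unsatisfiability result by checking a proof, rather than trusting the solver, can be illustrated in the purely Boolean case. The sketch below checks a resolution refutation of a CNF formula; it is an illustration of proof certification in general, not of the iSAT proof format for mixed Boolean/non-linear constraints, which is considerably richer.

```python
# Minimal checker for resolution refutation proofs of CNF unsatisfiability.
# Clauses are frozensets of integer literals (negative = negated variable).

def resolve(c1, c2, var):
    """Resolve clauses c1 and c2 on variable var (var in c1, -var in c2)."""
    assert var in c1 and -var in c2, "clauses must clash on var"
    return (c1 - {var}) | (c2 - {-var})

def check_refutation(clauses, proof):
    """clauses: input clauses; proof: list of (i, j, var) resolution steps
    referencing earlier clauses by index. Returns True iff the last derived
    clause is empty, i.e. the proof certifies unsatisfiability."""
    derived = [frozenset(c) for c in clauses]
    for i, j, var in proof:
        derived.append(resolve(derived[i], derived[j], var))
    return len(derived[-1]) == 0

# (x) and (-x) resolve to the empty clause: the formula is unsatisfiable.
print(check_refutation([[1], [-1]], [(0, 1, 1)]))  # True
```

Note that the checker is far simpler than any solver: it only replays and validates the steps the solver recorded, which is what makes certification attractive for safety-critical use.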
Title: DODT: Increasing requirements formalism using domain ontologies for improved embedded systems development
Authors: S. Farfeleder, T. Moser, A. Krall, T. Stålhane, H. Zojer, C. Panis
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783092

In times of ever-growing system complexity and thus increasing possibilities for errors, high-quality requirements are crucial to prevent design errors in later project phases and to facilitate design verification and validation. To ensure and improve the consistency, completeness, and correctness of requirements, formal languages have been introduced as an alternative to natural language (NL) requirement descriptions. However, in many cases existing NL requirements must be taken into account. To date, the formalization of those requirements has been a primarily manual task, which is therefore both cumbersome and error-prone. We introduce the tool DODT, which semi-automatically transforms NL requirements into semi-formal boilerplate requirements. The transformation builds upon a domain ontology (DO) containing knowledge of the problem domain and upon natural language processing techniques. The tool strongly reduced the manual effort required for the transformation. In addition, the quality of the requirements was improved.
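The boilerplate idea can be made concrete with a toy sketch: terms from a domain ontology are matched in a free-text requirement and slotted into a fixed template. The ontology, template, and example sentence below are invented for illustration; DODT's actual ontology handling and NLP pipeline are far more sophisticated.

```python
# Toy boilerplate-based requirement formalization (illustrative only).
import re

# Hypothetical three-slot ontology; real domain ontologies are much richer.
ontology = {
    "system": {"airbag controller"},
    "function": {"deploy"},
    "entity": {"airbag"},
}

def to_boilerplate(req):
    """Fill 'The <system> shall <function> the <entity>.' from ontology hits."""
    found = {}
    for slot, terms in ontology.items():
        for term in terms:
            if re.search(re.escape(term), req, re.IGNORECASE):
                found[slot] = term
    if set(found) == set(ontology):
        return f"The {found['system']} shall {found['function']} the {found['entity']}."
    return None  # not enough ontology matches to fill the boilerplate

print(to_boilerplate("On crash detection the airbag controller must deploy the airbag"))
```

Even this crude matcher shows why an ontology helps: the template slots constrain what must be found in the sentence, and anything unmatched is immediately visible as a gap in the requirement.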
Title: Testing and design-for-testability solutions for 3D integrated circuits
Authors: K. Chakrabarty
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783035

Three-dimensional integrated circuits (3D ICs) promise to overcome barriers in interconnect scaling, thereby offering an opportunity to get higher performance using CMOS technology. Despite these benefits, testing remains a major obstacle that hinders the adoption of 3D integration. Test techniques and design-for-testability (DfT) solutions for 3D ICs have remained largely unexplored in the research community, even though experts in industry have identified a number of test challenges related to the lack of probe access for wafers, test access to modules in stacked wafers/dies, thermal concerns, test economics, and new defects arising from unique processing steps such as wafer thinning, alignment, and bonding. In this embedded tutorial, the speaker will present an overview of 3D integration, its unique processing and assembly steps, testing and DfT challenges, and some of the solutions being advocated for these challenges. The speaker will focus on proposals for pre-bond testing of dies and TSVs, DfT innovations related to the optimization of die wrappers, test scheduling solutions, and access to dies and inter-die interconnects during stack testing. Time permitting, the speaker will also highlight recent work on comprehensive cost modeling for 3D ICs, which includes the cost of design, manufacture, testing, and test flows.
Title: Characterization of digital cells for statistical test
Authors: Fabian Hopsch, M. Lindig, B. Straube, W. Vermeiren
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783089

Integrated circuits must achieve high quality and high yield. Defects and parameter variations are a main issue affecting both aspects. In this paper a method of characterization for statistical test is presented. The characterization is carried out for a set of digital cells using Monte Carlo fault simulation at the electrical level. The results show that only a small fraction of faults manifest as stuck-at faults. Many faults lead to a mix of different behaviours across test sequences and parameter configurations. For a digital cell, the test sequences necessary to detect all detectable faults are derived from the simulation results. Since the effort for the characterization is high, first investigations into reducing this effort are presented.
Title: Low-complexity integrated circuit aging monitor
Authors: A. Simevski, R. Kraemer, M. Krstic
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783060

Integrated circuit aging effects become more and more pronounced with continued technological downscaling. These effects degrade circuit operation, which is mainly observed as increased input-to-output delay of circuit components. Eventually, the circuit falls out of its specifications. Countermeasures are needed to prevent or reduce such degradation. Aging monitoring can be very beneficial, since it can predict circuit failure and/or activate mechanisms to avoid failure. Most present aging monitors are based on reporting abnormal input-to-output signal delays on the critical path of the circuit. However, present approaches introduce additional circuit complexity, use complicated analog design, use non-standard cells, etc. We propose a low-complexity aging monitor based on standard library cells, offering simplicity and flexibility in its design, integration, and use. The designer can instantiate many monitors throughout the integrated circuit. The user can simply read the "aging code" placed in a register in each monitor and determine the "age" of the circuit, predict a circuit failure, and/or take an appropriate action. This is especially useful in microprocessors that are designed with dependability in mind.
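A common way to obtain such an "aging code" from standard cells is a delay line: an edge propagates through a buffer chain, and the code is the number of stages it clears within one clock period; as buffer delay grows with age, the code drops. The behavioral sketch below illustrates that principle with invented numbers — it is not the paper's circuit.

```python
# Behavioral model of a delay-line aging code (illustrative values).

def aging_code(stage_delay_ps, clock_period_ps=1000, n_stages=32):
    """Number of buffer stages an edge traverses in one clock period,
    capped at the physical length of the delay line."""
    return min(n_stages, int(clock_period_ps // stage_delay_ps))

# As the per-stage delay degrades over the years, the readable code drops:
for age_years, delay in [(0, 40.0), (5, 44.0), (10, 50.0)]:
    print(f"year {age_years:2d}: stage delay {delay} ps -> code {aging_code(delay)}")
```

Reading the code over the device lifetime gives a monotonically decreasing sequence, which is what allows failure prediction: once the code approaches the value corresponding to the critical-path margin, action can be taken.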
Title: Test vector overlapping based compression tool for narrow test access mechanism
Authors: Jiri Jenícek, M. Rozkovec, O. Novák
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783116

This paper describes an algorithm that uses a test data compression method based on test vector overlapping to compact and compress test patterns. The algorithm takes deterministic test vectors previously generated by an ATPG and compresses them by reordering and overlapping them. It can speed up the test generation process by using distributed ATPG processing and can compress test data for various fault models. The independence of the algorithm from the ATPG used is discussed and verified; the compressor can cooperate with industry workflow tools using the Verilog and STIL formats. The compressor preprocesses the input data to determine the degree of random-test resistance for each fault. This optional step allows the test vectors to be rearranged more efficiently and yields a 10% compression-ratio improvement on average.
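The core of overlap-based compression is that when the tail of the current scan-chain content is bit-compatible (treating 'X' as a don't-care) with the head of the next vector, only the non-overlapping suffix needs to be shifted in. The greedy sketch below illustrates that idea on bit strings; the encoding and ordering heuristics are invented for illustration and are not the paper's tool.

```python
# Greedy test-vector overlapping on 0/1/X bit strings (illustrative sketch).

def compatible(a, b):
    """Per-bit compatibility: bits match if equal or either is an 'X'."""
    return all(x == y or 'X' in (x, y) for x, y in zip(a, b))

def best_overlap(chain, vec):
    """Largest k such that the last k chain bits fit the first k vector bits."""
    for k in range(min(len(chain), len(vec)), -1, -1):
        if compatible(chain[len(chain) - k:], vec[:k]):
            return k
    return 0

def compress(vectors):
    """Greedily overlap vectors in the given order; return bits shifted in."""
    chain, shifted = '', 0
    for v in vectors:
        k = best_overlap(chain, v)
        new_bits = v[k:]                  # only this suffix is shifted in
        shifted += len(new_bits)
        reused = chain[len(chain) - k:]
        # The chain now holds v, with don't-cares taking whatever bit is there.
        chain = ''.join(c if b == 'X' else b
                        for b, c in zip(v, reused + new_bits))
    return shifted

# Three 4-bit vectors (12 bits uncompressed) need only 5 shifted-in bits:
print(compress(['1010', 'X101', '01XX']))  # 5
```

Reordering matters because the achievable overlap depends on which vector follows which — hence the paper's emphasis on reordering and on knowing each fault's random-test resistance before arranging the vectors.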
Title: Advanced rectifier and driver for analog VU meter
Authors: M. Pospisilik, M. Adamek
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783127

This paper deals with the construction and practical testing of a VU meter driver that includes an accurate rectifier and a logarithmic driver for a pointer-type gauge. The logarithm is taken from the rectified signal by employing a capacitor discharge voltage curve. The circuit was built and tested, and the results are discussed in this article.
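Why a capacitor discharge curve yields a logarithm: with V(t) = V0·exp(−t/RC), the time for the capacitor (charged to the rectified peak V0) to fall to a fixed threshold is t = RC·ln(V0/Vth), i.e. proportional to the logarithm of the input amplitude — exactly the dB-linear behavior a VU meter needs. The component values below are illustrative, not from the paper's circuit.

```python
# Logarithmic timing from an RC discharge curve (illustrative values).
import math

R, C, VTH = 10e3, 1e-6, 0.1          # 10 kOhm, 1 uF, 100 mV threshold

def discharge_time(v0):
    """Time for the capacitor, charged to peak v0, to discharge to VTH."""
    return R * C * math.log(v0 / VTH)

# Doubling the input adds a constant time (R*C*ln 2 ~ 6.9 ms here),
# so equal dB steps map to equal time steps:
for v0 in (0.5, 1.0, 2.0, 4.0):
    print(f"{v0:4.1f} V -> {discharge_time(v0) * 1e3:.2f} ms")
```

Converting that time into pointer deflection (for example by integrating a constant current while the comparator is high) then gives a logarithmic scale without any analog log amplifier.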
Title: SiGe BiCMOS platform - baseline technology for More Than Moore process module integration
Authors: B. Tillack
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783034

Future silicon-based integrated circuit technology targets reduced transistor dimensions, increased transistor counts, and increased operating frequencies. On reaching the nanometer scale, lateral and vertical structures close to atomic dimensions have to be processed (the ITRS "More Moore" approach). Moreover, emerging research devices and technologies are under investigation to extend CMOS technology further, or to evaluate solutions beyond Si CMOS, such as replacing the channel with Ge or III–V materials. According to the ITRS, the alternative "More Than Moore" approach targets diversification by combining different technologies at a reasonable scaling level. The paper gives an overview of the "More Than Moore" strategy based on examples from IHP's SiGe BiCMOS technology. SiGe BiCMOS technologies combine high-speed SiGe HBTs, the computing power of CMOS, and high-quality passives on a single chip. The RF performance of HBTs has improved considerably over the years, enabling mm-wave applications such as automotive radar (77 GHz), high-data-rate fiber links (>100 Gb/s), and Gb/s wireless links (60 GHz, 122 GHz). Research activities target HBTs reaching THz frequencies (EU FP7 project DOTFIVE). In a "More Than Moore" approach, the functionality of the BiCMOS technology is extended by integrating optical components (Si photonics) and MEMS structures. Moreover, the monolithic or hybrid hetero-integration of Si and III–V compound semiconductor technologies is under investigation, enabling new system-on-chip solutions.
Title: Power consumption traces realignment to improve differential power analysis
Authors: G. D. Natale, M. Flottes, B. Rouzeyre, M. Valka, Denis Réal
Pub Date: 2011-04-13 | DOI: 10.1109/DDECS.2011.5783080

Cryptographic devices can be subject to side-channel attacks. Among those attacks, Differential Power Analysis (DPA) has proven to be very effective and easy to perform. Several countermeasures have been proposed in the literature. However, the effectiveness of these countermeasures is still evaluated by resorting to intensive DPA simulations, which constitutes a very time-consuming design task. In this paper we show that knowledge of the structure of the circuit can be exploited to improve the performance of DPA. We propose to realign power consumption traces according to timing information (i.e., path delays). We show the usefulness of the proposed method by comparing the efficiency of classic DPA with that of timing-aware DPA.
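The benefit of realignment can be demonstrated on synthetic data: when each trace's leakage is smeared by a per-trace delay, the difference-of-means peak is diluted, and shifting every trace by its delay restores it. The sketch below uses randomly generated delays; in the paper's setting, the shifts would come from structural path-delay knowledge of the circuit.

```python
# Difference-of-means DPA before and after trace realignment (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 200, 64
bits = rng.integers(0, 2, n_traces)        # hypothesized key-dependent bit
delays = rng.integers(0, 8, n_traces)      # per-trace path delay, in samples

# Noise plus a 1.0 leakage spike whose position is smeared by the delay:
traces = rng.normal(0.0, 0.2, (n_traces, n_samples))
for t in range(n_traces):
    traces[t, 20 + delays[t]] += bits[t]

def dpa_peak(traces, bits):
    """Max |difference of means| between the bit=1 and bit=0 trace groups."""
    diff = traces[bits == 1].mean(axis=0) - traces[bits == 0].mean(axis=0)
    return np.abs(diff).max()

# Realign: shift each trace back by its known delay so the spikes line up.
aligned = np.array([np.roll(tr, -d) for tr, d in zip(traces, delays)])

print(f"misaligned peak: {dpa_peak(traces, bits):.2f}")
print(f"aligned peak:    {dpa_peak(aligned, bits):.2f}")
```

With the spikes spread over eight sample positions, the misaligned peak is roughly an eighth of the aligned one, which is why timing-aware realignment lets an attack (or a countermeasure evaluation) succeed with far fewer traces.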