Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966763
N. Tsuda
An advanced interconnection network called the "ABL-tree" is proposed for constructing reconfigurable processing-node arrays whose network diameter remains constant at t (≥ 3) as the array size grows. An ABL-tree is built from small subarrays of processing nodes interconnected as a ring, a 2-D toroidal mesh, or a complete graph; the subarray size is determined by the minimal repetition pitch of a node-coloring pattern with an inter-node distance of t (≥ 3) over the base array. The processing nodes of the subarrays are connected to a tree-structured switching network of height two according to the node-coloring pattern. Inter-node communication is achieved in a distributed manner by selectively using, as a relay, a node adjacent to the source node in the subarray whose color matches that of the destination node. The proposed scheme allows large rings or toroidal meshes with various aspect ratios to be reconfigured by arranging the subarrays in any order, and it is advantageous in routing, fault tolerance, and the number of switching elements.
Title: "ABL-tree: a constant diameter interconnection network for reconfigurable processor arrays capable of distributed communication"
Published in: Proceedings 2001 IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966768
N. Oh, E. McCluskey
This paper presents a new software technique for detecting transient hardware errors. The objective is to guarantee data integrity in the presence of transient errors while minimizing energy consumption. The basic idea is to duplicate computations and compare their results to detect errors. There are three choices for duplication: (1) duplicating every statement in the program and comparing the results, (2) re-executing procedures via duplicated procedure calls and comparing the results, or (3) re-executing the whole program and comparing the final results. Our technique combines (1) and (2): given a program, it analyzes the program's procedure-call behavior and determines which procedures should have duplicated statements (choice (1)) and which procedure calls should be duplicated (choice (2)) so as to minimize energy consumption while satisfying error-detection latency constraints. The technique then transforms the original program into one that detects errors with reduced energy consumption by re-executing the selected statements or procedures. In benchmark simulations, our technique saved over 25% of the required energy on average compared with previous techniques that do not take energy consumption into account.
Title: "Procedure call duplication: minimization of energy consumption with constrained error detection latency"
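The duplicate-and-compare idea behind choice (2) can be sketched in a few lines. The names below are hypothetical illustrations, not the authors' actual transformation tool:

```python
# Sketch of choice (2): duplicate a procedure call and compare results.
# Hypothetical illustration, not the authors' actual program transformation.

def compute(x):
    """A procedure whose result we want to protect against transient errors."""
    return x * x + 1

def protected_call(proc, *args):
    """Execute proc twice and compare; a mismatch signals a transient
    hardware error (both runs are assumed not to hit the same fault)."""
    first = proc(*args)
    second = proc(*args)            # duplicated procedure call
    if first != second:
        raise RuntimeError("transient error detected in " + proc.__name__)
    return first

assert protected_call(compute, 7) == 50
```

Choice (1) would apply the same duplicate-and-compare step at statement granularity instead; the paper's contribution is deciding, per procedure, which granularity minimizes energy under a latency constraint.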
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966800
Ahmad A. Al-Yamani, N. Oh, E. McCluskey
In algorithm-based fault tolerance (ABFT), fault tolerance is tailored to the algorithm being performed. Most previous studies comparing ABFT schemes considered only error detection and correction capabilities; some examined the overhead, but none compared different recovery schemes for data-processing applications with throughput as the main metric. We compare the performance of two recovery schemes, recomputation and ABFT correction, at different error rates. We consider errors that occur during computation as well as during the error detection, location, and correction processes. A metric for evaluating the performance of different design alternatives is defined. Results show that multiple-error correction using ABFT performs worse than single-error correction even at high error rates. We also present, implement, and evaluate early detection in ABFT, which attempts to detect errors occurring in the checksum calculation before the actual computation starts. Early detection improves throughput for intensive computations and at high error rates.
Title: "Performance evaluation of checksum-based ABFT"
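The checksum encoding underlying this style of ABFT, in the spirit of row/column checksum matrices, can be sketched as follows. This is a hypothetical illustration, not the paper's code:

```python
# Sketch of checksum-based ABFT on a matrix: a checksum row and column
# let a single corrupted data element be located (and hence corrected
# from either sum). Hypothetical illustration, not the paper's code.

def add_checksums(m):
    """Append a checksum column (row sums) and a checksum row (column sums)."""
    full = [row + [sum(row)] for row in m]
    full.append([sum(col) for col in zip(*full)])
    return full

def locate_single_error(full):
    """Return (row, col) of a single corrupted data element, or None if
    all checksums are consistent. The mismatched row checksum and the
    mismatched column checksum intersect at the faulty element."""
    bad_row = next((i for i, row in enumerate(full[:-1])
                    if sum(row[:-1]) != row[-1]), None)
    bad_col = next((j for j in range(len(full[0]) - 1)
                    if sum(full[i][j] for i in range(len(full) - 1)) != full[-1][j]),
                   None)
    if bad_row is None and bad_col is None:
        return None
    return (bad_row, bad_col)

m = add_checksums([[1, 2], [3, 4]])
m[0][1] += 5                      # inject a single transient error
assert locate_single_error(m) == (0, 1)
```

Recomputation, by contrast, would simply redo the whole operation; the paper's point is that the relative throughput of the two recovery schemes depends on the error rate.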
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966750
Neil Harrison
Defect-limited product yields are known to receive a significant contribution from resistive or open vias between metal interconnect layers. A simple tool for via duplication is presented together with application results. The tool automates the addition of redundant vias to existing customer product layouts wherever the design rules permit. Significant yield benefits are obtained when the technique is applied to a real product as part of a Design for Manufacturability (DfM) exercise. The potential for improved process robustness and enhanced fault tolerance is also demonstrated. Implications for yield modeling, including critical areas and the relation of random defects to gross defects, are discussed.
Title: "A simple via duplication tool for yield enhancement"
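The core rule-checked duplication step can be illustrated with a toy grid model. Everything here (grid coordinates, the spacing value, the function names) is a hypothetical simplification; real tools operate on mask-layout geometry:

```python
# Toy sketch of rule-checked via duplication: add a redundant via beside
# each existing via when the neighbouring site is free of other geometry
# and keeps the minimum spacing. Purely illustrative; not the paper's tool.

MIN_SPACING = 2   # hypothetical design-rule distance, in grid units

def duplicate_vias(vias, blocked):
    """For each via at (x, y), try to place a twin at (x + MIN_SPACING, y)
    if that site is neither blocked nor too close to another via."""
    placed = set(vias)
    added = []
    for (x, y) in vias:
        cand = (x + MIN_SPACING, y)
        if cand in blocked or cand in placed:
            continue
        if any(abs(cand[0] - vx) + abs(cand[1] - vy) < MIN_SPACING
               for (vx, vy) in placed):
            continue
        placed.add(cand)
        added.append(cand)
    return added

# One via gets a twin; the other is blocked by nearby geometry.
assert duplicate_vias([(0, 0), (10, 0)], blocked={(12, 0)}) == [(2, 0)]
```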
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966770
M. Alderighi, F. Casini, S. D'Angelo, D. Salvi, G. Sechi
Space research requires increasingly large amounts of scientific data. Next-generation satellites will have on-board supercomputing capabilities to perform efficient information processing and to overcome the limits imposed by the communication bandwidth to ground receiving stations. They will also have to survive ever longer missions, so reliability and fault tolerance will be major concerns in coping with radiation-induced faults. Flexibility also emerges as a desirable requirement for on-board processing systems, to implement new functionalities and run different algorithms during an ongoing mission. The trend is towards multiprocessor architectures in which processing nodes and memories are connected through high-bandwidth interconnection networks. The paper presents a fault-tolerance strategy for an FPGA implementation of a redundant multistage interconnection network (MIN) for a space multi-sensor system. The mechanism is endowed with a fault-diagnosis capability that allows one to exploit the MIN's intrinsic reconfiguration capabilities as well as the reprogrammability of SRAM-based FPGAs.
Title: "A fault-tolerance strategy for an FPGA-based multi-stage interconnection network in a multi-sensor system for space application"
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966779
F. Karimi, V. S. Irrinki, T. Crosby, N. Park, F. Lombardi
Presents a built-in self-test (BIST) technique that implements a parallel approach for testing multi-port memories. The approach is based on parallel execution of the testing process, so that inter-port faults (shorts and coupling faults) can be detected with no loss of coverage and no increase in the number of tests compared with a single-port memory. In the proposed hardware scheme, address, data, and control sequences are generated using a BIST controller originally designed for a single-port memory; a simple logic unit is also used to interface the BIST signals to the memory ports. It is shown that the proposed BIST implementation is O(N log N), where N is the number of ports.
Title: "Parallel testing of multi-port static random access memories for BIST"
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966762
I. Takanami
Presents built-in self-reconfiguring systems for mesh-connected processor arrays in which faulty processing elements (PEs) are directly replaced by spare PEs located on two orthogonal lines at the edges of the array or on its diagonal. Using a Hopfield-type neural network model, we present an algorithm for reconfiguring such arrays and show its reconfiguration efficiency through computer simulations. We then show how the algorithm can be realized as a digital neural circuit. The circuit can be embedded in a target processor array to quickly reconfigure an array with faulty PEs without the aid of a host computer. This implies that the proposed systems are effective in enhancing the run-time reliability of processor arrays.
Title: "Built-in self-reconfiguring systems for fault tolerant mesh-connected processor arrays by direct spare replacement"
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966785
Xiangdong Xuan, A. Chatterjee
With the use of aggressive technologies, the reliability of analog microelectronics is attracting greater attention. In this paper, a hierarchical reliability analysis approach for analog circuits is proposed. Through the use of behavioral models, electrical stress factors at the circuit inputs are propagated top-down to sub-modules and lower-level building-block components. These stress factors are then combined with physics-of-failure models to compute the performance degradation of the circuit building-block components due to electromigration and hot-carrier effects. The degradation effects are then propagated bottom-up through the design hierarchy to compute the changes in high-level circuit specification values due to electrical stress and the expected time-to-failure. A method for "hot-spot" analysis is proposed, where a "hot-spot" is defined to be a circuit component that can most likely cause circuit reliability problems. A reliability analysis tool has been developed and preliminary results are presented.
Title: "Sensitivity and reliability evaluation for mixed-signal ICs under electromigration and hot-carrier effects"
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966746
I. Koren, Z. Koren, G. Chapman
Color digital imagers contain red, green and blue subpixels within each color pixel. Defects that develop either at fabrication time or due to environmentally induced errors over time can cause a single color subpixel (e.g., R) to fail, while leaving the remaining colors intact. This paper investigates seven software correction algorithms that interpolate the color of a pixel based on its nearest neighbors. Using several measurements of color error, all seven methods were investigated for a large number of digital images. Interpolations using only information from the single failed color (e.g., R) in the neighbors gave the poorest results. Those using all color measurements and a quadratic interpolation formula, combined with the remaining subpixel colors (e.g., G and B) produced significantly better results. A formula developed using the CIE color coordinates of tristimulus values (X, Y, Z) yielded the best results.
Title: "Advanced fault-tolerance techniques for a color digital camera-on-a-chip"
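The simplest of the compared schemes, interpolating the failed colour only from same-colour neighbours, can be sketched on a toy 3x3 image. The function name and image layout are hypothetical illustrations:

```python
# Sketch of correcting a failed red subpixel by interpolation from its
# four nearest neighbours. This illustrates only the simplest scheme the
# paper compares; the better schemes also use the surviving G and B values.

def correct_failed_red(img, y, x):
    """Replace a dead R subpixel at (y, x) with the mean R value of the
    up/down/left/right neighbours that lie inside the image."""
    h, w = len(img), len(img[0])
    neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    reds = [img[ny][nx][0] for ny, nx in neighbours
            if 0 <= ny < h and 0 <= nx < w]
    r, g, b = img[y][x]                 # keep the intact G and B subpixels
    return (sum(reds) // len(reds), g, b)

img = [[(10, 0, 0), (20, 0, 0), (30, 0, 0)],
       [(40, 0, 0), (0, 5, 5),  (60, 0, 0)],
       [(70, 0, 0), (80, 0, 0), (90, 0, 0)]]
assert correct_failed_red(img, 1, 1) == (50, 5, 5)   # mean of 20, 40, 60, 80
```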
Pub Date: 2001-10-24 | DOI: 10.1109/DFTVS.2001.966759
O. Novák, J. Nosek
Proposes a test-pattern compression method that reduces the memory required for storing test patterns; the patterns are decompressed in the scan chain during testing. The test-per-clock testing scheme consists of a scan chain, auxiliary outputs for capturing signals on the internal CUT outputs, and a test-response compactor for the CUT. The test-per-scan testing scheme can be used without the auxiliary outputs and the output-compaction scheme. The algorithm for finding the compressed scan-chain sequence reorders and overlaps patterns previously generated with the help of an ATPG; these test patterns are generated so that they contain the maximum number of don't-care bits. The resulting scan-chain sequence exercises all considered faults in the fault list of the tested circuit. Several experiments were performed on the ISCAS'85 and ISCAS'89 benchmark circuits. Compared with other methods, the proposed method substantially reduces the number of stored bits, the test application time, and the required hardware overhead.
Title: "Test pattern decompression using a scan chain"
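The reorder-and-overlap idea, exploiting don't-care bits, can be sketched as a greedy merge of patterns into one scan sequence. This is a hypothetical illustration of the principle, not the paper's actual algorithm:

```python
# Toy sketch of overlap-based test compression: each pattern is appended
# to the scan-chain sequence only by the bits not already supplied by the
# tail of the sequence, exploiting don't-care bits ('X').

def compatible(a, b):
    """Two bits are compatible if either is a don't-care or they agree."""
    return a == 'X' or b == 'X' or a == b

def merge_bit(a, b):
    """Resolve an overlapped position: a specified bit wins over 'X'."""
    return b if a == 'X' else a

def overlap(seq, pat):
    """Largest k such that the last k bits of seq are bit-compatible with
    the first k bits of pat."""
    for k in range(min(len(seq), len(pat)), -1, -1):
        if all(compatible(s, p) for s, p in zip(seq[len(seq) - k:], pat)):
            return k
    return 0

def compress(patterns):
    """Greedily append each pattern, reusing and resolving the overlap."""
    seq = ""
    for pat in patterns:
        k = overlap(seq, pat)
        tail = "".join(merge_bit(s, p)
                       for s, p in zip(seq[len(seq) - k:], pat))
        seq = seq[:len(seq) - k] + tail + pat[k:]
    return seq

# Three 4-bit patterns shrink from 12 stored bits to 9.
assert compress(["110X", "0X11", "10XX"]) == "110X110XX"
```

A real tool would additionally reorder the patterns to maximize the overlaps; the greedy pass above keeps the given order for simplicity.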