Increasing PCM main memory lifetime
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5456923
A. Ferreira, Miao Zhou, S. Bock, B. Childers, R. Melhem, D. Mossé
The introduction of Phase-Change Memory (PCM) as a main memory technology has great potential to achieve large energy reductions. PCM has desirable energy and scalability properties, but its use for main memory also poses challenges such as limited write endurance, with at most 10⁷ writes per bit cell before failure. This paper describes techniques to enhance the lifetime of PCM when used for main memory. Our techniques are (a) writeback minimization with new cache replacement policies, (b) avoidance of unnecessary writes, writing only the bit cells whose values actually change, and (c) endurance management with a novel PCM-aware swap algorithm for wear-leveling. A failure detection algorithm is also incorporated to improve the reliability of PCM. With these approaches, the lifetime of a PCM main memory is increased from just a few days to over 8 years.
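Technique (b), writing only the changed bit cells, is easy to picture in code. Below is a minimal sketch of such a differential write, assuming a word-wide read-before-write interface; the function name and 64-bit word size are illustrative assumptions, not details from the paper.

```python
def write_line(old_word: int, new_word: int, width: int = 64):
    """Differential write: program only the bit cells that actually change.

    `old_word` is the word currently stored in PCM, `new_word` the incoming
    data. Returns the bit positions that must be programmed, so unchanged
    cells accrue no wear. (Hypothetical interface for illustration.)
    """
    diff = old_word ^ new_word          # 1-bits mark cells whose value flips
    changed = [i for i in range(width) if (diff >> i) & 1]
    return changed                      # program only these positions

# Example: rewriting 0b1010 with 0b1110 touches a single cell (bit 2).
assert write_line(0b1010, 0b1110, width=4) == [2]
```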
{"title":"Increasing PCM main memory lifetime","authors":"A. Ferreira, Miao Zhou, S. Bock, B. Childers, R. Melhem, D. Mossé","doi":"10.1109/DATE.2010.5456923","DOIUrl":"https://doi.org/10.1109/DATE.2010.5456923","url":null,"abstract":"The introduction of Phase-Change Memory (PCM) as a main memory technology has great potential to achieve a large energy reduction. PCM has desirable energy and scalability properties, but its use for main memory also poses challenges such as limited write endurance with at most 107 writes per bit cell before failure. This paper describes techniques to enhance the lifetime of PCM when used for main memory. Our techniques are (a) writeback minimization with new cache replacement policies, (b) avoidance of unnecessary writes, which write only the bit cells that are actually changed, and (c) endurance management with a novel PCM-aware swap algorithm for wear-leveling. A failure detection algorithm is also incorporated to improve the reliability of PCM. With these approaches, the lifetime of a PCM main memory is increased from just a few days to over 8 years.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128595084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A low-area flexible MIMO detector for WiFi/WiMAX standards
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5457073
N. M. Madani, T. Thorolfsson, W. R. Davis
MIMO wireless technology is required to increase data rates for a broad range of applications, including low-cost mobile devices. In this paper we present a very low-area reconfigurable MIMO detector that achieves a high throughput of 103 Mbps and uses 27 kilogates when implemented in a commercial 180nm CMOS process. The low area is achieved by the proposed in-place architecture. This architecture implements the K-best algorithm and reduces area 4-fold compared to the widely used multi-stage architecture, while providing reconfigurability in terms of antenna configuration during real-time operation.
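For orientation, the K-best algorithm the architecture implements is a breadth-first tree search that keeps only the K best partial symbol candidates per antenna layer. The sketch below is a textbook, real-valued version under the usual QR-decomposed channel model, not the paper's in-place hardware architecture.

```python
import numpy as np

def k_best_detect(R, y, symbols, K=4):
    """Simplified breadth-first K-best search, assuming the channel has been
    QR-decomposed so that y = R s + n with R upper-triangular. `symbols` is a
    real-valued constellation (e.g. 4-PAM); complex MIMO systems are usually
    mapped to this equivalent real-valued model first."""
    n = R.shape[0]
    survivors = [(0.0, [])]                    # (metric, symbols for rows layer..n-1)
    for layer in range(n - 1, -1, -1):         # detect from the last row upward
        children = []
        for metric, tail in survivors:
            for s in symbols:
                cand = [s] + tail              # cand[j] pairs with column layer+j
                est = sum(R[layer, layer + j] * cand[j] for j in range(len(cand)))
                children.append((metric + (y[layer] - est) ** 2, cand))
        survivors = sorted(children, key=lambda c: c[0])[:K]   # keep K best
    return survivors[0][1]                     # hard-decision output

# Example: 2x2 real model with 4-PAM symbols.
R = np.array([[2.0, 0.3], [0.0, 1.5]])
s = np.array([1.0, -3.0])
print(k_best_detect(R, R @ s, [-3.0, -1.0, 1.0, 3.0], K=4))   # -> [1.0, -3.0]
```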
{"title":"A low-area flexible MIMO detector for WiFi/WiMAX standards","authors":"N. M. Madani, T. Thorolfsson, W. R. Davis","doi":"10.1109/DATE.2010.5457073","DOIUrl":"https://doi.org/10.1109/DATE.2010.5457073","url":null,"abstract":"MIMO wireless technology is required to increase the data rates for a broad range of applications, including low cost mobile devices. In this paper we present a very low area reconfigurable MIMO detector which achieves a high throughput of 103Mbps and uses 27 Kilo Gates when implemented in a commercial 180nm CMOS process. The low area is achieved by the proposed in-place architecture. This architecture implements the K-best algorithm and reduces area 4-fold compared to the widely used multi-stage architecture, while provides reconfigurability in terms of antenna configuration during real-time operation.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127374261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision for cross-layer optimization to address the dual challenges of energy and reliability
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5456959
A. DeHon, H. Quinn, N. Carter
We are rapidly approaching an inflection point where the conventional target of producing perfect, identical transistors that operate without upset can no longer be maintained while continuing to reduce the energy per operation. With power requirements already limiting chip performance, continuing to demand perfect, upset-free transistors would mean the end of scaling benefits. The big challenges in device variability and reliability are driven by uncommon tails in distributions, infrequent upsets, one-size-fits-all technology requirements, and a lack of information about the context of each operation. Solutions co-designed across traditional layer boundaries in our system stack can change the game, allowing architecture and software (a) to compensate for uncommon variation, environments, and events, (b) to pass down invariants and requirements for the computation, and (c) to monitor the health of collections of devices. Cross-layer codesign provides a path to continue extracting benefits from further scaled technologies despite the fact that they may be less predictable and more variable. While some limited multi-layer mitigation strategies do exist, moving forward requires redefining traditional layer abstractions and developing a framework that facilitates cross-layer collaboration.
{"title":"Vision for cross-layer optimization to address the dual challenges of energy and reliability","authors":"A. DeHon, H. Quinn, N. Carter","doi":"10.1109/DATE.2010.5456959","DOIUrl":"https://doi.org/10.1109/DATE.2010.5456959","url":null,"abstract":"We are rapidly approaching an inflection point where the conventional target of producing perfect, identical transistors that operate without upset can no longer be maintained while continuing to reduce the energy per operation. With power requirements already limiting chip performance, continuing to demand perfect, upset-free transistors would mean the end of scaling benefits. The big challenges in device variability and reliability are driven by uncommon tails in distributions, infrequent upsets, one-size-fits-all technology requirements, and a lack of information about the context of each operation. Solutions co-designed across traditional layer boundaries in our system stack can change the game, allowing architecture and software (a) to compensate for uncommon variation, environments, and events, (b) to pass down invariants and requirements for the computation, and (c) to monitor the health of collections of devices. Cross-layer codesign provides a path to continue extracting benefits from further scaled technologies despite the fact that they may be less predictable and more variable. While some limited multi-layer mitigation strategies do exist, to move forward redefining traditional layer abstractions and developing a framework that facilitates cross-layer collaboration is necessary.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127341934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An active vision system for fall detection and posture recognition in elderly healthcare
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5457055
G. Diraco, A. Leone, P. Siciliano
The paper presents an active vision system for the automatic detection of falls and the recognition of several postures for elderly homecare applications. A wall-mounted Time-Of-Flight camera provides accurate measurements of the acquired scene in all illumination conditions, allowing the reliable detection of critical events. Preliminarily, an off-line calibration procedure estimates the external camera parameters automatically, without landmarks, calibration patterns, or user intervention. The calibration procedure searches for different planes in the scene, selecting the one that satisfies the floor-plane constraints. Subsequently, the moving regions are detected in real time by applying a Bayesian segmentation to the whole 3D point cloud. The distance of the 3D human centroid from the floor plane is evaluated using the previously estimated calibration parameters, and its trend over time is used as the feature in a thresholding-based clustering for fall detection. The fall detection shows high performance in terms of efficiency and reliability on a large real dataset in which almost half of the events are falls acquired under different conditions. The posture recognition is carried out using both the distance of the 3D human centroid from the floor plane and the orientation of the body spine, estimated by applying a topological approach to the range images. Experimental results on synthetic data validate the correctness of the proposed posture recognition approach.
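The centroid-distance feature lends itself to a very compact sketch. The fragment below assumes a calibrated floor plane in Hessian-like form and thresholds the centroid's height trend over consecutive frames; the 0.4 m threshold and 10-frame window are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def centroid_floor_distance(centroid, plane):
    """Distance of the 3D body centroid from the calibrated floor plane
    (a, b, c, d), where ax + by + cz + d = 0 and (a, b, c) is a unit normal."""
    a, b, c, d = plane
    return abs(np.dot((a, b, c), centroid) + d)

def detect_fall(distances, fall_thresh=0.4, min_frames=10):
    """Flag a fall when the centroid stays within `fall_thresh` metres of the
    floor for at least `min_frames` consecutive range frames (both values
    are hypothetical placeholders)."""
    run = 0
    for dist in distances:
        run = run + 1 if dist < fall_thresh else 0
        if run >= min_frames:
            return True
    return False
```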
{"title":"An active vision system for fall detection and posture recognition in elderly healthcare","authors":"G. Diraco, A. Leone, P. Siciliano","doi":"10.1109/DATE.2010.5457055","DOIUrl":"https://doi.org/10.1109/DATE.2010.5457055","url":null,"abstract":"The paper presents an active vision system for the automatic detection of falls and the recognition of several postures for elderly homecare applications. A wall-mounted Time-Of-Flight camera provides accurate measurements of the acquired scene in all illumination conditions, allowing the reliable detection of critical events. Preliminarily, an off-line calibration procedure estimates the external camera parameters automatically without landmarks, calibration patterns or user intervention. The calibration procedure searches for different planes in the scene selecting the one that accomplishes the floor plane constraints. Subsequently, the moving regions are detected in real-time by applying a Bayesian segmentation to the whole 3D points cloud. The distance of the 3D human centroid from the floor plane is evaluated by using the previously defined calibration parameters and the corresponding trend is used as feature in a thresholding-based clustering for fall detection. The fall detection shows high performances in terms of efficiency and reliability on a large real dataset in which almost one half of events are falls acquired in different conditions. The posture recognition is carried out by using both the 3D human centroid distance from the floor plane and the orientation of the body spine estimated by applying a topological approach to the range images. Experimental results on synthetic data validate the correctness of the proposed posture recognition approach.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"14 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128987908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NBTI modeling in the framework of temperature variation
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5457196
Seyab, S. Hamdioui
Negative Bias Temperature Instability (NBTI) has become an important reliability concern for nano-scaled Complementary Metal Oxide Semiconductor (CMOS) devices. In this paper, we present an analysis of the impact of temperature on the various sub-processes that contribute to NBTI degradation. We demonstrate our analysis on a 90nm industrial design operating in the temperature range 25–125°C. The key temperature effects observed in our simulations are: (a) the NBTI-induced threshold voltage increase in the P-type Metal Oxide Semiconductor (PMOS) transistor is very sensitive to temperature, growing by 34% across this range, (b) the hole mobility in the PMOS inversion layer is reduced by 11% with the temperature increase, and (c) temperature has a marginal impact on the transistor delay, which increases by 3% with the temperature increase.
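The strong temperature sensitivity in (a) is what the standard reaction-diffusion description of NBTI predicts. As a point of reference (the paper's exact model may differ), the threshold-voltage shift is commonly written as:

```latex
% Reaction-diffusion form of the NBTI threshold-voltage shift.
% Temperature T enters through an Arrhenius factor, which is why
% \Delta V_{th} grows so quickly between 25 and 125 degrees Celsius.
\Delta V_{th}(t, T) \;=\; A \, e^{-E_a / kT} \; t^{\,n}
% A  : technology-dependent prefactor
% E_a: activation energy of the degradation process
% k  : Boltzmann constant, t : stress time
% n  : time exponent, typically about 1/6 for H2 diffusion
```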
{"title":"NBTI modeling in the framework of temperature variation","authors":"Seyab, S. Hamdioui","doi":"10.1109/DATE.2010.5457196","DOIUrl":"https://doi.org/10.1109/DATE.2010.5457196","url":null,"abstract":"Negative Bias Temperature Instability (NBTI) has become an important reliability concern for nano-scaled Complementary Metal Oxide Semiconductor (CMOS) devices. In this paper, we present an analysis of temperature impact on various sub-processes that contribute to NBTI degradation. We demonstrate our analysis on 90nm industrial design operating in temperature range 25–125°C. The key temperature impacts observed in our simulation are: (a) the threshold voltage increase in P-type Metal Oxide Semiconductor (PMOS) due to NBTI is very sensitive to temperature, and increases by 34% due to the temperature increment, (b) the hole mobility in PMOS inversion layer reduces by 11% with the temperature increment, and (c) the temperature has a marginal impact on the transistor delay, that increases by 3% with the temperature increment.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128899011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault diagnosis of analog circuits based on machine learning
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5457099
K. Huang, H. Stratigopoulos, S. Mir
We discuss a fault diagnosis scheme for analog integrated circuits. Our approach is based on an assemblage of learning machines that are trained beforehand to guide us through diagnosis decisions. The central learning machine is a defect filter that distinguishes failing devices due to gross defects (hard faults) from failing devices due to excessive parametric deviations (soft faults). Thus, the defect filter is key in developing a unified hard/soft fault diagnosis approach. Two types of diagnosis can be carried out according to the decision of the defect filter: hard faults are diagnosed using a multi-class classifier, whereas soft faults are diagnosed using inverse regression functions. We show how this approach can be used to single out diagnostic scenarios in an RF low noise amplifier (LNA).
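The abstract names the roles of the learning machines (defect filter, multi-class classifier, inverse regression) but not their model families. A minimal sketch with stand-in models (an SVM filter, a random-forest classifier, and ridge inverse regression, all assumptions) and synthetic measurement signatures might look like this:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # stand-in measurement signatures
is_hard = rng.integers(0, 2, 200)        # 1 = gross defect, 0 = parametric

# Stage 1 - defect filter: routes a failing device to the right diagnosis
# path by separating hard faults from soft (parametric) faults.
defect_filter = SVC(kernel="rbf").fit(X, is_hard)

# Stage 2a - hard faults: a multi-class classifier names the defect.
defect_class = rng.integers(0, 5, (is_hard == 1).sum())
hard_clf = RandomForestClassifier().fit(X[is_hard == 1], defect_class)

# Stage 2b - soft faults: inverse regression recovers the deviated circuit
# parameter value from the measurement signature.
param_value = rng.normal(size=(is_hard == 0).sum())
soft_reg = Ridge().fit(X[is_hard == 0], param_value)
```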
{"title":"Fault diagnosis of analog circuits based on machine learning","authors":"K. Huang, H. Stratigopoulos, S. Mir","doi":"10.1109/DATE.2010.5457099","DOIUrl":"https://doi.org/10.1109/DATE.2010.5457099","url":null,"abstract":"We discuss a fault diagnosis scheme for analog integrated circuits. Our approach is based on an assemblage of learning machines that are trained beforehand to guide us through diagnosis decisions. The central learning machine is a defect filter that distinguishes failing devices due to gross defects (hard faults) from failing devices due to excessive parametric deviations (soft faults). Thus, the defect filter is key in developing a unified hard/soft fault diagnosis approach. Two types of diagnosis can be carried out according to the decision of the defect filter: hard faults are diagnosed using a multi-class classifier, whereas soft faults are diagnosed using inverse regression functions. We show how this approach can be used to single out diagnostic scenarios in an RF low noise amplifier (LNA).","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130646102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
enBudget: A Run-Time Adaptive Predictive Energy-Budgeting scheme for energy-aware Motion Estimation in H.264/MPEG-4 AVC video encoder
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5457093
M. Shafique, L. Bauer, J. Henkel
The limited energy resources of portable multimedia devices require a reduction of encoding complexity. The complex Motion Estimation (ME) stage of H.264/MPEG-4 AVC accounts for a major part of the encoder's energy. In this paper we present a Run-Time Adaptive Predictive Energy-Budgeting (enBudget) scheme for energy-aware ME that predicts the energy budget for different video frames and different Macroblocks (MBs) in an adaptive manner, considering the run-time-changing scenarios of available energy, video frame characteristics, and user-defined coding constraints, while keeping good video quality. It assigns different Energy-Quality classes to different video frames and fine-tunes the budget at the MB level depending upon the predicted energy quota, in order to cope with the above-mentioned, run-time-unpredictable scenarios. Compared to UMHexagonS, EPZS, and FastME, our enBudget scheme for energy-aware ME achieves energy savings of up to 93%, 90%, and 88% (average 88%, 77%, and 66%), respectively. It suffers an average Peak Signal-to-Noise Ratio (PSNR) loss of 0.29 dB compared to Full Search. We also demonstrate that enBudget is equally beneficial to various other state-of-the-art fast adaptive ME schemes. We have evaluated our scheme for ASIC and various FPGAs.
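The frame/MB budgeting split can be pictured with a small sketch. The weighting scheme below (even split of remaining energy, scaled by motion activity and Energy-Quality class) is an illustrative assumption, not the paper's predictor.

```python
def frame_budget(remaining_energy, frames_left, activity, eq_class_scale):
    """Predict the next frame's energy quota: spread the remaining energy
    over the frames still to encode, then scale by recent motion activity
    and by the frame's Energy-Quality class (hypothetical weighting)."""
    base = remaining_energy / max(frames_left, 1)
    return base * activity * eq_class_scale

def mb_quota(frame_quota, spent, mbs_left):
    """MB-level fine-tuning: whatever is left of the frame quota is spread
    over the remaining Macroblocks; ME refinement stops when it hits 0."""
    return max((frame_quota - spent) / max(mbs_left, 1), 0.0)

# A high-motion frame in a high Energy-Quality class earns a larger quota.
print(frame_budget(1000.0, 250, activity=1.3, eq_class_scale=1.1))  # -> 5.72
```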
{"title":"enBudget: A Run-Time Adaptive Predictive Energy-Budgeting scheme for energy-aware Motion Estimation in H.264/MPEG-4 AVC video encoder","authors":"M. Shafique, L. Bauer, J. Henkel","doi":"10.1109/DATE.2010.5457093","DOIUrl":"https://doi.org/10.1109/DATE.2010.5457093","url":null,"abstract":"The limited energy resources in portable multimedia devices require the reduction of encoding complexity. The complex Motion Estimation (ME) scheme of H.264/MPEG-4 AVC accounts for a major part of the encoder energy. In this paper we present a Run-Time Adaptive Predictive Energy Budgeting (enBudget) scheme for energy-aware ME that predicts the energy budget for different video frames and different Macroblocks (MBs) in an adaptive manner considering the run-time changing scenarios of available energy, video frame characteristics, and user-defined coding constraints while keeping a good video quality. It assigns different Energy-Quality Classes to different video frames and fine-tunes at MB level depending upon the predictive energy quota in order to cope with above-mentioned run-time unpredictable scenarios. Compared to UMHexagonS, EPZS, and FastME, our enBudget scheme for energy-aware ME achieves an energy saving of up to 93%, 90%, 88% (average 88%, 77%, 66%), respectively. It suffers from an average Peak Signal to Noise Ratio (PSNR) loss of 0.29 dB compared to Full Search. We also demonstrate that enBudget is equally beneficial to various other state-of-the-art fast adaptive MEs (e.g.). We have evaluated our scheme for ASIC and various FPGAs.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130880449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost modeling and cycle-accurate co-simulation of heterogeneous multiprocessor systems
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5457006
S. V. Haastregt, E. Halm, B. Kienhuis
In this paper, we present a method to analyze different implementations of stream-based applications on heterogeneous multiprocessor systems. We take both resource usage and performance constraints into account. For the first aspect we use an empirical cost model; for the second, we build a network of cycle-accurate processor simulators. The simulation and resource cost estimation have been integrated into an existing framework, allowing one to generate fast exploration simulations, cycle-accurate simulations, and FPGA implementations from a single system-level specification. We show that with our methodology, cycle-accurate performance numbers for candidate systems can be obtained. In our experiments with the QR and MJPEG applications, we found that the error of our resource cost model is below two percent.
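An empirical cost model of this kind is typically additive over pre-characterised components. The sketch below shows that shape; the component names and resource numbers are made up for illustration, not taken from the paper.

```python
# A linear empirical resource-cost model: total FPGA usage is estimated as
# the sum of per-component costs drawn from a pre-characterised table.
COST_TABLE = {            # illustrative numbers, not the paper's data
    "fifo":      {"luts": 120,  "brams": 1},
    "mjpeg_dct": {"luts": 2400, "brams": 4},
    "qr_rotate": {"luts": 1800, "brams": 2},
}

def estimate_resources(components):
    total = {"luts": 0, "brams": 0}
    for name in components:
        for res, cost in COST_TABLE[name].items():
            total[res] += cost
    return total

print(estimate_resources(["mjpeg_dct", "fifo", "fifo"]))
# -> {'luts': 2640, 'brams': 6}
```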
{"title":"Cost modeling and cycle-accurate co-simulation of heterogeneous multiprocessor systems","authors":"S. V. Haastregt, E. Halm, B. Kienhuis","doi":"10.1109/DATE.2010.5457006","DOIUrl":"https://doi.org/10.1109/DATE.2010.5457006","url":null,"abstract":"In this paper, we present a method to analyze different implementations of stream-based applications on heterogeneous multiprocessor systems. We take both resource usage and performance constraints into account. For the first aspect we use an empirical cost model. For the second aspect we build a network of cycle-accurate processor simulators. The simulation and resource cost estimation have been integrated in an existing framework, allowing one to generate fast exploration simulations, cycle-accurate simulations and FPGA implementations from a single system level specification. We show that with our methodology cycle-accurate performance numbers of candidate systems can be obtained. In our experiments with the QR and MJPEG applications, we found that the error of our resource cost model is below two percent.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127918734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel X-fault simulation with critical path tracing technique
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5456929
R. Ubar, S. Devadze, J. Raik, A. Jutman
In this paper, a new, very fast fault simulation method to handle the X-fault model is proposed. The method is based on a two-phase procedure. In the first phase, parallel exact critical path fault tracing is used to determine all the detected stuck-at faults in the circuit; in the second phase, a post-processing step determines the detectability of the X-faults.
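The "parallel" half of the first phase is the classic bit-parallel trick: one machine word carries one signal's values for 64 test patterns, so a single bitwise operation simulates 64 patterns at once. The fragment below sketches only that bookkeeping; the paper's exact critical path tracing and X-fault postprocess are more involved.

```python
WORD = 64
MASK = (1 << WORD) - 1

# Bit-parallel gate evaluation: each int packs 64 patterns of one signal.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return ~a & MASK

# Critical path tracing marks, per pattern (bit), the lines lying on
# sensitised paths to an output. Given those marks, the detected stuck-at
# faults on a line fall out with two bitwise ANDs:
def detected_sa0(value_word, critical_word):
    # stuck-at-0 is detected where the fault-free value is 1 on a critical line
    return value_word & critical_word

def detected_sa1(value_word, critical_word):
    # stuck-at-1 is detected where the fault-free value is 0 on a critical line
    return NOT(value_word) & critical_word
```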
{"title":"Parallel X-fault simulation with critical path tracing technique","authors":"R. Ubar, S. Devadze, J. Raik, A. Jutman","doi":"10.1109/DATE.2010.5456929","DOIUrl":"https://doi.org/10.1109/DATE.2010.5456929","url":null,"abstract":"In this paper, a new very fast fault simulation method to handle the X-fault model is proposed. The method is based on a two-phase procedure. In the first phase, a parallel exact critical path fault tracing is used to determine all the detected stuck-at faults in the circuit, and in the second phase a postprocess is launched which will determine the detectability of X-faults.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129242561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient transistor-level piecewise-linear macromodeling approach for model order reduction of nonlinear circuits
Pub Date: 2010-03-08, DOI: 10.1109/DATE.2010.5457083
Xiaoda Pan, Fan Yang, Xuan Zeng, Yangfeng Su
The trajectory piecewise-linear (TPWL) macromodeling technique has been widely employed to characterize strongly nonlinear circuits, and it makes the reduction of such circuits possible. TPWL linearizes a nonlinear circuit around multiple expansion points extracted from state trajectories driven by training inputs. However, its accuracy relies heavily on the extracted expansion points and the training inputs: large simulation errors result if the state vector reaches regions far away from the extracted expansion points. In this paper, we propose an efficient transistor-level piecewise linearization scheme for the macromodeling of nonlinear circuits. Piecewise-linear models are first built for each transistor. The macromodel of the whole nonlinear circuit is then constructed by combining all the piecewise-linear transistor models with appropriate weight functions. The proposed approach covers a remarkably larger state space than the TPWL method: by using complete piecewise-linear models of the transistors, the constructed piecewise-linear models are capable of covering the whole state space of the nonlinear circuit. More importantly, model order reduction of the proposed transistor-level piecewise-linear macromodel is also possible, which makes the proposed method a potentially good macromodeling approach for model order reduction of nonlinear circuits.
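For reference, the generic form shared by TPWL and the proposed scheme blends local first-order models with state-dependent weights; this is the standard TPWL formulation, and the paper's contribution lies in where the expansion points come from (complete per-transistor device models rather than simulated trajectories).

```latex
% Piecewise-linear macromodel of a nonlinear system dx/dt = f(x):
% local linearisations at expansion points x_i, blended by normalised
% weights w_i(x) that peak near their own x_i.
\frac{dx}{dt} \;\approx\; \sum_i w_i(x)\,\bigl[\, f(x_i) + A_i\,(x - x_i) \,\bigr],
\qquad
A_i = \left.\frac{\partial f}{\partial x}\right|_{x = x_i},
\qquad
\sum_i w_i(x) = 1
```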
{"title":"An efficient transistor-level piecewise-linear macromodeling approach for model order reduction of nonlinear circuits","authors":"Xiaoda Pan, Fan Yang, Xuan Zeng, Yangfeng Su","doi":"10.1109/DATE.2010.5457083","DOIUrl":"https://doi.org/10.1109/DATE.2010.5457083","url":null,"abstract":"Trajectory piecewise-linear macromodeling (TPWL) technique has been widely employed to characterize strong nonlinear circuits, and makes the reduction of the strong nonlinear circuits possible. The trajectory piecewise-linear macromodeling technique linearizes nonlinear circuits around multiple expansion points which are extracted from state trajectories driven by training inputs. However, the accuracy of the trajectory piecewise-linear macromodeling technique heavily relies on the extracted expansion points and the training inputs. It will lead to large error in simulation if state vector reaches regions far away from the extracted expansion points. In this paper, we propose an efficient transistor-level piecewise linearization scheme for macromodeling of nonlinear circuits. Piecewise linear models are first built for each transistor. The macromodel of the whole nonlinear circuit is then constructed by combining all the piecewise-linear models of the transistors together with appropriate weight functions. The proposed approach can cover remarkably larger state space than the TPWL method. By using the complete piecewise-linear models of the transistors, the constructed piecewise-linear models of the nonlinear circuits are capable of covering the whole state space of the nonlinear circuits. More importantly, model order reduction of the proposed transistor-level piecewise linearization macromodel is also possible, which makes the proposed method a potentially good macromodeling approach for model order reduction of nonlinear circuits.","PeriodicalId":432902,"journal":{"name":"2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128781865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}