Leakage-aware Kalman filter for accurate temperature tracking
Yufu Zhang, Ankur Srivastava
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008561
Due to technology scaling, leakage power now constitutes a significant portion of the total power consumption of a silicon chip. Leakage power increases exponentially with chip temperature, while temperature itself is a strong function of total power (a positive feedback effect). Most existing techniques for estimating runtime chip temperature do not consider this nonlinear leakage effect, which can lead to under-estimation of the real chip temperature, improper thermal control actions and, eventually, unreliable chip behavior. In this paper we discuss two linearization techniques that extend existing thermal tracking approaches to explicitly account for the leakage effect. The first uses a first-order Taylor series expansion of leakage power (an extended Kalman filter). The second uses concepts from probabilistic matching. Both methods approximate leakage power with high accuracy while maintaining computational efficiency similar to the standard Kalman filter. Experimental results demonstrate that our approaches reduce the temperature estimation error by 60%, significantly improving the thermal awareness of the chip and enhancing the performance of many dynamic power/thermal management techniques.
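A minimal sketch of the first-order (extended Kalman filter) linearization described in the abstract above. The scalar lumped-RC thermal model, the exponential leakage form P_leak(T) = c·exp(d·T), and every constant below are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

# Illustrative scalar model: T[k+1] = a*T[k] + b*(P_dyn + P_leak(T[k])) + noise
a, b = 0.95, 0.8            # discretized thermal RC constants (assumed)
c, d = 0.5, 0.02            # leakage model P_leak(T) = c * exp(d*T) (assumed)
Q, R = 0.01, 0.25           # process and sensor noise variances (assumed)

def leakage(T):
    return c * np.exp(d * T)

def ekf_step(T_est, var, P_dyn, z):
    """One predict/update cycle; z is a noisy on-chip thermal-sensor reading."""
    # Predict through the nonlinear model.
    T_pred = a * T_est + b * (P_dyn + leakage(T_est))
    # First-order Taylor expansion of the leakage term around T_est.
    F = a + b * c * d * np.exp(d * T_est)
    var_pred = F * var * F + Q
    # Standard Kalman update using the linearized model.
    K = var_pred / (var_pred + R)
    return T_pred + K * (z - T_pred), (1.0 - K) * var_pred

T_est, var = 50.0, 1.0                       # initial estimate (degrees C) and variance
T_est, var = ekf_step(T_est, var, P_dyn=10.0, z=52.3)
print(round(T_est, 2))
```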
Power management for heterogeneous clusters: An experimental study
M. M. Rafique, N. Ravi, S. Cadambi, A. Butt, S. Chakradhar
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008549
Reducing energy consumption plays a significant role in lowering the total cost of ownership of computing clusters. Building heterogeneous clusters by combining high-end and low-end server nodes (e.g., Xeons and Atoms) is a recent trend toward energy-efficient computing. This requires a cluster-level power manager that can predict future load, and server nodes that can quickly transition between active and low-power sleep states. In practice, however, the load is unpredictable and often punctuated by spikes, necessitating a number of extra “idling” servers. We design a cluster-level power manager that (1) identifies the optimal cluster configuration based on the power profiles of servers and workload characteristics, and (2) maximizes work done per watt by assigning P-states and S-states to the cluster servers dynamically based on the current request rate. We carry out an experimental study on a web server cluster composed of high-end Xeon servers and low-end Atom-based netbooks and share our findings.
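A minimal sketch of the kind of cluster-level decision described above: keep just enough of the most work-per-watt servers awake to cover the current request rate and put the rest to sleep. The greedy heuristic, the capacity/power numbers, and the server names are illustrative assumptions; the paper's manager also assigns P-states and anticipates load spikes, which this sketch omits.

```python
# Per-type profiles: (capacity in requests/s, active watts, sleep watts) -- assumed values.
PROFILES = {"xeon": (3000, 200.0, 12.0), "atom": (300, 30.0, 2.0)}

def plan(servers, request_rate):
    """Greedily keep the best work-per-watt servers awake until load is covered."""
    ranked = sorted(servers, key=lambda s: PROFILES[s[1]][0] / PROFILES[s[1]][1],
                    reverse=True)
    active, capacity = [], 0
    for server in ranked:
        if capacity >= request_rate:
            break
        active.append(server)
        capacity += PROFILES[server[1]][0]
    asleep = [s for s in ranked if s not in active]
    watts = (sum(PROFILES[t][1] for _, t in active) +
             sum(PROFILES[t][2] for _, t in asleep))
    return [sid for sid, _ in active], [sid for sid, _ in asleep], watts

cluster = [("x1", "xeon"), ("a1", "atom"), ("a2", "atom"), ("a3", "atom")]
print(plan(cluster, request_rate=1500))   # keeps "x1" active, sleeps the Atom nodes
```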
VLSI testing and test power
X. Wen
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008607
This paper first reviews the basics of VLSI testing, focusing on test generation and design for testability. Then it discusses the impact of test power in scan testing, and highlights the need for low-power VLSI testing.
Energy-efficient memory management in virtual machine environments
Lei Ye, C. Gniady, J. Hartman
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008556
Main memory is one of the primary shared resources in a virtualized environment. The current trend of supporting large numbers of virtual machines increases the demand for physical memory, making energy-efficient memory management more important. Several optimizations for memory energy consumption have recently been proposed for standalone operating system environments. However, these approaches cannot be used directly in a virtual machine environment because a layer of virtualization separates the hardware from the operating system and the applications executing inside a virtual machine. We first adapt existing mechanisms to run at the VMM layer, offering transparent energy optimizations to the operating systems running inside the virtual machines. Because static approaches have several weaknesses, we propose a dynamic approach that optimizes energy consumption for the currently executing virtual machines and adapts to changing virtual machine behaviors. Through detailed trace-driven simulation, we show that the proposed dynamic mechanisms can reduce memory energy consumption by 63.4% with only a 0.6% increase in execution time compared to a standard virtual machine environment.
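A minimal sketch of an idleness-driven memory power policy of the general kind discussed above: ranks that no running VM has touched recently are transitioned to a low-power state and woken on the next access. The rank granularity, the idle threshold, and the bookkeeping structures are illustrative assumptions, not the VMM-layer mechanism the paper evaluates.

```python
import time

IDLE_THRESHOLD_S = 0.5          # assumed idle window before powering a rank down

class Rank:
    def __init__(self, rank_id):
        self.rank_id = rank_id
        self.last_access = time.monotonic()
        self.low_power = False

def record_access(rank):
    """Called when any VM touches a page backed by this rank."""
    rank.last_access = time.monotonic()
    rank.low_power = False          # model the implicit wake-up on access

def power_manage(ranks):
    """Periodic sweep: put long-idle ranks into a low-power (e.g. self-refresh) state."""
    now = time.monotonic()
    for r in ranks:
        if not r.low_power and now - r.last_access > IDLE_THRESHOLD_S:
            r.low_power = True
    return [r.rank_id for r in ranks if r.low_power]

ranks = [Rank(i) for i in range(4)]
record_access(ranks[0])
print(power_manage(ranks))          # ranks idle longer than the threshold
```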
Measuring building occupancy using existing network infrastructure
Ryan Melfi, Ben Rosenblum, B. Nordman, Ken Christensen
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008560
The primary focus of Green IT has been on reducing the energy use of the IT infrastructure itself. Additional significant energy savings can be achieved by using the IT infrastructure to enable energy savings in both the IT and non-IT infrastructure. Our premise is that energy can be saved by driving building operation with information gleaned from existing IT infrastructure already installed for non-energy purposes. We call our idea implicit occupancy sensing: existing IT infrastructure is used to replace and/or supplement traditional dedicated sensors to determine building occupancy. Our implicit sensing methods are largely based on monitoring MAC and IP addresses in routers and wireless access points, and then correlating these addresses with the occupancy of a building, zone, and/or room. Occupancy data can be used to control lighting, HVAC, and other building functions to improve building functionality and reduce energy use. We experimentally evaluate the feasibility of this dual use of IT infrastructure and assess the accuracy of implicit sensing. Our findings, based on data collected from two facilities, show significant promise for implicit sensing using the existing IT infrastructure present in most modern non-residential buildings.
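A minimal sketch of the address-to-occupancy correlation idea described above: MAC addresses seen on a wireless access point are mapped to the zone that AP covers, and the count of distinct devices per zone serves as an occupancy proxy. The AP-to-zone map, AP names, and association records are illustrative assumptions rather than the authors' data.

```python
from collections import defaultdict

AP_TO_ZONE = {"ap-1f-east": "zone-1", "ap-1f-west": "zone-2"}   # assumed coverage map

def occupancy(associations):
    """associations: iterable of (mac_address, ap_name) pairs from AP/router logs."""
    seen = defaultdict(set)
    for mac, ap in associations:
        zone = AP_TO_ZONE.get(ap)
        if zone:
            seen[zone].add(mac)        # count each device once per zone
    return {zone: len(macs) for zone, macs in seen.items()}

print(occupancy([("aa:bb:cc:dd:ee:01", "ap-1f-east"),
                 ("aa:bb:cc:dd:ee:02", "ap-1f-east"),
                 ("aa:bb:cc:dd:ee:03", "ap-1f-west")]))
# {'zone-1': 2, 'zone-2': 1}
```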
Optimal state estimation for improved power measurements and model verification: Theory
T. Malkamäki, S. Ovaska
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008597
To improve energy efficiency in computer systems and data centers, accurate models of power consumption are needed for analysis and advanced control algorithms. Developing such models requires a deep understanding not only of the components themselves but also of their interactions. Moreover, verifying models requires accurate measurements, which in itself requires some understanding of the system. Optimal state estimation is a well-established field comprising mathematical methods often used for sensor fusion and for handling measurement inaccuracies. Optimal state estimators combine various measurements with a physical model of the system to obtain more accurate information. Optimal state estimation can also be used to test and verify different kinds of models and to identify system parameters. These algorithms also fit well in computing environments, making them a viable candidate for use in various on-line modeling, analysis, and control techniques. This paper investigates the use of optimal state estimation to verify and improve system models. A simplified model is first derived for a typical data center power-delivery structure, including the cooling system. Multiple model and parameter-identifying estimators are then proposed for validating the model and estimating model parameters. The theory presented in this paper has been formulated to enable accurate measurements as well as component- and system-level model analysis in an upcoming data center test facility, currently under construction.
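A minimal sketch of the sensor-fusion idea above: a scalar Kalman filter blends a modeled power draw with noisy meter readings. The random-walk power model and the noise variances are illustrative assumptions, not the data-center model the paper derives.

```python
A, Q, R = 1.0, 4.0, 25.0      # random-walk power model and noise variances (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; x is estimated power (W), z a noisy meter reading."""
    # Predict with the (trivial) model.
    x_pred, P_pred = A * x, A * P * A + Q
    # Update: weigh the measurement against the model by their uncertainties.
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

x, P = 500.0, 100.0                     # initial estimate and variance
for z in [505.0, 498.0, 510.0]:         # simulated meter readings
    x, P = kalman_step(x, P, z)
print(round(x, 1), round(P, 1))
```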
Statistical characterization of chip power behavior at post-fabrication stage
Yufu Zhang, Ankur Srivastava
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008583
Power and temperature constraints are among the most important design considerations for today's high-performance processors. Many dynamic power and thermal management (DPM/DTM) techniques have been proposed to maintain reliable chip operation and meet power constraints. These techniques rely on runtime estimation schemes that can report the accurate power and temperature status of the chip during its operation. However, many such estimation schemes require prior knowledge of the statistical system power behavior to generate accurate results. In this paper we discuss the problem of extracting the statistical power characteristics of a chip at the post-fabrication stage using real workload information. We first model the statistical power characteristics of a chip as a mixture of multiple Gaussian distributions, each of which captures the behavior of a cluster of similar applications. We then develop an Expectation-Maximization algorithm for learning the parameters of this Gaussian mixture model. The experimental results are compared against the actual power characteristics of the chip simulated using SPEC benchmarks and are shown to be within 97% accuracy. We also demonstrate how the statistical model learned with our approach can be exploited in a popular Kalman filter framework for accurate runtime temperature estimation.
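A minimal sketch of fitting a Gaussian mixture to observed power samples with EM, in the spirit of the characterization step above. The synthetic samples and the use of scikit-learn's GaussianMixture (in place of the authors' own EM implementation) are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Pretend three application clusters with different mean power draw (watts).
samples = np.concatenate([rng.normal(20, 2, 500),
                          rng.normal(45, 4, 500),
                          rng.normal(70, 3, 500)]).reshape(-1, 1)

# EM fit of a 3-component mixture to the power samples.
gmm = GaussianMixture(n_components=3, random_state=0).fit(samples)
print(gmm.means_.ravel())      # estimated per-cluster mean power
print(gmm.weights_)            # mixing proportions (fraction of samples per cluster)
```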
Compact thermal modeling for package design with practical power maps
Zao Liu, S. Tan, Hai Wang, Rafael Quintanilla, Ashish Gupta
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008577
This paper proposes a new thermal modeling method for the package design of high-performance microprocessors. The new approach builds thermal behavioral models from given accurate temperature and power information by means of the subspace method. The subspace method, however, may suffer from a predictability problem when the practical power input is given as a set of power maps whose power inputs are spatially correlated. We show that the input power signals need to meet certain dependency requirements to ensure model predictability. We develop a new algorithm that generates independent power maps to meet the spatial rank requirement and can also automatically select the order of the resulting thermal models for given error bounds. Experimental results validate the proposed method on a practical microprocessor package constructed in the COMSOL software under practical power signal inputs.
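A minimal sketch of the kind of spatial-rank check the requirement above implies: flatten each candidate power map into a row and verify that the stacked matrix has full row rank, i.e., the maps are linearly independent. The map shapes and random generation are illustrative assumptions, not the paper's map-generation algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
maps = [rng.random((8, 8)) for _ in range(6)]          # candidate power maps
stacked = np.vstack([m.ravel() for m in maps])         # one flattened map per row

if np.linalg.matrix_rank(stacked) == len(maps):
    print("power maps are spatially independent")
else:
    print("maps are correlated; regenerate them before subspace identification")
```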
An efficient CMOS rectifier with low-voltage operation for RFID tags
Pouya Kamalinejad, S. Mirabbasi, Victor C. M. Leung
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008606
A high-efficiency CMOS rectifier for radio-frequency identification (RFID) applications is presented. Using an on-chip generated clock signal, a new switching scheme is proposed to enhance the power efficiency of the conventional four-transistor (4T) cell rectifier. By switching the gates of the charge-transfer transistors to the intermediate nodes of the preceding and succeeding stages, low on-resistance and small leakage current are obtained simultaneously. To further improve low-voltage operation, an external gate-boosting technique is also applied to the proposed design, enabling efficient operation for input voltage levels well below the nominal standard threshold voltage of MOS transistors. The two proposed rectifier architectures are designed and laid out in a standard 0.13 µm CMOS technology. For a 950 MHz RF input and a 10 kΩ output load, post-layout simulation results confirm a power conversion efficiency (PCE) of 74% at −10 dBm and 57% at −26 dBm for the switched 4T cell and the gate-boosted switched 4T cell, respectively. While the PCE of the proposed switched 4T-cell rectifier compares favorably with that of state-of-the-art rectifier designs, the gate-boosted version achieves a relatively high PCE while operating with very low input power.
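A small worked example relating the figures quoted above: convert the RF input power from dBm to watts and apply the reported power-conversion efficiency. The implied DC output voltage across the 10 kΩ load is our own back-of-the-envelope derivation, not a number reported in the paper.

```python
import math

def dbm_to_watts(dbm):
    return 1e-3 * 10 ** (dbm / 10.0)

p_in = dbm_to_watts(-10)            # -10 dBm = 0.1 mW of available RF input power
p_out = 0.74 * p_in                 # 74% PCE quoted for the switched 4T cell
v_out = math.sqrt(p_out * 10e3)     # V = sqrt(P * R) across the 10 kOhm load (derived)
print(f"P_in = {p_in*1e3:.3f} mW, P_out = {p_out*1e3:.3f} mW, V_out ~ {v_out:.2f} V")
```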
Resource-aware architectures for particle filter based visual target tracking
Domenic Forte, Ankur Srivastava
Pub Date: 2011-07-25 | DOI: 10.1109/IGCC.2011.6008586
There is a growing number of visual tracking applications for mobile devices such as smart phones and smart cameras. However, existing computer vision algorithms are demanding, and mobile devices possess limited computational capability, energy, and bandwidth to support them. Conventional approaches to distributed target tracking with a camera node and a receiver node are either sender-based or receiver-based. Both approaches are well suited to certain scenarios but have limited applicability outside of their scope. In this paper, we propose two new approaches for a particle filter based tracking system. The first reduces the energy and bandwidth typically required by the receiver-based setup. The second partitions the tracking workload between sender and receiver and adapts to the frame-to-frame demands of particle filtering. In doing so, this scheme promotes a better balance of computing capability, energy, and bandwidth between sender and receiver. Results show that the proposed solutions require little additional overhead, can improve tracking system lifetime, and may be more effective than conventional architectures for many tracking instances.
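A minimal bootstrap particle filter sketch for the kind of tracking discussed above: predict with a random-walk motion model, weight particles by an observation likelihood, then resample. The 1-D state, the noise levels, and the Gaussian likelihood are illustrative assumptions, not the paper's vision pipeline or its sender/receiver partitioning.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                    # number of particles

def pf_step(particles, weights, observation, motion_std=2.0, obs_std=5.0):
    # Predict: diffuse particles with the random-walk motion model.
    particles = particles + rng.normal(0, motion_std, size=particles.shape)
    # Update: weight each particle by the Gaussian observation likelihood.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(0, 100, N)         # initial guess of the target position
weights = np.full(N, 1.0 / N)
for z in [50.0, 53.0, 57.0]:               # simulated noisy detections
    particles, weights = pf_step(particles, weights, z)
print(particles.mean())                    # estimated target position
```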