Pub Date: 2013-11-20 | DOI: 10.1109/VCIP.2013.6706327
M. Mirzaei, S. Prianto, J. Chardonnet, C. Pere, F. Mérienne
The paper presents a new mother wavelet adapted from a specific pattern. Wavelet multi-resolution analysis uses this wavelet to detect the position of the pattern in an Infra-Red (IR) signal under scale variation and in the presence of noise. The IR signal is extracted from an IR image sequence recorded by an IR camera in a Time-of-Flight (TOF) sensor configuration. The maximum correlation between the pattern and the signal of interest is used as the criterion to define the mother wavelet. The proposed mother wavelet was tested and verified under scale variation and in the presence of noise. The experimental tests and performance analysis show promising results for both scale variation and noisy signals: the proposed wavelet maintains 90% accuracy under intense noise (50% of the signal amplitude), and high precision is expected under real conditions.
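The maximum-correlation criterion can be illustrated without the wavelet machinery. The toy sketch below (all signal parameters are illustrative, not taken from the paper) locates a known pattern in a noisy 1-D signal by normalized cross-correlation, the same criterion the authors use to define their mother wavelet:

```python
import numpy as np

def correlation_scores(signal, template):
    """Slide `template` over `signal` and return the normalized
    cross-correlation at each offset; the peak marks the most likely
    pattern position (the maximum-correlation criterion)."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores[i] = np.dot(w, t) / n
    return scores

# Embed a half-sine pattern in a noisy 1-D signal and recover its position.
rng = np.random.default_rng(0)
pattern = np.sin(np.linspace(0.0, np.pi, 32))
signal = rng.normal(0.0, 0.2, 256)
signal[100:132] += pattern
print(int(np.argmax(correlation_scores(signal, pattern))))  # expected near 100
```

The paper's contribution goes further by folding this criterion into a mother wavelet so the detection survives scale variation; the sketch only shows the fixed-scale case.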
{"title":"New motherwavelet for pattern detection in IR image","authors":"M. Mirzaei, S. Prianto, J. Chardonnet, C. Pere, F. Mérienne","doi":"10.1109/VCIP.2013.6706327","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706327","url":null,"abstract":"The paper presents a new mother wavelet adapted from a specific pattern. Wavelet multi-resolution analysis uses this wavelet to detect the position of the pattern in an Infra-Red (IR) signal under scale variation and the presence of noise. IR signal is extracted from IR image sequence recorded by an IR camera, Time of Flight (TOF) sensor configuration. The maximum correlation between the pattern and the signal of interest will be used as a criterion to define the mother wavelet. The proposed mother wavelet were tested and verified under the scale variation and the presence of noise. The experimental tests and performance analysis show promising results for both scale variation and noisy signal. 90% accuracy for the proposed wavelet under intensive noisy condition (50% of the signal amplitude) is guaranteed and high precision is expected under real condition.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114031485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706421
Vijay Bansal, A. Chawla, Mahesh Narain Shukla
High Efficiency Video Coding (HEVC) is being developed by the Joint Collaborative Team on Video Coding (JCTVC). HEVC has a transform skip mode that is applicable only to 4×4 TUs (Transform Units); the transform process is skipped when this mode is selected. Introducing transform skip yields a significant gain in coding efficiency for class F sequences [1][2]. In this paper we propose reversing the scan of the 4×4 residual block before variable-length encoding in transform skip cases. With this modification, coding efficiency further increases on average by 1.6% in terms of BD-rate [3] for class F sequences in the all intra (AI) test configuration. For the other GOP structures, random access (RA), low delay with B pictures (LB), and low delay with P pictures (LP), the average bit-rate gains are 1.1%, 0.64%, and 0.57%, respectively. The change has a negligible impact on encoding/decoding time.
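The proposed change amounts to reversing the coefficient scan when transform skip is selected. A minimal sketch, using a simplified diagonal scan rather than HEVC's exact up-right scan order, and assuming the residual is given as a 4×4 list of values:

```python
def scan_residual(block4x4, transform_skip):
    """Flatten a 4x4 residual block in (simplified) diagonal scan order;
    for transform-skip blocks, reverse the order as proposed, since
    spatial-domain residuals concentrate energy differently from
    transformed coefficients."""
    # Simplified diagonal scan: walk anti-diagonals of the 4x4 block.
    order = sorted(((r, c) for r in range(4) for c in range(4)),
                   key=lambda rc: (rc[0] + rc[1], rc[0]))
    coeffs = [block4x4[r][c] for r, c in order]
    return coeffs[::-1] if transform_skip else coeffs
```

In the real codec the scan feeds the entropy coder, so reversing it changes which coefficients are signalled first and hence the context statistics; this sketch only shows the reordering itself.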
{"title":"Reverse scan for transform skip mode in HEVC codec","authors":"Vijay Bansal, A. Chawla, Mahesh Narain Shukla","doi":"10.1109/VCIP.2013.6706421","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706421","url":null,"abstract":"High Efficiency Video Coding (HEVC) is being developed by Joint Collaborative Team on Video Coding (JCTVC). HEVC has transform skip mode which is only applicable to 4×4 TUs (Transform Units). Transform process is skipped when this mode is selected. By introducing transform skip there is significant gain in coding efficiency for F class sequences [1][2]. In this paper it is proposed that before variable length encoding of the transform skip cases, reverse the scanning of the residual of 4×4 block sizes. Due to this modification it is observed that coding efficiency further increased on an average by 1.6% in terms of bd-rate [3] for class F sequences in all intra (AI) testing configurations. For other GOP structures like RA (random access), low delay with B pictures (LB), and low delay with P pictures (LP) average-bit-rate gains are 1.1%, 0.64% and 0.57% respectively. Due to this change there is a negligible impact on encoding/decoding time.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115596938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706338
F. Jager, Karam Naser
3D video is a new technology that requires transmission of depth data alongside conventional 2D video. The additional depth information allows arbitrary viewpoints to be synthesized at the receiver, both to adapt the perceived depth impression and to drive multi-view auto-stereoscopic displays. Depth maps typically show different signal characteristics from textured video data: piecewise smooth regions bounded by sharp edges that represent depth discontinuities. These edges lead to strong ringing artifacts when depth maps are coded with DCT-based transform codecs such as AVC or its successor HEVC. In this paper, alternative transforms are proposed for coding depth maps in 3D video. Replacing the DCT with these transforms reduces ringing artifacts in the reconstructed depth maps and at the same time lowers the complexity of the transform stage significantly. For high-quality depth map coding, the proposed alternative transforms can even increase coding efficiency.
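The ringing the authors target is easy to reproduce: quantizing DCT coefficients of an ideal depth edge produces oscillation in the flat regions, while a trivial identity "transform" (a stand-in here for the paper's alternative transforms, whose exact form the abstract does not specify) keeps them perfectly flat. A toy 1-D illustration:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = 1.0 / np.sqrt(n)
    return m

def code_with(transform, x, step):
    """Transform, uniformly quantize, inverse-transform."""
    c = transform @ x
    c_hat = np.round(c / step) * step
    return transform.T @ c_hat

n = 8
edge = np.array([0., 0., 0., 0., 100., 100., 100., 100.])  # ideal depth discontinuity
rec_dct = code_with(dct_matrix(n), edge, step=20.0)
rec_id = code_with(np.eye(n), edge, step=20.0)             # spatial-domain quantization
# The DCT reconstruction oscillates around the flat regions (ringing),
# while the identity "transform" reproduces them exactly at this step size.
print(np.abs(rec_dct - edge).max(), np.abs(rec_id - edge).max())
```

The point of the paper is that transforms matched to piecewise-smooth signals avoid this artifact while also being cheaper to compute than the DCT.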
{"title":"Low complexity transform coding for depth maps in 3D video","authors":"F. Jager, Karam Naser","doi":"10.1109/VCIP.2013.6706338","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706338","url":null,"abstract":"3D video is a new technology, which requires transmission of depth data alongside conventional 2D video. The additional depth information allows to synthesize arbitrary viewpoints at the receiver for adaptation of perceived depth impression and for driving of multi-view auto-stereoscopic displays. Depth maps typically show different signal characteristics compared to textured video data. Piecewise smooth regions are bounded by sharp edges resembling depth discontinuities. These edges lead to strong ringing artifacts when depth maps are coded with DCT-based transform codecs, such as AVC or its successor HEVC. In this paper alternative transforms are proposed to be used for coding depth maps for 3D video. By replacing the DCT with these transforms, ringing artifacts in the reconstructed depth maps are reduced and at the same time the complexity of the transform stage is lowered significantly. For high quality depth map coding the proposed alternative transforms can even increase coding efficiency.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115689159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706325
Kai Liu, Zhicheng Zhao, Xin Guo, A. Cai
Person re-identification is a challenging problem in multi-camera surveillance systems. Most existing methods focus on metric learning, which aims to match images from different cameras in a common metric space. Boosted hashing projection provides a new way of identifying instances based on pairwise similarity. However, both approaches ignore the underlying fact that images captured by two cameras should be treated as belonging to different modalities. To address this drawback, we formulate person re-identification as an Anchor-supported Multi-Modality Hashing Embedding (AMMHE) problem, in which different projections map data from different cameras into a common Hamming space. The data are projected to binary bits using boosted hash projections, minimizing the weighted Hamming distance of intra-class data pairs while maximizing that of inter-class pairs. We also introduce an anchor-supported dimension reduction method to avoid the computational burden of high feature dimensionality. Our approach obtains competitive performance compared with state-of-the-art methods on publicly available benchmarks.
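The matching step reduces to comparing bit strings in Hamming space. A minimal sketch of the weighted Hamming distance (the bit codes and weights below are made up for illustration; the paper learns both via boosting):

```python
def hamming_distance(a, b, weights=None):
    """Weighted Hamming distance between two equal-length bit lists;
    per-bit weights reflect how discriminative each hash bit is."""
    if weights is None:
        weights = [1.0] * len(a)
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

# Two 8-bit codes produced by (hypothetical) camera-specific projections.
code_cam_a = [1, 0, 1, 1, 0, 0, 1, 0]
code_cam_b = [1, 0, 0, 1, 0, 1, 1, 0]
print(hamming_distance(code_cam_a, code_cam_b))  # 2 differing bits, unit weights
```

Because each camera gets its own projection, the same person photographed by two cameras can still land on nearby codes even though the raw images look different.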
{"title":"Anchor-supported multi-modality hashing embedding for person re-identification","authors":"Kai Liu, Zhicheng Zhao, Xin Guo, A. Cai","doi":"10.1109/VCIP.2013.6706325","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706325","url":null,"abstract":"Person re-identification is a challenging problem in multi-camera surveillance systems. Most existing methods focus on metric learning which aims to match images from different cameras in a common metric space. Boosted hashing projection provides a new way of identifying instances based on pairwise similarity. However, both of these approaches ignore the underlying fact that images captured by two cameras should be seen as in different modalities. To address this drawback, we formulate person re-identification as an Anchor-supported Multi-Modality Hashing Embedding (AMMHE) problem, in which different projections are used to map data from different cameras into a common Hamming space. The data are projected to binary bits by using boosted hash projections, making the weighted Hamming distance of intra-class data pairs minimized and simultaneously those of inter-class data pairs maximized. We also introduce an anchor-supported dimension reduction method to avoid the computational burden of high feature dimensionality. Our approach obtains competitive performance compared with state-of-the-art methods on publicly available benchmarks.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124261820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706454
A. Alshin, E. Alshina, Jeonghoon Park
Entropy coding is a core component of all advanced video compression schemes. Context-adaptive binary arithmetic coding (CABAC) is the entropy coding used in the H.264/MPEG-4 AVC and H.265/HEVC standards, and probability estimation is the key factor in CABAC's efficiency. In this paper, a high-accuracy probability estimation method for CABAC is presented. The technique is based on multiple estimates obtained with different models, and the proposed method is realized efficiently in integer arithmetic. High-precision probability estimation for CABAC provides up to 1.4% BD-rate gain.
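The multiple-model idea can be sketched as two exponential-decay estimators with different memory whose outputs are averaged. The shifts, fixed-point width, and averaging below are illustrative choices, not the values from the paper:

```python
def update(p, bit, shift):
    """One probability-update step in 15-bit fixed point, CABAC style;
    a smaller shift adapts faster, a larger shift has longer memory."""
    return p + (((bit << 15) - p) >> shift)

def estimate(bits, fast_shift=4, slow_shift=7):
    """Track P(bit = 1) with two estimators of different memory and
    average them: the core of multi-model probability estimation."""
    p_fast = p_slow = 1 << 14          # both start at probability 0.5
    for b in bits:
        p_fast = update(p_fast, b, fast_shift)
        p_slow = update(p_slow, b, slow_shift)
    return (p_fast + p_slow) / 2 / (1 << 15)

print(round(estimate([1] * 1000), 3))  # close to 1.0 for an all-ones source
```

The fast estimator reacts quickly to local statistics while the slow one smooths out noise; combining them gives a better estimate than either alone, and everything stays in integer arithmetic until the final read-out.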
{"title":"High precision probability estimation for CABAC","authors":"A. Alshin, E. Alshina, Jeonghoon Park","doi":"10.1109/VCIP.2013.6706454","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706454","url":null,"abstract":"Entropy coding is the main important part of all advanced video compression schemes. Context-adaptive binary arithmetic coding (CABAC) is entropy coding used in H.264/MPEG-4 AVC and H.265/HEVC standards. Probability estimation is the key factor of CABAC performance efficiency. In this paper high accuracy probability estimation for CABAC is presented. This technique is based on multiple estimations using different models. Proposed method was efficiently realized in integer arithmetic. High precision probability estimation for CABAC provides up-to 1,4% BD-rate gain.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124842557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706374
Wei Huang, Xiaopeng Fan, Debin Zhao
Video broadcasting is a popular application of wireless networks whose main challenge is to accommodate different users with different channel conditions. Recently, a novel 'D-Cast' approach based on distributed source coding (DSC) was proposed. It avoids error propagation and still achieves high compression efficiency in inter-frame coding by using coset coding and soft broadcast. However, D-Cast is limited by its coarse side information. In this work, we present a novel soft mobile video broadcast approach based on a side information refinement algorithm (SIR-Cast) that improves the quality of the side information. Moreover, SIR-Cast optimizes the estimate of the quantization step (Qstep) corresponding to the refined side information. As a result, SIR-Cast outperforms D-Cast by about 1 dB to 2 dB in video PSNR while maintaining the same graceful degradation as D-Cast.
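Why side-information quality matters is easiest to see in scalar coset coding, the DSC building block both D-Cast and SIR-Cast rely on. The decoder picks the coset member nearest its side information, so a refined (closer) estimate decodes correctly where a coarse one does not. A toy sketch with made-up pixel values:

```python
def coset_encode(value, coset_size):
    """Coset (syndrome) encoding: transmit only the residue class."""
    return value % coset_size

def coset_decode(syndrome, side_info, coset_size):
    """Pick the coset member closest to the decoder's side information;
    the better the side information, the coarser the coset can be."""
    k = round((side_info - syndrome) / coset_size)
    return syndrome + k * coset_size

pixel = 103
syndrome = coset_encode(pixel, 16)        # only the 4-bit residue is sent
print(coset_decode(syndrome, 100, 16))    # good side info recovers 103
print(coset_decode(syndrome, 80, 16))     # coarse side info picks the wrong member
```

SIR-Cast's refinement step effectively moves the decoder's `side_info` closer to the true value, which is why it also re-optimizes the quantization step tied to that side information.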
{"title":"Soft mobile video broadcast based on side information refining","authors":"Wei Huang, Xiaopeng Fan, Debin Zhao","doi":"10.1109/VCIP.2013.6706374","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706374","url":null,"abstract":"Video broadcasting is a popular application of wireless network, whose main challenge is to accommodate different users with different channel conditions. Recently, a novel `D-Cast' approach based on distributed source coding (DSC) is proposed. It can avoid error propagation and still achieve high compression efficiency in inter frame coding by utilizing coset coding and soft broadcast. However, D-CAST is not very efficient because of rough side information. In this work, we present a novel soft mobile video broadcast approach based on side information refinement algorithm (SIR-CAST) to improve the quality of the side information. Moreover, SIR-Cast optimizes the estimate of the quantifying step (Qstep) which is corresponding to the refined side information. Thus, SIR-CAST outperforms D-CAST about 1dB-2dB in video PSNR while maintaining the similar graceful degradation feature as D-CAST.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126138560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706359
Yuanbo Chen, Yanyun Zhao, A. Cai
In this paper, Sparse Coding with Non-negative and Locality constraints (SCNL) is proposed to generate discriminative feature descriptions for human action recognition. The non-negative constraint ensures that every data sample lies in the convex hull of its neighbors. The locality constraint makes a data sample represented only by its related neighbor atoms. The sparsity constraint confines the dictionary atoms involved in the sample representation to as few as possible. The SCNL model captures the global subspace structure of data better than classical sparse coding and is more robust to noise than locality-constrained linear coding. Extensive experiments on three well-known human action datasets demonstrate the significant advantages of the proposed SCNL model.
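The three constraints can be sketched together in a few lines: restrict the code to the k nearest dictionary atoms (locality plus sparsity) and fit non-negative weights to them by projected gradient descent. This is a loose stand-in for the paper's optimization, not its actual solver, and all sizes and learning rates are illustrative:

```python
import numpy as np

def scnl_code(x, dictionary, k=3, iters=200, lr=0.1):
    """SCNL-style coding sketch: support limited to the k closest atoms
    (locality + sparsity), weights kept non-negative by projection."""
    dists = np.linalg.norm(dictionary - x[None, :], axis=1)
    support = np.argsort(dists)[:k]              # locality: k nearest atoms
    D = dictionary[support]                      # (k, dim) active atoms
    a = np.full(k, 1.0 / k)
    for _ in range(iters):
        grad = D @ (a @ D - x)                   # gradient of 0.5*||aD - x||^2
        a = np.clip(a - lr * grad, 0.0, None)    # non-negativity projection
    code = np.zeros(len(dictionary))
    code[support] = a
    return code
```

With non-negative weights on nearby atoms, the reconstruction stays inside the convex cone of the sample's neighborhood, which is the geometric intuition behind the convex-hull claim in the abstract.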
{"title":"Recognizing human actions based on Sparse Coding with Non-negative and Locality constraints","authors":"Yuanbo Chen, Yanyun Zhao, A. Cai","doi":"10.1109/VCIP.2013.6706359","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706359","url":null,"abstract":"In this paper, Sparse Coding with Non-negative and Locality constraints (SCNL) is proposed to generate discriminative feature descriptions for human action recognition. The non-negative constraint ensures that every data sample is in the convex hull of its neighbors. The locality constraint makes a data sample only represented by its related neighbor atoms. The sparsity constraint confines the dictionary atoms involved in the sample representation as fewer as possible. The SCNL model can better capture the global subspace structures of data than classical sparse coding, and are more robust to noise compared to locality-constrained linear coding. Extensive experiments testify the significant advantages of the proposed SCNL model through evaluations on three remarkable human action datasets.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126944406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706342
Yonggen Ling, O. Au, Ketan Tang, Jiahao Pang, Jin Zeng, Lu Fang
Subpixel-based image down-sampling is a class of methods that can provide improved apparent resolution in the down-scaled image compared to pixel-based methods. This paper analytically studies the frequency characteristics of all possible subpixel-based down-sampling patterns for RGB vertical stripes. Our analysis reveals that there are only seven equivalent energy distributions in the luminance frequency spectrum. To achieve higher luminance resolution, we then compute and choose the optimal down-sampling pattern, with an anti-aliasing low-pass filter designed for it, so as to maximize the energy of the luminance component within the cut-off shape. Experimental results show that the proposed method provides sharper images than state-of-the-art subpixel-based methods, with little color distortion.
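The basic mechanism is that each output pixel borrows its R, G, and B from three adjacent input pixels, so the display's vertical RGB stripes triple the apparent horizontal luminance sampling rate. The sketch below shows one direct 3:1 pattern; the paper's pattern selection and anti-aliasing filter design are omitted:

```python
import numpy as np

def subpixel_downsample_3to1(img):
    """Direct 3:1 horizontal subpixel down-sampling for RGB vertical
    stripes: output pixel i takes R from input pixel 3i, G from 3i+1,
    and B from 3i+2 (one of the possible sampling patterns)."""
    h, w, _ = img.shape
    out = np.empty((h, w // 3, 3), dtype=img.dtype)
    out[:, :, 0] = img[:, 0::3, 0][:, : w // 3]   # R from pixel 3i
    out[:, :, 1] = img[:, 1::3, 1][:, : w // 3]   # G from pixel 3i+1
    out[:, :, 2] = img[:, 2::3, 2][:, : w // 3]   # B from pixel 3i+2
    return out
```

Shifting which subpixel comes from which input pixel yields the other candidate patterns; the paper's result is that, in the luminance spectrum, these collapse into just seven distinct energy distributions.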
{"title":"An analytical study of subpixel-based image down-sampling patterns in frequency domain","authors":"Yonggen Ling, O. Au, Ketan Tang, Jiahao Pang, Jin Zeng, Lu Fang","doi":"10.1109/VCIP.2013.6706342","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706342","url":null,"abstract":"Subpixel-based image down-sampling is a class of methods that can provide improved apparent resolution of the down-scaled image compared to the pixel-based methods. The frequency characteristics of all possible subpixel-based down-sampling patterns for RGB vertical stripes are analytically studied in this paper. Our proposed algorithm reveals that there are merely seven equivalent energy distributions in the luminance frequency spectrum. To achieve higher luminance resolution, we then calculate and choose the optimal down-sampling pattern with anti-aliasing low-pass filter designed for it so as to maximize the energy of the luminance component within the cut-off shape. Experimental results show that the proposed method provides sharper images compared to the state-of-art subpixel-based methods, with little color distortion.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115038752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706425
J. K. Rappel, A. Lahiri, C. Teo
The effect of hand-eye colocation on performing dexterous fine movements such as microsurgical manipulation is studied under a novel digital stereo microscope. Hand motion data are captured under conditions of hand-eye colocation and separation. Both configurations are tested with monoscopic and stereoscopic vision. A set of microsurgical task abstractions is created to reduce the effect of prior expertise. Finally, the captured motion data are analyzed to determine the effect of colocation and stereopsis on surgical motion tasks.
{"title":"Surgical motion task performance in a hand eye colocated digital stereo microcsope","authors":"J. K. Rappel, A. Lahiri, C. Teo","doi":"10.1109/VCIP.2013.6706425","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706425","url":null,"abstract":"The effect of hand-eye colocation in performing dexterous fine movements such as microsurgical manipulation is studied under a novel digital stereo microscope. Hand motion data is captured under conditions of hand-eye colocation and separation. Both configurations are tested with monoscopic and stereoscopic vision. A set of microsurgical task abstractions are created to reduce the effect of prior expertise. Finally the captured motion data is analyzed to determine the effect of colocation and stereopsis for surgical motion tasks.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129701933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-11-01 | DOI: 10.1109/VCIP.2013.6706445
Yuwen He, Markus Künstner, Srinivas Gudumasu, Eun‐Seok Ryu, Yan Ye, Xiaoyu Xiu
Mobile devices, increasingly equipped with high-capability processors and connected to fast wireless networks, have become a major consumer of multimedia content. Limited battery life on mobile devices makes power saving a critical factor in delivering a good user experience. This paper proposes a power aware streaming system that combines the emerging High Efficiency Video Coding (HEVC) standard and the Dynamic Adaptive Streaming over HTTP (DASH) standard. The proposed system uses power aware HEVC encoding technologies and client-side power adaptation logic to adaptively control power consumption on the client device. The system can improve quality of experience by setting full-length video playback as the client's objective. A demonstration of the proposed power aware HEVC system is available on the ASUS Transformer Xfinity (TF700T) tablet using an ARM processor.
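The "full-length playback as objective" idea suggests a simple adaptation rule: pick the richest representation whose decoding power the remaining battery can sustain for the rest of the video. The sketch below is a guess at such client-side logic; the representation names, power figures, and the assumption that higher decode power means higher quality are all illustrative, not from the paper:

```python
def pick_representation(reps, battery_wh, remaining_s):
    """Client-side power-adaptation sketch: choose the highest-power
    (assumed highest-quality) representation whose decoding power still
    lets the battery last through the remaining playback time."""
    budget_w = battery_wh * 3600.0 / remaining_s      # sustainable draw in watts
    feasible = {name: w for name, w in reps.items() if w <= budget_w}
    if not feasible:
        return min(reps, key=reps.get)                # fall back to the cheapest
    return max(feasible, key=feasible.get)

# Hypothetical decode-power figures per DASH representation.
reps = {"1080p": 4.0, "720p": 2.5, "480p": 1.5}
print(pick_representation(reps, 2.0, 3600))   # 2 Wh left for 1 h -> "480p"
```

Re-running this per segment, as DASH clients do for bandwidth, lets the player trade quality for battery so playback reaches the end of the video.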
{"title":"Power aware HEVC streaming for mobile","authors":"Yuwen He, Markus Künstner, Srinivas Gudumasu, Eun‐Seok Ryu, Yan Ye, Xiaoyu Xiu","doi":"10.1109/VCIP.2013.6706445","DOIUrl":"https://doi.org/10.1109/VCIP.2013.6706445","url":null,"abstract":"Mobile devices, increasingly equipped with high capability processors and connected with fast wireless networks, have become a major consumer of multi-media content. Limited battery life on mobile devices makes power saving a critical factor in delivering a good user experience. This paper proposes a power aware streaming system that combines the emerging High Efficiency Video Coding (HEVC) standard and the Dynamic Adaptive Streaming over HTTP (DASH) standard. The proposed system uses power aware HEVC encoding technologies and client side power adaptation logic to adaptively control power consumption on the client device. The proposed power aware HEVC streaming system can improve quality of experience by setting full-length video playback as client's objective. Demonstration of the proposed power aware HEVC system is available on the ASUS Transformer Xfinity (TF700T) tablet using an ARM processor.","PeriodicalId":407080,"journal":{"name":"2013 Visual Communications and Image Processing (VCIP)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130267332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}