Edge-based motion and intensity prediction for video super-resolution
Jen-Wen Wang, C. Chiu
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032279
Full-image-based motion prediction is widely used in video super-resolution (VSR); it produces outstanding outputs for arbitrary scenes but incurs huge time complexity. In this paper, we propose an edge-based motion and intensity prediction scheme that reduces the computation cost while simultaneously maintaining good quality. The key to reducing computation cost is to focus on the extracted edges of the video sequence, in accordance with the human visual system (HVS). Bi-directional optical flow is usually adopted to increase prediction accuracy, but it also increases computation time. Here we propose to obtain the backward flow from the foregoing forward-flow prediction, which effectively avoids the heavy load. We perform a series of experiments comparing existing VSR methods against our proposed edge-based method on different sequences and upscaling factors. The results reveal that our proposed scheme preserves the quality of the super-resolved sequences while achieving about a 4x speed-up in computation time.
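As an illustration of the cost saving from edge-restricted motion search, here is a minimal sketch — not the authors' implementation: a gradient-based edge mask stands in for the HVS-motivated edge extraction, block matching replaces the paper's optical flow, and the backward field is crudely approximated by negating the forward one. Block size, search range, and threshold are all invented for the demo.

```python
import numpy as np

def edge_mask(img, frac=0.25):
    """Binary mask of high-gradient pixels -- a crude proxy for the
    HVS-salient edges the paper restricts its prediction to."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > frac * mag.max()

def edge_block_match(prev, cur, mask, bs=8, sr=4):
    """Forward motion vectors, searched only for blocks containing edges.

    Returns (dy, dx) per block mapping a block of `cur` to its best SAD
    match in `prev`; edge-free blocks are skipped, which is where the
    computational saving comes from.
    """
    h, w = prev.shape
    mv = np.zeros((h // bs, w // bs, 2), dtype=int)
    for by in range(h // bs):
        for bx in range(w // bs):
            y0, x0 = by * bs, bx * bs
            if not mask[y0:y0 + bs, x0:x0 + bs].any():
                continue                      # non-edge block: skip search
            blk = cur[y0:y0 + bs, x0:x0 + bs]
            best, best_err = (0, 0), np.inf
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if yy < 0 or xx < 0 or yy + bs > h or xx + bs > w:
                        continue
                    err = np.abs(prev[yy:yy + bs, xx:xx + bs] - blk).sum()
                    if err < best_err:
                        best_err, best = err, (dy, dx)
            mv[by, bx] = best
    return mv

# Synthetic check: a bright square shifted down 2 px and right 1 px.
prev = np.zeros((32, 32)); prev[10:20, 10:20] = 1.0
cur = np.roll(prev, (2, 1), axis=(0, 1))
mv = edge_block_match(prev, cur, edge_mask(cur))
# Cheap stand-in for bi-directional flow: reuse the negated forward field
# instead of running a second search (the paper derives backward flow
# from the forward prediction in a similar spirit).
mv_backward = -mv
```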
Cooperative capacity-achieving precoding design for multi-user VFDM transmission
Rugui Yao, Yinsheng Liu, Lu Lu, Geoffrey Y. Li, A. Maaref
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032332
In this paper, we study cooperative precoder design in two-tier networks consisting of a macro cell (MC) and small cells (SCs). By exploiting multi-user Vandermonde-subspace frequency division multiplexing (VFDM) transmission, an MC downlink can co-exist with cognitive SCs. We first propose a cooperative cross-tier precoder (CTP) among the transmitters in the SCs to increase the transmit dimension. Moreover, the cooperative CTP allows us to use a more efficient intra-tier precoder (ITP) in the SCs to handle intra-cell interference and improve the throughput of the cognitive system. A capacity-achieving (CA) ITP is then developed. Numerical results are presented to demonstrate the throughput improvement of the proposed scheme and its robustness to channel estimation error.
One-bit principal subspace estimation
Yuejie Chi
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032151
This paper proposes a simple sensing and estimation framework, called one-bit sketching, to faithfully recover the principal subspace of a data stream or dataset from a set of one-bit measurements collected at distributed sensors. Each bit indicates the comparison outcome between energy projections of the local sample covariance matrix onto a pair of random directions. By leveraging low-dimensional structures, the top eigenvectors of a properly designed surrogate matrix are shown to recover the principal subspace as soon as the number of bit measurements exceeds a certain threshold. The sample complexity required to obtain reliable comparison outcomes is also derived. We further develop a low-complexity algorithm to estimate the principal subspace in an online fashion when the bits arrive sequentially at the fusion center. Numerical examples on line spectrum estimation are provided to validate the proposed approach.
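A rough numerical illustration of the measurement model as the abstract describes it: each bit compares the energy of the covariance along two random directions, and the top eigenvectors of a bit-weighted surrogate matrix estimate the principal subspace. The surrogate weighting below is a plausible guess, not necessarily the paper's exact construction, and all problem sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank, m = 20, 2, 4000

# Ground-truth covariance with a dominant rank-2 principal subspace.
U, _ = np.linalg.qr(rng.standard_normal((n, rank)))
sigma = 10.0 * U @ U.T + np.eye(n)

# One-bit measurements: compare the energy of sigma projected onto two
# random directions, per the abstract's measurement model.
a = rng.standard_normal((m, n))
b = rng.standard_normal((m, n))
bits = np.sign(np.einsum('mi,ij,mj->m', a, sigma, a)
               - np.einsum('mi,ij,mj->m', b, sigma, b))

# Surrogate matrix: bit-weighted combination of the rank-one probes; its
# top eigenvectors estimate the principal subspace (weighting is a guess).
J = (np.einsum('m,mi,mj->ij', bits, a, a)
     - np.einsum('m,mi,mj->ij', bits, b, b)) / m
eigvecs = np.linalg.eigh(J)[1]
U_hat = eigvecs[:, -rank:]          # top-`rank` eigenvectors

# Cosine of the largest principal angle: 1.0 means perfectly aligned.
affinity = np.linalg.svd(U_hat.T @ U, compute_uv=False).min()
```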
Multi-policy posterior sampling for restless Markov bandits
Suleman Alnatheer, H. Man
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032327
This paper considers the multi-armed restless bandits problem, where each arm has time-varying rewards generated from an unknown two-state discrete-time Markov process. Each chain is assumed to be irreducible, aperiodic, and non-reactive to the agent's actions. No optimal solution or constant-factor approximation exists for all instances of the restless bandits problem; in fact, the problem has been proven intractable even when all parameters are deterministic. A polynomial-time algorithm is proposed that learns the transition parameters of each arm and selects the perceived optimal policy from a set of predefined policies using belief probability distributions. More precisely, the proposed algorithm compares the mean reward of consistently staying with the best perceived arm against the mean reward of a myopically accessed combination of arms, using randomized probability matching, better known as Thompson sampling. Empirical evaluations presented at the end of the paper show improved performance over existing algorithms in all instances of the problem except a small set of instances where the arms are similar and bursty.
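A minimal single-policy posterior-sampling sketch for two-state restless arms — not the paper's multi-policy comparison — assuming Beta priors on the unknown transition probabilities and belief propagation for unobserved arms. Arm count, horizon, and transition matrices are invented for the demo.

```python
import numpy as np

class RestlessTSBandit:
    """Posterior (Thompson) sampling for two-state restless Markov arms.

    Each arm is a chain on states {0, 1} with reward equal to its state.
    Beta posteriors are kept on p01 = P(0->1) and p11 = P(1->1); beliefs
    of unobserved arms drift under the posterior-mean dynamics.
    """

    def __init__(self, n_arms, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n = n_arms
        self.c01 = np.ones((n_arms, 2))     # [count 0->1, count 0->0]
        self.c11 = np.ones((n_arms, 2))     # [count 1->1, count 1->0]
        self.belief = np.full(n_arms, 0.5)    # P(state == 1)
        self.last_obs = np.full(n_arms, -1)   # last observed state, -1 = none

    def select(self):
        # Sample transition parameters, predict each arm's probability of
        # paying out next step, and play the arm with the best prediction.
        p01 = self.rng.beta(self.c01[:, 0], self.c01[:, 1])
        p11 = self.rng.beta(self.c11[:, 0], self.c11[:, 1])
        return int(np.argmax(self.belief * p11 + (1 - self.belief) * p01))

    def update(self, arm, state):
        prev = self.last_obs[arm]
        if prev == 0:
            self.c01[arm, 0 if state == 1 else 1] += 1
        elif prev == 1:
            self.c11[arm, 0 if state == 1 else 1] += 1
        self.last_obs[arm] = state
        self.belief[arm] = float(state)
        for k in range(self.n):             # propagate unobserved beliefs
            if k != arm:
                m01 = self.c01[k, 0] / self.c01[k].sum()
                m11 = self.c11[k, 0] / self.c11[k].sum()
                self.belief[k] = self.belief[k] * m11 + (1 - self.belief[k]) * m01

# Two restless arms: arm 0 is mostly ON, arm 1 mostly OFF.
P = [np.array([[0.2, 0.8], [0.1, 0.9]]),   # rows: from-state, cols: to-state
     np.array([[0.9, 0.1], [0.8, 0.2]])]
env_rng = np.random.default_rng(42)
states = [1, 0]
agent = RestlessTSBandit(2, seed=7)
reward = 0
for _ in range(500):
    arm = agent.select()
    # All arms evolve (restless), but only the played arm is observed.
    states = [env_rng.choice(2, p=P[k][s]) for k, s in enumerate(states)]
    reward += int(states[arm])
    agent.update(arm, states[arm])
```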
Randomized Kaczmarz algorithms: Exact MSE analysis and optimal sampling probabilities
Ameya Agaskar, C. Wang, Yue M. Lu
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032145
The Kaczmarz method, or the algebraic reconstruction technique (ART), is a popular method for solving large-scale overdetermined systems of equations. Recently, Strohmer et al. proposed the randomized Kaczmarz algorithm, an improvement that guarantees exponential convergence to the solution. This has spurred much interest in the algorithm and its extensions. We provide in this paper an exact formula for the mean squared error (MSE) in the value reconstructed by the algorithm. We also compute the exponential decay rate of the MSE, which we call the "annealed" error exponent. We show that the typical performance of the algorithm is far better than the average performance. We define the "quenched" error exponent to characterize the typical performance. This is far harder to compute than the annealed error exponent, but we provide an approximation that matches empirical results. We also explore optimizing the algorithm's row-selection probabilities to speed up its convergence.
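The baseline algorithm the paper analyzes is compact enough to sketch in full: a minimal NumPy version of randomized Kaczmarz with the standard squared-row-norm sampling probabilities (problem size and iteration count here are arbitrary).

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent overdetermined system Ax = b by random projections.

    Rows are sampled with probability proportional to their squared norms
    (the schedule analyzed by Strohmer et al.); each step projects the
    iterate onto the hyperplane of the chosen equation.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum('ij,ij->i', A, A)    # squared norm of each row
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project onto the hyperplane {x : A[i] @ x = b[i]}.
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

# Consistent Gaussian system: the iterates converge to the exact solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
x_hat = randomized_kaczmarz(A, A @ x_true)
```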
2D instantaneous frequency-based method for motion estimation using total variation
V. Murray, P. Rodríguez, M. Pattichis
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032273
We present a first approach to a new method for motion estimation in digital videos using two-dimensional instantaneous frequency information computed with amplitude-modulation frequency-modulation (AM-FM) methods. The optical flow vectors are computed using an iteratively reweighted norm for total variation (IRN-TV) algorithm. On synthetic videos, we compare the proposed method against a previous three-dimensional AM-FM-based method and established motion estimation methods, namely a phase-based method, Horn-Schunck, and Lucas-Kanade. The results are promising, producing a full-density estimation that is more accurate than the other methods.
Augmented speech production based on real-time statistical voice conversion
T. Toda
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032186
In human-to-human speech communication, various barriers are caused by constraints such as physical constraints causing vocal disorders and environmental constraints making it hard to produce intelligible speech. These barriers could be overcome if our speech production were augmented so that we could produce the speech sounds we want beyond these constraints. Voice conversion (VC) is a technique for modifying speech acoustics, converting non-/para-linguistic information into any form we want while preserving the linguistic content. One of the most popular approaches to VC is based on statistical processing, which is capable of extracting a complex conversion function in a data-driven manner. Although this technique was originally studied in the context of speaker conversion, which converts the voice of a certain speaker to sound like that of another specific speaker, it has great potential to achieve various applications beyond speaker conversion. This paper briefly reviews a trajectory-based conversion method capable of effectively reproducing natural speech parameter trajectories utterance by utterance, and highlights several techniques that extend this method to achieve real-time conversion processing. Finally, this paper shows some examples of real-time VC applications that enhance human-to-human speech communication, such as speaking aids, silent speech communication, and voice changers/vocal effectors.
Power-saving heterogeneous networks through optimal small-cell scheduling
Shijie Cai, Lingjie Duan, Jing Wang, Rui Zhang
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032104
Traditional macro-cell networks are experiencing an explosion of data traffic, and small-cells can efficiently solve this problem by offloading traffic from macro-cells. Given the massive number of small-cells deployed in each over-crowded macro-cell, their aggregate power consumption (though low for an individual small-cell) can be larger than that of a macro-cell. To reduce the total power consumption of a whole heterogeneous network (HetNet) comprising macro-cells and small-cells, we dynamically schedule the operating modes of all small-cells (active or sleeping) in each macro-cell, while keeping the macro-cell active to avoid any service failure in coverage. When mobile users (MUs) are homogeneously distributed in a macro-cell according to a Poisson point process (PPP), we propose an optimal location-based scheduling scheme that progressively decides the states of small-cells according to their distances to the corresponding macro-cell base station. Finally, we turn to the more general case where MUs are heterogeneously distributed across small-cells. We first prove that the optimal scheduling problem is NP-hard, and then propose a location-and-coverage-based scheduling algorithm that gives a suboptimal solution in polynomial time. Simulation results show that the performance loss of the proposed algorithm is less than 1 percent in terms of network power consumption.
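To make the scheduling idea concrete, here is a toy sketch of a location-based progressive schedule under an invented linear power model: the macro BS stays on, and small cells are swept in order of distance to the macro BS, each activation kept only if it lowers total power. All power figures, radii, and densities are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy layout: macro BS at the origin; a Poisson number of MUs uniform in a
# 500 m disk; 10 candidate small cells (SCs) in the same disk.
n_mu = rng.poisson(80)
r, th = 500 * np.sqrt(rng.random(n_mu)), 2 * np.pi * rng.random(n_mu)
mu_xy = np.c_[r * np.cos(th), r * np.sin(th)]
rs, ts = 450 * np.sqrt(rng.random(10)), 2 * np.pi * rng.random(10)
sc_xy = np.c_[rs * np.cos(ts), rs * np.sin(ts)]
SC_RADIUS = 120.0

# Invented power figures (watts), for illustration only.
P_MACRO_STATIC, P_SC_STATIC = 130.0, 6.0
P_MACRO_PER_MU, P_SC_PER_MU = 1.0, 0.3

def total_power(active):
    """Macro always on (no coverage holes); active SCs offload nearby MUs."""
    covered = np.zeros(n_mu, dtype=bool)
    p = P_MACRO_STATIC
    for k in np.flatnonzero(active):
        in_cell = ~covered & (np.hypot(*(mu_xy - sc_xy[k]).T) < SC_RADIUS)
        p += P_SC_STATIC + P_SC_PER_MU * in_cell.sum()
        covered |= in_cell
    return p + P_MACRO_PER_MU * (~covered).sum()

# Location-based progressive schedule: sweep SCs by distance to the macro
# BS, keeping an activation only if it lowers total network power.
order = np.argsort(np.hypot(sc_xy[:, 0], sc_xy[:, 1]))
active = np.zeros(10, dtype=bool)
for k in order:
    trial = active.copy(); trial[k] = True
    if total_power(trial) < total_power(active):
        active = trial
```

By construction the accepted schedule never consumes more power than leaving every small cell asleep.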
High-speed multi-block-row layered decoding for Quasi-cyclic LDPC codes
Xinmiao Zhang, Y. Tai
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032068
Quasi-cyclic low-density parity-check (QC-LDPC) codes are used in numerous digital communication and storage systems. Layered LDPC decoding converges faster than flooding-based decoding. To further increase the throughput, multiple block rows of the QC parity-check matrix can be included in a layer. However, the maximum achievable clock frequency of the prior multi-block-row layered decoder is limited by its long critical path. This paper reformulates the involved equations so that the updating of messages belonging to different block rows in a layer does not depend on any common intrinsic message. This enables the removal of one adder and one routing network from the critical path. As a result, the proposed design can reach a substantially higher clock frequency than the prior design and achieves an effective throughput-area tradeoff.
HEVC-based lossless compression of Whole Slide pathology images
Victor Sanchez, Francesc Aulí Llinàs, Joan Bartrina-Rapesta, J. Serra-Sagristà
Pub Date: 2014-12-01 | DOI: 10.1109/GlobalSIP.2014.7032126
This paper proposes an HEVC-based method for the lossless compression of Whole Slide pathology Images (WSIs). Based on the observation that WSIs usually feature a high number of edges and multidirectional patterns, owing to the great variety of cellular structures and tissues depicted, we combine the advantages of sample-by-sample differential pulse code modulation (SbS-DPCM) and edge prediction in the intra coding process. The objective is to enhance the prediction performance where strong edge information is encountered. This paper also proposes an implementation of the decoding process that maintains the block-wise coding structure of HEVC when SbS-DPCM and edge prediction are employed. Experimental results on various WSIs show that the proposed method attains average bit-rate savings of 7.67%.
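SbS-DPCM itself is easy to illustrate outside HEVC. The sketch below predicts each sample from its left neighbor (first column from the sample above) and shows the lossless round trip; it is a generic illustration, not the paper's intra-coding integration or its edge-prediction blend.

```python
import numpy as np

def dpcm_encode(block):
    """Sample-by-sample DPCM: each sample is predicted by its left
    neighbor; the first column is predicted by the sample above
    (top-left sample is sent verbatim). Returns integer residuals."""
    res = block.astype(np.int32)
    res[:, 1:] -= block[:, :-1]     # horizontal prediction residuals
    res[1:, 0] -= block[:-1, 0]     # vertical prediction for first column
    return res

def dpcm_decode(res):
    """Invert the prediction by cumulative sums -- exactly lossless."""
    out = res.astype(np.int32)
    out[:, 0] = np.cumsum(res[:, 0])       # rebuild the first column
    return np.cumsum(out, axis=1)          # then each row left-to-right

# Round trip on a random 8-bit block.
rng = np.random.default_rng(0)
blk = rng.integers(0, 256, (8, 8), dtype=np.uint8)
rec = dpcm_decode(dpcm_encode(blk))
```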