Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032165
Jing Yang, Zuoen Wang, Jingxian Wu
In this paper, we study the level set estimation of a spatially and temporally correlated random field using a small number of spatially distributed sensors. The level sets of a random field are defined as the regions where data values exceed a certain threshold. We propose a new active sparse sensing and inference scheme that can accurately extract the level sets of a large random field with a small number of sensors strategically and sparsely placed in the field. In the proposed scheme, a central controller dynamically selects a small number of sensing locations according to the information revealed by past measurements, with the objective of minimizing the expected level set estimation error. The expected estimation error is explicitly expressed as a function of the sensing locations, and the result is used to formulate optimal and sub-optimal selections of the sensing locations. Simulation results demonstrate that the proposed algorithms achieve significant performance gains over baseline passive sensing algorithms that do not proactively select sensing locations.
Title: Level set estimation with dynamic sparse sensing
Published in: 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
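The abstract defines a level set as the region where the field's values exceed a threshold. As a minimal illustration of that definition (not the paper's estimator), the sketch below thresholds a synthetic correlated 1-D field; the field construction and threshold value are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spatially correlated 1-D field: white noise smoothed by a moving average.
field = np.convolve(rng.standard_normal(200), np.ones(10) / 10, mode="same")

threshold = 0.0                         # illustrative threshold (arbitrary choice)
level_set = field > threshold           # Boolean mask: the level set of the field
exceed_idx = np.flatnonzero(level_set)  # locations where the field exceeds the threshold
```

The paper's contribution is choosing *where to sense* so that this mask is estimated accurately from few measurements; the sketch only shows what is being estimated.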
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032087
Xiang Chen, Wei Chen
Joint channel-aware and buffer-aware scheduling, together with rate/power adaptation, is a promising way to assure Quality of Service (QoS) and improve energy efficiency. In this paper, an analytical delay-power tradeoff and an optimal threshold-based scheduling policy are presented. In particular, we are interested in scheduling policies where the scheduler only needs to decide whether or not to transmit. More specifically, we focus on the slow-fading scenario, where the channel coherence time is long enough to transmit several data packets. We formulate a linear programming problem in which the average delay is minimized subject to an average power constraint. By deriving the analytical solution to this linear program, we obtain the optimal delay-power tradeoff and the optimal scheduling policy.
Title: A joint channel-aware and buffer-aware scheduling for energy-efficient transmission over fading channels with long coherent time
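The LP in the abstract minimizes average delay under an average power constraint. A two-action toy version (transmit vs. idle, with hypothetical delay and power numbers; not the paper's actual formulation) makes the threshold-type optimum visible in closed form:

```python
def optimal_schedule(d_tx, d_idle, p_tx, p_max):
    """Two-action toy version of the delay-power LP (our simplification):
    minimize the average delay x*d_tx + (1-x)*d_idle over the transmit
    fraction x, subject to x*p_tx <= p_max and 0 <= x <= 1.  With
    d_tx < d_idle the LP optimum transmits as often as the power budget
    allows, i.e. a threshold-type time-sharing policy."""
    x = min(1.0, p_max / p_tx)                # largest feasible transmit fraction
    avg_delay = x * d_tx + (1.0 - x) * d_idle
    return x, avg_delay

x, delay = optimal_schedule(d_tx=1.0, d_idle=10.0, p_tx=4.0, p_max=2.0)
```

Here x = 0.5 and the average delay is 5.5; sweeping p_max traces out the delay-power tradeoff curve the paper characterizes analytically.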
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032126
Victor Sanchez, Francesc Aulí Llinàs, Joan Bartrina-Rapesta, J. Serra-Sagristà
This paper proposes an HEVC-based method for lossless compression of Whole Slide pathology Images (WSIs). Based on the observation that WSIs usually feature a high number of edges and multidirectional patterns due to the great variety of cellular structures and tissues depicted, we combine the advantages of sample-by-sample differential pulse code modulation (SbS-DPCM) and edge prediction into the intra coding process. The objective is to enhance the prediction performance where strong edge information is encountered. This paper also proposes an implementation of the decoding process that maintains the block-wise coding structure of HEVC when SbS-DPCM and edge prediction are employed. Experimental results on various WSIs show that the proposed method attains average bit-rate savings of 7.67%.
Title: HEVC-based lossless compression of Whole Slide pathology images
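Sample-by-sample DPCM, one ingredient of the proposed intra coding, predicts each sample from an already-decoded neighbour and codes only the residual, which is exactly invertible. A minimal 1-D sketch with a left-neighbour predictor (the paper combines SbS-DPCM with edge prediction inside HEVC blocks; function names are ours):

```python
import numpy as np

def dpcm_encode(samples):
    """Residuals of a left-neighbour predictor (lossless)."""
    samples = np.asarray(samples, dtype=np.int64)
    residuals = np.empty_like(samples)
    residuals[0] = samples[0]                  # first sample sent as-is
    residuals[1:] = samples[1:] - samples[:-1] # predict each sample from its left neighbour
    return residuals

def dpcm_decode(residuals):
    return np.cumsum(residuals)                # exactly inverts the differencing

pixels = [10, 12, 11, 15]
assert dpcm_decode(dpcm_encode(pixels)).tolist() == pixels
```

The residuals are typically much smaller than the raw samples, which is what makes the subsequent entropy coding effective on the smooth parts of a slide.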
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032186
T. Toda
In human-to-human speech communication, various barriers arise from constraints such as physical constraints causing vocal disorders and environmental constraints that make it hard to produce intelligible speech. These barriers could be overcome if our speech production were augmented so that we could produce the speech sounds we want despite these constraints. Voice conversion (VC) is a technique for modifying speech acoustics, converting non-/para-linguistic information into any desired form while preserving the linguistic content. One of the most popular approaches to VC is based on statistical processing, which is capable of extracting a complex conversion function in a data-driven manner. Although this technique was originally studied in the context of speaker conversion, which converts the voice of a certain speaker to sound like that of another specific speaker, it has great potential for applications beyond speaker conversion. This paper briefly reviews a trajectory-based conversion method that effectively reproduces natural speech parameter trajectories utterance by utterance, and highlights several techniques that extend this method to achieve real-time conversion processing. Finally, this paper shows some examples of real-time VC applications that enhance human-to-human speech communication, such as speaking aids, silent speech communication, and voice changers/vocal effectors.
Title: Augmented speech production based on real-time statistical voice conversion
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032279
Jen-Wen Wang, C. Chiu
Full-image-based motion prediction is widely used in video super-resolution (VSR); it produces outstanding results for arbitrary scenes but incurs a huge computational cost. In this paper, we propose an edge-based motion and intensity prediction scheme that reduces the computation cost while maintaining sufficient quality. The key to reducing the computation cost is to focus on the extracted edges of the video sequence, in accordance with the human visual system (HVS). Bi-directional optical flow is usually adopted to increase prediction accuracy, but it also increases the computation time. Here we propose to obtain the backward flow from the foregoing forward-flow prediction, which effectively avoids this heavy load. We perform a series of experiments comparing existing VSR methods with our proposed edge-based method on different sequences and upscaling factors. The results reveal that our proposed scheme successfully preserves the quality of the super-resolved sequence while achieving about a 4x speed-up in computation time.
Title: Edge-based motion and intensity prediction for video super-resolution
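The abstract's key saving is deriving the backward flow from the already-computed forward flow instead of running a second optical-flow pass. A toy numpy sketch of that reuse (nearest-pixel splatting; the function name and the handling of collisions and holes are simplifying assumptions of ours):

```python
import numpy as np

def backward_from_forward(flow):
    """Approximate the backward optical flow by negating the forward flow and
    placing it at each pixel's forward-warped (rounded, clipped) location."""
    h, w = flow.shape[:2]
    back = np.zeros_like(flow)
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    back[ty, tx] = -flow                 # backward flow at the target undoes the motion
    return back

# Constant rightward motion of one pixel on a 4x4 grid.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
back = backward_from_forward(flow)
```

Un-splatted pixels are left at zero here; a real implementation would fill such holes, but the point is that no second flow estimation is needed.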
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032151
Yuejie Chi
This paper proposes a simple sensing and estimation framework, called one-bit sketching, to faithfully recover the principal subspace of a data stream or dataset from a set of one-bit measurements collected at distributed sensors. Each bit indicates the comparison outcome between energy projections of the local sample covariance matrix onto a pair of random directions. By leveraging low-dimensional structures, the top eigenvectors of a properly designed surrogate matrix are shown to recover the principal subspace as soon as the number of bit measurements exceeds a certain threshold. The sample complexity required to obtain reliable comparison outcomes is also derived. We further develop a low-complexity algorithm to estimate the principal subspace in an online fashion when the bits arrive sequentially at the fusion center. Numerical examples on line spectrum estimation are provided to validate the proposed approach.
Title: One-bit principal subspace estimation
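Following the abstract's description, each bit compares the energy of the sample covariance along two random directions, sign(aᵀCa − bᵀCb), and a surrogate matrix is assembled from the bits. The sketch below is one plausible reading of that recipe; the paper's exact surrogate construction and guarantees may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 400                        # ambient dimension, number of one-bit measurements
X = rng.standard_normal((500, n)) @ rng.standard_normal((n, n))  # correlated data
C = X.T @ X / X.shape[0]             # local sample covariance matrix

A = rng.standard_normal((m, n))      # random directions a_i
B = rng.standard_normal((m, n))      # random directions b_i
bits = np.sign(np.einsum('ij,jk,ik->i', A, C, A)      # a_i^T C a_i
               - np.einsum('ij,jk,ik->i', B, C, B))   # minus b_i^T C b_i

# Surrogate matrix built from the bits; its top eigenvectors estimate the subspace.
S = (bits[:, None, None] * (A[:, :, None] * A[:, None, :]
                            - B[:, :, None] * B[:, None, :])).mean(axis=0)
subspace_est = np.linalg.eigh(S)[1][:, -2:]  # top-2 eigenvectors (rank-2 choice is ours)
```

Note that only the m bits, not the covariance itself, would be sent to the fusion center; the surrogate is formed from the bits and the (shared) random directions.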
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032198
Shiva Kiran, S. Hoyos, S. Palermo
Some proposed high-speed wireline communication systems use an ADC front end so that a feedforward equalizer (FFE) can compensate for the frequency-dependent loss of the channel. High-precision ADCs are expensive in terms of power. The FFE block performs multiplication and addition operations at high speed and further increases the power consumption. This paper proposes a simple forward error correction method by which the ADC resolution and the equalizer complexity can be reduced. A single parity check code implemented together with a threshold detector can provide single-error-correction capability. With this capability, the number of taps required in the FFE block is shown to be reduced from 6 to 3 for a channel with 15 dB insertion loss at 5 GHz and a data rate of 20 Gb/s. The effective number of bits (ENOB) required from the ADC is also shown to be reduced from 6 bits to about 3.5 bits. The high rate of the code and the very simple decoder architecture make this error correction mechanism well suited for wireline applications.
Title: A single parity check forward error correction method for high speed I/O
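A single parity check code with a threshold detector can correct one error per codeword: when the parity check fails, flip the bit whose received sample was closest to the decision threshold. A sketch under the usual bipolar-signalling assumption (function names are ours):

```python
import numpy as np

def spc_encode(bits):
    """Append an even-parity bit."""
    bits = np.asarray(bits, dtype=int)
    return np.append(bits, bits.sum() % 2)

def spc_decode(soft):
    """Hard-decide noisy bipolar samples; on a parity violation, flip the
    least reliable position (the sample closest to the decision threshold)."""
    hard = (soft < 0).astype(int)            # threshold detector: +amp -> 0, -amp -> 1
    if hard.sum() % 2 != 0:                  # parity fails: assume a single error
        hard[np.argmin(np.abs(soft))] ^= 1   # flip the lowest-confidence bit
    return hard[:-1]                         # strip the parity bit

# One corrected error: transmit [1, 0, 1, 1], corrupt one sample near zero.
cw = spc_encode([1, 0, 1, 1])
soft = 1.0 - 2.0 * cw                        # bipolar mapping: 0 -> +1, 1 -> -1
soft[2] = 0.1                                # channel noise flips bit 2's sign
decoded = spc_decode(soft)                   # recovers [1, 0, 1, 1]
```

Because errors after a weak equalizer tend to occur at the least reliable sample, this cheap decoder recovers much of the margin that extra FFE taps or ADC bits would otherwise provide.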
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032156
Heng Qiao, P. Pal
This paper considers the problem of estimating the symmetric Toeplitz covariance matrix of wide-sense stationary random vectors from compressive samples. A new structured deterministic sampling method, known as "Generalized Nested Sampling", is introduced that enables compressive quadratic sampling of symmetric Toeplitz matrices by fully exploiting the inherent redundancy in the Toeplitz structure. For a Toeplitz matrix of size N × N, this sampling scheme can attain a compression factor of O(√N) even without assuming sparsity and/or low rank, and allows exact recovery of the original Toeplitz matrix. When the matrix is sparse, a new hybrid sampling approach is proposed that efficiently combines Generalized Nested Sampling and random sampling to attain even greater compression rates, which, under suitable conditions, can be as large as O(√N), using a novel observation formulated in this paper.
Title: Generalized nested sampling for compression and exact recovery of symmetric Toeplitz matrices
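The O(√N) compression comes from Toeplitz redundancy: an N × N symmetric Toeplitz matrix has only N distinct entries, and a sparse ruler of O(√N) indices has pairwise differences covering every lag. The sketch below illustrates the idea with direct entry sampling for N = 9; the paper's scheme works on compressive quadratic samples, which this simplified analogue omits:

```python
import numpy as np

N = 9
t = np.arange(N, 0, -1, dtype=float)                 # hypothetical covariance lags t[0..N-1]
T = t[np.abs(np.arange(N)[:, None] - np.arange(N))]  # symmetric Toeplitz: T[i, j] = t[|i-j|]

ruler = [0, 1, 2, 5, 8]        # sparse ruler: pairwise differences cover 0, 1, ..., 8
recovered = np.zeros(N)
for i in ruler:
    for j in ruler:
        recovered[abs(i - j)] = T[i, j]              # every lag is hit by some sampled pair

# 5 = O(sqrt(9)) indices suffice to recover all 9 distinct entries of T exactly.
```

No sparsity or low-rank assumption was used; exact recovery here relies purely on the Toeplitz structure, which is the abstract's central claim.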
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032104
Shijie Cai, Lingjie Duan, Jing Wang, Rui Zhang
Traditional macro-cell networks are experiencing an explosion of data traffic, and small cells can efficiently relieve this problem by offloading traffic from the macro-cells. With massive numbers of small cells deployed in each over-crowded macro-cell, their aggregate power consumption (though low individually) can exceed that of a macro-cell. To reduce the total power consumption of a whole heterogeneous network (HetNet) consisting of macro-cells and small cells, we dynamically schedule the operating modes of all small cells (active or sleeping) in each macro-cell, while keeping the macro-cell active to avoid any coverage failure. When mobile users (MUs) are homogeneously distributed in a macro-cell according to a Poisson point process (PPP), we propose an optimal small-cell location-based scheduling scheme that progressively decides the states of the small cells according to their distances to the corresponding macro-cell base station. Finally, we turn to the more general case where MUs are heterogeneously distributed across different small cells. We first prove that the optimal scheduling problem is NP-hard, and then propose a location-and-coverage-based scheduling algorithm that gives a suboptimal solution in polynomial time. Simulation results show that the performance loss of our proposed algorithm is less than 1% in terms of network power consumption.
Title: Power-saving heterogeneous networks through optimal small-cell scheduling
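As a toy illustration of location-based small-cell scheduling (the ordering rule, load model, and stopping criterion here are our assumptions, not the paper's algorithm), a greedy pass can activate small cells in distance order until a target offload is met:

```python
def schedule_small_cells(distances, loads, target_offload):
    """Toy greedy location-based scheduler: visit small cells in increasing
    distance from the macro base station and activate them until the
    offloaded load reaches the target."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    active, offloaded = [], 0.0
    for i in order:
        if offloaded >= target_offload:
            break                        # remaining cells stay in sleep mode
        active.append(i)
        offloaded += loads[i]
    return active
```

A distance-ordered greedy pass like this runs in O(n log n), in contrast to the NP-hard exact problem the paper analyzes for heterogeneous user distributions.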
Pub Date: 2014-12-01. DOI: 10.1109/GlobalSIP.2014.7032068
Xinmiao Zhang, Y. Tai
Quasi-cyclic low-density parity-check (QC-LDPC) codes are used in numerous digital communication and storage systems. Layered LDPC decoding converges faster than flooding-based decoding. To further increase the throughput, multiple block rows of the QC parity check matrix can be included in a layer. However, the maximum achievable clock frequency of prior multi-block-row layered decoders is limited by a long critical path. This paper reformulates the involved equations so that the updating of messages belonging to different block rows in a layer does not depend on any common intrinsic message. This enables the removal of one adder and one routing network from the critical path. As a result, the proposed design can reach a substantially higher clock frequency than prior designs and achieves an effective throughput-area tradeoff.
Title: High-speed multi-block-row layered decoding for Quasi-cyclic LDPC codes