Buffering requirements for variable-iterations LDPC decoders
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601025
S. L. Sweatlock, S. Dolinar, K. Andrews
Low-density parity-check (LDPC) decoders, like iterative decoders for other block codes, can be designed to stop after a variable number of iterations, dependent on the difficulty of decoding particular noisy received words, also called frames. The number of iterations the decoder spends on a given frame determines both the probability of successful decoding and the time expended. But whereas the speed of an LDPC decoder without a buffer is determined by its most difficult frames, the speed of a variable-iterations decoder with sufficient buffering approaches that determined by frames of average difficulty. It is relatively straightforward to analyze this as a D/G/1 queuing problem combined with empirically measured probability distributions of iteration counts for specific LDPC codes. Our analysis parallels that of other researchers, e.g., (J. Vogt and A. Finger, 2001), (G. Bosco et al., 2005), (M. Rovini and A. Martinez, 2007), and examines the resulting implications for LDPC decoder design choices. We find that a buffer large enough to hold only B = 2 or 3 additional frames is sufficient to achieve near-optimal performance. We prove a strong monotonicity condition: not only does a variable-iterations decoder with buffer size B + 1 frames outperform one with buffer size B in terms of average error rate, but every single frame is guaranteed to receive at least as many iterations from the decoder with the larger buffer, if needed. Significantly, at low error rates, a variable-iterations decoder with buffer size B can keep pace with an input data rate B + 1 times faster than a fixed-iterations decoder with the same processing speed.
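The D/G/1 framing above is easy to explore numerically. Below is a minimal toy sketch, not the authors' analysis: frames arrive deterministically, iteration counts are drawn from a geometric-like stand-in for the empirically measured distributions, and a frame is truncated whenever finishing it would let the backlog exceed a buffer of B additional frames.

```python
import random

def simulate(buffer_frames=2, arrival_period=12.0, iters_mean=8.0,
             n_frames=200_000, seed=1):
    """Toy D/G/1 model of a variable-iterations decoder.

    Frames arrive deterministically every `arrival_period` iteration-times.
    Each frame needs a random number of iterations (a geometric-like
    stand-in for an empirically measured distribution).  The decoder may
    fall behind by at most `buffer_frames` frames; when completing the
    current frame would exceed that backlog, it is stopped early and
    counted as truncated.
    """
    random.seed(seed)
    finish_time = 0.0          # time at which the decoder becomes free
    truncated = 0
    rate = 1.0 / iters_mean
    for k in range(n_frames):
        arrival = k * arrival_period
        start = max(arrival, finish_time)
        need = 1 + int(random.expovariate(rate))   # iterations requested
        # latest completion time that keeps the backlog within the buffer:
        deadline = arrival + (buffer_frames + 1) * arrival_period
        if start + need > deadline:
            truncated += 1
            finish_time = deadline
        else:
            finish_time = start + need
    return truncated / n_frames

for B in range(5):
    print("B =", B, " truncation fraction =", simulate(buffer_frames=B))
```

In this toy model the truncation rate necessarily drops as B grows (the deadline only moves out), mirroring the qualitative finding above; B = 0 corresponds to the fixed-iterations case.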
{"title":"Buffering requirements for variable-iterations LDPC decoders","authors":"S. L. Sweatlock, S. Dolinar, K. Andrews","doi":"10.1109/ITA.2008.4601025","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601025","url":null,"abstract":"Low-density parity-check (LDPC) decoders, like iterative decoders for other block codes, can be designed to stop after a variable number of iterations, dependent on the difficulty of decoding particular noisy received words, also called frames. The number of iterations the decoder spends on a given frame determines both the probability of successful decoding, and the time expended. But whereas the speed of an LDPC decoder without a buffer is determined by its most difficult frames, the speed of a variable-iterations decoder with sufficient buffering approaches that determined by frames of average difficulty. It is relatively straightforward to analyze this as a D/G/1 queuing problem combined with empirically measured probability distributions of iteration counts for specific LDPC codes. Our analysis parallels that of other researchers, e.g., (J. Vogt and A. Finger, 2001), (G. Bosco et al., 2005), (M. Rovini and A. Martinez, 2007), and examines the resulting implications on LDPC decoder design choices. We find that a buffer large enough to hold only B = 2 or 3 additional frames is sufficient to achieve near optimal performance. We prove a strong monotonicity condition: not only does a variable-iterations decoder with buffer size B +1 frames outperform one with buffer size B in terms of average error rate, every single frame is guaranteed to receive at least as many iterations from the decoder with the larger buffer, if needed. Significantly, at low error rates, a variable-iterations decoder with buffer size B can keep pace with an input data rate B +1 times faster than a fixed-iterations decoder with the same processing speed.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125325848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive rateless coding under partial information
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601012
S. Agarwal, A. Hagedorn, A. Trachtenberg
We present novel rateless codes that generalize and outperform LT codes (with respect to overall communication and computation complexity) when some input symbols are already available at the decoding host. This case can occur in data synchronization scenarios, or where feedback is provided or can be inferred from transmission channel models. We provide analysis and experimental evidence of this improvement, and demonstrate the efficiency of the new code through implementation on highly constrained sensor devices.
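As a heavily simplified illustration of exploiting symbols already available at the decoder, the sketch below builds a toy LT-style fountain code and a peeling decoder that first substitutes the receiver's known symbols into the received XOR equations. The uniform degree distribution and all helper names are ours, not the paper's construction.

```python
import random

def lt_encode(symbols, n_out, seed=0):
    """Toy LT-style encoder: each output symbol is the XOR of a small random
    subset of input symbols (uniform degree 1..3 here instead of a Soliton
    distribution -- illustration only)."""
    rng = random.Random(seed)
    k = len(symbols)
    out = []
    for _ in range(n_out):
        idx = set(rng.sample(range(k), rng.randint(1, 3)))
        val = 0
        for i in idx:
            val ^= symbols[i]
        out.append((idx, val))
    return out

def peel_decode(encoded, k, known=None):
    """Peeling decoder that first uses any symbols the receiver already has
    (`known`: dict index -> value), then iteratively resolves degree-one
    equations."""
    rec = {} if known is None else dict(known)
    progress = True
    while progress and len(rec) < k:
        progress = False
        for idx, val in encoded:
            live = idx - rec.keys()
            if len(live) == 1:
                i = next(iter(live))
                v = val
                for j in idx - live:
                    v ^= rec[j]
                rec[i] = v
                progress = True
    return rec

k = 20
data = [random.randrange(256) for _ in range(k)]
enc = lt_encode(data, 30)
known = {i: data[i] for i in range(5)}      # receiver already holds 5 symbols
decoded = peel_decode(enc, k, known)
print(len(decoded), "of", k, "symbols recovered")
```

Symbols already held by the receiver shrink the effective equations immediately, so fewer coded symbols (and fewer XORs) are needed, which is the effect the paper quantifies and optimizes.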
{"title":"Adaptive rateless coding under partial information","authors":"S. Agarwal, A. Hagedorn, A. Trachtenberg","doi":"10.1109/ITA.2008.4601012","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601012","url":null,"abstract":"We present novel rateless codes that generalize and outperform LT codes (with respect to overall communication and computation complexity) when some input symbols are already available at the decoding host. This case can occur in data synchronization scenarios, or where feedback is provided or can be inferred from transmission channel models. We provide analysis and experimental evidence of this improvement, and demonstrate the efficiency of the new code through implementation on highly constrained sensor devices.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125245327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The application of distributed spectrum sensing and available resource maps to cognitive radio systems
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601095
C. da Silva, W. Headley, J. Reed, Youping Zhao
In order for cognitive radio systems to fulfill their potential of enabling more efficient spectrum utilization by means of opportunistic spectrum use, significant advances must be made in the areas of spectrum sensing and "cognitive" spectrum access. In this paper, we discuss two research efforts relevant to these areas, namely the development of distributed (cyclic feature-based) spectrum sensing algorithms and of available-resource-map-based cognitive radio systems. It is shown that distributed spectrum sensing is a practical and efficient approach to increase the probability of signal detection and correct modulation classification and/or to reduce the sensitivity requirements of individual radios. Additionally, numerical results are presented that show significant reduction of harmful interference and greater spectrum utilization efficiency for available-resource-map-based cognitive radio systems.
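As a back-of-the-envelope illustration of why combining per-radio decisions helps, consider OR-rule hard-decision fusion of independent detectors; the per-radio numbers below are illustrative and not taken from the paper.

```python
# OR-rule fusion of independent per-radio decisions: the band is declared
# occupied if any radio raises a flag.  Illustrative numbers: each radio
# alone detects with probability 0.60 at a 1% false-alarm rate.
p_d, p_fa = 0.60, 0.01
for n in (1, 2, 4, 8):
    P_D = 1 - (1 - p_d) ** n    # at least one radio detects
    P_FA = 1 - (1 - p_fa) ** n  # at least one radio false-alarms
    print(f"{n} radios: P_D = {P_D:.3f}, P_FA = {P_FA:.3f}")
```

Even a handful of cooperating radios pushes the detection probability close to one, at the cost of a higher aggregate false-alarm rate that the per-radio thresholds must be set to absorb; equivalently, each radio's individual sensitivity requirement can be relaxed.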
{"title":"The application of distributed spectrum sensing and available resource maps to cognitive radio systems","authors":"C. da Silva, W. Headley, J. Reed, Youping Zhao","doi":"10.1109/ITA.2008.4601095","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601095","url":null,"abstract":"In order for cognitive radio systems to fulfill their potential of enabling more efficient spectrum utilization by means of opportunistic spectrum use, significant advances must be made in the areas of spectrum sensing and ldquocognitiverdquo spectrum access. In this paper, we discuss two research efforts relevant to these areas; namely the development of distributed (cyclic feature-based) spectrum sensing algorithms and of available resource maps-based cognitive radio systems. It is shown that distributed spectrum sensing is a practical and efficient approach to increase the probability of signal detection and correct modulation classification and/or to reduce sensitivity requirements of individual radios. Additionally, numerical results are presented that show significant reduction of harmful interference and greater spectrum utilization efficiency of available resource maps-based cognitive radio systems.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"215 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114676882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Layered source-channel coding: A distortion-diversity perspective
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601050
Sheng Jing, Lizhong Zheng, M. Médard
Source-channel coding over time-varying channels without perfect side information at the transmitter suffers from uncertainty which may not always be averaged out. In channel coding, a main approach to address such uncertainty has been the outage formulation. In source coding, the main approaches to deal with such uncertainty have been multiple description coding (MDC) and successive refinement (SR). In this paper, we consider layered source-channel coding schemes relying on the MDC technique, originally proposed by Laneman et al., and the SR technique. We introduce the concept of a distortion-diversity tradeoff, akin to the rate-diversity tradeoff, to assess the performance of these schemes. Our distortion-diversity perspective sheds some light on the comparison between various source-channel coding approaches in different operating regions.
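For reference, the high-SNR exponents commonly used in this line of work are stated below; the paper's distortion-diversity formulation may differ in detail, so these are given only as the standard definitions of diversity gain and distortion exponent.

```latex
% Standard high-SNR definitions (assumed here; the paper's exact
% formulation of its distortion-diversity tradeoff may differ).
\[
  d \;=\; -\lim_{\mathsf{SNR}\to\infty}
           \frac{\log P_{\mathrm{out}}(\mathsf{SNR})}{\log \mathsf{SNR}},
  \qquad
  \Delta \;=\; -\lim_{\mathsf{SNR}\to\infty}
           \frac{\log \mathbb{E}\!\left[D(\mathsf{SNR})\right]}{\log \mathsf{SNR}} .
\]
```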
{"title":"Layered source-channel coding: A distortion-diversity perspective","authors":"Sheng Jing, Lizhong Zheng, M. Médard","doi":"10.1109/ITA.2008.4601050","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601050","url":null,"abstract":"Source-channel coding in time-varying channels without perfect side information at the transmitter suffers from uncertainty which may not always be averaged out. In channel coding, a main approach to address such uncertainty has been the outage formulation. In source coding, the main approaches to deal with such uncertainty have been multiple description coding (MDC) and successive refinement (SR). In this paper, we consider layered source-channel coding schemes relying on the MDC technique, originally proposed by Laneman et al, and the SR technique. We introduce the concept of distortion-diversity tradeoff, akin to the rate-diversity tradeoff, to consider the performance of these schemes. Our distortion-diversity perspective sheds some light on the performance comparison between various source-channel coding approaches in different operation regions.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128770207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint multi-cell processing for downlink channels with limited-capacity backhaul
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601071
S. Shamai, O. Simeone, O. Somekh, H. Poor
Multicell processing in the form of joint encoding for the downlink of a cellular system is studied under the realistic assumption that the base stations (BSs) are connected to a central unit via finite-capacity links (finite-capacity backhaul). Three scenarios are considered that present different trade-offs between global processing at the central unit and local processing at the base stations, and different requirements in terms of codebook information (CI) at the BSs: 1) local encoding with CI limited to a subset of nearby BSs; 2) mixed local and central encoding with only local CI; 3) central encoding with oblivious cells (no CI). Three transmission strategies are proposed that provide achievable rates for the considered scenarios. Performance is evaluated in asymptotic regimes of interest (high backhaul capacity and extreme signal-to-noise ratio, SNR) and further corroborated by numerical results. The major finding of this work is that central encoding with oblivious cells is a very attractive option for both ease of implementation and performance, unless the application of interest requires a high data rate (i.e., high SNR) and the backhaul capacity is not allowed to increase with the SNR. In the latter case, some form of CI at the BSs becomes necessary.
{"title":"Joint multi-cell processing for downlink channels with limited-capacity backhaul","authors":"S. Shamai, O. Simeone, O. Somekh, H. Poor","doi":"10.1109/ITA.2008.4601071","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601071","url":null,"abstract":"Multicell processing in the form of joint encoding for the downlink of a cellular system is studied under the realistic assumption that the base stations (BSs) are connected to a central unit via finite-capacity links (finite-capacity backhaul). Three scenarios are considered that present different trade-offs between global processing at the central unit and local processing at the base stations and different requirements in terms of codebook information (CI) at the BSs: 1) local encoding with CI limited to a subset of nearby BSs; 2) mixed local and central encoding with only local CI; 3) central encoding with oblivious cells (no CI). Three transmission strategies are proposed that provide achievable rates for the considered scenarios. Performance is evaluated in asymptotic regimes of interest (high backhaul capacity and extreme signal-to-noise ratio, SNR) and further corroborated by numerical results. The major finding of this work is that central encoding with oblivious cells is a very attractive option for both ease of implementation and performance, unless the application of interest requires high data rate (i.e., high SNR) and the backhaul capacity is not allowed to increase with the SNR. In this latter cases, some form of CI at the BSs becomes necessary.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123749941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Gaussian K-description problem under symmetric distortion constraints
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601081
C. Tian, S. Mohajer, S. Diggavi
We consider multiple description (MD) coding for the Gaussian source under symmetric mean squared error distortion constraints. Focusing on the three-description problem, we provide inner and outer bounds for the rate region, between which the gap can be bounded by small constants. At the heart of this result is a novel lower bound on the sum rate, derived by generalizing the well-known bounding technique of Ozarow. In contrast to the original method, we expand the probability space by more than one (instead of only one) random variable, and further impose a particular Markov structure on them. The outer bound is then established by applying this technique to several bounding planes of the rate region. For the inner bound, we consider a simple scheme combining successive refinement coding and lossless multilevel diversity coding (MLD). Both the inner and outer bounds can be written as the intersection of ten half-spaces with matching normal directions, and thus can be easily compared. The small gap between them, within which the boundary of the MD rate region clearly resides, suggests the surprising competitiveness of this simple achievability scheme. The geometric structure of the MLD rate region provides important guidance as to the normal directions of the outer-bound hyperplanes, which demonstrates an intimate connection between MD and MLD coding. These results can be generalized and improved in various ways, which are also discussed.
{"title":"On the Gaussian K-description problem under symmetric distortion constraints","authors":"C. Tian, S. Mohajer, S. Diggavi","doi":"10.1109/ITA.2008.4601081","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601081","url":null,"abstract":"We consider multiple description (MD) coding for the Gaussian source under the symmetric mean squared error distortion constraints. With focus on the three description problem, we provide inner and outer bounds for the rate region, between which the gap can be bounded by some small constants. At the heart of this result is a novel lower bound for the sum rate, which is derived through generalization of the well-known bounding technique by Ozarow. In contrast to the original method, we expand the probability space by more than one (instead of only one) random variable, and further impose a particular Markov structure on them. The outer bound is then established by applying this technique to several bounding planes of the rate region. For the inner bound, we consider a simple scheme of combining successive refinement coding and lossless multilevel diversity coding (MLD). Both the inner and outer bounds can be written as the intersection of ten half spaces with matching normal directions, and thus can be easily compared. The small gap between them, where the boundary of the MD rate region clearly resides, suggests the surprising competitiveness of this simple achievability scheme. The geometric structure of the MLD rate region provides important guidelines as to the normal directions of the outer bound hyperplanes, which demonstrates an intimate connection between MD and MLD coding. These results can be generalized and improved in various ways which are also discussed.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131719673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The marginal utility of cooperation in sensor networks
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601058
Yu-Ching Tong, G. Pottie
We present arguments that a small number of sensors within the network provides most of the utility; that is, cooperation among more than a small number of nodes has little benefit. We present two scenarios. In the first, all sensors provide identical utility, and their utilities are aggregated sequentially. The second is sensor fusion with signal strength decreasing with distance; the source is at the origin and the sensors are distributed either uniformly or according to a planar standard normal distribution. We also vary the total number of sensors in both scenarios to observe the utility/density trade-off. Localization, with Fisher information as the utility metric, is used to demonstrate that a few sensors are sufficient to derive most of the utility from the sensor network. Simulation results back up an order-statistics analysis of this behavior. The implication is that while cooperation is useful for some objectives, such as combating fading and the uncertainty of individual sensors, it is inefficient as a means of increasing the utility of a sensor network if the best sensor's utility falls significantly short of the desired utility.
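The order-statistics intuition is easy to reproduce with a crude Monte Carlo sketch in which each sensor's Fisher-information contribution is taken proportional to its received SNR, decaying with distance from a source at the origin. This is a simplified stand-in for the paper's localization metric, not its actual model, but it already shows a few nearby sensors dominating the total.

```python
import math
import random

def fraction_from_top3(n_sensors=100, alpha=2.0, radius=10.0,
                       trials=2000, seed=0):
    """Monte Carlo sketch: source at the origin, sensors uniform in a disk.
    Each sensor's contribution to a scalar information total is taken
    proportional to its received SNR ~ 1/d**alpha (distances below 0.1 are
    clamped to avoid the singularity at the origin)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        info = []
        for _ in range(n_sensors):
            r = radius * math.sqrt(rng.random())   # uniform over the disk
            info.append(1.0 / max(r, 0.1) ** alpha)
        info.sort(reverse=True)
        acc += sum(info[:3]) / sum(info)
    return acc / trials

print("average fraction of total information from the best 3 sensors:",
      round(fraction_from_top3(), 3))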
{"title":"The marginal utility of cooperation in sensor networks","authors":"Yu-Ching Tong, G. Pottie","doi":"10.1109/ITA.2008.4601058","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601058","url":null,"abstract":"We present arguments that a small number of sensors within the network provide most of the utility. That is, cooperation of more than a small number of nodes has little benefit. We present two scenarios. In the first scenario, all sensors provide identical utility, and their utilities are aggregated sequentially. The second scenario is sensor fusion with signal strength decreasing with distance. In that scenario the source is at the origin and the sensors are distributed, either uniformly or according to a planar standard normal distribution. We also vary the total number of sensors distributed in both scenarios to observe the utility/density trade off. Localization using the Fisher information as the utility metric is used to demonstrate that few sensors are sufficient to derive most of the utility out of the sensor network. Simulation results back up an order statistics analysis of the behavior. The implication is that while co-operation is useful for some objectives such as combating fading and uncertainty of individual sensors, it is inefficient as a mean to increase the utility of a sensor network if the best sensorpsilas utility is significantly short of the desired utility.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"71 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128005333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic rate allocation in fading multiple access channels
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601054
Ali ParandehGheibi, A. Eryilmaz, A. Ozdaglar, M. Médard
We consider the problem of rate allocation in a fading Gaussian multiple-access channel (MAC) with fixed transmission powers. Our goal is to maximize a general concave utility function of the transmission rates over the throughput capacity region. In contrast to earlier works in this context, which propose solutions where a potentially complex optimization problem must be solved at every decision instant, we propose a low-complexity approximate rate allocation policy and analyze the effect of temporal channel variations on its utility performance. To the best of our knowledge, this is the first work that studies the tracking capabilities of an approximate rate allocation scheme under fading channel conditions. We build on an earlier work to present a new rate allocation policy for a fading MAC that implements a low-complexity approximate gradient projection iteration for each channel measurement, and we explicitly characterize the effect of the speed of temporal channel variations on the tracking neighborhood of our policy. We further improve our results by proposing an alternative rate allocation policy for which tighter bounds on the size of the tracking neighborhood are derived. The proposed rate allocation policies are computationally efficient in our setting since they implement a single gradient projection iteration per channel measurement, and each such iteration relies on approximate projections that have polynomial complexity in the number of users.
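A stripped-down sketch of the kind of policy described above, for a two-user Gaussian MAC with a log utility: one gradient step per channel measurement, followed by a crude approximate projection onto the current capacity pentagon. The fading model, step size, and projection heuristic are ours, chosen only to illustrate the structure of one iteration per measurement.

```python
import math
import random

def capacity_region_2user(h1, h2, p1=1.0, p2=1.0, n0=1.0):
    """Per-user and sum-rate constraints of the 2-user Gaussian MAC for the
    current fading state (rates in nats)."""
    c1 = 0.5 * math.log(1 + h1 * p1 / n0)
    c2 = 0.5 * math.log(1 + h2 * p2 / n0)
    csum = 0.5 * math.log(1 + (h1 * p1 + h2 * p2) / n0)
    return c1, c2, csum

def approx_project(r, c1, c2, csum):
    """Crude approximate projection onto the MAC pentagon: clip the per-user
    rates, then scale down if the sum constraint is violated.  This is a
    stand-in for the polynomial-complexity approximate projection."""
    r1 = min(max(r[0], 0.0), c1)
    r2 = min(max(r[1], 0.0), c2)
    s = r1 + r2
    if s > csum:
        r1, r2 = r1 * csum / s, r2 * csum / s
    return [r1, r2]

def track(steps=5000, step_size=0.05, seed=0):
    """One gradient-projection iteration per channel measurement, maximizing
    the concave utility log(R1) + log(R2)."""
    rng = random.Random(seed)
    r = [0.1, 0.1]
    for _ in range(steps):
        h1, h2 = rng.expovariate(1.0), rng.expovariate(1.0)  # Rayleigh fading power gains
        c1, c2, csum = capacity_region_2user(h1, h2)
        grad = [1.0 / r[0], 1.0 / r[1]]                      # gradient of log utility
        r = approx_project([r[0] + step_size * grad[0],
                            r[1] + step_size * grad[1]], c1, c2, csum)
    return r

print("rate pair after tracking:", track())
```

The point of the structure is that each measurement triggers only one cheap update, so how far the iterate can lag behind the time-varying optimum (the "tracking neighborhood") is governed by how fast the channel changes relative to the step size.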
{"title":"Dynamic rate allocation in fading multiple access channels","authors":"Ali ParandehGheibi, A. Eryilmaz, A. Ozdaglar, M. Médard","doi":"10.1109/ITA.2008.4601054","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601054","url":null,"abstract":"We consider the problem of rate allocation in a fading Gaussian multiple-access channel (MAC) with fixed transmission powers. Our goal is to maximize a general concave utility function of transmission rates over the throughput capacity region. In contrast to earlier works in this context that propose solutions where a potentially complex optimization problem must be solved in every decision instant, we propose a low-complexity approximate rate allocation policy and analyze the effect of temporal channel variations on its utility performance. To the best of our knowledge, this is the first work that studies the tracking capabilities of an approximate rate allocation scheme under fading channel conditions. We build on an earlier work to present a new rate allocation policy for a fading MAC that implements a low-complexity approximate gradient projection iteration for each channel measurement, and explicitly characterize the effect of the speed of temporal channel variations on the tracking neighborhood of our policy. We further improve our results by proposing an alternative rate allocation policy for which tighter bounds on the size of the tracking neighborhood are derived. These proposed rate allocation policies are computationally efficient in our setting since they implement a single gradient projection iteration per channel measurement and each such iteration relies on approximate projections which has polynomial-complexity in the number of users.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129482769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Capacity limits and performance results for frequency-hop transmission over partial-band noise channels
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601060
M. Pursley, T. Royster
Capacity limits and throughput results are evaluated for slow frequency-hop transmission over channels in which the bandwidth of the partial-band noise may vary but the total noise power is constant. The system employs orthogonal modulation, noncoherent demodulation, and error-control coding with iterative decoding. Both the capacity limits for the system and the performance results for turbo product codes and low-density parity-check codes show that the throughput can be a nonmonotonic function of the bandwidth of the partial-band noise. We discuss how such nonmonotonicity affects the design of adaptive-rate coding systems.
{"title":"Capacity limits and performance results for frequency-hop transmission over partial-band noise channels","authors":"M. Pursley, T. Royster","doi":"10.1109/ITA.2008.4601060","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601060","url":null,"abstract":"Capacity limits and throughput results are evaluated for slow frequency-hop transmission over channels in which the bandwidth of the partial-band noise may vary but the total power in the noise is constant. The system employs orthogonal modulation, noncoherent demodulation, and error-control coding with iterative decoding. We find that the capacity limits for the system and the performance results for turbo product codes and low-density parity-check codes show that the throughput can be a nonmonotonic function of the bandwidth of the partial-band noise. We discuss how such nonmonotonicity affects the design of adaptive-rate coding systems.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"322 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132426049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beamforming in wireless relay networks
Pub Date: 2008-08-15 | DOI: 10.1109/ITA.2008.4601040
Yindi Jing, H. Jafarkhani
This paper is concerned with relay beamforming in wireless networks in which the receiver has perfect information about all channels and each relay knows its own channels. Instead of the commonly used total power constraint on the relays and the transmitter, we use the more practical assumption that every node in the network has its own power constraint. A two-step amplify-and-forward protocol with beamforming is used, in which the transmitter and relays are allowed to adaptively adjust their transmit power and directions according to the available channel information. The optimal beamforming problem is solved analytically, and the complexity of finding the exact solution is linear in the number of relays. Our results show that the transmitter should always use its maximal power and that the optimal power used at a relay is not a binary function: it can take any value between zero and the relay's maximum transmit power. Also, interestingly, this value depends on the quality of all the other channels in addition to the relay's own. Despite this coupling, distributed strategies are proposed in which, with the aid of a low-rate broadcast from the receiver, a relay needs only its own channel information to implement the optimal power control. Simulated performance shows that network beamforming achieves full diversity and outperforms other existing schemes.
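To make the non-binary power-control observation concrete, the sketch below evaluates the standard receive-SNR expression for phase-matched two-hop amplify-and-forward beamforming and grid-searches the per-relay powers. The channel values are arbitrary, and the expression is the usual AF model rather than the paper's exact derivation; for draws in which a relay hears the source weakly but has a strong link to the destination, the grid optimum need not sit at full power for that relay.

```python
import itertools
import math

def af_snr(powers, f, g, p0=1.0, noise=1.0):
    """Receive SNR of two-hop amplify-and-forward beamforming with
    phase-matched relays (standard AF model, used here only to illustrate
    the setting).  powers[i] is relay i's transmit power; f[i]/g[i] are the
    magnitudes of its first-hop/second-hop channels."""
    num = 0.0
    den = 1.0
    for p, fi, gi in zip(powers, f, g):
        a = math.sqrt(p / (p0 * fi**2 + noise))   # amplification factor
        num += a * fi * gi                        # coherent signal term
        den += (a * gi) ** 2                      # forwarded-noise term
    return p0 * num**2 / (noise * den)

# Channel magnitudes for three relays (arbitrary illustrative values):
# relay 3 has a weak source link but a strong destination link.
f = [1.0, 0.9, 0.2]
g = [1.0, 0.3, 1.5]
p_max = 1.0

grid = [i / 20 * p_max for i in range(21)]
best = max(itertools.product(grid, repeat=3),
           key=lambda pw: af_snr(pw, f, g))
print("SNR-maximizing relay powers on a coarse grid:", best)
```

Intuitively, a relay whose first-hop channel is poor mostly forwards its own receiver noise, so beyond some point extra power at that relay lowers the overall SNR even though every relay has power to spare.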
{"title":"Beamforming in wireless relay networks","authors":"Yindi Jing, H. Jafarkhani","doi":"10.1109/ITA.2008.4601040","DOIUrl":"https://doi.org/10.1109/ITA.2008.4601040","url":null,"abstract":"This paper is on relay beamforming in wireless networks, in which the receiver has perfect information of all channels and each relay knows its own channels. Instead of the commonly used total power constraint on relays and the transmitter, we use a more practical assumption that every node in the network has its own power constraint. A two-step amplify-and-forward protocol with beamforming is used, in which the transmitter and relays are allowed to adaptively adjust their transmit power and directions according to available channel information. The optimal beamforming problem is solved analytically. The complexity of finding the exact solution is linear in the number of relays. Our results show that the transmitter should always use its maximal power and the optimal power used at a relay is not a binary function. It can take any value between zero and its maximum transmit power. Also, interestingly, this value depends on the quality of all other channels in addition to the relaypsilas own ones. Despite this coupling fact, distributive strategies are proposed in which, with the aid of a low-rate broadcast from the receiver, a relay needs only its own channel information to implement the optimal power control. Simulated performance shows that network beamforming achieves full diversity and outperforms other existing schemes.","PeriodicalId":345196,"journal":{"name":"2008 Information Theory and Applications Workshop","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127584650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}