Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7471760
Almog Lahav, Yuval Ben-Shalom, T. Chernyakova, Yonina C. Eldar
Modern imaging systems use single-carrier short pulses for transducer excitation. The use of coded signals, which allow for pulse compression, is known to improve the signal-to-noise ratio (SNR), for example in radar and communications. One of the main challenges in applying coded excitation (CE) to medical imaging is frequency-dependent attenuation in biological tissues. Previous work overcame this challenge and verified a significant improvement in SNR and imaging depth by using an array of transducer elements and applying pulse compression at each element. However, this approach incurs a large computational load. A common way of reducing the cost is to apply pulse compression after beamforming, which degrades image quality. In this work we propose a high-quality, low-cost method for CE imaging by integrating pulse compression into the recently developed frequency-domain beamforming framework. This approach yields a 26-fold reduction in computational complexity without compromising image quality. This reduction enables efficient implementation of CE in array imaging, paving the way to enhanced SNR, improved imaging depth, and higher frame rates.
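As a toy illustration of the processing this abstract describes (all parameters here are invented, not taken from the paper), pulse compression of a chirp-coded echo amounts to matched filtering, which is cheap to apply in the frequency domain: one FFT, a pointwise multiply with the conjugate pulse spectrum, and one inverse FFT.

```python
import numpy as np

# Hypothetical chirp excitation (not the paper's actual pulse design)
fs = 40e6          # sampling rate [Hz]
T = 5e-6           # pulse duration [s]
f0, f1 = 2e6, 6e6  # chirp start/end frequencies [Hz]

t = np.arange(0, T, 1 / fs)
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))

# Simulated received trace: a delayed chirp echo buried in noise
rx = np.zeros(4096)
delay = 1000
rx[delay:delay + chirp.size] = chirp
rx += 0.5 * np.random.default_rng(0).standard_normal(rx.size)

# Pulse compression = matched filtering via the frequency domain:
# multiply the trace spectrum by the conjugate pulse spectrum.
n = rx.size
H = np.conj(np.fft.rfft(chirp, n))
compressed = np.fft.irfft(np.fft.rfft(rx) * H, n)

peak = int(np.argmax(np.abs(compressed)))
```

The compressed trace peaks at the echo delay, with the chirp's energy concentrated into a short pulse; performing the multiply in the frequency domain is what makes per-element compression affordable.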
Title: Coded excitation ultrasound: Efficient implementation via frequency domain processing (2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP))
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7471792
Stefano Cosentino, Lindsay De Vries, Rachel Scheperle, Julie Bierer, R. Carlyon
Users of cochlear implants rely on a number of electrodes to perceive acoustic information. The extent to which their hearing is restored depends on a number of factors including the electrode-to-neuron interface. We describe an approach to detect instances of poor-performing channels based on physiological data known as electrically evoked compound action potentials (ECAPs). The proposed approach - termed Panoramic ECAP ("PECAP") - combines nonlinear optimization stages with different constraints to recover neural activation patterns for all electrodes. Data were obtained from nine cochlear implant subjects and used to run the PECAP tool to identify possible instances of poor-performing channels. Data from one subject revealed a shifted peak ("dead region").
Title: Dual-stage algorithm to identify channels with poor electrode-to-neuron interface in cochlear implant users
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7471757
A. Ozerov, Ç. Bilen, P. Pérez
Audio declipping consists of recovering so-called clipped audio samples that have been saturated at a maximum/minimum threshold. Many different approaches have been proposed to solve this problem for single-channel (mono) recordings. However, although most audio recordings nowadays are multichannel, there is no method designed specifically for multichannel audio declipping, where the inter-channel correlations may be efficiently exploited for a better declipping result. In this work we propose, for the first time, such a multichannel audio declipping method. Our method is based on representing a multichannel audio recording as a convolutive mixture of several audio sources, and on modeling the source power spectrograms and mixing filters by a nonnegative tensor factorization model and full-rank covariance matrices, respectively. A generalized expectation-maximization algorithm is proposed to estimate the model parameters. It is shown experimentally that the proposed multichannel audio declipping algorithm outperforms, on average and in most cases, a state-of-the-art single-channel declipping algorithm applied to each channel independently.
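A minimal sketch of the clipping model underlying this problem (signal and threshold are invented): the decoder knows which samples are reliable, because clipped samples sit exactly at the saturation thresholds, and declipping must estimate the rest.

```python
import numpy as np

# Toy clipping model: samples saturated at +/- theta are lost; a declipper
# must estimate them from the reliable samples (and, in the multichannel
# case studied here, from correlations across channels).
x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 800, endpoint=False))
theta = 0.7
y = np.clip(x, -theta, theta)

# The clipping support is observable at the decoder: a sample is
# "reliable" iff it lies strictly inside the thresholds.
reliable = np.abs(y) < theta
```

Reliable samples are unchanged by clipping, so any declipping algorithm can treat them as hard constraints while estimating the saturated segments.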
Title: Multichannel audio declipping
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7472392
M. Simarro, V. García, F. Martínez-Zaldívar, Alberto González, A. Vidal
A new algorithm called SUMIS-BO is proposed for soft-output MIMO detection. This method is a meaningful improvement of the "Subspace Marginalization with Interference Suppression" (SUMIS) algorithm. It exhibits good performance with reduced complexity, and has been evaluated and compared with the SUMIS algorithm in terms of performance and efficiency using different system parameters. Results show that the performance of SUMIS-BO is similar to that of SUMIS; however, its efficiency is improved. The new algorithm is far more efficient than SUMIS, especially for large systems.
Title: Complexity reduction of SUMIS MIMO soft detection based on box optimization for large systems
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7472734
M. Ribeiro, O. Watts, J. Yamagishi, R. Clark
We investigate two wavelet-based decomposition strategies for the f0 signal and their usefulness as a secondary task for speech synthesis using multi-task deep neural networks (MTL-DNNs). The first decomposition strategy uses a static set of scales for all utterances in the training data. We propose a second strategy, in which the scale of the mother wavelet is dynamically adjusted to the rate of each utterance. This approach is able to capture f0 variations related to the syllable, word, clitic-group, and phrase units. It also constrains the wavelet components to lie within the frequency range that previous experiments have shown to be more natural. These two strategies are evaluated as a secondary task in MTL-DNNs. Results indicate that, on an expressive dataset, there is a strong preference for the systems using multi-task learning compared to the baseline system.
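To illustrate a scale-wise f0 decomposition of the kind described, here is a simple stand-in using differences of Gaussian smoothings at increasing widths (the actual mother wavelet, scale set, and f0 contour are the authors' choices; everything below is invented for illustration). Fast scales capture syllable-like wiggles, slow scales capture phrase-like trends, and the components sum back to the original contour.

```python
import numpy as np

def smooth(x, sigma):
    """Gaussian smoothing with a truncated kernel (4-sigma support)."""
    k = np.arange(-4 * sigma, 4 * sigma + 1)
    g = np.exp(-0.5 * (k / sigma) ** 2)
    g /= g.sum()
    return np.convolve(x, g, mode="same")

# Synthetic f0 contour: slow phrase-like trend plus fast syllable-like ripple
n = 512
t = np.arange(n)
f0 = 120 + 20 * np.sin(2 * np.pi * t / 256) + 5 * np.sin(2 * np.pi * t / 32)

# Dyadic scales, fast to slow; each level is the detail lost at that scale
sigmas = [2, 8, 32]
levels = []
prev = f0
for s in sigmas:
    sm = smooth(prev, s)
    levels.append(prev - sm)   # band-limited detail at this scale
    prev = sm
levels.append(prev)            # residual trend

recon = np.sum(levels, axis=0)  # telescoping sum recovers f0 exactly
```

The telescoping construction guarantees perfect reconstruction, which is the property that lets per-scale components serve as well-defined secondary prediction targets.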
Title: Wavelet-based decomposition of F0 as a secondary task for DNN-based speech synthesis with multi-task learning
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7472303
Yinsheng Liu, Geoffrey Ye Li
Large-scale antenna (LSA) systems have gained a lot of attention recently since they can significantly improve the performance of wireless systems. Similar to multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM), or MIMO-OFDM, LSA can also be combined with OFDM to deal with frequency selectivity in wireless channels. However, such a combination suffers from substantially increased complexity, proportional to the number of antennas in LSA systems. In this paper, we propose a low-complexity recursive convolutional precoding to address this issue. Traditional zero-forcing (ZF) precoding is implemented through the recursive convolutional precoding in the time domain, so that only one IFFT is required for each user and the matrix inversion can also be avoided. Simulation results show that the proposed approach achieves the same performance as ZF but with much lower complexity.
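For context, the classical ZF precoder that the paper takes as its baseline can be sketched as follows (dimensions are illustrative, not the paper's). The per-subcarrier K × K matrix inversion in this sketch is the cost that the proposed recursive convolutional precoding avoids.

```python
import numpy as np

# Hypothetical flat-fading snapshot: M base-station antennas, K users
rng = np.random.default_rng(2)
M, K = 64, 8
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Classical ZF precoder: right pseudo-inverse of the channel.
# In MIMO-OFDM this K x K inversion is repeated on every subcarrier.
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# Precode user symbols and pass through the channel
s = rng.standard_normal(K) + 1j * rng.standard_normal(K)
received = H @ (P @ s)   # interference-free: each user sees only its symbol
```

Since H @ P equals the identity, every user receives its own symbol without inter-user interference, which is exactly the behavior the time-domain recursive implementation reproduces at lower cost.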
Title: Low-complexity recursive convolutional precoding for OFDM-based large-scale antenna systems
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7472178
Linlin Chao, J. Tao, Minghao Yang, Ya Li, Zhengqi Wen
Human emotion is a temporally dynamic event which can be inferred from both audio and video feature sequences. In this paper we investigate a long short-term memory recurrent neural network (LSTM-RNN) based encoding method for categorical emotion recognition in video. The LSTM-RNN is able to incorporate knowledge about how emotion evolves over long ranges of successive frames as well as emotion cues from isolated frames. After encoding, each video clip can be represented by one vector per input feature sequence. These vectors contain both frame-level and sequence-level emotion information. They are then concatenated and fed into a support vector machine (SVM) to obtain the final prediction. Extensive evaluations on the Emotion Recognition in the Wild (EmotiW 2015) dataset show the efficiency of the proposed encoding method, and competitive results are obtained. The final recognition accuracy reaches 46.38% on the audio-video emotion recognition sub-challenge, where the challenge baseline is 39.33%.
Title: Long short term memory recurrent neural network based encoding method for emotion recognition in video
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7472305
Christopher Mollén, Junil Choi, E. Larsson, R. Heath
We investigate the performance of wideband massive MIMO base stations that use one-bit ADCs for quantizing the uplink signal. Our main result is to show that the many taps of the frequency-selective channel make linear combiners asymptotically consistent and the quantization noise additive and Gaussian, which simplifies signal processing and enables the straightforward use of OFDM. We also find that single-carrier systems and OFDM systems are affected in the same way by one-bit quantizers in wideband systems because the distribution of the quantization noise becomes the same in both systems as the number of channel taps grows.
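A toy model of the quantizer studied here (signal statistics are invented): a one-bit ADC on each of the I and Q branches keeps only the signs of the real and imaginary parts, so every quantized sample has unit modulus and all amplitude information is discarded.

```python
import numpy as np

def one_bit(z):
    """One-bit I/Q quantizer: keep only the signs of real and imaginary parts."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

# Toy complex Gaussian samples standing in for an uplink baseband signal
rng = np.random.default_rng(3)
z = rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)
q = one_bit(z)
```

Despite the severe distortion, the quantized output remains positively correlated with the input, which is what linear combining over many antennas and channel taps exploits.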
Title: One-bit ADCs in wideband massive MIMO systems with OFDM transmission
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7471973
Chongyi Li, Jichang Quo, Yanwei Pang, Shanji Chen, Jian Wang
Restoring an underwater image from a single image is known to be ill-posed, and some assumptions made in previous methods are not suitable for many situations. In this paper, we propose a method based on blue-green channel dehazing and red channel correction for underwater image restoration. First, the blue and green channels are recovered via a dehazing algorithm based on an extension and modification of the Dark Channel Prior algorithm. Then, the red channel is corrected following the Gray-World assumption. Finally, to resolve the problem that some recovered image regions may look too dim or too bright, an adaptive exposure map is built. Qualitative analysis demonstrates that our method significantly improves visibility and contrast, and reduces the effects of light absorption and scattering. In quantitative analysis, our results obtain the best values in terms of entropy, local feature points, and average gradient, outperforming three existing physical-model-based methods.
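A hedged sketch of the Gray-World red-channel step (a simplified stand-in for the authors' pipeline; the attenuation factor and image are invented): under the Gray-World assumption, the red channel is rescaled so its mean matches the mean of the already-recovered green and blue channels.

```python
import numpy as np

# Synthetic image with strong red attenuation, as happens underwater
rng = np.random.default_rng(4)
img = rng.uniform(0.0, 1.0, (64, 64, 3))
img[..., 0] *= 0.3               # channel 0 = red, heavily attenuated

# Gray-World correction: bring the red mean up to the green/blue mean
target = img[..., 1:].mean()     # mean intensity of G and B
gain = target / img[..., 0].mean()
corrected = np.clip(img[..., 0] * gain, 0.0, 1.0)
```

This is only the color-balance step; the paper's full method additionally dehazes the blue and green channels and applies an adaptive exposure map to avoid over- or under-exposed regions.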
Title: Single underwater image restoration by blue-green channels dehazing and red channel correction
Pub Date : 2016-03-20  DOI: 10.1109/ICASSP.2016.7472411
Màrius Caus, A. Pérez-Neira, Adrian Kliks, Quentin Bodinier, F. Bader
This paper evaluates the capacity of weighted circularly convolved FBMC/OQAM (WCC-FBMC/OQAM) systems. A rigorous mathematical model is derived to calculate the increase in capacity that can be obtained thanks to the lattice structure of the modulation and the exploitation of the intrinsic interference. The numerical results reveal that a signal-to-noise ratio gain of 2 dB is obtained in one resource block of 12 subcarriers, translating into a capacity increase of 11% with respect to OFDM, which nuances some previous results on this topic in the literature.
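A quick sanity check of the reported numbers under an assumed operating point (the paper's exact SNR regime may differ): with Shannon capacity log2(1 + SNR), a 2 dB SNR gain around a 20 dB working point yields roughly a 10% capacity increase, in the same ballpark as the reported 11%.

```python
import numpy as np

# Assumed operating SNR (not stated in the abstract)
snr_db = 20.0
snr = 10 ** (snr_db / 10)
gain = 10 ** (2.0 / 10)          # the reported 2 dB SNR gain

c0 = np.log2(1 + snr)            # baseline capacity [bit/s/Hz]
c1 = np.log2(1 + gain * snr)     # capacity with the 2 dB gain
increase = (c1 - c0) / c0        # relative capacity increase
```

The relative gain shrinks at higher SNR and grows at lower SNR, so the exact percentage depends on the operating point the paper assumes.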
Title: Capacity analysis of WCC-FBMC/OQAM systems