A two-stage subspace trust region approach for deep neural network training
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081215
V. Dudar, G. Chierchia, É. Chouzenoux, J. Pesquet, V. Semenov
In this paper, we develop a novel second-order method for training feed-forward neural nets. At each iteration, we construct a quadratic approximation to the cost function in a low-dimensional subspace. We minimize this approximation inside a trust region through a two-stage procedure: first inside the embedded positive curvature subspace, followed by a gradient descent step. This approach leads to a fast objective function decay, prevents convergence to saddle points, and alleviates the need for manually tuning parameters. We show the good performance of the proposed algorithm on benchmark datasets.
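As a rough illustration of the two-stage idea (a sketch under simplifying assumptions, not the authors' exact algorithm: the subspace basis, the quadratic model, and the step rules below are placeholders), the following toy Python step minimizes a quadratic model inside the positive-curvature eigen-subspace within a trust region, then adds a plain gradient descent step:

```python
# Toy sketch of a two-stage trust-region step in a low-dimensional subspace.
# Illustrative only: not the published algorithm.
import numpy as np

def two_stage_step(grad, H_sub, basis, radius=1.0, lr=0.1):
    """grad: full gradient (n,); H_sub: subspace Hessian (k, k);
    basis: orthonormal subspace basis (n, k). Returns a step in R^n."""
    eigval, eigvec = np.linalg.eigh(H_sub)
    g_sub = basis.T @ grad                      # gradient in subspace coordinates
    pos = eigval > 1e-8                         # positive-curvature directions
    step_sub = np.zeros_like(g_sub)
    # Stage 1: Newton step restricted to the positive-curvature eigenspace,
    # clipped to the trust-region radius.
    if pos.any():
        coeffs = (eigvec[:, pos].T @ g_sub) / eigval[pos]
        step_sub = -eigvec[:, pos] @ coeffs
        norm = np.linalg.norm(step_sub)
        if norm > radius:
            step_sub *= radius / norm
    # Stage 2: plain gradient-descent step, which also pushes away from saddles.
    return basis @ step_sub - lr * grad

rng = np.random.default_rng(0)
n, k = 20, 4
basis, _ = np.linalg.qr(rng.normal(size=(n, k)))
H_sub = rng.normal(size=(k, k)); H_sub = (H_sub + H_sub.T) / 2
step = two_stage_step(rng.normal(size=n), H_sub, basis)
```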
{"title":"A two-stage subspace trust region approach for deep neural network training","authors":"V. Dudar, G. Chierchia, É. Chouzenoux, J. Pesquet, V. Semenov","doi":"10.23919/EUSIPCO.2017.8081215","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081215","url":null,"abstract":"In this paper, we develop a novel second-order method for training feed-forward neural nets. At each iteration, we construct a quadratic approximation to the cost function in a low-dimensional subspace. We minimize this approximation inside a trust region through a two-stage procedure: first inside the embedded positive curvature subspace, followed by a gradient descent step. This approach leads to a fast objective function decay, prevents convergence to saddle points, and alleviates the need for manually tuning parameters. We show the good performance of the proposed algorithm on benchmark datasets.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"529 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124262578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance analysis of several pitch detection algorithms on simulated and real noisy speech data
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081482
D. Jouvet, Y. Laprie
This paper analyses the performance of a large set of pitch detection algorithms on clean and noisy speech data. Two sets of noisy speech data are considered. One corresponds to simulated noisy data, obtained by adding several types of noise signals at various levels to the clean speech data of the Pitch-Tracking Database from Graz University of Technology (PTDB-TUG). The second one, SPEECON, was recorded in several different acoustic environments. The paper discusses the performance of pitch detection algorithms on the simulated noisy data and on the real noisy data of the SPEECON corpus. An analysis of the performance of the best pitch detection algorithm with respect to estimated signal-to-noise ratio (SNR) also shows that very similar performance is observed on real noisy data recorded in public places and on clean data with added babble noise.
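For concreteness, simulated noisy data of the kind described above can be generated by scaling a noise recording to a prescribed SNR before adding it to the clean speech. A minimal sketch (the function name and calibration are mine, not the paper's exact protocol):

```python
# Minimal sketch of simulating noisy speech at a target SNR.
import numpy as np

def add_noise_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean/noise power ratio equals `snr_db`, then mix."""
    noise = np.resize(noise, clean.shape)        # loop/trim noise to match length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
speech = rng.normal(size=16000)                  # stand-in for a clean utterance
babble = rng.normal(size=8000)                   # stand-in for a noise recording
noisy = add_noise_at_snr(speech, babble, snr_db=5.0)
```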
{"title":"Performance analysis of several pitch detection algorithms on simulated and real noisy speech data","authors":"D. Jouvet, Y. Laprie","doi":"10.23919/EUSIPCO.2017.8081482","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081482","url":null,"abstract":"This paper analyses the performance of a large bunch of pitch detection algorithms on clean and noisy speech data. Two sets of noisy speech data are considered. One corresponds to simulated noisy data, and is obtained by adding several types of noise signals at various levels on the clean speech data of the Pitch-Tracking Database from Graz University of Technology (PTDB-TUG). The second one, SPEECON, was recorded in several different acoustic environments. The paper discusses the performance of pitch detection algorithms on the simulated noisy data, and on the real noisy data of the SPEECON corpus. Also, an analysis of the performance of the best pitch detection algorithm with respect to estimated signal-to-noise ratio (SNR) shows that very similar performance is observed on the real noisy data recorded in public places, and on the clean data with addition of babble noise.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125491046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large deviation analysis of the CPD detection problem based on random tensor theory
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081289
Rémy Boyer, Philippe Loubaton
The performance, in terms of minimal Bayes' error probability, of the detection of a random tensor is a fundamental yet understudied problem. In this work, we assume that under the alternative hypothesis we observe a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size Nq × R, i.e., for 1 ≤ q ≤ Q, R, Nq → ∞ with R^(1/q)/Nq converging to a finite constant. The detection of the random entries of the core tensor is hard to study, since an analytic expression of the error probability is not easily tractable. To mitigate this technical difficulty, the Chernoff Upper Bound (CUB) and the error exponent of the error probability are derived and studied for the considered tensor-based detection problem. Both quantities are related to the moment generating function of the log-likelihood ratio, a key quantity for the considered detection problem. The tightest CUB is reached for the value, denoted by s∗, which minimizes the error exponent. Two methodologies are standard in the literature for this step: the first is based on a costly numerical optimization algorithm, while the alternative is to consider the Bhattacharyya Upper Bound (BUB), obtained for s∗ = 1/2. In the latter scenario, the costly numerical optimization step is avoided, but no guarantee exists on the optimality of the BUB. Based on powerful random matrix theory tools, a simple analytical expression of s∗ is provided with respect to the Signal-to-Noise Ratio (SNR) for low-rank CPD. Together with a compact expression of the CUB, easily tractable expressions of the tightest CUB and the error exponent are provided and analyzed. A main conclusion of this work is that the BUB is the tightest bound at low SNRs; on the contrary, this property no longer holds at higher SNRs.
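A scalar toy version of this machinery (two zero-mean Gaussians with different variances standing in for the tensor problem; all numbers are illustrative) shows how the tightest-bound s∗ compares with the Bhattacharyya choice s = 1/2. Here s∗ is found numerically by minimizing the Chernoff coefficient, which is equivalent to obtaining the tightest CUB:

```python
# Toy Chernoff vs. Bhattacharyya comparison on a scalar variance-detection
# problem; not the tensor setting of the paper.
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar
from scipy.stats import norm

x = np.linspace(-30, 30, 20001)
p0 = norm.pdf(x, scale=1.0)          # H0: noise only
p1 = norm.pdf(x, scale=3.0)          # H1: signal + noise (larger variance)

def chernoff_coeff(s):
    # Chernoff coefficient: integral of p0^(1-s) * p1^s; smaller = tighter bound.
    return trapezoid(p0 ** (1 - s) * p1 ** s, x)

s_star = minimize_scalar(chernoff_coeff, bounds=(1e-3, 1 - 1e-3),
                         method="bounded").x
print(f"s* = {s_star:.3f}, CUB coeff = {chernoff_coeff(s_star):.4f}, "
      f"BUB coeff (s = 1/2) = {chernoff_coeff(0.5):.4f}")
```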
{"title":"Large deviation analysis of the CPD detection problem based on random tensor theory","authors":"Remy Bayer, Philippe Laubatan","doi":"10.23919/EUSIPCO.2017.8081289","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081289","url":null,"abstract":"The performance in terms of minimal Bayes' error probability for detection of a random tensor is a fundamental understudied difficult problem. In this work, we assume that we observe under the alternative hypothesis a noisy rank-ñ tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size Nq × R, i.e., for 1 ≤ q ≤ Q, R,Nq → ∞ with R1/q/Nq converges to a finite constant. The detection of the random entries of the core tensor is hard to study since an analytic expression of the error probability is not easily tractable. To mitigate this technical difficulty, the Chernoff Upper Bound (CUB) and the error exponent on the error probability are derived and studied for the considered tensor-based detection problem. These two quantities are relied to a key quantity for the considered detection problem due to its strong link with the moment generating function of the log-likelihood test. However, the tightest CUB is reached for the value, denoted by s∗, which minimizes the error exponent. To solve this step, two methodologies are standard in the literature. The first one is based on the use of a costly numerical optimization algorithm. An alternative strategy is to consider the Bhattacharyya Upper Bound (BUB) for s∗ = 1/2. In this last scenario, the costly numerical optimization step is avoided but no guaranty exists on the optimality of the BUB. Based on powerful random matrix theory tools, a simple analytical expression of s∗ is provided with respect to the Signal to Noise Ratio (SNR) and for low rank CPD. Associated to a compact expression of the CUB, an easily tractable expression of the tightest CUB and the error exponent are provided and analyzed. A main conclusion of this work is that the BUB is the tightest bound at low SNRs. At contrary, this property is no longer true for higher SNRs.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"2006 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131202852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AUDASCITY: AUdio denoising by adaptive social CosparsITY
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081411
Clément Gaultier, Srdan Kitic, N. Bertin, R. Gribonval
This work introduces a new algorithm, AUDASCITY, and compares its performance to the time-frequency block thresholding algorithm on the ill-posed problem of audio denoising. We propose a heuristic which combines time-frequency structure, cosparsity, and an adaptive scheme to denoise audio signals corrupted with white noise. We report that AUDASCITY outperforms the state of the art in each numerical comparison. While there is still room for perceptual improvements, AUDASCITY's usefulness is demonstrated when it is used as a front-end for a classification task.
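For reference, a much-simplified flavor of the time-frequency block thresholding baseline (not AUDASCITY itself; the block size, threshold rule, and noise calibration below are crude placeholders) can be sketched as:

```python
# Crude sketch of time-frequency block thresholding: keep or kill whole
# STFT blocks by their mean energy. The published baseline uses a more
# refined attenuation rule.
import numpy as np
from scipy.signal import stft, istft

def block_threshold_denoise(y, fs, sigma, block=(8, 8), lam=2.0):
    f, t, Y = stft(y, fs=fs, nperseg=512)
    P = np.abs(Y) ** 2
    mask = np.zeros_like(P)
    bf, bt = block
    for i in range(0, P.shape[0], bf):
        for j in range(0, P.shape[1], bt):
            # NB: comparing |STFT|^2 to sigma**2 ignores window scaling; a real
            # implementation would calibrate the noise level in the TF domain.
            if P[i:i+bf, j:j+bt].mean() > lam * sigma ** 2:
                mask[i:i+bf, j:j+bt] = 1.0
    _, x_hat = istft(Y * mask, fs=fs, nperseg=512)
    return x_hat

fs = 16000
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # toy "audio" signal
sigma = 0.3
denoised = block_threshold_denoise(clean + sigma * rng.normal(size=fs), fs, sigma)
```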
{"title":"AUDASCITY: AUdio denoising by adaptive social CosparsITY","authors":"Clément Gaultier, Srdan Kitic, N. Bertin, R. Gribonval","doi":"10.23919/EUSIPCO.2017.8081411","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081411","url":null,"abstract":"This work aims at introducing a new algorithm, AUDASCITY, and comparing its performance to the time-frequency block thresholding algorithm for the ill-posed problem of audio denoising. We propose a heuristics which combines time-frequency structure, cosparsity, and an adaptive scheme to denoise audio signals corrupted with white noise. We report that AUDASCITY outperforms state-of-the-art for each numerical comparison. While there is still room for some perceptual improvements, AUDASCITY's usefulness is shown when used as a front-end for a classification task.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127667928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multivariate change detection on high resolution monovariate SAR image using linear time-frequency analysis
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081548
A. Mian, J. Ovarlez, G. Ginolhac, A. Atto
In this paper, we propose a novel methodology for change detection between two monovariate complex SAR images. Linear time-frequency tools are used in order to recover the spectral and angular diversity of the scatterers present in the scene. This diversity is used in a bi-date change detection framework to develop a detector whose performance is better than that of the classic detector on monovariate SAR images.
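A hedged toy contrast between a classic single-channel detector and a multivariate detector built on per-pixel sub-band vectors (the sub-bands here are simulated random draws, not actual time-frequency sub-looks, and the detectors are generic stand-ins):

```python
# Toy bi-date change detection: scalar intensity log-ratio vs. a multivariate
# distance on a vector of sub-band responses per pixel.
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 64, 64, 5                  # image size; K sub-bands per pixel (assumed)
img1 = rng.rayleigh(1.0, size=(H, W, K))
img2 = rng.rayleigh(1.0, size=(H, W, K))
img2[20:30, 20:30, :] *= 3.0         # synthetic change region

# Classic detector: log-ratio of the total per-pixel intensities.
i1, i2 = img1.sum(axis=2), img2.sum(axis=2)
d_classic = np.abs(np.log(i2 / i1))

# Multivariate detector: distance between the sub-band feature vectors,
# exploiting the extra diversity.
d_multi = np.linalg.norm(np.log(img2) - np.log(img1), axis=2)

change_map = d_multi > np.quantile(d_multi, 0.99)
```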
{"title":"Multivariate change detection on high resolution monovariate SAR image using linear time-frequency analysis","authors":"A. Mian, J. Ovarlez, G. Ginolhac, A. Atto","doi":"10.23919/EUSIPCO.2017.8081548","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081548","url":null,"abstract":"In this paper, we propose a novel methodology for Change Detection between two monovariate complex SAR images. Linear Time-Frequency tools are used in order to recover a spectral and angular diversity of the scatterers present in the scene. This diversity is used in bi-date change detection framework to develop a detector, whose performances are better than the classic detector on monovariate SAR images.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133603254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unmixing multitemporal hyperspectral images accounting for smooth and abrupt variations
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081636
Pierre-Antoine Thouvenin, N. Dobigeon, J. Tourneret
A classical problem in hyperspectral imaging, referred to as hyperspectral unmixing, consists in estimating spectra associated with each material present in an image and their proportions in each pixel. In practice, illumination variations (e.g., due to declivity or complex interactions with the observed materials) and the possible presence of outliers can result in significant changes in both the shape and the amplitude of the measurements, thus modifying the extracted signatures. In this context, sequences of hyperspectral images are expected to be simultaneously affected by such phenomena when acquired on the same area at different time instants. Thus, we propose a hierarchical Bayesian model to simultaneously account for smooth and abrupt spectral variations affecting a set of multitemporal hyperspectral images to be jointly unmixed. This model assumes that smooth variations can be interpreted as the result of endmember variability, whereas abrupt variations are due to significant changes in the imaged scene (e.g., presence of outliers, additional endmembers, etc.). The parameters of this Bayesian model are estimated using samples generated by a Gibbs sampler according to its posterior. Performance assessment is conducted on synthetic data in comparison with state-of-the-art unmixing methods.
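The kind of observation model underlying this approach can be sketched as follows (a simulation under assumed dimensions, with plain least-squares in place of the paper's Gibbs sampler):

```python
# Sketch of a multitemporal mixing model: shared endmembers, smooth spectral
# variability per date, and sparse outliers at one date (abrupt change).
import numpy as np

rng = np.random.default_rng(0)
L, R, N, T = 50, 3, 100, 4            # bands, endmembers, pixels, time instants
M = rng.uniform(0, 1, size=(L, R))    # shared endmember matrix
Y = []
for t in range(T):
    dM = 0.05 * rng.normal(size=(L, R))          # smooth spectral variability
    A = rng.dirichlet(np.ones(R), size=N).T      # abundances (simplex-valued)
    X = np.zeros((L, N))
    if t == 2:                                   # abrupt change: sparse outliers
        X[:, rng.choice(N, 5, replace=False)] = rng.normal(size=(L, 5))
    Y.append((M + dM) @ A + X + 0.01 * rng.normal(size=(L, N)))

# Naive per-image abundance estimate that ignores variability and outliers;
# the paper's hierarchical Bayesian model estimates all terms jointly.
A_hat = [np.linalg.lstsq(M, Yt, rcond=None)[0] for Yt in Y]
```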
{"title":"Unmixing multitemporal hyperspectral images accounting for smooth and abrupt variations","authors":"Pierre-Antoine Thouvenin, N. Dobigeon, J. Tourneret","doi":"10.23919/EUSIPCO.2017.8081636","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081636","url":null,"abstract":"A classical problem in hyperspectral imaging, referred to as hyperspectral unmixing, consists in estimating spectra associated with each material present in an image and their proportions in each pixel. In practice, illumination variations (e.g., due to declivity or complex interactions with the observed materials) and the possible presence of outliers can result in significant changes in both the shape and the amplitude of the measurements, thus modifying the extracted signatures. In this context, sequences of hyperspectral images are expected to be simultaneously affected by such phenomena when acquired on the same area at different time instants. Thus, we propose a hierarchical Bayesian model to simultaneously account for smooth and abrupt spectral variations affecting a set of multitemporal hyperspectral images to be jointly unmixed. This model assumes that smooth variations can be interpreted as the result of endmember variability, whereas abrupt variations are due to significant changes in the imaged scene (e.g., presence of outliers, additional endmembers, etc.). The parameters of this Bayesian model are estimated using samples generated by a Gibbs sampler according to its posterior. Performance assessment is conducted on synthetic data in comparison with state-of-the-art unmixing methods.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"203 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123558310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of temporal subsampling on accuracy and performance in practical video classification
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081357
F. Scheidegger, L. Cavigelli, Michael Schaffner, A. Malossi, C. Bekas, L. Benini
In this paper we evaluate three state-of-the-art neural-network-based approaches for large-scale video classification, where the computational efficiency of the inference step is of particular importance due to the ever-increasing data throughput of video streams. Our evaluation focuses on finding good efficiency vs. accuracy tradeoffs by evaluating different network configurations and parameterizations. In particular, we investigate the use of different temporal subsampling strategies, and show that they can be used to effectively trade computational workload against classification accuracy. Using a subset of the YouTube-8M dataset, we demonstrate that workload reductions on the order of 10×, 50× and 100× can be achieved with accuracy reductions of only 1.3%, 6.2% and 10.8%, respectively. Our results show that temporal subsampling is a simple and generic approach that behaves consistently over the considered classification pipelines and does not require retraining of the underlying networks.
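The subsampling strategy itself is simple to sketch: score every k-th frame and average, cutting workload by roughly a factor of k (the frame model below is a stand-in, not one of the three evaluated pipelines):

```python
# Stride-based temporal subsampling for video classification.
import numpy as np

def classify_video(frames, frame_model, stride=10):
    scores = [frame_model(f) for f in frames[::stride]]   # workload ~ 1/stride
    return np.mean(scores, axis=0)                        # average frame scores

rng = np.random.default_rng(0)
video = rng.normal(size=(300, 32, 32, 3))                 # 300 dummy low-res frames
toy_model = lambda f: np.array([f.mean() > 0, f.mean() <= 0], dtype=float)
probs = classify_video(video, toy_model, stride=50)       # ~50x fewer evaluations
```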
{"title":"Impact of temporal subsampling on accuracy and performance in practical video classification","authors":"F. Scheidegger, L. Cavigelli, Michael Schaffner, A. Malossi, C. Bekas, L. Benini","doi":"10.23919/EUSIPCO.2017.8081357","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081357","url":null,"abstract":"In this paper we evaluate three state-of-the-art neural-network-based approaches for large-scale video classification, where the computational efficiency of the inference step is of particular importance due to the ever increasing amount of data throughput for video streams. Our evaluation focuses on finding good efficiency vs. accuracy tradeoffs by evaluating different network configurations and parameterizations. In particular, we investigate the use of different temporal subsampling strategies, and show that they can be used to effectively trade computational workload against classification accuracy. Using a subset of the YouTube-8M dataset, we demonstrate that workload reductions in the order of 10×, 50× and 100× can be achieved with accuracy reductions of only 1.3%, 6.2% and 10.8%, respectively. Our results show that temporal subsampling is a simple and generic approach that behaves consistently over the considered classification pipelines and which does not require retraining of the underlying networks.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133683381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparse reconstruction algorithms for nonlinear microwave imaging
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081300
Hidayet Zaimaga, A. Fraysse, M. Lambert
This paper presents a two-step inversion process which allows sparse recovery of the unknown (complex) dielectric profiles of scatterers in nonlinear microwave imaging. The proposed approach is applied to a nonlinear inverse scattering problem arising in microwave imaging, combined with a joint sparsity constraint that yields multiple sparse solutions sharing a common nonzero support. Numerical results demonstrate the potential of the proposed two-step inversion approach compared to an existing sparse recovery algorithm in the case of small scatterers.
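One standard joint-sparsity building block such a scheme can rest on is proximal gradient descent with row-wise (l2,1) soft thresholding, which enforces a shared nonzero support across the multiple solutions. The sketch below shows only this linear(ized) core under assumed dimensions, not the paper's full two-step nonlinear inversion:

```python
# Proximal gradient for min ||A X - Y||_F^2 + lam * ||X||_{2,1}: the row-wise
# soft threshold zeroes whole rows, giving solutions with a common support.
import numpy as np

def row_soft_threshold(X, tau):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.where(norms > tau, (1 - tau / np.maximum(norms, 1e-12)) * X, 0.0)

def joint_sparse_recover(A, Y, lam=0.1, n_iter=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L for the smooth part
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = row_soft_threshold(X - step * A.T @ (A @ X - Y), step * lam)
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
X_true = np.zeros((100, 3))
X_true[rng.choice(100, 5, replace=False)] = rng.normal(size=(5, 3))
X_hat = joint_sparse_recover(A, A @ X_true, lam=0.05)
```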
{"title":"Sparse reconstruction algorithms for nonlinear microwave imaging","authors":"Hidayet Zaimaga, A. Fraysse, M. Lambert","doi":"10.23919/EUSIPCO.2017.8081300","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081300","url":null,"abstract":"This paper presents a two-step inverse process which allows sparse recovery of the unknown (complex) dielectric profiles of scatterers for nonlinear microwave imaging. The proposed approach is applied to a nonlinear inverse scattering problem arising in microwave imaging and correlated with joint sparsity which gives multiple sparse solutions that share a common nonzero support. Numerical results demonstrate the potential of the proposed two step inversion approach when compared to existing sparse recovery algorithm for the case of small scatterers.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129243524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust object characterization from lensless microscopy videos
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081448
O. Flasseur, L. Denis, C. Fournier, É. Thiébaut
Lensless microscopy, also known as in-line digital holography, is a 3D quantitative imaging method used in various fields including microfluidics and biomedical imaging. To estimate the size and 3D location of microscopic objects in holograms, maximum likelihood methods have been shown to outperform traditional approaches based on 3D image reconstruction followed by 3D image analysis. However, the presence of objects other than the object of interest may bias maximum likelihood estimates. Using experimental videos of holograms, we show that replacing maximum likelihood with a robust estimation procedure reduces this bias. We propose a criterion based on the intersection of confidence intervals in order to automatically set the level that distinguishes inliers from outliers, and we show that this criterion achieves a bias/variance trade-off. Joint analysis of a sequence of holograms using the robust procedure is shown to further improve estimation accuracy.
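The intersection-of-confidence-intervals idea can be sketched in a few lines: grow the trial scale while the confidence intervals of the successive estimates still share a common point, and keep the last scale before they become disjoint (the estimates and interval widths below are placeholders, not the paper's exact procedure):

```python
# Intersection-of-confidence-intervals (ICI) rule for automatic scale selection.
import numpy as np

def ici_select(estimates, sigmas, kappa=2.0):
    """Intervals estimates[i] +/- kappa*sigmas[i], scales ordered small -> large;
    returns the index of the largest scale whose interval still intersects all
    previous ones."""
    lo, hi = -np.inf, np.inf
    best = 0
    for i, (e, s) in enumerate(zip(estimates, sigmas)):
        lo, hi = max(lo, e - kappa * s), min(hi, e + kappa * s)
        if lo > hi:                   # intervals no longer intersect: stop
            break
        best = i
    return best

# Estimates at growing scales: variance shrinks while bias may grow.
est = np.array([1.02, 1.00, 0.98, 0.80, 0.55])
sig = np.array([0.20, 0.10, 0.05, 0.03, 0.02])
print("selected scale index:", ici_select(est, sig))   # -> 2
```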
{"title":"Robust object characterization from lensless microscopy videos","authors":"O. Flasseur, L. Denis, C. Fournier, É. Thiébaut","doi":"10.23919/EUSIPCO.2017.8081448","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081448","url":null,"abstract":"Lensless microscopy, also known as in-line digital holography, is a 3D quantitative imaging method used in various fields including microfluidics and biomedical imaging. To estimate the size and 3D location of microscopic objects in holograms, maximum likelihood methods have been shown to outperform traditional approaches based on 3D image reconstruction followed by 3D image analysis. However, the presence of objects other than the object of interest may bias maximum likelihood estimates. Using experimental videos of holograms, we show that replacing the maximum likelihood with a robust estimation procedure reduces this bias. We propose a criterion based on the intersection of confidence intervals in order to automatically set the level that distinguishes between inliers and outliers. We show that this criterion achieves a bias / variance trade-off. We also show that joint analysis of a sequence of holograms using the robust procedure is shown to further improve estimation accuracy.","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125669357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context incorporation using context-aware language features
Pub Date: 2017-08-28 | DOI: 10.23919/EUSIPCO.2017.8081271
Aggeliki Vlachostergiou, George Marandianos, S. Kollias
This paper investigates the problem of context incorporation into human language systems, and in particular into Sentiment Analysis (SA) systems. How different features improve the performance of such systems when incorporated into them has been discussed in a number of studies; however, a complete picture of their effectiveness remains unexplored. With this work, we attempt to extend the pool of context-aware language features at the sentence level, and to provide the foundations for a concise analysis of the importance of the various types of contextual features, using data from two datasets that differ in type and size: the Movie Review Dataset (MR) and the Fine-grained Sentiment Dataset (FSD).
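Purely as an illustration of what incorporating context at the sentence level can mean (the feature choice below is a generic stand-in, not the paper's feature pool), one can concatenate each sentence's own features with those of its neighbouring sentence:

```python
# Toy context-augmented sentence features: own bag-of-words plus the previous
# sentence's bag-of-words as "context".
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np

doc = ["The plot is thin.", "Still, the acting saves it.", "I would watch it again."]
vec = CountVectorizer().fit(doc)
own = vec.transform(doc).toarray()
prev = np.vstack([np.zeros_like(own[0]), own[:-1]])   # previous-sentence context
features = np.hstack([own, prev])                     # context-augmented features
```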
{"title":"Context incorporation using context — aware language features","authors":"Aggeliki Vlachostergiou, George Marandianos, S. Kollias","doi":"10.23919/EUSIPCO.2017.8081271","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2017.8081271","url":null,"abstract":"This paper investigates the problem of context incorporation into human language systems and particular in Sentiment Analysis (SA) systems. So far, the analysis of how different features, when incorporated into such systems, improve their performance, has been discussed in a number of studies. However, a complete picture of their effectiveness remains unexplored. With this work, we attempt to extend the pool of the context — aware language features at the sentence level and to provide the foundations for a concise analysis of the importance of the various types of contextual features, using data from two different in type and size datasets: the Movie Review Dataset (MR) and the Finegrained Sentiment Dataset (FSD).","PeriodicalId":346811,"journal":{"name":"2017 25th European Signal Processing Conference (EUSIPCO)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128704218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}