
Latest publications from the 2017 25th European Signal Processing Conference (EUSIPCO)

A two-stage subspace trust region approach for deep neural network training
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081215
V. Dudar, G. Chierchia, É. Chouzenoux, J. Pesquet, V. Semenov
In this paper, we develop a novel second-order method for training feed-forward neural nets. At each iteration, we construct a quadratic approximation to the cost function in a low-dimensional subspace. We minimize this approximation inside a trust region through a two-stage procedure: first inside the embedded positive curvature subspace, followed by a gradient descent step. This approach leads to a fast objective function decay, prevents convergence to saddle points, and alleviates the need for manually tuning parameters. We show the good performance of the proposed algorithm on benchmark datasets.
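The two-stage idea above can be sketched on a toy quadratic cost (a minimal illustration under our own assumptions; the helper names, subspace choice, and step sizes are hypothetical and not the authors' implementation):

```python
import numpy as np

def two_stage_subspace_step(grad_fn, hess_vec, x, extra_dirs, radius=1.0, lr=1e-2):
    # Build an orthonormal basis of a low-dimensional subspace
    # (here: the current gradient plus user-supplied directions).
    g = grad_fn(x)
    V, _ = np.linalg.qr(np.column_stack([g] + extra_dirs))
    gs = V.T @ g
    Hs = V.T @ np.column_stack([hess_vec(V[:, j]) for j in range(V.shape[1])])
    w, U = np.linalg.eigh((Hs + Hs.T) / 2)
    # Stage 1: Newton-like trust-region step restricted to the
    # positive-curvature eigendirections of the subspace Hessian.
    pos = w > 1e-10
    p = np.zeros_like(gs)
    if pos.any():
        p = -U[:, pos] @ ((U[:, pos].T @ gs) / w[pos])
        if np.linalg.norm(p) > radius:
            p *= radius / np.linalg.norm(p)
    x = x + V @ p
    # Stage 2: plain gradient descent step.
    return x - lr * grad_fn(x)

# Toy strongly convex quadratic cost f(x) = 0.5 x^T A x - b^T x.
A = np.diag([10.0, 1.0, 0.1])
b = np.array([1.0, 2.0, 3.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x = np.ones(3)
for _ in range(20):
    x = two_stage_subspace_step(lambda y: A @ y - b, lambda v: A @ v, x,
                                [np.array([0.0, 0.0, 1.0])])
```

On a convex quadratic both stages provably decrease the cost; for a real network the subspace would instead collect past update directions and the curvature would come from Hessian-vector products.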
Citations: 5
Performance analysis of several pitch detection algorithms on simulated and real noisy speech data
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081482
D. Jouvet, Y. Laprie
This paper analyses the performance of a large set of pitch detection algorithms on clean and noisy speech data. Two sets of noisy speech data are considered. The first corresponds to simulated noisy data, obtained by adding several types of noise signals at various levels to the clean speech data of the Pitch-Tracking Database from Graz University of Technology (PTDB-TUG). The second, SPEECON, was recorded in several different acoustic environments. The paper discusses the performance of pitch detection algorithms on the simulated noisy data and on the real noisy data of the SPEECON corpus. An analysis of the performance of the best pitch detection algorithm with respect to estimated signal-to-noise ratio (SNR) shows that very similar performance is observed on the real noisy data recorded in public places and on the clean data with added babble noise.
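The simulated-noise setup (adding noise at a prescribed level to clean speech) can be reproduced in a few lines; the helper below is our own illustration, not code from the paper:

```python
import numpy as np

def add_noise_at_snr(clean, noise, snr_db):
    # Scale `noise` so that the clean-to-noise power ratio equals `snr_db`,
    # then add it to the clean signal.
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 100 * np.arange(16000) / 16000)  # stand-in for a speech signal
noise = rng.standard_normal(16000)
noisy = add_noise_at_snr(clean, noise, snr_db=5.0)
# Verify the realized SNR matches the 5 dB target.
realized = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
```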
Citations: 28
Large deviation analysis of the CPD detection problem based on random tensor theory
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081289
Remy Bayer, Philippe Laubatan
The performance, in terms of minimal Bayes' error probability, of detecting a random tensor is a fundamental, understudied, and difficult problem. In this work, we assume that we observe under the alternative hypothesis a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size Nq × R, i.e., for 1 ≤ q ≤ Q, R, Nq → ∞ with R^(1/q)/Nq converging to a finite constant. The detection of the random entries of the core tensor is hard to study since an analytic expression of the error probability is not easily tractable. To mitigate this technical difficulty, the Chernoff Upper Bound (CUB) and the error exponent on the error probability are derived and studied for the considered tensor-based detection problem. These two quantities are related to a key quantity for the considered detection problem, due to its strong link with the moment generating function of the log-likelihood test. The tightest CUB is reached for the value, denoted by s∗, which minimizes the error exponent. To compute it, two methodologies are standard in the literature. The first is based on a costly numerical optimization algorithm. An alternative strategy is to consider the Bhattacharyya Upper Bound (BUB), corresponding to s∗ = 1/2. In this last scenario, the costly numerical optimization step is avoided, but no guarantee exists on the optimality of the BUB. Based on powerful random matrix theory tools, a simple analytical expression of s∗ is provided with respect to the Signal to Noise Ratio (SNR) and for low-rank CPD. Together with a compact expression of the CUB, an easily tractable expression of the tightest CUB and the error exponent are provided and analyzed. A main conclusion of this work is that the BUB is the tightest bound at low SNRs. On the contrary, this property no longer holds at higher SNRs.
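For intuition, the CUB/BUB relationship is easy to verify numerically in the simplest Gaussian detection problem (our own toy example using the standard closed-form Chernoff distance between two univariate Gaussians, not the tensor setting of the paper):

```python
import numpy as np

def chernoff_exponent(s, m0, v0, m1, v1):
    # Chernoff distance k(s) = -log int p0^(1-s) p1^s for two univariate
    # Gaussians N(m0, v0) and N(m1, v1) (standard closed form). The Chernoff
    # upper bound on the Bayes error is 0.5 * exp(-k(s)); s = 1/2 gives the
    # Bhattacharyya upper bound.
    v = (1 - s) * v0 + s * v1
    return (s * (1 - s) * (m1 - m0) ** 2 / (2 * v)
            + 0.5 * np.log(v / (v0 ** (1 - s) * v1 ** s)))

s_grid = np.linspace(0.01, 0.99, 981)
# Equal variances: the exponent is maximized at s* = 1/2, so the BUB is the
# tightest CUB.
k_eq = chernoff_exponent(s_grid, 0.0, 1.0, 1.0, 1.0)
s_star_eq = s_grid[np.argmax(k_eq)]
# Unequal variances: s* moves away from 1/2, and the BUB is no longer tightest.
k_neq = chernoff_exponent(s_grid, 0.0, 1.0, 1.0, 4.0)
s_star_neq = s_grid[np.argmax(k_neq)]
```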
Citations: 1
AUDASCITY: AUdio denoising by adaptive social CosparsITY
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081411
Clément Gaultier, Srdan Kitic, N. Bertin, R. Gribonval
This work introduces a new algorithm, AUDASCITY, and compares its performance to the time-frequency block thresholding algorithm on the ill-posed problem of audio denoising. We propose a heuristic which combines time-frequency structure, cosparsity, and an adaptive scheme to denoise audio signals corrupted with white noise. We report that AUDASCITY outperforms the state of the art in each numerical comparison. While there is still room for perceptual improvements, AUDASCITY's usefulness is shown when it is used as a front-end for a classification task.
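The block-thresholding baseline mentioned above can be sketched as follows (a naive stand-in in the spirit of time-frequency block thresholding, with a crude median noise-floor estimate of our own choosing; this is not the AUDASCITY algorithm):

```python
import numpy as np
from scipy.signal import stft, istft

def block_threshold_denoise(x, fs, block=(2, 4), nperseg=256):
    # Zero out STFT blocks whose average energy sits near the noise floor,
    # estimated here by the median coefficient energy.
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    bf, bt = block
    energies = np.abs(Z) ** 2
    floor = np.median(energies)          # crude noise-floor estimate
    for i in range(0, Z.shape[0], bf):
        for j in range(0, Z.shape[1], bt):
            if np.mean(energies[i:i + bf, j:j + bt]) < 2 * floor:
                Z[i:i + bf, j:j + bt] = 0
    _, xr = istft(Z, fs=fs, nperseg=nperseg)
    return xr[:len(x)]

rng = np.random.default_rng(0)
fs = 8000
tt = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * tt)     # stand-in for an audio signal
noisy = clean + 0.5 * rng.standard_normal(fs)
denoised = block_threshold_denoise(noisy, fs)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Because the tone concentrates its energy in a few time-frequency blocks while white noise spreads evenly, zeroing low-energy blocks removes most of the noise while keeping the signal.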
Citations: 10
Multivariate change detection on high resolution monovariate SAR image using linear time-frequency analysis
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081548
A. Mian, J. Ovarlez, G. Ginolhac, A. Atto
In this paper, we propose a novel methodology for change detection between two monovariate complex SAR images. Linear time-frequency tools are used to recover the spectral and angular diversity of the scatterers present in the scene. This diversity is used in a bi-date change detection framework to develop a detector whose performance is better than that of the classic detector on monovariate SAR images.
Citations: 9
Unmixing multitemporal hyperspectral images accounting for smooth and abrupt variations
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081636
Pierre-Antoine Thouvenin, N. Dobigeon, J. Tourneret
A classical problem in hyperspectral imaging, referred to as hyperspectral unmixing, consists in estimating spectra associated with each material present in an image and their proportions in each pixel. In practice, illumination variations (e.g., due to declivity or complex interactions with the observed materials) and the possible presence of outliers can result in significant changes in both the shape and the amplitude of the measurements, thus modifying the extracted signatures. In this context, sequences of hyperspectral images are expected to be simultaneously affected by such phenomena when acquired on the same area at different time instants. Thus, we propose a hierarchical Bayesian model to simultaneously account for smooth and abrupt spectral variations affecting a set of multitemporal hyperspectral images to be jointly unmixed. This model assumes that smooth variations can be interpreted as the result of endmember variability, whereas abrupt variations are due to significant changes in the imaged scene (e.g., presence of outliers, additional endmembers, etc.). The parameters of this Bayesian model are estimated using samples generated by a Gibbs sampler according to its posterior. Performance assessment is conducted on synthetic data in comparison with state-of-the-art unmixing methods.
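The estimation machinery (drawing posterior samples with a Gibbs sampler) can be illustrated on a toy target distribution; the bivariate normal below is our own stand-in, not the paper's hierarchical unmixing model:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=1000, seed=0):
    # Minimal Gibbs sampler for a zero-mean bivariate normal with
    # correlation `rho`: alternately draw each coordinate from its
    # full conditional distribution.
    rng = np.random.default_rng(seed)
    x = y = 0.0
    sd = np.sqrt(1 - rho ** 2)
    samples = []
    for it in range(n_iter):
        x = rng.normal(rho * y, sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, sd)   # y | x ~ N(rho*x, 1 - rho^2)
        if it >= burn_in:
            samples.append((x, y))
    return np.array(samples)

s = gibbs_bivariate_normal(rho=0.8)
est_rho = np.corrcoef(s[:, 0], s[:, 1])[0, 1]  # should approach 0.8
```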
Citations: 2
Impact of temporal subsampling on accuracy and performance in practical video classification
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081357
F. Scheidegger, L. Cavigelli, Michael Schaffner, A. Malossi, C. Bekas, L. Benini
In this paper we evaluate three state-of-the-art neural-network-based approaches for large-scale video classification, where the computational efficiency of the inference step is of particular importance due to the ever increasing amount of data throughput for video streams. Our evaluation focuses on finding good efficiency vs. accuracy tradeoffs by evaluating different network configurations and parameterizations. In particular, we investigate the use of different temporal subsampling strategies, and show that they can be used to effectively trade computational workload against classification accuracy. Using a subset of the YouTube-8M dataset, we demonstrate that workload reductions in the order of 10×, 50× and 100× can be achieved with accuracy reductions of only 1.3%, 6.2% and 10.8%, respectively. Our results show that temporal subsampling is a simple and generic approach that behaves consistently over the considered classification pipelines and which does not require retraining of the underlying networks.
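Uniform temporal subsampling itself is a one-liner; the sketch below (helper name ours) shows how a 10x workload reduction corresponds to keeping every tenth frame:

```python
def subsample_frames(frames, stride):
    # Keep every `stride`-th frame; per-video inference cost then
    # scales with len(kept) instead of len(frames).
    return frames[::stride]

frames = list(range(3000))          # stand-in for decoded video frames
kept = subsample_frames(frames, 10)
reduction = len(frames) / len(kept)
```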
Citations: 6
Sparse reconstruction algorithms for nonlinear microwave imaging
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081300
Hidayet Zaimaga, A. Fraysse, M. Lambert
This paper presents a two-step inverse process which allows sparse recovery of the unknown (complex) dielectric profiles of scatterers for nonlinear microwave imaging. The proposed approach is applied to a nonlinear inverse scattering problem arising in microwave imaging and is combined with joint sparsity, which yields multiple sparse solutions that share a common nonzero support. Numerical results demonstrate the potential of the proposed two-step inversion approach when compared to existing sparse recovery algorithms for the case of small scatterers.
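A generic sparse recovery routine of the kind referred to here is iterative soft thresholding (ISTA) for the l1-regularized linear problem; the sketch below is a standard linear baseline of our own, not the paper's nonlinear joint-sparsity algorithm:

```python
import numpy as np

def ista(A, y, lam, n_iter=2000):
    # Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient descent.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]     # 3-sparse ground truth
y = A @ x_true                             # noiseless measurements
x_hat = ista(A, y, lam=0.01)
```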
Citations: 2
Robust object characterization from lensless microscopy videos
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081448
O. Flasseur, L. Denis, C. Fournier, É. Thiébaut
Lensless microscopy, also known as in-line digital holography, is a 3D quantitative imaging method used in various fields including microfluidics and biomedical imaging. To estimate the size and 3D location of microscopic objects in holograms, maximum likelihood methods have been shown to outperform traditional approaches based on 3D image reconstruction followed by 3D image analysis. However, the presence of objects other than the object of interest may bias maximum likelihood estimates. Using experimental videos of holograms, we show that replacing the maximum likelihood with a robust estimation procedure reduces this bias. We propose a criterion based on the intersection of confidence intervals in order to automatically set the level that distinguishes between inliers and outliers, and we show that this criterion achieves a bias/variance trade-off. We also show that joint analysis of a sequence of holograms using the robust procedure further improves estimation accuracy.
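The bias reduction from swapping maximum likelihood for a robust estimator is easy to demonstrate on synthetic contaminated data (the numbers below are ours, and a simple median stands in for the paper's intersection-of-confidence-intervals criterion):

```python
import numpy as np

rng = np.random.default_rng(0)
# 'Measurements' of an object parameter, contaminated by outliers from other
# objects in the field of view (a toy analogue of the bias discussed above).
inliers = rng.normal(2.0, 0.1, 180)      # true parameter value: 2.0
outliers = rng.normal(5.0, 0.1, 20)      # spurious detections
data = np.concatenate([inliers, outliers])

ml_estimate = data.mean()                # Gaussian ML estimate: biased by outliers
robust_estimate = np.median(data)        # a simple robust alternative
```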
Citations: 5
Context incorporation using context-aware language features
Pub Date : 2017-08-28 DOI: 10.23919/EUSIPCO.2017.8081271
Aggeliki Vlachostergiou, George Marandianos, S. Kollias
This paper investigates the problem of context incorporation into human language systems, in particular Sentiment Analysis (SA) systems. A number of studies have discussed how different features, when incorporated into such systems, improve their performance; however, a complete picture of their effectiveness remains unexplored. With this work, we attempt to extend the pool of context-aware language features at the sentence level and to provide the foundations for a concise analysis of the importance of the various types of contextual features, using data from two datasets of different type and size: the Movie Review Dataset (MR) and the Finegrained Sentiment Dataset (FSD).
Citations: 1