
Latest publications: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)

Mutual singular spectrum analysis for bioacoustics classification
B. Gatto, J. Colonna, E. M. Santos, E. Nakamura
Bioacoustics signal classification is an important instrument in environmental monitoring, as it provides a means to efficiently acquire information from areas that are often infeasible to approach. To address these challenges, bioacoustics signal classification systems should meet certain requirements, such as low computational resource demands. In this paper, we propose a novel bioacoustics signal classification method that involves no preprocessing and is able to match sets of signals. The advantages of the proposed method include a novel and compact representation for bioacoustics signals that is independent of signal length. In addition, no preprocessing such as segmentation, noise reduction, or syllable extraction is required. We show that our method is theoretically and practically attractive through experimental results on a publicly available bioacoustics signal dataset.
{"title":"Mutual singular spectrum analysis for bioacoustics classification","authors":"B. Gatto, J. Colonna, E. M. Santos, E. Nakamura","doi":"10.1109/MLSP.2017.8168113","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168113","url":null,"abstract":"Bioacoustics signals classification is an important instrument used in environmental monitoring as it gives the means to efficiently acquire information from the areas, which most of the time are unfeasible to approach. To address these challenges, bioacoustics signals classification systems should meet some requirements, such as low computational resources capabilities. In this paper, we propose a novel bioacoustics signals classification method where no preprocessing techniques are involved and which is able to match sets of signals. The advantages of our proposed method include: a novel and compact representation for bioacoustics signals, which is independent of the signals length. In addition, no preprocessing is required, such as segmentation, noise reduction or syllable extraction. We show that our method is theoretically and practically attractive through experimental results employing a publicity available bioacoustics signal dataset.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"325 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76548095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
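The abstract above comes with no code. As a rough, hypothetical illustration of the kind of subspace representation that singular spectrum analysis (SSA) builds from a raw signal, the sketch below embeds a 1-D signal into a Hankel trajectory matrix, keeps its leading left singular vectors, and compares two recordings through the principal angles between their subspaces. The window length, rank, and similarity score are illustrative choices, not the paper's mutual-SSA formulation.

```python
import numpy as np

def ssa_subspace(signal, window=50, rank=5):
    """Embed a 1-D signal into a Hankel trajectory matrix and
    return its leading left singular vectors (an SSA subspace)."""
    k = len(signal) - window + 1                      # number of lagged vectors
    traj = np.column_stack([signal[i:i + window] for i in range(k)])
    u, _, _ = np.linalg.svd(traj, full_matrices=False)
    return u[:, :rank]                                # window x rank orthonormal basis

def subspace_similarity(basis_a, basis_b):
    """Similarity of two subspaces: mean squared cosine of their principal angles."""
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(np.mean(s ** 2))

# Toy usage: two noisy recordings of the same "call" versus a different one.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
call_a1 = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
call_a2 = np.sin(2 * np.pi * 40 * t + 0.3) + 0.1 * rng.standard_normal(t.size)
call_b = np.sin(2 * np.pi * 90 * t) + 0.1 * rng.standard_normal(t.size)

ba1, ba2, bb = (ssa_subspace(x) for x in (call_a1, call_a2, call_b))
print(subspace_similarity(ba1, ba2))  # higher: same frequency content
print(subspace_similarity(ba1, bb))   # lower: different frequency content
```

Note that the representation size depends only on the window length and rank, never on the recording length, which is the property the abstract emphasizes.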
A layer-block-wise pipeline for memory and bandwidth reduction in distributed deep learning
Haruki Mori, Tetsuya Youkawa, S. Izumi, M. Yoshimoto, H. Kawaguchi, Atsuki Inoue
This paper describes a pipelined stochastic gradient descent (SGD) algorithm and its hardware architecture with a distributed-memory structure. In the proposed architecture, a pipeline stage takes charge of multiple layers: a "layer block." The layer-block-wise pipeline requires far fewer weight parameters per worker for network training than conventional multithreading, because weight memory is distributed across the workers assigned to the pipeline stages. The memory capacity of 2.25 GB for the proposed four-stage pipeline is about half of the 3.82 GB required by multithreading when the batch size is 32 in VGG-F. Unlike multithreaded data parallelism, no parameter server for weight updates or shared I/O data bus is necessary; therefore, the memory bandwidth is drastically reduced. The proposed four-stage pipeline only needs memory bandwidths of 36.3 MB and 17.0 MB per batch for the forward-propagation and backpropagation processes, respectively, whereas four-thread multithreading requires a bandwidth of 974 MB overall for the send and receive processes that unify its weight parameters. At a parallelization degree of four, the proposed pipeline maintains training convergence to within a factor of 1.12 of the conventional multithreaded architecture, even though the memory capacity and memory bandwidth are reduced.
{"title":"A layer-block-wise pipeline for memory and bandwidth reduction in distributed deep learning","authors":"Haruki Mori, Tetsuya Youkawa, S. Izumi, M. Yoshimoto, H. Kawaguchi, Atsuki Inoue","doi":"10.1109/MLSP.2017.8168127","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168127","url":null,"abstract":"This paper describes a pipelined stochastic gradient descent (SGD) algorithm and its hardware architecture with a memory distributed structure. In the proposed architecture, a pipeline stage takes charge of multiple layers: a “layer block.” The layer-block-wise pipeline has much less weight parameters for network training than conventional multithreading because weight memory is distributed to workers assigned to pipeline stages. The memory capacity of 2.25 GB for the four-stage proposed pipeline is about half of the 3.82 GB for multithreading when a batch size is 32 in VGG-F. Unlike multithreaded data parallelism, no parameter server for weight update or shared I/O data bus is necessary. Therefore, the memory bandwidth is drastically reduced. The proposed four-stage pipeline only needs memory bandwidths of 36.3 MB and 17.0 MB per batch, respectively, for forward propagation and backpropagation processes, whereas four-thread multithreading requires a bandwidth of 974 MB overall for send and receive processes to unify its weight parameters. At the parallelization degree of four, the proposed pipeline maintains training convergence by a factor of 1.12, compared with the conventional multithreaded architecture although the memory capacity and the memory bandwidth are decreased.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"1 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74535184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
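No code accompanies the abstract. The following sketch only illustrates the idea of partitioning a network's layers into contiguous "layer blocks", one per pipeline stage, so that each worker stores just its own block's weights. The greedy balancing heuristic and the per-layer parameter counts are invented for illustration; they are not the paper's VGG-F configuration or hardware design.

```python
def split_into_blocks(layer_param_counts, n_stages):
    """Greedily assign contiguous layers to n_stages blocks, trying to
    balance the number of weight parameters held by each stage."""
    target = sum(layer_param_counts) / n_stages
    blocks, current, acc = [], [], 0
    for i, p in enumerate(layer_param_counts):
        current.append(i)
        acc += p
        layers_left = len(layer_param_counts) - i - 1
        blocks_left = n_stages - len(blocks) - 1
        # Close the block once it is "full enough", or when we must leave
        # at least one layer for every remaining stage.
        if blocks_left > 0 and (acc >= target or layers_left == blocks_left):
            blocks.append(current)
            current, acc = [], 0
    blocks.append(current)
    return blocks

# Per-layer parameter counts (illustrative values only).
params = [2_000, 60_000, 880_000, 3_500_000, 3_500_000, 37_000_000, 17_000_000, 4_100_000]
for stage, block in enumerate(split_into_blocks(params, n_stages=4)):
    mem = sum(params[i] for i in block)
    print(f"stage {stage}: layers {block}, ~{mem / 1e6:.1f}M weights held locally")
```

Each stage only ever exchanges activations and gradients at its block boundaries with its neighbours, which is why no parameter server or shared weight bus is needed in the architecture the abstract describes.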
A regularized sequential dictionary learning algorithm for fMRI data analysis
A. Seghouane, Asif Iqbal
Sequential dictionary learning algorithms have been successfully applied to a number of image processing problems. In a number of these problems, however, the data used for dictionary learning are structured matrices with notions of smoothness in the column direction. This prior information, which can be translated into a smoothness constraint on the learned dictionary atoms, has not been included in existing dictionary learning algorithms. In this paper, we remedy this situation by proposing a regularized sequential dictionary learning algorithm. The proposed algorithm differs from existing ones in its dictionary update stage: it generates smooth dictionary atoms via the solution of a regularized rank-one matrix approximation problem, where the regularization is introduced via penalization in the dictionary update stage. Experimental results on synthetic and real data illustrating the performance of the proposed algorithm are provided.
{"title":"A regularized sequential dictionary learning algorithm for fmri data analysis","authors":"A. Seghouane, Asif Iqbal","doi":"10.1109/MLSP.2017.8168146","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168146","url":null,"abstract":"Sequential dictionary learning algorithms have been successfully applied to a number of image processing problems. In a number of these problems however, the data used for dictionary learning are structured matrices with notions of smoothness in the column direction. This prior information which can be traduced as a smoothness constraint on the learned dictionary atoms has not been included in existing dictionary learning algorithms. In this paper, we remedy to this situation by proposing a regularized sequential dictionary learning algorithm. The proposed algorithm differs from the existing ones in their dictionary update stage. The proposed algorithm generates smooth dictionary atoms via the solution of a regularized rank-one matrix approximation problem where regularization is introduced via penalization in the dictionary update stage. Experimental results on synthetic and real data illustrating the performance of the proposed algorithm are provided.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"51 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79798684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
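As a minimal sketch of a smoothness-regularized rank-one atom update of the kind the abstract describes, the code below alternates a closed-form penalized least-squares update of one atom with a least-squares update of its codes, using a squared second-difference roughness penalty. The penalty choice, the alternation schedule, and all function names are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def second_difference_penalty(n):
    """Roughness operator Omega = D.T @ D, with D the second-difference matrix."""
    d = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second-difference operator
    return d.T @ d

def smooth_rank_one_update(residual, atom, lam=1.0, n_iter=10):
    """Alternately update one dictionary atom d and its row of codes x,
    minimizing ||E - d x^T||_F^2 + lam * d^T Omega d (smoothness penalty on d)."""
    n = residual.shape[0]
    omega = second_difference_penalty(n)
    d = atom / np.linalg.norm(atom)
    for _ in range(n_iter):
        x = residual.T @ d                        # codes, given a unit-norm atom
        # Penalized least squares for the atom: (||x||^2 I + lam*Omega) d = E x
        d = np.linalg.solve(np.dot(x, x) * np.eye(n) + lam * omega, residual @ x)
        d /= np.linalg.norm(d)                    # keep the atom unit-norm
    return d, residual.T @ d

# Toy usage: recover a smooth atom from noisy rank-one data.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 64)
true_atom = np.exp(-((t - 0.5) ** 2) / 0.02)     # smooth bump, like an HRF-shaped column
codes = rng.standard_normal(40)
E = np.outer(true_atom, codes) + 0.3 * rng.standard_normal((64, 40))
d_hat, x_hat = smooth_rank_one_update(E, rng.standard_normal(64), lam=5.0)
print(abs(np.dot(d_hat, true_atom / np.linalg.norm(true_atom))))  # should be close to 1
```

Setting lam to zero reduces the update to a plain power-iteration-style rank-one approximation, which is how the penalization term distinguishes this sketch from an unregularized sequential update.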
Estimation of interventional effects of features on prediction
Patrick Blöbaum, Shohei Shimizu
The interpretability of prediction mechanisms with respect to the underlying prediction problem is often unclear. While several studies have focused on developing prediction models with meaningful parameters, the causal relationships between the predictors and the actual prediction have not been considered. Here, we connect the underlying causal structure of a data generation process and the causal structure of a prediction mechanism. To achieve this, we propose a framework that identifies the feature with the greatest causal influence on the prediction and estimates the necessary causal intervention of a feature such that a desired prediction is obtained. The general concept of the framework has no restrictions regarding data linearity; however, we focus on an implementation for linear data here. The framework applicability is evaluated using artificial data and demonstrated using real-world data.
{"title":"Estimation of interventional effects of features on prediction","authors":"Patrick Blöbaum, Shohei Shimizu","doi":"10.1109/MLSP.2017.8168175","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168175","url":null,"abstract":"The interpretability of prediction mechanisms with respect to the underlying prediction problem is often unclear. While several studies have focused on developing prediction models with meaningful parameters, the causal relationships between the predictors and the actual prediction have not been considered. Here, we connect the underlying causal structure of a data generation process and the causal structure of a prediction mechanism. To achieve this, we propose a framework that identifies the feature with the greatest causal influence on the prediction and estimates the necessary causal intervention of a feature such that a desired prediction is obtained. The general concept of the framework has no restrictions regarding data linearity; however, we focus on an implementation for linear data here. The framework applicability is evaluated using artificial data and demonstrated using real-world data.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"20 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88589018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
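The abstract focuses on the linear case, so here is a small hypothetical sketch of the core idea for linear data: a linear structural model of the features is combined with a linear predictor, the total causal effect of a feature on the prediction is its direct weight plus the indirect effect through downstream features, and the intervention needed for a desired change in the prediction follows by dividing by that total effect. The two-feature structure and all coefficients are made up for illustration and do not reproduce the paper's estimation framework.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
x1 = rng.standard_normal(n)
a = 0.8
x2 = a * x1 + 0.5 * rng.standard_normal(n)       # causal link x1 -> x2
y = 2.0 * x1 + 1.0 * x2 + 0.1 * rng.standard_normal(n)

# Fit the linear predictor and the structural coefficient by least squares.
X = np.column_stack([x1, x2])
w1, w2 = np.linalg.lstsq(X, y, rcond=None)[0]
a_hat = np.linalg.lstsq(x1[:, None], x2, rcond=None)[0][0]

# Total causal effect of x1 on the prediction: direct (w1) plus indirect (w2 * a).
total_effect = w1 + w2 * a_hat
desired_change = 1.5
print("shift of x1 under do(x1) needed for the desired change:",
      desired_change / total_effect)
```

The point of separating w1 from w2 * a_hat is exactly the distinction the abstract draws: the predictor's weight alone does not tell you what an intervention on the feature would do once its downstream effects are taken into account.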
Adaptive sparse modeling and shifted-Poisson likelihood based approach for low-dose CT image reconstruction
Siqi Ye, S. Ravishankar, Y. Long, J. Fessler
Recent research in computed tomographic imaging has focused on developing techniques that enable reduction of the X-ray radiation dose without loss of quality in the reconstructed images or volumes. While penalized weighted-least squares (PWLS) approaches have been popular for CT image reconstruction, their performance degrades at very low dose levels due to the inaccuracy of the underlying WLS statistical model. We propose a new formulation for low-dose CT image reconstruction based on a likelihood function derived from a shifted-Poisson model and a data-adaptive regularizer using the sparsifying transform model for images. The sparsifying transform is pre-learned from a dataset of patches extracted from CT images. The nonconvex cost function of the proposed penalized-likelihood reconstruction with sparsifying-transform regularizer (PL-ST) is optimized by alternating between a sparse coding step and an image update step. The image update step deploys a series of convex quadratic majorizers that are optimized using a relaxed linearized augmented Lagrangian method with ordered subsets, reducing the number of (expensive) forward and backward projection operations. Numerical experiments show that, at low dose levels, the proposed data-driven PL-ST approach outperforms prior methods employing a nonadaptive edge-preserving regularizer. PL-ST also outperforms the prior PWLS-ST approach at very low X-ray doses.
{"title":"Adaptive sparse modeling and shifted-poisson likelihood based approach for low-dosect image reconstruction","authors":"Siqi Ye, S. Ravishankar, Y. Long, J. Fessler","doi":"10.1109/MLSP.2017.8168124","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168124","url":null,"abstract":"Recent research in computed tomographic imaging has focused on developing techniques that enable reduction of the X-ray radiation dose without loss of quality of the reconstructed images or volumes. While penalized weighted-least squares (PWLS) approaches have been popular for CT image reconstruction, their performance degrades for very low dose levels due to the inaccuracy of the underlying WLS statistical model. We propose a new formulation for low-dose CT image reconstruction based on a shifted-Poisson model based likelihood function and a data-adaptive regularizer using the sparsifying transform model for images. The sparsifying transform is pre-learned from a dataset of patches extracted from CT images. The nonconvex cost function of the proposed penalized-likelihood reconstruction with sparsifying transforms regularizer (PL-ST) is optimized by alternating between a sparse coding step and an image update step. The image update step deploys a series of convex quadratic majorizers that are optimized using a relaxed linearized augmented Lagrangian method with ordered-subsets, reducing the number of (expensive) forward and backward projection operations. Numerical experiments show that for low dose levels, the proposed data-driven PL-ST approach outperforms prior methods employing a nonadaptive edge-preserving regularizer. PL-ST also outperforms prior PWLS-ST approach at very low X-ray doses.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"86 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85993930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
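To make the data-fit term concrete, the sketch below evaluates the negative log-likelihood of a shifted-Poisson measurement model, in which the shifted counts y_i + sigma^2 are treated as Poisson with mean b_i * exp(-[Ax]_i) + sigma^2. The tiny random system matrix and the parameter values are placeholders standing in for a real CT projector, and the sketch deliberately omits the paper's sparsifying-transform regularizer and ordered-subsets optimization.

```python
import numpy as np

def shifted_poisson_nll(x, A, y, b, sigma2):
    """Negative log-likelihood of the shifted-Poisson model
    (y_i + sigma^2) ~ Poisson(b_i * exp(-[Ax]_i) + sigma^2),
    dropping terms that do not depend on x."""
    ybar = b * np.exp(-(A @ x)) + sigma2          # expected shifted counts
    return float(np.sum(ybar - (y + sigma2) * np.log(ybar)))

# Toy usage with a small random matrix standing in for the CT projector.
rng = np.random.default_rng(3)
A = rng.uniform(0, 0.1, size=(200, 50))           # hypothetical projection matrix
x_true = rng.uniform(0, 1, size=50)               # attenuation image (flattened)
b, sigma2 = 1e4, 25.0                             # source intensity, electronic noise variance
y = rng.poisson(b * np.exp(-(A @ x_true)) + sigma2) - sigma2
print(shifted_poisson_nll(x_true, A, y, b, sigma2))
print(shifted_poisson_nll(np.zeros(50), A, y, b, sigma2))  # larger (worse) value
```

In the full method this likelihood replaces the quadratic WLS data term, which is why the approach remains accurate at very low doses where the Gaussian approximation behind WLS breaks down.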
Compact kernel classifiers trained with minimum classification error criterion
Ryoma Tani, Hideyuki Watanabe, S. Katagiri, M. Ohsaki
Unlike Support Vector Machine (SVM), Kernel Minimum Classification Error (KMCE) training frees kernels from training samples and jointly optimizes weights and kernel locations. Focusing on this feature of KMCE training, we propose a new method for developing compact (small-scale but highly accurate) kernel classifiers by applying KMCE training to support vectors (SVs) that are selected (based on the weight vector norm) from the original SVs produced by the Multi-class SVM (MSVM). We evaluate our proposed method in four classification tasks and clearly demonstrate its effectiveness: only a 3% drop in classification accuracy (from 99.1% to 89.1%) with just 10% of the original SVs. In addition, we mathematically reveal that the value of MSVM's kernel weight indicates the geometric relation between a training sample and the margin boundaries.
{"title":"Compact kernel classifiers trained with minimum classification error criterion","authors":"Ryoma Tani, Hideyuki Watanabe, S. Katagiri, M. Ohsaki","doi":"10.1109/MLSP.2017.8168184","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168184","url":null,"abstract":"Unlike Support Vector Machine (SVM), Kernel Minimum Classification Error (KMCE) training frees kernels from training samples and jointly optimizes weights and kernel locations. Focusing on this feature of KMCE training, we propose a new method for developing compact (small scale but highly accurate) kernel classifiers by applying KMCE training to support vectors (SVs) that are selected (based on the weight vector norm) from the original SVs produced by the Multi-class SVM (MSVM). We evaluate our proposed method in four classification tasks and clearly demonstrate its effectiveness: only a 3% drop in classification accuracy (from 99.1 to 89.1%) with just 10% of the original SVs. In addition, we mathematically reveal that the value of MSVM's kernel weight indicates the geometric relation between a training sample and margin boundaries.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"71 1 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83703392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DTW-Approach for uncorrelated multivariate time series imputation
Thi-Thu-Hong Phan, É. Poisson, A. Bigand, A. Lefebvre
Missing data are inevitable in almost all domains of the applied sciences. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low- or un-correlated multivariate time series, under an assumption of recurrent data. The method involves two main steps. First, we find the sub-sequence most similar to the sub-sequence before (resp. after) a gap, based on shape-feature extraction and Dynamic Time Warping algorithms. Second, we fill the gap with the sub-sequence that follows (resp. precedes) the most similar one in the signal containing the missing values. Experimental results show that our approach performs better than several related methods for multivariate time series with low or no correlations and effective information on each signal.
{"title":"DTW-Approach for uncorrelated multivariate time series imputation","authors":"Thi-Thu-Hong Phan, É. Poisson, A. Bigand, A. Lefebvre","doi":"10.1109/MLSP.2017.8168165","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168165","url":null,"abstract":"Missing data are inevitable in almost domains of applied sciences. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of recurrent data. This method involves two main steps. Firstly, we find the most similar sub-sequence to the sub-sequence before (resp. after) a gap based on the shape-features extraction and Dynamic Time Warping algorithms. Secondly, we fill in the gap by the next (resp. previous) sub-sequence of the most similar one on the signal containing missing values. Experimental results show that our approach performs better than several related methods in case of multivariate time series having low/non-correlations and effective information on each signal.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"1 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85671399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
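A bare-bones sketch of the two-step idea in the abstract: compute a DTW distance, slide the query window (the sub-sequence just before the gap) over the rest of the signal to find its most similar window, and copy the segment that follows the best match into the gap. This sketch works on raw samples and searches only before the gap, whereas the paper uses shape features and also considers the sub-sequence after the gap; the window lengths and the toy signal are arbitrary.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def fill_gap(signal, gap_start, gap_len, query_len=30):
    """Fill signal[gap_start:gap_start+gap_len] by (1) taking the query window just
    before the gap, (2) finding the most DTW-similar window earlier in the signal,
    and (3) copying the segment that follows that window into the gap."""
    query = signal[gap_start - query_len:gap_start]
    best_dist, best_pos = np.inf, None
    for pos in range(gap_start - query_len - gap_len):   # search before the gap only
        dist = dtw_distance(query, signal[pos:pos + query_len])
        if dist < best_dist:
            best_dist, best_pos = dist, pos
    filled = signal.copy()
    filled[gap_start:gap_start + gap_len] = signal[best_pos + query_len:
                                                   best_pos + query_len + gap_len]
    return filled

# Toy usage: a periodic signal with a missing chunk (recurrent data assumption).
t = np.arange(600)
clean = np.sin(2 * np.pi * t / 50)
gap_start, gap_len = 400, 40
corrupted = clean.copy()
corrupted[gap_start:gap_start + gap_len] = np.nan
restored = fill_gap(corrupted, gap_start, gap_len)
print(np.max(np.abs(restored[gap_start:gap_start + gap_len]
                    - clean[gap_start:gap_start + gap_len])))  # tiny for this toy signal
```

The recurrent-data assumption is what makes this work: the gap can only be reconstructed from elsewhere in the signal if similar behaviour actually reoccurs.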
Approximate method of variational Bayesian matrix factorization with sparse prior
Ryota Kawasumi, K. Takeda
We study the problem of matrix factorization by the variational Bayes method, under the assumption that the observed matrix is the product of a low-rank dense matrix and a sparse matrix, with additional noise. Under the assumption of a Laplace distribution for the sparse matrix prior, we analytically derive an approximate solution of the matrix factorization by minimizing the Kullback-Leibler divergence between the posterior and the trial function. By evaluating our solution numerically, we also discuss the accuracy of the matrix factorization given by our analytical solution.
{"title":"Approximate method of variational Bayesian matrix factorization with sparse prior","authors":"Ryota Kawasumi, K. Takeda","doi":"10.1109/MLSP.2017.8168156","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168156","url":null,"abstract":"We study the problem of matrix factorization by variational Bayes method, under the assumption that observed matrix is the product of low-rank dense and sparse matrices with additional noise. Under assumption of Laplace distribution for sparse matrix prior, we analytically derive an approximate solution of matrix factorization by minimizing Kullback-Leibler divergence between posterior and trial function. By evaluating our solution numerically, we also discuss accuracy of matrix factorization of our analytical solution.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"18 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88919391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
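The paper's contribution is an analytic variational Bayes approximation, which is not reproduced here. As a much simpler MAP-style stand-in that shows where the sparse (Laplace) prior enters the model Y ~ A B + noise, the sketch below alternates a ridge least-squares update of the dense factor A with a proximal-gradient soft-thresholding update of the sparse factor B; the L1 threshold plays the role the Laplace prior plays in the Bayesian formulation. All settings are illustrative.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the L1 norm (MAP counterpart of a Laplace prior)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def sparse_factorization(Y, rank=3, lam=0.1, n_iter=200, ridge=1e-3):
    """Alternate a ridge update of the dense factor A with a proximal-gradient
    (soft-thresholding) update of the sparse factor B in Y ~ A @ B."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    for _ in range(n_iter):
        # Least-squares update of A given B (small ridge keeps the inverse stable).
        A = Y @ B.T @ np.linalg.inv(B @ B.T + ridge * np.eye(rank))
        # One proximal-gradient step on 0.5*||Y - A B||^2 + lam*||B||_1.
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)
        B = soft_threshold(B - step * A.T @ (A @ B - Y), step * lam)
    return A, B

# Toy usage: recover a sparse factor from noisy low-rank-times-sparse data.
rng = np.random.default_rng(4)
A_true = rng.standard_normal((30, 3))
B_true = rng.standard_normal((3, 40)) * (rng.random((3, 40)) < 0.2)   # mostly zeros
Y = A_true @ B_true + 0.05 * rng.standard_normal((30, 40))
A_hat, B_hat = sparse_factorization(Y)
print("fraction of (near-)zero entries in B_hat:", np.mean(np.abs(B_hat) < 1e-3))
```

Unlike this point estimate, the variational Bayes treatment in the paper yields approximate posteriors for both factors, which is what the analytic derivation is about.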
Missing component restoration for masked speech signals based on time-domain spectrogram factorization
Shogo Seki, H. Kameoka, T. Toda, K. Takeda
While time-frequency masking is a powerful approach to speech enhancement in terms of signal recovery accuracy (e.g., signal-to-noise ratio), it can over-suppress and damage speech components, limiting the performance of subsequent speech processing systems. To overcome this shortcoming, this paper proposes a method for restoring the missing components of time-frequency-masked speech spectrograms based on direct estimation of a time-domain signal. The proposed method allows us to take account of the local interdependencies of the elements of the complex spectrogram, derived from the redundancy of a time-frequency representation, as well as the global structure of the magnitude spectrogram. The effectiveness of the proposed method is demonstrated through experimental evaluation using spectrograms filtered with masks to enhance noisy speech. Experimental results show that the proposed method significantly outperformed conventional methods and has the potential to estimate both phase and magnitude spectra simultaneously and precisely.
{"title":"Missing component restoration for masked speech signals based on time-domain spectrogram factorization","authors":"Shogo Seki, H. Kameoka, T. Toda, K. Takeda","doi":"10.1109/MLSP.2017.8168125","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168125","url":null,"abstract":"While time-frequency masking is a powerful approach for speech enhancement in terms of signal recovery accuracy (e.g., signal-to-noise ratio), it can over-suppress and damage speech components, leading to limited performance of succeeding speech processing systems. To overcome this shortcoming, this paper proposes a method to restore missing components of time-frequency masked speech spectrograms based on direct estimation of a time domain signal. The proposed method allows us to take account of the local interdepen-dencies of the elements of the complex spectrogram derived from the redundancy of a time-frequency representation as well as the global structure of the magnitude spectrogram. The effectiveness of the proposed method is demonstrated through experimental evaluation, using spectrograms filtered with masks to enhance of noisy speech. Experimental results show that the proposed method significantly outperformed conventional methods, and has the potential to estimate both phase and magnitude spectra simultaneously and precisely.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"23 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90155060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Differential mutual information forward search for multi-kernel discriminant-component selection with an application to privacy-preserving classification
Thee Chanyaswad, Mert Al, J. M. Chang, S. Kung
In machine learning, feature engineering is a pivotal stage in building a high-quality predictor. In particular, this work explores the multiple Kernel Discriminant Component Analysis (mKDCA) feature-map and its variants. However, seeking the right subset of kernels for the mKDCA feature-map can be challenging. Therefore, we consider the problem of kernel selection and propose an algorithm based on Differential Mutual Information (DMI) and incremental forward search. DMI serves as an effective metric for selecting kernels, as is theoretically supported by mutual information and Fisher's discriminant analysis. On the other hand, incremental forward search plays a role in removing redundancy among kernels. Finally, we illustrate the potential of the method via an application to privacy-aware classification, and show on three mobile-sensing datasets that selecting an effective set of kernels for mKDCA feature-maps can enhance the utility classification performance while successfully preserving data privacy. Specifically, the results show that the proposed DMI forward search method can perform better than the state of the art and, at much smaller computational cost, can perform as well as the optimal, yet computationally expensive, exhaustive search.
{"title":"Differential mutual information forward search for multi-kernel discriminant-component selection with an application to privacy-preserving classification","authors":"Thee Chanyaswad, Mert Al, J. M. Chang, S. Kung","doi":"10.1109/MLSP.2017.8168177","DOIUrl":"https://doi.org/10.1109/MLSP.2017.8168177","url":null,"abstract":"In machine learning, feature engineering has been a pivotal stage in building a high-quality predictor. Particularly, this work explores the multiple Kernel Discriminant Component Analysis (mKDCA) feature-map and its variants. However, seeking the right subset of kernels for mKDCA feature-map can be challenging. Therefore, we consider the problem of kernel selection, and propose an algorithm based on Differential Mutual Information (DMI) and incremental forward search. DMI serves as an effective metric for selecting kernels, as is theoretically supported by mutual information and Fisher's discriminant analysis. On the other hand, incremental forward search plays a role in removing redundancy among kernels. Finally, we illustrate the potential of the method via an application in privacy-aware classification, and show on three mobile-sensing datasets that selecting an effective set of kernels for mKDCA feature-maps can enhance the utility classification performance, while successfully preserve the data privacy. Specifically, the results show that the proposed DMI forward search method can perform better than the state-of-the-art, and, with much smaller computational cost, can perform as well as the optimal, yet computationally expensive, exhaustive search.","PeriodicalId":6542,"journal":{"name":"2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"23 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73191416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
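The incremental forward search itself is easy to sketch. In the code below, the DMI criterion of the paper is replaced by kernel-target alignment purely for illustration (DMI is not implemented here); the search greedily adds the candidate kernel that most improves the score of the combined Gram matrix and stops when no remaining candidate helps. The RBF bandwidths, labels, and stopping rule are all hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def alignment(K, y):
    """Kernel-target alignment between Gram matrix K and labels y in {-1, +1}."""
    T = np.outer(y, y).astype(float)                  # ideal target kernel
    return float(np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T)))

def forward_kernel_search(kernels, y):
    """Greedy forward search: add the kernel that most improves the score of the
    summed Gram matrix; stop when no candidate improves it."""
    selected, best_score = [], -np.inf
    while True:
        gains = [(alignment(sum(kernels[j] for j in selected + [i]), y), i)
                 for i in range(len(kernels)) if i not in selected]
        if not gains:
            break
        score, idx = max(gains)
        if score <= best_score:
            break
        selected.append(idx)
        best_score = score
    return selected, best_score

# Toy usage: three candidate RBF bandwidths; labels depend on the sample radius.
rng = np.random.default_rng(5)
X = rng.standard_normal((120, 2))
y = np.where(np.linalg.norm(X, axis=1) > 1.0, 1, -1)
kernels = [rbf_kernel(X, g) for g in (0.01, 1.0, 100.0)]
selected, score = forward_kernel_search(kernels, y)
print("selected kernel indices:", selected, "alignment:", round(score, 3))
```

The skeleton mirrors the incremental search the abstract describes: redundancy is removed implicitly because a kernel that adds nothing to the already-selected combination never improves the score.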