Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287661
Hesam Araghi, M. Babaie-zadeh, S. Achard
Graph signal processing (GSP) has found many applications in different domains. The underlying graph is not available in all applications and must then be learned from the data. For more complicated data, the graph may also change over time, so a dynamic graph has to be estimated. In this paper, a new dynamic graph learning algorithm, called dynamic K-graphs, is proposed. The algorithm both estimates the time-varying graph and clusters the temporal graph signals. Numerical experiments demonstrate its high performance compared with other algorithms.
"Dynamic K-Graphs: an Algorithm for Dynamic Graph Learning and Temporal Graph Signal Clustering," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 2195-2199.
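A minimal sketch of the K-graphs-style alternation behind such algorithms, under a static simplification: each signal is assigned to the cluster whose Laplacian makes it smoothest, and a graph is re-learned per cluster from its assigned signals. The Gaussian-kernel graph-learning step, the smoothness criterion, and all parameters are illustrative stand-ins, not the authors' dynamic formulation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def learn_graph(X, sigma=1.0):
    """Toy graph learning: Gaussian-kernel adjacency over node rows.
    X has shape (n_nodes, n_signals); each row collects one node's samples."""
    W = np.exp(-squareform(pdist(X)) ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian

def k_graphs(X, K=2, n_iter=10, rng=np.random.default_rng(0)):
    """K-graphs-style alternation: cluster signals <-> learn one graph per cluster."""
    n_nodes, n_signals = X.shape
    labels = rng.integers(0, K, size=n_signals)
    for _ in range(n_iter):
        # learn a Laplacian from the signals currently assigned to each cluster
        Ls = [learn_graph(X[:, labels == k]) if np.any(labels == k)
              else np.eye(n_nodes) for k in range(K)]
        # reassign each signal to the graph on which it is smoothest (x^T L x)
        smooth = np.stack([np.einsum('ij,jk,ik->i', X.T, L, X.T) for L in Ls])
        labels = smooth.argmin(axis=0)
    return labels, Ls

X = np.random.default_rng(1).standard_normal((20, 120))   # 20 nodes, 120 graph signals
labels, laplacians = k_graphs(X, K=3)
```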
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287410
Bong-Ki Lee
This paper presents a speech enhancement algorithm that combines a DNN classification model with a noise-classification-based ensemble. Although various single-channel speech enhancement algorithms based on deep learning have recently been developed, because they are optimized to reduce the mean square error they cannot accurately estimate the actual target values in a regression task, which results in muffled enhanced speech. This paper therefore proposes a DNN classification-based single-channel speech enhancement algorithm to overcome the disadvantages of existing DNN regression-based approaches. To recast the DNN regression task as a classification task, gain mask templates are predefined by applying k-means clustering to the gain masks. The feature vector extracted from the microphone input signal is fed to the DNN, which then selects an optimal gain mask from the templates. Furthermore, gain mask templates are defined for each noise environment using DNN-based noise classification to cover various noise environments, and an ensemble structure based on the probabilities from the noise classification stage is used.
"DNN Classification Model-based Speech Enhancement Using Mask Selection Technique," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 436-440.
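A hedged sketch of the template-construction idea on toy data: ideal ratio masks are computed from paired clean/noisy STFTs and clustered with k-means into a gain mask codebook, and a nearest-template lookup stands in for the DNN classifier and noise-classification ensemble described in the paper. The mask definition and all parameters are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft
from sklearn.cluster import KMeans

def gain_mask_templates(clean, noisy, fs=16000, n_templates=32, nperseg=512):
    """Cluster per-frame ideal ratio masks into a small template codebook."""
    _, _, C = stft(clean, fs=fs, nperseg=nperseg)
    _, _, N = stft(noisy, fs=fs, nperseg=nperseg)
    irm = np.clip(np.abs(C) / (np.abs(N) + 1e-8), 0.0, 1.0)   # per bin/frame mask
    frames = irm.T                                            # one mask vector per frame
    km = KMeans(n_clusters=n_templates, n_init=10, random_state=0).fit(frames)
    return km.cluster_centers_                                # (n_templates, n_bins)

def enhance(noisy, templates, fs=16000, nperseg=512):
    """Stand-in for the DNN: per frame, pick the template nearest to a crude
    mask estimate and apply it to the noisy STFT."""
    _, _, N = stft(noisy, fs=fs, nperseg=nperseg)
    mag = np.abs(N).T
    rough = mag / (mag.max(axis=1, keepdims=True) + 1e-8)     # crude per-frame mask guess
    idx = np.argmin(((rough[:, None, :] - templates[None]) ** 2).sum(-1), axis=1)
    _, x_hat = istft(templates[idx].T * N, fs=fs, nperseg=nperseg)
    return x_hat

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)                # 1 s of surrogate "speech"
noisy = clean + 0.5 * rng.standard_normal(16000)
templates = gain_mask_templates(clean, noisy)
enhanced = enhance(noisy, templates)
```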
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287332
Jinghua Li, W. Xia
In this work, we consider a distributed reduced-rank beam coordination problem over array networks. We develop an inherently adaptive combination scheme based on a combination matrix for the beam coordination problem, and propose two efficient adaptive implementation strategies for diffusion reduced-rank beamforming. Illustrative simulations validate that the proposed distributed reduced-rank adaptive algorithms remarkably improve the convergence speed compared with existing techniques when only a small number of samples is available.
"Beam Coordination Via Diffusion Reduced-Rank Adaptation Over Array Networks," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 1822-1826.
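The abstract builds on diffusion adaptation over a network; the sketch below shows the generic adapt-then-combine step with a Metropolis combination matrix, using plain diffusion LMS as a stand-in for the paper's reduced-rank beamforming updates. All interfaces and parameters are assumptions.

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis combination matrix for an undirected 0/1 network adjacency."""
    deg = adj.sum(axis=1)
    A = np.zeros(adj.shape, dtype=float)
    for k in range(adj.shape[0]):
        for l in range(adj.shape[0]):
            if adj[k, l] and k != l:
                A[k, l] = 1.0 / (1 + max(deg[k], deg[l]))
        A[k, k] = 1.0 - A[k].sum()
    return A

def diffusion_lms(U, d, adj, mu=0.01, n_iter=500):
    """Adapt-then-combine diffusion LMS: each node adapts on its own data, then
    averages its neighbors' intermediate estimates through the combination matrix."""
    n_nodes, T, M = U.shape                  # U[k, i] is node k's regressor at time i
    A = metropolis_weights(adj)
    w = np.zeros((n_nodes, M))
    for i in range(n_iter):
        t = i % T
        psi = np.empty_like(w)
        for k in range(n_nodes):             # adaptation step (local LMS)
            e = d[k, t] - U[k, t] @ w[k]
            psi[k] = w[k] + mu * e * U[k, t]
        w = A @ psi                          # combination step
    return w

rng = np.random.default_rng(0)
adj = (np.ones((5, 5)) - np.eye(5)).astype(int)        # fully connected 5-node network
w_true = rng.standard_normal(4)
U = rng.standard_normal((5, 200, 4))
d = U @ w_true + 0.01 * rng.standard_normal((5, 200))
print(diffusion_lms(U, d, adj, mu=0.05)[0])            # each node approaches w_true
```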
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287339
Paula Štancelová, E. Sikudová, Z. Černeková
In recent years, computer vision research has focused on extracting features from 3D data. In this work, we reviewed methods for extracting local features from objects represented as point clouds. The goal of the work was to provide a theoretical overview and evaluation of selected point cloud detectors and descriptors. We performed an experimental assessment of the repeatability and computational efficiency of the individual methods on the well-known Stanford 3D Scanning Repository, with the aim of identifying a method that is computationally efficient at finding good corresponding points between two point clouds. We also compared the efficiency of detector-descriptor pairings, showing that the choice of descriptor affects the performance of object recognition based on descriptor matching. We summarized the results in graphs and described them with respect to the individual tested properties of the methods.
"3D Feature Detector-Descriptor Pair Evaluation on Point Clouds," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 590-594.
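A small sketch of a keypoint repeatability measure of the kind typically used in such evaluations (the tolerance and interfaces are assumptions): keypoints detected in one scan are mapped through the known ground-truth rigid transform and matched to the other scan's keypoints with a KD-tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def repeatability(kp_src, kp_dst, R, t, eps=0.01):
    """Fraction of source keypoints that, after the ground-truth rigid transform
    (R, t), have a destination keypoint within distance eps.

    kp_src, kp_dst : (N, 3) and (M, 3) arrays of detected keypoint coordinates
    R, t           : ground-truth rotation (3, 3) and translation (3,)
    """
    warped = kp_src @ R.T + t
    dist, _ = cKDTree(kp_dst).query(warped, k=1)
    return float(np.mean(dist < eps))

# Tiny usage example with a synthetic rigid motion
rng = np.random.default_rng(0)
kp = rng.uniform(size=(200, 3))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
t = np.array([0.1, -0.05, 0.02])
print(repeatability(kp, kp @ R.T + t, R, t))   # -> 1.0 for perfectly repeated keypoints
```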
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287536
M. Salman Asif, C. Hegde
Phase retrieval, or signal recovery from magnitude-only measurements, is a challenging signal processing problem. Recent progress has revealed that measurement- and computational-complexity challenges can be alleviated if the underlying signal belongs to certain low-dimensional model families, including sparsity, low-rank, or neural generative models. However, the remaining bottleneck in most of these approaches is the requirement of a carefully chosen initial signal estimate. In this paper, we assume that a portion of the signal is already known a priori as "side information" (this assumption is natural in applications such as holographic coherent diffraction imaging). When such side information is available, we show that a much simpler initialization can provably succeed with considerably reduced costs. We supplement our theory with a range of simulation results.
"The Benefits of Side Information for Structured Phase Retrieval," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 775-778.
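A hedged illustration of how side information can replace a carefully chosen initialization: the known block of the signal is fixed, the rest starts at zero, and amplitude-flow-style gradient steps refine only the unknown entries. The measurement model, step size, and known/unknown split are assumptions, not the authors' construction.

```python
import numpy as np

def phase_retrieval_with_side_info(A, y, known, mask, n_iter=500, lr=0.2):
    """Recover x from y = |A x| when x[mask] = known is given a priori.

    A     : (m, n) real measurement matrix
    y     : (m,) magnitude-only measurements
    known : values of the known coordinates
    mask  : boolean array, True where the signal is known
    """
    x = np.zeros(A.shape[1])
    x[mask] = known                       # side information fixes part of the signal
    for _ in range(n_iter):
        Ax = A @ x
        # amplitude-flow residual: match |Ax| to y, keeping the current sign of Ax
        grad = A.T @ (Ax - y * np.sign(Ax)) / len(y)
        x = x - lr * grad
        x[mask] = known                   # re-impose the known block at every step
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 256, 16
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_true)
mask = np.zeros(n, dtype=bool); mask[:k] = True
x_hat = phase_retrieval_with_side_info(A, y, x_true[:k], mask)
# the known block resolves the global sign ambiguity, so the error can be checked directly
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```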
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287635
Kyohei Suzuki, M. Yukawa
We propose a robust approach to recovering jointly sparse signals in the presence of outliers. We formulate the recovery task as a minimization problem involving three terms: (i) the minimax concave (MC) loss function, (ii) the MC penalty function, and (iii) the squared Frobenius norm. The MC-based loss and penalty functions enhance robustness and group sparsity, respectively, while the squared Frobenius norm induces convexity. The problem is solved, via reformulation, by the primal-dual splitting method, for which the convergence condition is derived. Numerical examples show that the proposed approach enjoys remarkable outlier robustness.
"Robust Jointly-Sparse Signal Recovery Based on Minimax Concave Loss Function," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 2070-2074.
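For reference, the standard scalar minimax concave penalty (Zhang's MCP) is reproduced below; the paper works with group-sparse and loss-function counterparts of this idea, so its exact matrix formulation may differ.

```latex
\[
\rho_{\lambda,\gamma}(u) \;=\;
\begin{cases}
\lambda |u| - \dfrac{u^{2}}{2\gamma}, & |u| \le \gamma\lambda,\\[6pt]
\dfrac{\gamma\lambda^{2}}{2}, & |u| > \gamma\lambda,
\end{cases}
\qquad \lambda,\gamma > 0 .
\]
```

The penalty behaves like the l1 norm near the origin but saturates for large arguments, which is what limits the bias of large coefficients and, when used as a loss, the influence of outliers.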
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287316
Gökhan Gül, M. Baßler
An iterative algorithm is derived for multilevel quantization of sensor observations in distributed sensor networks, where each sensor transmits a summary of its observation to the fusion center and the fusion center makes the final decision. The proposed scheme is composed of a person-by-person optimum quantization at each sensor and a Gaussian approximation to the distribution of the test statistic at the fusion center. The complexity of the algorithm is linear for both identically and non-identically distributed independent sensors. Experimental results indicate that the proposed scheme is promising in comparison with the current state-of-the-art.
"Fast Multilevel Quantization for Distributed Detection Based on Gaussian Approximation," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 2433-2437.
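A sketch of the Gaussian-approximation step at the fusion center, under simplifying assumptions (Gaussian shift-in-mean observations and a fusion statistic that simply sums the quantizer levels): per-level probabilities give the mean and variance of the fusion statistic under each hypothesis, from which false-alarm and detection probabilities follow. The person-by-person quantizer optimization of the paper is omitted.

```python
import numpy as np
from scipy.stats import norm

def level_probs(thresholds, mean):
    """P(quantizer outputs level l) for a Gaussian N(mean, 1) observation,
    with L levels defined by -inf < t_1 < ... < t_{L-1} < inf."""
    edges = np.concatenate(([-np.inf], thresholds, [np.inf]))
    return np.diff(norm.cdf(edges, loc=mean))

def gaussian_approx_detection(thresholds_per_sensor, theta, alpha=0.01):
    """Approximate detection probability of a fusion rule that sums the integer
    quantizer levels of independent sensors, using a Gaussian (CLT) approximation
    of the fusion statistic under each hypothesis."""
    mu, var = {0: 0.0, 1: 0.0}, {0: 0.0, 1: 0.0}
    for thr in thresholds_per_sensor:
        levels = np.arange(len(thr) + 1, dtype=float)
        for h, m in ((0, 0.0), (1, theta)):
            p = level_probs(np.asarray(thr), m)
            mu[h] += levels @ p
            var[h] += levels ** 2 @ p - (levels @ p) ** 2
    # fusion threshold set for false-alarm rate alpha under the H0 approximation
    tau = mu[0] + norm.ppf(1 - alpha) * np.sqrt(var[0])
    return norm.sf((tau - mu[1]) / np.sqrt(var[1]))    # approximate P_D

# 20 identical 4-level sensors, unit mean shift under H1
print(gaussian_approx_detection([[-0.5, 0.0, 0.5]] * 20, theta=1.0))
```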
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287475
R. A. Costa, M. Eisencraft
We present a discrete-time linear recursive filter representation for a piecewise-linear map that generates chaotic signals. This representation can be used to easily derive analytical formulas for the power spectral density of chaotic signals, providing useful results for chaos-based communication systems and signal processing. Numerical simulations validate the theoretical results.
"Chaotic signals representation and spectral characterization using linear discrete-time filters," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 2235-2238.
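A numerical companion, assuming a skew tent map as the piecewise-linear map: generate an orbit and estimate its power spectral density with Welch's method, against which analytical PSD expressions such as those derived in the paper could be compared. The map choice and parameters are assumptions.

```python
import numpy as np
from scipy.signal import welch

def skew_tent_orbit(alpha=0.6, x0=0.1234, n=2**15):
    """Orbit of the skew tent map on [0, 1]:
    x <- x/alpha if x < alpha, else (1 - x)/(1 - alpha)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        xi = x[i - 1]
        x[i] = xi / alpha if xi < alpha else (1.0 - xi) / (1.0 - alpha)
    return x

s = skew_tent_orbit()
s = s - s.mean()                        # remove the DC component before spectral analysis
f, Pxx = welch(s, nperseg=4096)         # numerical PSD estimate (normalized frequency)
print(f[:5], Pxx[:5])
```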
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287597
Mario Banuelos, Omar DeGuchy, Suzanne S. Sindi, Roummel F. Marcia
The human genome, composed of nucleotides, is represented by a long sequence of the letters A, C, G, and T. Typically, organisms of the same species have similar genomes that differ only in a few sequences of varying lengths at varying positions. These differences appear as regions where letters are inserted, deleted, or inverted. Such anomalies are known as structural variants (SVs) and are difficult to detect. The standard approach to identifying SVs involves mapping fragments of DNA from the genome of interest to a reference genome and comparing the two. This process is usually complicated by errors produced in both the sequencing and mapping processes, which can increase the number of false positive detections. In this work, we propose two different approaches for reducing the number of false positives. We focus on refining deletions detected by the popular SV tool delly. In particular, we simultaneously consider sequencing data from a parent and a child, using a neural network and gradient boosting as post-processing steps. We compare the performance of each method on simulated and real parent-child data and show that including related individuals in the training data greatly improves the ability to detect true SVs.
"Related Inference: A Supervised Learning Approach to Detect Signal Variation in Genome Data," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 1215-1219.
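A hedged sketch of the gradient-boosting post-processing idea on synthetic candidates: each putative deletion carries a small feature vector built from both the child's and the parent's sequencing evidence (the features and their distributions are placeholders, not the paper's), and a classifier is trained to separate true variants from false positives.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, size=n)          # 1 = true deletion, 0 = false positive

def evidence(is_true, strength):
    """Hypothetical per-candidate evidence: split-read count, discordant-pair
    count, and local coverage drop, weaker when the call is a false positive."""
    base = strength * is_true
    return np.column_stack([
        rng.poisson(3 + 6 * base),           # split reads
        rng.poisson(2 + 5 * base),           # discordant read pairs
        rng.normal(0.2 + 0.5 * base, 0.2),   # relative coverage drop
    ])

X = np.hstack([evidence(labels, 1.0),        # child evidence
               evidence(labels, 0.7)])       # parent evidence (weaker but correlated)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC with parent+child features:",
      roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```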
Pub Date: 2021-01-24 | DOI: 10.23919/Eusipco47968.2020.9287593
S. Thé, É. Thiébaut, L. Denis, F. Soulez
Many inverse problems in imaging require estimating the parameters of a bi-linear model, e.g., the crisp image and the blur in blind deconvolution. In all these models there is a scaling indetermination: multiplying one term by an arbitrary factor can be compensated for by dividing the other by the same factor. To solve such inverse problems and identify each term of the bi-linear model, reconstruction methods rely on prior models that enforce some form of regularity. If these regularization terms satisfy a homogeneity property, the optimal scaling with respect to the regularization functions can be determined. This has two benefits: hyper-parameter tuning is simplified (a single parameter needs to be chosen) and the computation of the maximum a posteriori estimate is more efficient. Illustrations on a blind deconvolution problem are given, with an unsupervised strategy to tune the hyper-parameter.
"Exploiting the scaling indetermination of bi-linear models in inverse problems," 2020 28th European Signal Processing Conference (EUSIPCO), pp. 2358-2362.
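A worked version of the optimal-scaling argument, under the assumption that the two regularizers $R_1$ and $R_2$ are positively homogeneous of degrees $p$ and $q$ with weights $\mu_1$ and $\mu_2$:

```latex
\[
f(\alpha) \;=\; \mu_1 R_1(\alpha x) + \mu_2 R_2(y/\alpha)
           \;=\; \alpha^{p}\,\mu_1 R_1(x) + \alpha^{-q}\,\mu_2 R_2(y),
\qquad \alpha > 0 ,
\]
\[
f'(\alpha^\star) = 0
\;\Longrightarrow\;
\alpha^\star = \left( \frac{q\,\mu_2 R_2(y)}{p\,\mu_1 R_1(x)} \right)^{\!1/(p+q)},
\qquad
f(\alpha^\star) = C_{p,q}\,
\bigl(\mu_1 R_1(x)\bigr)^{\frac{q}{p+q}}
\bigl(\mu_2 R_2(y)\bigr)^{\frac{p}{p+q}},
\quad
C_{p,q} = \Bigl(\tfrac{q}{p}\Bigr)^{\frac{p}{p+q}} + \Bigl(\tfrac{p}{q}\Bigr)^{\frac{q}{p+q}} .
\]
```

Since the data-fit term of a bi-linear model is unchanged when x is scaled by alpha and y by 1/alpha, minimizing over alpha collapses the two regularization weights into the single effective weight above, which is consistent with the abstract's point that only one hyper-parameter needs to be tuned.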