P-Score: Performance Aligned Normalization and an Evaluation in Score-Level Multi-Biometric Fusion
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553553
N. Damer, F. Boutros, Philipp Terhörst, Andreas Braun, Arjan Kuijper
Normalization is an important step in many fusion, classification, and decision-making applications. Previous normalization approaches aimed at bringing values from different sources into a common range or to common distribution characteristics. In this work we propose a new normalization approach that transfers values into a normalized space in which their relative performance in binary decision making is aligned across their whole range. Multi-biometric verification is a typical problem in which information from different sources is normalized and fused to make a binary decision, and it is therefore a good platform to evaluate the proposed normalization. We conducted an evaluation on two publicly available databases and showed that the proposed normalization consistently outperformed state-of-the-art and best-practice approaches, e.g. by reducing the false rejection rate at a 0.01% false acceptance rate by 60-75% compared to the widely used z-score normalization under sum-rule fusion.
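For context, the z-score normalization and sum-rule fusion baseline named in the abstract can be sketched in a few lines (a minimal illustration with made-up scores and an illustrative threshold, not values from the paper):

    import numpy as np

    def z_score_normalize(scores, ref_scores=None):
        # Map scores to zero mean and unit variance using statistics from a
        # reference set (in practice, a development/training score set).
        ref = scores if ref_scores is None else ref_scores
        return (scores - np.mean(ref)) / np.std(ref)

    def sum_rule_fusion(score_lists):
        # Fuse the per-source normalized scores by simple summation.
        return np.sum(np.vstack(score_lists), axis=0)

    # Toy example: two comparators with very different native score ranges.
    face_scores = np.array([0.62, 0.91, 0.33, 0.75])
    voice_scores = np.array([12.0, 48.5, 7.2, 30.1])
    fused = sum_rule_fusion([z_score_normalize(face_scores),
                             z_score_normalize(voice_scores)])
    decision = fused > 0.0   # threshold would be chosen on a development set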
{"title":"P-Score: Performance Aligned Normalization and an Evaluation in Score-Level Multi-Biometric Fusion","authors":"N. Damer, F. Boutros, Philipp Terhörst, Andreas Braun, Arjan Kuijper","doi":"10.23919/EUSIPCO.2018.8553553","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553553","url":null,"abstract":"Normalization is an important step for different fusion, classification, and decision making applications. Previous normalization approaches considered bringing values from different sources into a common range or distribution characteristics. In this work we propose a new normalization approach that transfers values into a normalized space where their relative performance in binary decision making is aligned across their whole range. Multi-biometric verification is a typical problem where information from different sources are normalized and fused to make a binary decision and therefore a good platform to evaluate the proposed normalization. We conducted an evaluation on two publicly available databases and showed that the normalization solution we are proposing consistently outperformed state-of-the-art and best practice approaches, e.g. by reducing the false rejection rate at 0.01% false acceptance rate by 60-75% compared to the widely used z-score normalization under the sum-rule fusion.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133962426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimized Binary Hashing Codes Generated by Siamese Neural Networks for Image Retrieval
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553380
Abin Jose, Timo Horstmann, J. Ohm
In this paper, we use a Siamese-neural-network-based hashing method to generate binary codes with certain properties. The training architecture takes a pair of images as input. The loss function trains the network so that similar images are mapped to similar binary codes and dissimilar images to different binary codes. We add additional constraints in the form of loss functions that enforce certain properties on the binary codes. The motivation for the first constraint is the maximization of entropy by generating binary codes with the same number of 1s and 0s. The second constraint minimizes the mutual information between binary codes by generating orthogonal binary codes for dissimilar images. For this, we introduce an orthogonality criterion for binary codes consisting of the binary values 0 and 1. Furthermore, we evaluate properties such as the mutual information and entropy of the binary codes generated with the additional constraints. We also analyze the influence of different bit sizes on those properties. The retrieval performance is evaluated by measuring Mean Average Precision (MAP) values, and the results are compared with other state-of-the-art approaches.
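A minimal sketch of how bit balance (entropy) and orthogonality of {0, 1} codes might be measured; the mapping of {0, 1} to {-1, +1} is one plausible reading for illustration, not necessarily the criterion defined in the paper:

    import numpy as np

    def bit_balance(code):
        # Fraction of ones in a binary code; 0.5 maximizes per-bit entropy.
        return float(np.mean(code))

    def orthogonality(code_a, code_b):
        # Normalized inner product after mapping {0, 1} to {-1, +1};
        # a value of 0 means the two codes are treated as orthogonal.
        a = 2.0 * np.asarray(code_a) - 1.0
        b = 2.0 * np.asarray(code_b) - 1.0
        return float(a @ b) / len(a)

    c1 = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    c2 = np.array([0, 0, 1, 0, 1, 1, 1, 0])
    print(bit_balance(c1))        # 0.5 -> balanced numbers of 1s and 0s
    print(orthogonality(c1, c2))  # 0.0 -> orthogonal under the +/-1 mapping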
{"title":"Optimized Binary Hashing Codes Generated by Siamese Neural Networks for Image Retrieval","authors":"Abin Jose, Timo Horstmann, J. Ohm","doi":"10.23919/EUSIPCO.2018.8553380","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553380","url":null,"abstract":"In this paper, we use a Siamese Neural Network based hashing method for generating binary codes with certain properties. The training architecture takes a pair of images as input. The loss function trains the network so that similar images are mapped to similar binary codes and dissimilar images to different binary codes. We add additional constraints in form of loss functions that enforce certain properties on the binary codes. The main motivation of incorporating the first constraint is maximization of entropy by generating binary codes with the same number of 1s and Os. The second constraint minimizes the mutual information between binary codes by generating orthogonal binary codes for dissimilar images. For this, we introduce orthogonality criterion for binary codes consisting of the binary values 0 and 1. Furthermore, we evaluate the properties such as mutual information and entropy of the binary codes generated with the additional constraints. We also analyze the influence of different bit sizes on those properties. The retrieval performance is evaluated by measuring Mean Average Precision (MAP) values and the results are compared with other state-of-the-art approaches.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133979667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sampling and Reconstruction of Band-limited Graph Signals using Graph Syndromes
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553557
Achanna Anil Kumar, N. Narendra, M. Chandra, Kriti Kumar
The problem of sampling and reconstruction of band-limited graph signals is considered in this paper. A new sampling and reconstruction method based on the idea of error and erasure correction is proposed. We view the sampling process as a removal of nodes, akin to introducing erasures, due to which the graph syndromes of a sampled signal take on significant values that would otherwise be minuscule for a band-limited signal. A reconstruction method that makes use of these significant values in the graph syndromes is described, and the corresponding necessary and sufficient conditions for unique recovery, along with some key properties, are provided. Additionally, this method allows for robust reconstruction, i.e., reconstruction in the presence of a few corrupted sampled nodes, and a method based on a weighted $\ell_{1}$-norm is described. Simulation results demonstrate the efficiency of the method, which shows better mean squared error performance than existing methods.
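The standard band-limited graph-signal sampling setup that the paper builds on can be illustrated as follows (least-squares reconstruction on a toy path graph; this is not the authors' syndrome-based decoder, and the node choice is an assumption for the example):

    import numpy as np

    # Small path graph: Laplacian L = D - A, graph Fourier basis = eigenvectors of L.
    n, k = 8, 3                          # number of nodes, bandwidth
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    L = np.diag(A.sum(axis=1)) - A
    _, U = np.linalg.eigh(L)

    # Band-limited signal: only the k lowest graph frequencies are active.
    rng = np.random.default_rng(0)
    x = U[:, :k] @ rng.standard_normal(k)

    # Sample a subset of nodes and reconstruct by least squares on U[:, :k].
    sampled = [0, 3, 6, 7]               # U[sampled, :k] must have full column rank
    coeffs, *_ = np.linalg.lstsq(U[sampled, :k], x[sampled], rcond=None)
    x_hat = U[:, :k] @ coeffs
    print(np.max(np.abs(x - x_hat)))     # ~0 up to numerical precision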
{"title":"Sampling and Reconstruction of Band-limited Graph Signals using Graph Syndromes","authors":"Achanna Anil Kumar, N. Narendra, M. Chandra, Kriti Kumar","doi":"10.23919/EUSIPCO.2018.8553557","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553557","url":null,"abstract":"The problem of sampling and reconstruction of band-limited graph signals is considered in this paper. A new sampling and reconstruction method based on the idea of error and erasure correction is proposed. We visualize the process of sampling as removal of nodes akin to introducing erasures, due to which the graph syndromes of a sampled signal gives rise to significant values, which otherwise would be minuscule for a band-limited signal. A reconstruction method by making use of these significant values in the graph syndromes is described and correspondingly the necessary and sufficient conditions for unique recovery and some key properties is provided. Additionally, this method allows for robust reconstruction i.e., reconstruction in the presence of few corrupted sampled nodes and a method based on weighted $ell_{1}$ - norm is described. Simulation results are provided to demonstrate the efficiency of the method which shows better mean squared error performance compared to existing methods.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133997668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensing Matrix Sensitivity to Random Gaussian Perturbations in Compressed Sensing
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553575
A. Lavrenko, F. Roemer, G. D. Galdo, R. Thomä
In compressed sensing, the choice of the sensing matrix plays a crucial role: it defines the required hardware effort and determines the achievable recovery performance. Recent studies indicate that by optimizing a sensing matrix, one can potentially improve system performance compared to random ensembles. In this work, we analyze the sensitivity of a sensing matrix design to random perturbations, e.g., caused by hardware imperfections, with respect to the total (average) matrix coherence. We derive an exact expression for the average deterioration of the total coherence in the presence of Gaussian perturbations as a function of the perturbations' variance and the sensing matrix itself. We then numerically evaluate its impact on the recovery performance.
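One common way to measure a total (average) coherence of a sensing matrix is the sum of squared off-diagonal Gram entries of the column-normalized matrix; this definition is an assumption for illustration (the paper's exact definition and closed-form expression are not reproduced here), and the sketch only evaluates the change under a Gaussian perturbation empirically:

    import numpy as np

    def total_coherence(Phi):
        # Sum of squared off-diagonal Gram entries of the column-normalized
        # matrix (assumed definition of 'total coherence' for illustration).
        Phin = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
        G = Phin.T @ Phin
        return float(np.sum(G ** 2) - G.shape[0])   # diagonal entries are all 1

    rng = np.random.default_rng(1)
    m, n, sigma = 16, 64, 0.05
    Phi = rng.standard_normal((m, n))
    E = sigma * rng.standard_normal((m, n))          # Gaussian perturbation
    print(total_coherence(Phi), total_coherence(Phi + E))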
{"title":"Sensing Matrix Sensitivity to Random Gaussian Perturbations in Compressed Sensing","authors":"A. Lavrenko, F. Roemer, G. D. Galdo, R. Thomä","doi":"10.23919/EUSIPCO.2018.8553575","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553575","url":null,"abstract":"In compressed sensing, the choice of the sensing matrix plays a crucial role: it defines the required hardware effort and determines the achievable recovery performance. Recent studies indicate that by optimizing a sensing matrix, one can potentially improve system performance compared to random ensembles. In this work, we analyze the sensitivity of a sensing matrix design to random perturbations, e.g., caused by hardware imperfections, with respect to the total (average) matrix coherence. We derive an exact expression for the average deterioration of the total coherence in the presence of Gaussian perturbations as a function of the perturbations' variance and the sensing matrix itself. We then numerically evaluate the impact it has on the recovery performance.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"436 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134356739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Unsupervised frame Selection Technique for Robust Emotion Recognition in Noisy Speech
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553202
Meghna Pandharipande, Rupayan Chakraborty, Ashish Panda, Sunil Kumar Kopparapu
Automatic emotion recognition with good accuracy has been demonstrated for clean speech, but the performance deteriorates quickly when speech is contaminated with noise. In this paper, we propose a front-end voice activity detector (VAD)-based unsupervised method to select the frames with a relatively better signal-to-noise ratio (SNR) in the spoken utterances. We then extract a large number of statistical features from low-level audio descriptors for the purpose of emotion recognition using state-of-the-art classifiers. Extensive experimentation has been carried out on two standard databases contaminated with 5 types of noise (Babble, F-16, Factory, Volvo, and HF-channel) from the Noisex-92 noise database at 5 different SNR levels (0, 5, 10, 15, and 20 dB). When classifying emotions both in the categorical and the dimensional spaces, the proposed technique outperforms a Recurrent Neural Network (RNN)-based VAD across all 5 noise types and levels, and for both databases.
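A minimal sketch of unsupervised frame selection by a simple energy criterion, used here only as a stand-in for a VAD-style SNR-based selector; the frame length, hop, and keep ratio are illustrative assumptions, not the paper's settings:

    import numpy as np

    def select_frames(signal, fs, frame_ms=25, hop_ms=10, keep_ratio=0.6):
        # Keep the frames with the highest log energy, as a crude proxy for
        # the regions with a relatively better SNR.
        frame = int(fs * frame_ms / 1000)
        hop = int(fs * hop_ms / 1000)
        starts = range(0, len(signal) - frame + 1, hop)
        frames = np.stack([signal[s:s + frame] for s in starts])
        log_e = np.log(np.sum(frames ** 2, axis=1) + 1e-12)
        threshold = np.quantile(log_e, 1.0 - keep_ratio)
        return frames[log_e >= threshold]

    fs = 16000
    t = np.arange(fs) / fs
    rng = np.random.default_rng(0)
    noisy = np.sin(2 * np.pi * 220 * t) * (t > 0.5) + 0.05 * rng.standard_normal(fs)
    kept = select_frames(noisy, fs)
    print(kept.shape)   # roughly the 60% most energetic frames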
{"title":"An Unsupervised frame Selection Technique for Robust Emotion Recognition in Noisy Speech","authors":"Meghna Pandharipande, Rupayan Chakraborty, Ashish Panda, Sunil Kumar Kopparapu","doi":"10.23919/EUSIPCO.2018.8553202","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553202","url":null,"abstract":"Automatic emotion recognition with good accuracy has been demonstrated for clean speech, but the performance deteriorates quickly when speech is contaminated with noise. In this paper, we propose a front-end voice activity detector (VAD)-based unsupervised method to select the frames with a relatively better signal to noise ratio (SNR) in the spoken utterances. Then we extract a large number of statistical features from low-level audio descriptors for the purpose of emotion recognition by using state-of-art classifiers. Extensive experimentation on two standard databases contaminated with 5 types of noise (Babble, F-16, Factory, Volvo, and HF-channel) from the Noisex-92 noise database at 5 different SNR levels (0, 5, 10, 15, 20dB) have been carried out. While performing all experiments to classify emotions both at the categorical and the dimensional spaces, the proposed technique outperforms a Recurrent Neural Network (RNN)-based VAD across all 5 types and levels of noises, and for both the databases.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131556982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Resolution Enhancement Technique for Ultrafast Coded Medical Ultrasound
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553588
Denis Bujoreanu, Y. Benane, H. Liebgott, B. Nicolas, O. Basset, D. Friboulet
In the quest for faster ultrasound image acquisition rates, a low echo signal-to-noise ratio is often an issue. Binary Phase Shift Keyed (BPSK) Golay codes have been implemented in a large number of imaging methods, and their ability to increase image quality is well established. In this paper we propose an improvement of the BPSK modulation in which the effect of the narrow-band ultrasound probe used for acquisition is compensated. The optimized excitation signals are implemented in a Plane Wave Compounding (PWC) imaging approach. Simulation and experimental results are presented. Numerical studies show a 41% improvement in axial resolution and bandwidth over classical BPSK-modulated Golay codes. Experimental acquisitions on a cyst phantom show a 32% improvement in image resolution. The method is also compared to classical pulse (short wave packet) emission, and a 25% boost in resolution is achieved along with a 6 dB higher echo signal-to-noise ratio. The experimental results obtained using UlaOp 256 prove the feasibility of the method on a research scanner, while the theoretical formulation shows that the optimization of the excitation signals can be applied to any binary sequence and does not depend on the emission/reception beamforming.
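The key property behind Golay-coded excitation, namely that the autocorrelation sidelobes of a complementary pair cancel when summed, can be checked in a few lines (a generic length-8 BPSK Golay pair, not the probe-compensated codes proposed in the paper):

    import numpy as np

    # Length-8 complementary (Golay) pair: the autocorrelation sidelobes of the
    # two codes cancel when summed, which is what preserves axial resolution
    # after decoding in Golay-coded excitation.
    a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
    b = np.array([1, 1, 1, -1, -1, -1, 1, -1])

    acf = lambda x: np.correlate(x, x, mode="full")
    print(acf(a) + acf(b))   # 2N = 16 at zero lag, 0 at every other lag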
{"title":"A Resolution Enhancement Technique for Ultrafast Coded Medical Ultrasound","authors":"Denis Bujoreanu, Y. Benane, H. Liebgott, B. Nicolas, O. Basset, D. Friboulet","doi":"10.23919/EUSIPCO.2018.8553588","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553588","url":null,"abstract":"In the quest for faster ultrasound image acquisition rate, low echo signal to noise ratio is often an issue. Binary Phase Shift Keyed (BPSK) Golay codes have been implemented in a large number of imaging methods, and their ability to increase the image quality is already proven. In this paper we propose an improvement of the BPSK modulation, where the effect of the narrow-band ultrasound probe, used for acquisition, is compensated. The optimized excitation signals are implemented in a Plane Wave Compounding (PWC) imaging approach. Simulation and experimental results are presented. Numerical studies show 41% improvement of axial resolution and bandwidth, over the classical BPSK modulated Golay codes. Experimental acquisitions on cyst phantom show an improvement of image resolution of 32%. The method is also compared to classical pulse (small wave packets) emission and 25% boost of resolution is achieved for a 6dB higher echo signal to noise ratio. The experimental results obtained using UlaOp 256 prove the feasibility of the method on a research scanner while the theoretical formulation shows that the optimization of the excitation signals can be applied to any binary sequence and does not depend on the emission/reception beamforming.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131747152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Most Informative Slice of Bicoherence That Characterizes Resting State Brain Connectivity
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553136
Ahmet Levent Kandemir, T. Özkurt
Bicoherence is a useful tool for detecting nonlinear interactions within the brain, but it comes at a high computational cost. Recent attempts to reduce this computational cost suggest calculating a particular ‘slice’ of the bicoherence matrix. In this study, we investigate the information content of the bicoherence matrix in the resting state. We use publicly available Human Connectome Project data in our calculations. We show that the most prominent information of the bicoherence matrix is concentrated on the main diagonal, i.e., $f_{1}=f_{2}$.
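A sketch of estimating the diagonal slice b(f, f) of the bicoherence from segmented FFTs; the Kim-Powers-style normalization, segment length, and the toy quadratically coupled signal are illustrative assumptions, not the paper's pipeline:

    import numpy as np

    def diagonal_bicoherence(x, nfft=256):
        # Diagonal slice b(f, f), averaged over non-overlapping windowed segments,
        # with a Kim-Powers-style normalization (assumed here for illustration).
        segs = x[: (len(x) // nfft) * nfft].reshape(-1, nfft)
        X = np.fft.fft(segs * np.hanning(nfft), axis=1)
        f = np.arange(1, nfft // 4)          # skip DC, keep 2f below Nyquist
        num = np.mean(X[:, f] * X[:, f] * np.conj(X[:, 2 * f]), axis=0)
        den = np.sqrt(np.mean(np.abs(X[:, f] * X[:, f]) ** 2, axis=0)
                      * np.mean(np.abs(X[:, 2 * f]) ** 2, axis=0))
        return f, np.abs(num) / den

    rng = np.random.default_rng(0)
    n, f0 = 256 * 64, 16.0 / 256.0           # f0 falls exactly on FFT bin 16
    t = np.arange(n)
    x = (np.cos(2 * np.pi * f0 * t)
         + 0.5 * np.cos(2 * np.pi * f0 * t) ** 2   # quadratic coupling at 2*f0
         + 0.5 * rng.standard_normal(n))
    f, b = diagonal_bicoherence(x)
    print(f[np.argmax(b)])                    # close to bin 16, i.e. f1 = f2 = f0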
{"title":"On the Most Informative Slice of Bicoherence That Characterizes Resting State Brain Connectivity","authors":"Ahmet Levent Kandemir, T. Özkurt","doi":"10.23919/EUSIPCO.2018.8553136","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553136","url":null,"abstract":"Bicoherence is a useful tool to detect nonlinear interactions within the brain with high computational cost. Latest attempts to reduce this computational cost suggest calculating a particular ‘slice’ of the bicoherence matrix. In this study, we investigate the information content of the bicoherence matrix in resting state. We use publicly available Human Connectome Project data in our calculations. We show that the most prominent information of the bicoherence matrix is concentrated on the main diagonal, i.e. $f_{1}=f_{2}$","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129394714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Adversarial Examples - a Lesson from Multimedia Security
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553164
Pascal Schöttle, Alexander Schlögl, Cecilia Pasquini, Rainer Böhme
Adversarial classification is the task of performing robust classification in the presence of a strategic attacker. Originating from information hiding and multimedia forensics, adversarial classification has recently received a lot of attention in a broader security context. In the domain of machine learning-based image classification, adversarial classification can be interpreted as detecting so-called adversarial examples, which are slightly altered versions of benign images. They are specifically crafted to be misclassified with a very high probability by the classifier under attack. Neural networks, which dominate among modern image classifiers, have been shown to be especially vulnerable to these adversarial examples. However, detecting subtle changes in digital images has always been the goal of multimedia forensics and steganalysis, two major subfields of multimedia security. We highlight the conceptual similarities between these fields and secure machine learning. Furthermore, we adapt a linear filter, similar to early steganalysis methods, to detect adversarial examples that are generated with the projected gradient descent (PGD) method, the state-of-the-art algorithm for this task. We test our method on the MNIST database and show for several parameter combinations of PGD that our method can reliably detect adversarial examples. Additionally, the combination of adversarial re-training and our detection method effectively reduces the attack surface of attacks against neural networks. Thus, we conclude that adversarial examples for image classification possibly do not withstand detection methods from steganalysis, and future work should explore the effectiveness of known techniques from multimedia security in other adversarial settings.
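A minimal sketch of the steganalysis-style idea: a fixed linear high-pass filter whose residual energy flags noise-like perturbations. The Laplacian kernel, the smooth stand-in image, and the sign-perturbation "attack" below are illustrative assumptions, not the specific filter or the actual PGD attack from the paper:

    import numpy as np

    def highpass_residual_energy(img):
        # Mean energy of a simple Laplacian high-pass residual, in the spirit
        # of early linear-filter steganalysis features.
        k = np.array([[0.0, -1.0, 0.0],
                      [-1.0, 4.0, -1.0],
                      [0.0, -1.0, 0.0]])
        h, w = img.shape
        res = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                res += k[i, j] * img[i:i + h - 2, j:j + w - 2]
        return float(np.mean(res ** 2))

    rng = np.random.default_rng(0)
    v = np.linspace(0.0, 1.0, 28)
    clean = np.outer(v, v)                    # smooth 28x28 stand-in image
    adv = np.clip(clean + 0.2 * np.sign(rng.standard_normal(clean.shape)), 0.0, 1.0)
    print(highpass_residual_energy(clean), highpass_residual_energy(adv))
    # a detection threshold on this energy would be calibrated on known-clean images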
{"title":"Detecting Adversarial Examples - a Lesson from Multimedia Security","authors":"Pascal Schöttle, Alexander Schlögl, Cecilia Pasquini, Rainer Böhme","doi":"10.23919/EUSIPCO.2018.8553164","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553164","url":null,"abstract":"Adversarial classification is the task of performing robust classification in the presence of a strategic attacker. Originating from information hiding and multimedia forensics, adversarial classification recently received a lot of attention in a broader security context. In the domain of machine learning-based image classification, adversarial classification can be interpreted as detecting so-called adversarial examples, which are slightly altered versions of benign images. They are specifically crafted to be misclassified with a very high probability by the classifier under attack. Neural networks, which dominate among modern image classifiers, have been shown to be especially vulnerable to these adversarial examples. However, detecting subtle changes in digital images has always been the goal of multimedia forensics and steganalysis, two major subfields of multimedia security. We highlight the conceptual similarities between these fields and secure machine learning. Furthermore, we adapt a linear filter, similar to early steganal-ysis methods, to detect adversarial examples that are generated with the projected gradient descent (PGD) method, the state-of-the-art algorithm for this task. We test our method on the MNIST database and show for several parameter combinations of PGD that our method can reliably detect adversarial examples. Additionally, the combination of adversarial re-training and our detection method effectively reduces the attack surface of attacks against neural networks. Thus, we conclude that adversarial examples for image classification possibly do not withstand detection methods from steganalysis, and future work should explore the effectiveness of known techniques from multimedia security in other adversarial settings.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126593370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High Frequency Noise Detection and Handling in ECG Signals
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553046
Kjell Le, T. Eftestøl, K. Engan, S. Ørn, Ø. Kleiven
After the acquisition of new clinical electrocardiogram (ECG) signals, the first step is often to preprocess the signals and perform a signal quality assessment to uncover noise. There may be restrictions on the signal length and other issues that make it impossible to discard the whole signal if noise is present. Thus, there is a great need to retain as many noise-free regions as possible. A noise detection method is evaluated on a manually annotated subset (2146 leads) of a database of 12-lead ECG recordings from 1006 bicycle race participants. The aim is to apply the noise detector to the unlabelled part of the data set before any further analysis is conducted. The proposed noise detector can be divided into 3 parts: 1) select a high-frequency signal as a base signal, 2) apply a thresholding strategy on the base signal, and 3) use a noise detection strategy. In this work, the receiver operating characteristic (ROC) curve and the area under the curve (AUC) are used to assess a high-frequency noise detector designed for ECG signals. Even though ROC analysis is widely used to assess prediction models, it has its limitations; however, it is a good starting point for assessing discriminatory ability. To generate the ROC curve, the performance evaluation is carried out at the sample level, i.e., each sample is labelled as noise or not. The thresholding strategy and the chosen threshold are the varying factors used to generate the ROC curves. The best model has an average AUC of 0.862, which indicates a good ability to discriminate noise. This thresholding strategy will be used for noise detection on the unlabelled part of the data set.
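Sample-level AUC of this kind can be computed from per-sample noise scores and labels via the rank-sum (Mann-Whitney) statistic; the first-difference energy score below is a hypothetical base signal for illustration, not the authors' detector:

    import numpy as np

    def auc_from_scores(scores, labels):
        # Area under the ROC curve via the rank-sum (Mann-Whitney) statistic,
        # computed at sample level: one score and one noise/clean label per sample.
        order = np.argsort(scores)
        ranks = np.empty(len(scores))
        ranks[order] = np.arange(1, len(scores) + 1)
        pos = labels == 1
        n_pos, n_neg = pos.sum(), (~pos).sum()
        return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    rng = np.random.default_rng(0)
    fs, n = 500, 5000
    ecg_like = np.sin(2 * np.pi * 1.2 * np.arange(n) / fs)   # slow pseudo-ECG trace
    labels = np.zeros(n, dtype=int)
    labels[2000:3000] = 1                                    # high-frequency noise burst
    noisy = ecg_like + labels * 0.3 * rng.standard_normal(n)
    hf = np.abs(np.diff(noisy, prepend=noisy[0]))            # crude high-frequency score
    score = np.convolve(hf, np.ones(25) / 25, mode="same")
    print(auc_from_scores(score, labels))                    # close to 1.0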
{"title":"High Frequency Noise Detection and Handling in ECG Signals","authors":"Kjell Le, T. Eftestøl, K. Engan, S. Ørn, Ø. Kleiven","doi":"10.23919/EUSIPCO.2018.8553046","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553046","url":null,"abstract":"After acquisition of new clinical electrocardiogram (ECG) signals the first step is often to preprocess and have a signal quality assessment to uncover noise. There might be restriction on the signal length and other issue that impose limitation where it is not possible to discard the whole signal if noise is present. Thus there is a great need to retain as much noise free regions as possible. A noise detection method is evaluated on a manually annotated subset (2146 leads) of a data base of 12-lead ECG recordings from 1006 bicycle race participants. The aim is to apply the noise detector on the unlabelled part of the data set before any further analysis is conducted. The proposed noise detector can be divided into 3 parts: 1) Select a high frequency signal as a base signal. 2) Apply a thresholding strategy on the base signal. 3) Use a noise detection strategy. In this work receiver operating characteristic (ROC) curve and area under the curve (AUC) will be used to assess a high frequency noise detector designed for ECG signals. Even though ROC analysis is widely used to assess prediction models, it has its own limitation. However, it is a good starting point to assess discriminatory ability. To generate the ROC curve the performance evaluation is based on sample-level. That is, each sample has a label whether it is noise or not. The threshold strategy and the chosen threshold will be the varying factor to generate ROC curves. The best model has an average AUC of 0.862, which shows a good detector to discriminate noise. This threshold strategy will be used for noise detection on the unlabelled part of the data set.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132925545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Ambiguity Resolution in Polarimetric Multi-View Stereo
Pub Date: 2018-09-01 | DOI: 10.23919/EUSIPCO.2018.8553559
Achanna Anil Kumar, N. Narendra, P. Balamuralidhar, M. Chandra
Polarimetric multi-view stereo (PMS) reconstructs the dense 3D surface of a feature-sparse object by combining the photometric information from polarization with the epipolar constraints from multiple views. In this paper, we propose a new approach based on recent advances in graph signal processing (GSP) for efficient ambiguity resolution in PMS. A smooth graph that effectively captures the relational structure of the azimuth values is constructed using the estimated phase angle. By viewing the actual azimuth available at the reliable depth points (corresponding to the feature-rich region) as a sampled graph signal, the azimuth at the remaining feature-limited region is estimated. Unlike the existing ambiguity resolution scheme in PMS, which resolves only the π/2-ambiguity, the proposed approach resolves both the π- and the π/2-ambiguity. Simulation results are presented, which show that, in addition to resolving both ambiguities, the proposed GSP-based method performs significantly better in resolving the π/2-ambiguity than the existing approach.
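A toy per-pixel illustration of resolving the π- and π/2-ambiguity by choosing the candidate azimuth closest to a reference estimate; in the paper the reference comes from graph-signal reconstruction over reliable depth points, which is not reproduced here, so the reference value below is an assumption:

    import numpy as np

    def resolve_azimuth(phase, reference):
        # Among the pi/2- and pi-shifted candidates of a measured polarization
        # phase, pick the azimuth closest to a reference estimate.
        candidates = (phase + np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])) % (2 * np.pi)
        diff = np.angle(np.exp(1j * (candidates - reference)))   # wrapped differences
        return float(candidates[np.argmin(np.abs(diff))])

    true_azimuth = 2.3
    measured = (true_azimuth - np.pi / 2) % np.pi    # observed phase, ambiguous by pi/2 and pi
    print(resolve_azimuth(measured, reference=2.1))  # recovers ~2.3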
{"title":"Efficient Ambiguity Resolution in Polarimetric Multi-View Stereo","authors":"Achanna Anil Kumar, N. Narendra, P. Balamuralidhar, M. Chandra","doi":"10.23919/EUSIPCO.2018.8553559","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553559","url":null,"abstract":"Polarimetric multi-view stereo (PMS) reconstructs the dense 3D surface of a feature sparse object by combining the photometric information from polarization with the epipolar constraints from multiple views. In this paper, we propose a new approach based on the recent advances in graph signal processing (GSP) for efficient ambiguity resolution in PMS. A smooth graph which effectively captures the relational structure of the azimuth values is constructed using the estimated phase angle. By visualizing the actual azimuth available at the reliable depth points (corresponding to the feature-rich region) as sampled graph signal, the azimuth at the remaining feature-limited region is estimated. Unlike the existing ambiguity resolution scheme in PMS which resolves only the π/2-ambiguity, the proposed approach resolves both the π and π/2-ambiguity. Simulation results are presented, which shows that in addition to resolving both the ambiguities, the proposed GSP based method performs significantly better in resolving the π/2-ambiguity than the existing approach.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133558103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}