Sample Space-time Covariance Matrix Estimation
Pub Date: 2019-05-16 | DOI: 10.1109/ICASSP.2019.8683339
Connor Delaosa, J. Pestana, N. Goddard, S. Somasundaram, Stephan Weiss
Estimation errors are incurred when calculating the sample space-time covariance matrix. We formulate the variance of this estimator when operating on a finite sample set, compare it to known results, and demonstrate its precision in simulations. The variance of the estimate links directly to the previously explored perturbation of the analytic eigenvalues and eigenspaces of a parahermitian cross-spectral density matrix when estimated from finite data.
{"title":"Sample Space-time Covariance Matrix Estimation","authors":"Connor Delaosa, J. Pestana, N. Goddard, S. Somasundaram, Stephan Weiss","doi":"10.1109/ICASSP.2019.8683339","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8683339","url":null,"abstract":"Estimation errors are incurred when calculating the sample space-time covariance matrix. We formulate the variance of this estimator when operating on a finite sample set, compare it to known results, and demonstrate its precision in simulations. The variance of the estimation links directly to previously explored perturbation of the analytic eigenvalues and eigenspaces of a parahermitian cross-spectral density matrix when estimated from finite data.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"128 1","pages":"8033-8037"},"PeriodicalIF":0.0,"publicationDate":"2019-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80339428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iterative Approximation of Analytic Eigenvalues of a Parahermitian Matrix EVD
Pub Date: 2019-05-16 | DOI: 10.1109/ICASSP.2019.8682407
Stephan Weiss, I. Proudler, Fraser K. Coutts, J. Pestana
We present an algorithm that extracts analytic eigenvalues from a parahermitian matrix. Operating in the discrete Fourier transform domain, an inner iteration re-establishes the lost association between bins via a maximum likelihood sequence detection driven by a smoothness criterion. An outer iteration continues until a desired accuracy for the approximation of the extracted eigenvalues has been achieved. The approach is compared to existing algorithms.
{"title":"Iterative Approximation of Analytic Eigenvalues of a Parahermitian Matrix EVD","authors":"Stephan Weiss, I. Proudler, Fraser K. Coutts, J. Pestana","doi":"10.1109/ICASSP.2019.8682407","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8682407","url":null,"abstract":"We present an algorithm that extracts analytic eigenvalues from a parahermitian matrix. Operating in the discrete Fourier transform domain, an inner iteration re-establishes the lost association between bins via a maximum likelihood sequence detection driven by a smoothness criterion. An outer iteration continues until a desired accuracy for the approximation of the extracted eigenvalues has been achieved. The approach is compared to existing algorithms.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"160 1","pages":"8038-8042"},"PeriodicalIF":0.0,"publicationDate":"2019-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75431596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aggregation Graph Neural Networks
Pub Date: 2019-05-15 | DOI: 10.1109/ICASSP.2019.8682975
Fernando Gama, A. Marques, Alejandro Ribeiro, G. Leus
Graph neural networks (GNNs) regularize classical neural networks by exploiting the underlying irregular structure supporting graph data, extending their application to broader data domains. The aggregation GNN presented here is a novel GNN that exploits the fact that the data collected at a single node by means of successive local exchanges with neighbors exhibits a regular structure. Thus, regular convolution and regular pooling yield an appropriately regularized GNN. To address scalability issues that arise when collecting all the information at a single node, we propose a multi-node aggregation GNN that constructs regional features which are later aggregated into more global features, and so on. We show superior performance on a source localization problem on synthetic graphs and on the authorship attribution problem.
{"title":"Aggregation Graph Neural Networks","authors":"Fernando Gama, A. Marques, Alejandro Ribeiro, G. Leus","doi":"10.1109/ICASSP.2019.8682975","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8682975","url":null,"abstract":"Graph neural networks (GNNs) regularize classical neural networks by exploiting the underlying irregular structure supporting graph data, extending its application to broader data domains. The aggregation GNN presented here is a novel GNN that exploits the fact that the data collected at a single node by means of successive local exchanges with neighbors exhibits a regular structure. Thus, regular convolution and regular pooling yield an appropriately regularized GNN. To address some scalability issues that arise when collecting all the information at a single node, we propose a multi-node aggregation GNN that constructs regional features that are later aggregated into more global features and so on. We show superior performance in a source localization problem on synthetic graphs and on the authorship attribution problem.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"150 1","pages":"4943-4947"},"PeriodicalIF":0.0,"publicationDate":"2019-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85171621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Securing Smartphone Handwritten Pin Codes with Recurrent Neural Networks
Pub Date: 2019-05-15 | DOI: 10.1109/ICASSP.2019.8683871
Gaël Le Lan, Vincent Frey
This paper investigates the use of recurrent neural networks to secure PIN-code-based authentication on smartphones, in a scenario where the user is invited to draw digits on the touchscreen. From the sequence of successive positions of the user's finger on the touchscreen, a bidirectional recurrent neural network computes an embedding that is discriminative in terms of writer traits and carries the contextual information of the written digit. This makes it possible to reject impostors who know the PIN code. The neural network is trained to recognize both the users and the digits of a training dataset. Evaluations are run on two datasets of 43 and 33 users, respectively, absent from the training dataset. Results show that when users are enrolled with 4 examples of each digit, the Equal Error Rate reaches 4.9% for a 4-digit PIN code. Including digit value prediction during training is key to achieving good performance.
{"title":"Securing Smartphone Handwritten Pin Codes with Recurrent Neural Networks","authors":"Gaël Le Lan, Vincent Frey","doi":"10.1109/ICASSP.2019.8683871","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8683871","url":null,"abstract":"This paper investigates the use of recurrent neural networks to secure PIN code based authentication on smartphones, in a scenario where the user is invited to draw digits on the touchscreen. From the sequence of successive positions of the users finger on the touchscreen, a bidirectional recurrent neural network computes a discriminative embedding in terms of writer traits, carrying the contextual information of the written digit. This allows to reject impostors who would have knowledge of the PIN code. The neural network is trained to recognize both users and digits of a training dataset. Evaluations are run on two datasets of 43 and 33 users, respectively, absent from the training dataset. Results show that when enrolling the users on 4 examples of each digit, the Equal Error Rate reaches 4.9% for a 4-digit PIN code. Including digit value prediction during training is key to achieve good performances.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"76 1","pages":"2612-2616"},"PeriodicalIF":0.0,"publicationDate":"2019-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89383792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A History-based Stopping Criterion in Recursive Bayesian State Estimation
Pub Date: 2019-05-15 | DOI: 10.1109/ICASSP.2019.8683726
Y. Marghi, Aziz Koçanaoğulları, M. Akçakaya, Deniz Erdoğmuş
In dynamic state-space models, the state can be estimated through recursive computation of the posterior distribution of the state given all measurements. In scenarios where active sensing/querying is possible, a hard decision is made when the state posterior reaches a pre-set confidence threshold. This mandate to meet a hard threshold may sometimes require unnecessarily many queries. In application domains where sensing/querying cost is of concern, some potential accuracy may be sacrificed for greater gains in sensing cost. In this paper, we (a) propose a criterion based on a linear combination of the state posterior and its changes, (b) show that, for discrete-valued state estimation, the proposed objective is more likely to rank correct and incorrect estimates appropriately than looking at the posterior alone, and (c) demonstrate that the method can lead to a significant increase in human intent estimation speed without a significant loss of accuracy in a brain-computer interface application.
{"title":"A History-based Stopping Criterion in Recursive Bayesian State Estimation","authors":"Y. Marghi, Aziz Koçanaoğulları, M. Akçakaya, Deniz Erdoğmuş","doi":"10.1109/ICASSP.2019.8683726","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8683726","url":null,"abstract":"In dynamic state-space models, the state can be estimated through recursive computation of the posterior distribution of the state given all measurements. In scenarios where active sensing/querying is possible, a hard decision is made when the state posterior achieves a pre-set confidence threshold. This mandate to meet a hard threshold may sometimes unnecessarily require more queries. In application domains where sensing/querying cost is of concern, some potential accuracy may be sacrificed for greater gains in sensing cost. In this paper, we (a) propose a criterion based on a linear combination of state posterior and its changes, (b) show that for discrete-valued state estimation scenarios the proposed objective is more likely to sort correct and incorrect estimates appropriately compared to just looking at the posterior, and finally (c) demonstrate that the method can lead to significant human intent estimation speed increase without significant loss of accuracy in a brain-computer interface application.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"86 1","pages":"3362-3366"},"PeriodicalIF":0.0,"publicationDate":"2019-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89889054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio Feature Generation for Missing Modality Problem in Video Action Recognition
Pub Date: 2019-05-14 | DOI: 10.1109/ICASSP.2019.8682513
Hu-Cheng Lee, Chih-Yu Lin, P. Hsu, Winston H. Hsu
Despite the recent success of multi-modal action recognition in videos, in practice we often face situations in which some data are not available beforehand, especially for multi-modal data. For example, while both vision and audio data are required for multi-modal action recognition, audio tracks in videos are easily lost due to broken files or device limitations. To cope with this sound-missing problem, we present an approach to simulating deep audio features from spatio-temporal vision data alone. We demonstrate that adding the simulated sound features can significantly assist the multi-modal action recognition task. Evaluating our method on the Moments in Time (MIT) dataset, we show that our proposed method performs favorably against the two-stream architecture, enabling a richer understanding of multi-modal action recognition in video.
{"title":"Audio Feature Generation for Missing Modality Problem in Video Action Recognition","authors":"Hu-Cheng Lee, Chih-Yu Lin, P. Hsu, Winston H. Hsu","doi":"10.1109/ICASSP.2019.8682513","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8682513","url":null,"abstract":"Despite the recent success of multi-modal action recognition in videos, in reality, we usually confront the situation that some data are not available beforehand, especially for multi-modal data. For example, while vision and audio data are required to address the multi-modal action recognition, audio tracks in videos are easily lost due to the broken files or the limitation of devices. To cope with this sound-missing problem, we present an approach to simulating deep audio feature from merely spatial-temporal vision data. We demonstrate that adding the simulating sound feature can significantly assist the multi-modal action recognition task. Evaluating our method on the Moments in Time (MIT) Dataset , we show that our proposed method performs favorably against the two-stream architecture, enabling a richer understanding of multi-modal action recognition in video.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"32 1","pages":"3956-3960"},"PeriodicalIF":0.0,"publicationDate":"2019-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87004055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-selective LMS-Newton and LMS-Quasi-Newton Algorithms
Pub Date: 2019-05-14 | DOI: 10.1109/ICASSP.2019.8683076
C. Tsinos, P. Diniz
The huge volume of data available today requires data-selective processing approaches that avoid computational cost by appropriately treating non-innovative data. In this paper, extensions of the well-known adaptive filtering LMS-Newton and LMS-Quasi-Newton algorithms are developed that enable data selection while also addressing the censoring of outliers that arise from large measurement errors. The proposed solutions allow one to prescribe how often the acquired data are expected to be incorporated into the learning process, based on some a priori information about the environment. Simulation results on both synthetic and real-world data verify the effectiveness of the proposed algorithms, which can achieve significant reductions in computational cost through data selection without sacrificing estimation accuracy.
{"title":"Data-selective LMS-Newton and LMS-Quasi-Newton Algorithms","authors":"C. Tsinos, P. Diniz","doi":"10.1109/ICASSP.2019.8683076","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8683076","url":null,"abstract":"The huge volume of data that are available today requires data-selective processing approaches that avoid the costs in computational complexity via appropriately treating the non-innovative data. In this paper, extensions of the well-known adaptive filtering LMS-Newton and LMS-Quasi-Newton Algorithms are developed that enable data selection while also addressing the censorship of outliers that emerge due to high measurement errors. The proposed solutions allow the prescription of how often the acquired data are expected to be incorporated into the learning process based on some a priori information regarding the environment. Simulation results on both synthetic and real-world data verify the effectiveness of the proposed algorithms that may achieve significant reductions in computational costs without sacrificing estimation accuracy due to the selection of the data.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"37 1","pages":"4848-4852"},"PeriodicalIF":0.0,"publicationDate":"2019-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83843709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inductive Conformal Predictor for Sparse Coding Classifiers: Applications to Image Classification
Pub Date: 2019-05-14 | DOI: 10.1109/ICASSP.2019.8682740
Sergio Matiz, K. Barner
Conformal prediction uses the degree of strangeness (nonconformity) of new data instances to determine the confidence values of new predictions. We propose an inductive conformal predictor for sparse coding classifiers, referred to as ICP-SCC. Our contribution is twofold: first, we present two nonconformity measures that produce reliable confidence values; second, we propose a batch mode active learning algorithm within the conformal prediction framework to improve classification performance by selecting training instances based on two criteria, informativeness and diversity. Experiments conducted on face and object recognition databases demonstrate that ICP-SCC improves the classification accuracy of state-of-the-art dictionary learning algorithms while producing reliable confidence values.
{"title":"Inductive Conformal Predictor for Sparse Coding Classifiers: Applications to Image Classification","authors":"Sergio Matiz, K. Barner","doi":"10.1109/ICASSP.2019.8682740","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8682740","url":null,"abstract":"Conformal prediction uses the degree of strangeness (nonconformity) of new data instances to determine the confidence values of new predictions. We propose an inductive conformal predictor for sparse coding classifiers, referred to as ICP-SCC. Our contribution is twofold: first, we present two nonconformity measures that produce reliable confidence values; second, we propose a batch mode active learning algorithm within the conformal prediction framework to improve classification performance by selecting training instances based on two criteria, informativeness and diversity. Experiments conducted on face and object recognition databases demonstrate that ICP-SCC improves the classification accuracy of state-of-the-art dictionary learning algorithms while producing reliable confidence values.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"14 1","pages":"3307-3311"},"PeriodicalIF":0.0,"publicationDate":"2019-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89425108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of Gaze Region Using Two Dimensional Probabilistic Maps Constructed Using Convolutional Neural Networks
Pub Date: 2019-05-13 | DOI: 10.1109/ICASSP.2019.8683794
S. Jha, C. Busso
Predicting a user's gaze has important applications in human-computer interaction (HCI), in areas such as social interaction, driver distraction, human-robot interaction, and education. Appearance-based models for gaze estimation have improved significantly due to recent advances in convolutional neural networks (CNNs). This paper proposes a method to predict the gaze of a user with deep models based purely on CNNs. A key novelty of the proposed model is that it produces a probabilistic map describing the gaze distribution (as opposed to predicting a single gaze direction). This is achieved by converting the regression problem into a classification problem, predicting a probability at the output instead of a single direction. The framework relies on a sequence of downsampling followed by upsampling to obtain the probabilistic gaze map. We observe that our proposed approach works better than a regression model in terms of prediction accuracy. The average mean squared error between the predicted gaze and the true gaze is observed to be 6.89° for a model trained and tested on the MSP-Gaze database, without any calibration or adaptation to the target user.
{"title":"Estimation of Gaze Region Using Two Dimensional Probabilistic Maps Constructed Using Convolutional Neural Networks","authors":"S. Jha, C. Busso","doi":"10.1109/ICASSP.2019.8683794","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8683794","url":null,"abstract":"Predicting the gaze of a user can have important applications in human computer interactions (HCI). They find applications in areas such as social interaction, driver distraction, human robot interaction and education. Appearance based models for gaze estimation have significantly improved due to recent advances in convolutional neural network (CNN). This paper proposes a method to predict the gaze of a user with deep models purely based on CNNs. A key novelty of the proposed model is that it produces a probabilistic map describing the gaze distribution (as opposed to predicting a single gaze direction). This approach is achieved by converting the regression problem into a classification problem, predicting the probability at the output instead of a single direction. The framework relies in a sequence of downsampling followed by upsampling to obtain the probabilistic gaze map. We observe that our proposed approach works better than a regression model in terms of prediction accuracy. The average mean squared error between the predicted gaze and the true gaze is observed to be 6.89◦ in a model trained and tested on the MSP-Gaze database, without any calibration or adaptation to the target user.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"92 1","pages":"3792-3796"},"PeriodicalIF":0.0,"publicationDate":"2019-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83892115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In-Car Driver Authentication Using Wireless Sensing
Pub Date: 2019-05-13 | DOI: 10.1109/ICASSP.2019.8683522
Sai Deepika Regani, Qinyi Xu, Beibei Wang, Min Wu, K. J. R. Liu
Automobiles have become an essential part of everyday life. In this work, we attempt to make them smarter by introducing the idea of in-car driver authentication using wireless sensing. Our aim is to develop a model that can recognize drivers automatically. First, we address the problem of "changing in-car environments", where existing wireless-sensing-based human identification systems fail. To this end, we build the first in-car driver radio biometric dataset to understand the effect of changing environments on human radio biometrics. This dataset consists of the radio biometrics of five people collected over a period of two months. We leverage this dataset to create machine learning (ML) models that make the proposed system adaptive to new in-car environments. We obtained a maximum accuracy of 99.3% in classifying two drivers and 90.66% in validating a single driver.
{"title":"In-Car Driver Authentication Using Wireless Sensing","authors":"Sai Deepika Regani, Qinyi Xu, Beibei Wang, Min Wu, K. J. R. Liu","doi":"10.1109/ICASSP.2019.8683522","DOIUrl":"https://doi.org/10.1109/ICASSP.2019.8683522","url":null,"abstract":"Automobiles have become an essential part of everyday lives. In this work, we attempt to make them smarter by introducing the idea of in-car driver authentication using wireless sensing. Our aim is to develop a model which can recognize drivers automatically. Firstly, we address the problem of \"changing in-car environments\", where the existing wireless sensing based human identification system fails. To this end, we build the first in-car driver radio biometric dataset to understand the effect of changing environments on human radio biometrics. This dataset consists of radio biometrics of five people collected over a period of two months. We leverage this dataset-to create machine learning (ML) models that make the proposed system adaptive to new in-car environments. We obtained a maximum accuracy of 99.3% in classifying two drivers and 90.66% accuracy in validating a single driver.","PeriodicalId":13203,"journal":{"name":"ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"2 1","pages":"7595-7599"},"PeriodicalIF":0.0,"publicationDate":"2019-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87572892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}