Correlation-based ultrahigh-dimensional variable screening
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313129
Talal Ahmed, W. Bajwa
Statistical inference can be computationally prohibitive in ultrahigh-dimensional linear models. Correlation-based variable screening, in which one leverages marginal correlations for removal of irrelevant variables from the model prior to statistical inference, can be used to overcome this challenge. Prior works on correlation-based variable screening either impose strong statistical priors on the linear model or assume specific post-screening inference methods. This paper extends the analysis of correlation-based variable screening to arbitrary linear models and post-screening inference techniques. In particular, (i) it shows that a condition — termed the screening condition — is sufficient for successful correlation-based screening of linear models, and (ii) it provides insights into the dependence of marginal correlation-based screening on different problem parameters. Finally, numerical experiments confirm that the insights of this paper are not mere artifacts of analysis; rather, they are reflective of the challenges associated with marginal correlation-based variable screening.
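As a rough illustration of what marginal correlation-based screening computes, the following numpy sketch ranks variables by the absolute correlation of each column with the response and keeps the top d. It is a generic marginal-correlation screening baseline, not the paper's analysis or its screening condition, and the function name and toy parameters are illustrative assumptions.

```python
import numpy as np

def correlation_screen(X, y, d):
    """Keep the d columns of X with the largest absolute marginal correlation to y.

    X : (n, p) design matrix, y : (n,) response, d : number of variables retained.
    Returns the indices of the retained columns (the screened model).
    """
    # Standardize columns so |X^T y| is proportional to the marginal correlations.
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = y - y.mean()
    scores = np.abs(Xc.T @ yc)
    return np.argsort(scores)[::-1][:d]

# Toy example: p >> n, only the first 3 variables are relevant.
rng = np.random.default_rng(0)
n, p = 100, 5000
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(n)
kept = correlation_screen(X, y, d=50)
print("true support retained:", set(range(3)) <= set(kept))
```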
{"title":"Correlation-Based ultrahigh-dimensional variable screening","authors":"Talal Ahmed, W. Bajwa","doi":"10.1109/CAMSAP.2017.8313129","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313129","url":null,"abstract":"Statistical inference can be computationally prohibitive in ultrahigh-dimensional linear models. Correlation-based variable screening, in which one leverages marginal correlations for removal of irrelevant variables from the model prior to statistical inference, can be used to overcome this challenge. Prior works on correlation-based variable screening either impose strong statistical priors on the linear model or assume specific post-screening inference methods. This paper extends the analysis of correlation-based variable screening to arbitrary linear models and post-screening inference techniques. In particular, (i) it shows that a condition — termed the screening condition — is sufficient for successful correlation-based screening of linear models, and (ii) it provides insights into the dependence of marginal correlation-based screening on different problem parameters. Finally, numerical experiments confirm that the insights of this paper are not mere artifacts of analysis; rather, they are reflective of the challenges associated with marginal correlation-based variable screening.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121509335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-rank tensor regression: Scalability and applications
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313222
Yan Liu
With the development of sensor and satellite technologies, massive amounts of multiway data emerge in many applications. Low-rank tensor regression, as a powerful technique for analyzing tensor data, has attracted significant interest from the machine learning community. In this paper, we discuss a series of fast algorithms for solving low-rank tensor regression in different learning scenarios, including (a) a greedy algorithm for batch learning; (b) the Accelerated Low-rank Tensor Online Learning (ALTO) algorithm for online learning; and (c) subsampled tensor projected gradient for memory-efficient learning.
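As a toy illustration of low-rank tensor regression (not the greedy, ALTO, or subsampled projected-gradient algorithms discussed in the paper), the sketch below fits a rank-1 coefficient matrix to matrix-valued covariates by alternating least squares; the rank-1 restriction and all names are assumptions made for brevity.

```python
import numpy as np

def rank1_tensor_regression(X, y, n_iter=50):
    """Fit y_i ~ <X_i, u v^T> with a rank-1 coefficient matrix via alternating LS.

    X : (n, p, q) stack of matrix-valued covariates, y : (n,) responses.
    Returns the factors u (p,) and v (q,).
    """
    n, p, q = X.shape
    rng = np.random.default_rng(0)
    u, v = rng.standard_normal(p), rng.standard_normal(q)
    for _ in range(n_iter):
        # With v fixed, <X_i, u v^T> = u^T (X_i v): an ordinary least-squares problem in u.
        A = X @ v                                # (n, p)
        u, *_ = np.linalg.lstsq(A, y, rcond=None)
        # With u fixed, <X_i, u v^T> = (X_i^T u)^T v: an ordinary least-squares problem in v.
        B = np.einsum('npq,p->nq', X, u)         # (n, q)
        v, *_ = np.linalg.lstsq(B, y, rcond=None)
    return u, v

# Toy example with a planted rank-1 coefficient matrix.
rng = np.random.default_rng(1)
n, p, q = 300, 8, 6
u_true, v_true = rng.standard_normal(p), rng.standard_normal(q)
X = rng.standard_normal((n, p, q))
y = np.einsum('npq,p,q->n', X, u_true, v_true) + 0.01 * rng.standard_normal(n)
u, v = rank1_tensor_regression(X, y)
print("relative error:",
      np.linalg.norm(np.outer(u, v) - np.outer(u_true, v_true))
      / np.linalg.norm(np.outer(u_true, v_true)))
```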
{"title":"Low-Rank tensor regression: Scalability and applications","authors":"Yan Liu","doi":"10.1109/CAMSAP.2017.8313222","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313222","url":null,"abstract":"With the development of sensor and satellite technologies, massive amount of multiway data emerges in many applications. Low-rank tensor regression, as a powerful technique for analyzing tensor data, attracted significant interest from the machine learning community. In this paper, we discuss a series of fast algorithms for solving low-rank tensor regression in different learning scenarios, including (a) a greedy algorithm for batch learning; (b) Accelerated Low-rank Tensor Online Learning (ALTO) algorithm for online learning; (c) subsampled tensor projected gradient for memory efficient learning.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125761905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Restoration of depth and intensity images using a graph Laplacian regularization
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313136
Abderrahim Halimi, P. Connolly, Ximing Ren, Y. Altmann, I. Gyöngy, R. Henderson, S. Mclaughlin, G. Buller
This paper presents a new algorithm for the joint restoration of depth and intensity (DI) images constructed using a gated SPAD-array imaging system. The three-dimensional (3D) data consists of two spatial dimensions and one temporal dimension, and contains photon counts (i.e., histograms). The algorithm is based on two steps: (i) construction of a graph connecting patches of pixels with similar temporal responses, and (ii) estimation of the DI values for pixels belonging to homogeneous spatial classes. The first step is achieved by building a graph representation of the 3D data, while giving special attention to the computational complexity of the algorithm. The second step is achieved using a Fisher scoring gradient descent algorithm while accounting for the data statistics and the Laplacian regularization term. Results on laboratory data show the benefit of the proposed strategy, which improves the quality of the estimated DI images.
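The quadratic core of graph-Laplacian regularization can be sketched as follows. This toy example denoises node values with a closed-form solve and deliberately omits the paper's patch-based graph construction, the photon-count statistics, and the Fisher-scoring updates; the chain graph and weights are illustrative assumptions.

```python
import numpy as np

def laplacian_regularized_estimate(y, W, lam=1.0):
    """Denoise a graph signal by solving min_x ||y - x||^2 + lam * x^T L x.

    y : (n,) noisy node values, W : (n, n) symmetric nonnegative edge weights.
    The closed-form solution is x = (I + lam * L)^(-1) y with L = D - W.
    """
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)

# Toy example: a chain graph whose two halves hold different constant values.
n = 60
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0
x_true = np.r_[np.zeros(n // 2), np.ones(n // 2)]
rng = np.random.default_rng(2)
y = x_true + 0.3 * rng.standard_normal(n)
x_hat = laplacian_regularized_estimate(y, W, lam=2.0)
print("noisy MSE:", np.mean((y - x_true) ** 2),
      "denoised MSE:", np.mean((x_hat - x_true) ** 2))
```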
{"title":"Restoration of depth and intensity images using a graph laplacian regularization","authors":"Abderrahim Halimi, P. Connolly, Ximing Ren, Y. Altmann, I. Gyöngy, R. Henderson, S. Mclaughlin, G. Buller","doi":"10.1109/CAMSAP.2017.8313136","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313136","url":null,"abstract":"This paper presents a new algorithm for the joint restoration of depth and intensity (DI) images constructed using a gated SPAD-array imaging system. The three dimensional (3D) data consists of two spatial dimensions and one temporal dimension, and contains photon counts (i.e., histograms). The algorithm is based on two steps: (i) construction of a graph connecting patches of pixels with similar temporal responses, and (ii) estimation of the DI values for pixels belonging to homogeneous spatial classes. The first step is achieved by building a graph representation of the 3D data, while giving a special attention to the computational complexity of the algorithm. The second step is achieved using a Fisher scoring gradient descent algorithm while accounting for the data statistics and the Laplacian regularization term. Results on laboratory data show the benefit of the proposed strategy that improves the quality of the estimated DI images.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131130067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiple sigma-point Kalman smoothers for high-dimensional state-space models
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313063
J. Vilà‐Valls, P. Closas, Á. F. García-Fernández, C. Fernández-Prades
This article presents a new multiple state-partitioning solution to the Bayesian smoothing problem in nonlinear high-dimensional Gaussian systems. The key idea is to partition the original state into several low-dimensional subspaces and apply an individual smoother to each of them. The main goal is to reduce the state dimension each filter has to explore, thereby mitigating the curse of dimensionality and the associated loss of accuracy. We provide the theoretical multiple smoothing formulation and a new nested sigma-point approximation to the resulting smoothing solution. The performance of the new approach is shown for the 40-dimensional Lorenz model.
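The dimensionality saving behind state partitioning can be illustrated with standard unscented-transform sigma points: splitting a 40-dimensional state into blocks lets each filter propagate a small sigma-point set in a low-dimensional subspace. This is only a generic sketch, not the paper's nested sigma-point approximation; the block sizes and scaling parameter are illustrative.

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Standard unscented-transform sigma points (2n + 1 points) for one state block."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    return np.array(pts)

# Full 40-dimensional state handled jointly vs. split into 8 blocks of 5 states each.
dim, n_blocks = 40, 8
full = sigma_points(np.zeros(dim), np.eye(dim))
blocks = [sigma_points(np.zeros(dim // n_blocks), np.eye(dim // n_blocks))
          for _ in range(n_blocks)]
print("joint sigma points:", full.shape)            # 81 points living in R^40
print("per-block sigma points:", blocks[0].shape)   # 11 points living in R^5, for each block
```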
{"title":"Multiple sigma-point Kalman smoothers for high-dimensional state-space models","authors":"J. Vilà‐Valls, P. Closas, Á. F. García-Fernández, C. Fernández-Prades","doi":"10.1109/CAMSAP.2017.8313063","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313063","url":null,"abstract":"This article presents a new multiple state-partitioning solution to the Bayesian smoothing problem in nonlinear high-dimensional Gaussian systems. The key idea is to partition the original state into several low-dimensional subspaces, and apply an individual smoother to each of them. The main goal is to reduce the state dimension each filter has to explore, to reduce the curse of dimensionality and eventual loss of accuracy. We provide the theoretical multiple smoothing formulation and a new nested sigma-point approximation to the resulting smoothing solution. The performance of the new approach is shown for the 40-dimensional Lorenz model.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133227689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Greedy phase retrieval with reference points and bounded sparsity
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313180
Daniel Franz, V. Kuehn
The phase retrieval problem of recovering a data vector from the squared magnitude of its Fourier transform cannot, in general, be solved uniquely, since the magnitude of the Fourier transform is invariant to a global phase shift, a cyclic spatial shift, and conjugate reversal of the signal. We discuss a method of introducing reference points into the signal to resolve the aforementioned ambiguities. After specifying requirements for these reference points, we present a modification of the GESPAR algorithm to solve the resulting problem.
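The three ambiguities can be verified numerically: the squared Fourier magnitude is unchanged under a global phase shift, a cyclic shift, and conjugate reversal. The snippet below is only a check of these invariances, not the proposed GESPAR modification.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
mag = np.abs(np.fft.fft(x)) ** 2

# Global phase shift, cyclic shift, and conjugate reversal all leave |FFT|^2 unchanged.
phase_shifted = np.exp(1j * 0.7) * x
cyclic_shifted = np.roll(x, 5)
conj_reversed = np.conj(np.roll(x[::-1], 1))   # conjugate reversal: x*[(-n) mod N]
for y in (phase_shifted, cyclic_shifted, conj_reversed):
    print(np.allclose(np.abs(np.fft.fft(y)) ** 2, mag))   # True, True, True
```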
{"title":"Greedy phase retrieval with reference points and bounded sparsity","authors":"Daniel Franz, V. Kuehn","doi":"10.1109/CAMSAP.2017.8313180","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313180","url":null,"abstract":"The phase retrieval problem of recovering a data vector from the squared magnitude of its Fourier transform in general can not be solved uniquely, since the magnitude of the Fourier transform is invariant to a global phase shift, cyclic spatial shift and the conjugate reversal of the signal. We discuss a method of introducing reference points in the signal to resolve aforementioned ambiguities. After specifying requirements for these reference points we present a modification of the GESPAR algorithm to solve the obtained problem.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127655080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational advances in sparse L1-norm principal-component analysis of multi-dimensional data
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313159
Shubham Chamadia, D. Pados
We consider the problem of extracting a sparse L1-norm principal component from a data matrix X ∈ R^(D×N) of N observation vectors of dimension D. Recently, an optimal algorithm was presented in the literature for the computation of sparse L1-norm principal components with complexity O(N^S), where S is the desired sparsity. In this paper, we present an efficient suboptimal algorithm of complexity O(N^2(N + D)). Extensive numerical studies demonstrate the near-optimal performance of the proposed algorithm and its strong resistance to faulty measurements/outliers in the data matrix.
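For intuition about the objective, the sketch below computes the (non-sparse) L1-norm principal component exactly for a tiny data matrix using the known sign-vector identity; the paper's sparse problem additionally restricts the support of the component, and neither the optimal O(N^S) algorithm nor the proposed O(N^2(N + D)) algorithm is reproduced here.

```python
import numpy as np
from itertools import product

def l1_principal_component(X):
    """Exact L1-norm principal component of X (D x N) by exhaustive sign search.

    Uses the identity max_{||q||_2=1} ||X^T q||_1 = max_{b in {+-1}^N} ||X b||_2,
    with maximizer q = X b* / ||X b*||_2. Exponential in N, so only for tiny N.
    """
    D, N = X.shape
    best_val, best_b = -np.inf, None
    for b in product([-1.0, 1.0], repeat=N):
        val = np.linalg.norm(X @ np.array(b))
        if val > best_val:
            best_val, best_b = val, np.array(b)
    q = X @ best_b
    return q / np.linalg.norm(q)

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 8))
X[:, -1] += 10 * rng.standard_normal(5)        # one gross outlier column
q = l1_principal_component(X)
print("L1-PC:", np.round(q, 3), "objective:", np.sum(np.abs(X.T @ q)))
```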
{"title":"Computational advances in sparse L1-norm principal-component analysis of multi-dimensional data","authors":"Shubham Chamadia, D. Pados","doi":"10.1109/CAMSAP.2017.8313159","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313159","url":null,"abstract":"We consider the problem of extracting a sparse Li-norm principal component from a data matrix X ∊ R<sup>D×N</sup> of N observation vectors of dimension D. Recently, an optimal algorithm was presented in the literature for the computation of sparse L<inf>1</inf>-norm principal components with complexity O(N<sup>S</sup>) where S is the desired sparsity. In this paper, we present an efficient suboptimal algorithm of complexity O(N<sup>2</sup>(N + D)). Extensive numerical studies demonstrate the near-optimal performance of the proposed algorithm and its strong resistance to faulty measurements/outliers in the data matrix.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117140298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local strong convexity of maximum-likelihood TDOA-based source localization and its algorithmic implications
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313119
Huikang Liu, Yuen-Man Pun, A. M. So
We consider the problem of single source localization using time-difference-of-arrival (TDOA) measurements. By analyzing the maximum-likelihood (ML) formulation of the problem, we show that under certain mild assumptions on the measurement noise, the estimation errors of both the closed-form least-squares estimate proposed in [1] and the ML estimate, as measured by their distances to the true source location, are of the same order. We then use this to establish the curious result that the objective function of the ML estimation problem is actually locally strongly convex at an optimal solution. This implies that some lightweight solution methods, such as the gradient descent (GD) and Levenberg-Marquardt (LM) methods, will converge to an optimal solution to the ML estimation problem when properly initialized, and the convergence rates can be determined by standard arguments. To the best of our knowledge, these results are new and contribute to the growing literature on the effectiveness of lightweight solution methods for structured non-convex optimization problems. Lastly, we demonstrate via simulations that the GD and LM methods can indeed produce more accurate estimates of the source location than some existing methods, including the widely used semidefinite relaxation-based methods.
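A minimal sketch of the GD route, assuming Gaussian noise on the range-difference measurements and an initialization near the source (e.g., from a closed-form least-squares estimate); the step size, iteration count, and toy geometry are illustrative choices, not the paper's settings.

```python
import numpy as np

def tdoa_ml_gd(sensors, d, x0, step=0.05, n_iter=2000):
    """Gradient descent on the (Gaussian-noise) ML cost for TDOA source localization.

    sensors : (M, 2) sensor positions, sensor 0 is the reference.
    d       : (M-1,) measured range differences ||x - s_i|| - ||x - s_0||, i = 1..M-1.
    x0      : initial guess for the source position.
    """
    x = np.array(x0, dtype=float)
    s0, si = sensors[0], sensors[1:]
    for _ in range(n_iter):
        r = np.linalg.norm(x - si, axis=1) - np.linalg.norm(x - s0) - d
        grad_r = (x - si) / np.linalg.norm(x - si, axis=1)[:, None] \
                 - (x - s0) / np.linalg.norm(x - s0)
        x -= step * (2 * grad_r.T @ r) / len(r)     # gradient of the mean squared residual
    return x

# Toy scenario: 5 sensors, noisy range differences, initialization near the truth.
rng = np.random.default_rng(5)
sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, -5]], float)
x_true = np.array([3.0, 4.0])
d = (np.linalg.norm(x_true - sensors[1:], axis=1)
     - np.linalg.norm(x_true - sensors[0]) + 0.05 * rng.standard_normal(4))
print(tdoa_ml_gd(sensors, d, x0=[4.0, 5.0]))   # close to (3, 4)
```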
{"title":"Local strong convexity of maximum-likelihood TDOA-Based source localization and its algorithmic implications","authors":"Huikang Liu, Yuen-Man Pun, A. M. So","doi":"10.1109/CAMSAP.2017.8313119","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313119","url":null,"abstract":"We consider the problem of single source localization using time-difference-of-arrival (TDOA) measurements. By analyzing the maximum-likelihood (ML) formulation of the problem, we show that under certain mild assumptions on the measurement noise, the estimation errors of both the closed-form least-squares estimate proposed in [1] and the ML estimate, as measured by their distances to the true source location, are of the same order. We then use this to establish the curious result that the objective function of the ML estimation problem is actually locally strongly convex at an optimal solution. This implies that some lightweight solution methods, such as the gradient descent (GD) and Levenberg-Marquardt (LM) methods, will converge to an optimal solution to the ML estimation problem when properly initialized, and the convergence rates can be determined by standard arguments. To the best of our knowledge, these results are new and contribute to the growing literature on the effectiveness of lightweight solution methods for structured non-convex optimization problems. Lastly, we demonstrate via simulations that the GD and LM methods can indeed produce more accurate estimates of the source location than some existing methods, including the widely used semidefinite relaxation-based methods.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127156689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information distances for radar resolution analysis
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313060
R. Pribic, G. Leus
We explore a stochastic approach to resolution based on information distances computed from the geometry of data models, which is characterized by the Fisher information. Stochastic resolution includes the probability of resolution and the signal-to-noise ratio (SNR). The probability of resolution is assessed from a hypothesis test by exploiting information distances in a likelihood ratio. Taking SNR into account is especially relevant in compressive sensing (CS) because of its reduced number of measurements. Based on this information-geometry approach, we demonstrate the stochastic resolution analysis in test cases from array processing. In addition, we compare our stochastic resolution bounds with the actual resolution obtained numerically from sparse signal processing, which is nowadays a major component of the back end of any CS sensor. The results demonstrate the suitability of the proposed stochastic resolution analysis, owing to its ability to include crucial features in the resolution performance guarantees: array configuration or sensor design, SNR, separation, and probability of resolution.
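As a simplified illustration of the kind of quantity involved, the sketch below computes a noise-normalized distance between array responses at two candidate angles under a Gaussian mean model and shows how it grows with SNR and angular separation; the paper's analysis uses the full Fisher-information geometry and a likelihood-ratio test to turn such distances into a probability of resolution, which is not reproduced here, and the array model below is an assumption.

```python
import numpy as np

def ula_steering(theta_deg, n_sensors, spacing=0.5):
    """Steering vector of a uniform linear array (half-wavelength spacing by default)."""
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

def gaussian_info_distance(theta1, theta2, n_sensors, snr_db):
    """Distance between the noise-normalized means a(theta1)s and a(theta2)s.

    For y = a(theta) s + n with n ~ CN(0, sigma^2 I), two candidate angles are separated
    (in the information geometry of the Gaussian mean family) by ||a1 - a2|| * |s| / sigma.
    """
    snr = 10 ** (snr_db / 10)
    a1, a2 = ula_steering(theta1, n_sensors), ula_steering(theta2, n_sensors)
    return np.sqrt(snr) * np.linalg.norm(a1 - a2)

# The distance (and hence resolvability) grows with SNR and angular separation.
for snr_db in (0, 10, 20):
    for sep in (0.5, 1.0, 2.0):
        print(snr_db, "dB, separation", sep, "deg ->",
              round(gaussian_info_distance(0.0, sep, 16, snr_db), 2))
```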
{"title":"Information distances for radar resolution analysis","authors":"R. Pribic, G. Leus","doi":"10.1109/CAMSAP.2017.8313060","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313060","url":null,"abstract":"A stochastic approach to resolution based on information distances computed from the geometry of data models which is characterized by the Fisher information is explored. Stochastic resolution includes probability of resolution and signal-to-noise ratio (SNR). The probability of resolution is assessed from a hypothesis test by exploiting information distances in a likelihood ratio. Taking SNR into account is especially relevant in compressive sensing (CS) due to its fewer measurements. Based on this information-geometry approach, we demonstrate the stochastic resolution analysis in test cases from array processing. In addition, we also compare our stochastic resolution bounds with the actual resolution obtained numerically from sparse signal processing which nowadays is a major component of the back end of any CS sensor. Results demonstrate the suitability of the proposed stochastic resolution analysis due to its ability to include crucial features in the resolution performance guarantees: array configuration or sensor design, SNR, separation and probability of resolution.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125549166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed edge-variant graph filters
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313105
M. Coutiño, E. Isufi, G. Leus
The main challenges distributed graph filters face in practice are communication overhead and computational complexity. In this work, we extend state-of-the-art distributed finite impulse response (FIR) graph filters to an edge-variant (EV) version, i.e., a filter in which every node weights the signals from its neighbors with different values. Besides having the potential to reduce the filter order, leading to communication and complexity savings, the EV graph filter generalizes the class of classical and node-variant FIR graph filters. Numerical tests validate our findings and illustrate the potential of EV graph filters to (i) approximate a user-provided frequency response and (ii) implement distributed consensus with much lower orders than their direct contenders.
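To contrast the two filter classes, the sketch below applies a classical FIR graph filter with scalar taps and an edge-variant filter in which every shift uses its own graph-supported weight matrix. The accumulation-of-products recursion shown is one common way to write an edge-variant filter and may differ in detail from the paper's exact parameterization; the toy graph and weights are illustrative.

```python
import numpy as np

def classical_fir_graph_filter(S, x, h):
    """Classical FIR graph filter: y = sum_k h[k] * S^k x (one scalar tap per shift)."""
    y, z = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * z
        z = S @ z                  # one local exchange with the neighbors per tap
    return y

def edge_variant_graph_filter(Phis, x):
    """Edge-variant FIR graph filter: every shift uses its own edge-weight matrix Phi_k.

    Each Phi_k must share the support of the graph (plus self-loops), so one application
    still only requires communication with direct neighbors.
    y = Phi_1 x + Phi_2 Phi_1 x + ...   (one common edge-variant recursion)
    """
    y, z = np.zeros_like(x), x.copy()
    for Phi in Phis:
        z = Phi @ z
        y += z
    return y

# Tiny example: a 4-node cycle graph.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x = np.array([1.0, 0.0, 0.0, 0.0])
print(classical_fir_graph_filter(A, x, h=[0.5, 0.3, 0.2]))
rng = np.random.default_rng(6)
mask = A + np.eye(4)                               # allowed (local) entries
Phis = [mask * rng.standard_normal((4, 4)) for _ in range(3)]
print(edge_variant_graph_filter(Phis, x))
```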
{"title":"Distributed edge-variant graph filters","authors":"M. Coutiño, E. Isufi, G. Leus","doi":"10.1109/CAMSAP.2017.8313105","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313105","url":null,"abstract":"The main challenges distributed graph filters face in practice are the communication overhead and computational complexity. In this work, we extend the state-of-the-art distributed finite impulse response (FIR) graph filters to an edge-variant (EV) version, i.e., a filter where every node weights the signals from its neighbors with different values. Besides having the potential to reduce the filter order leading to amenable communication and complexity savings, the EV graph filter generalizes the class of classical and node-variant FIR graph filters. Numerical tests validate our findings and illustrate the potential of the EV graph filters to (i) approximate a user-provided frequency response; and (ii) implement distributed consensus with much lower orders than its direct contenders.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126399252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition as a Kronecker product equation
Pub Date: 2017-12-01 | DOI: 10.1109/CAMSAP.2017.8313140
Martijn Boussé, Nico Vervliet, Otto Debals, L. D. Lathauwer
Various parameters, such as expression, pose, and illumination, influence face recognition. In contrast to matrices, tensors can be used to naturally accommodate these different modes of variation. The multilinear singular value decomposition (MLSVD) then allows one to describe each mode with a factor matrix and the interaction between the modes with a coefficient tensor. In this paper, we show that each image in a tensor satisfying an MLSVD model can be expressed as a structured linear system called a Kronecker product equation (KPE). By solving a similar KPE for a new image, we can extract a feature vector that allows us to recognize the person with high accuracy. Additionally, more robust results can be obtained by using multiple images of the same person under different conditions, leading to a coupled KPE. Finally, our method can be used to add an unknown person to the database using only a few images instead of an image for each combination of conditions. We illustrate our method on the extended Yale Face Database B, achieving better performance than conventional methods such as Eigenfaces and other tensor-based techniques.
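The Kronecker structure itself can be illustrated independently of the face-recognition pipeline: a Kronecker product equation (A ⊗ B)x = y never requires forming the large Kronecker matrix, since it is equivalent to a small matrix equation. The sketch below demonstrates only this generic structure, not the MLSVD training step or the coupled KPE; the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
m, p, n, q = 5, 4, 6, 3
A, B = rng.standard_normal((m, p)), rng.standard_normal((n, q))
X = rng.standard_normal((q, p))                     # small unknown; vec(X) has p*q entries

# Kronecker product equation: y = (A kron B) vec(X), with column-major (Fortran) vec.
y = np.kron(A, B) @ X.flatten(order='F')

# The same relation without ever forming the (m*n) x (p*q) Kronecker matrix:
# (A kron B) vec(X) = vec(B X A^T).
print(np.allclose(y, (B @ X @ A.T).flatten(order='F')))   # True

# Recovering vec(X) from y is then a least-squares problem in only p*q unknowns.
x_hat, *_ = np.linalg.lstsq(np.kron(A, B), y, rcond=None)
print(np.allclose(x_hat, X.flatten(order='F')))           # True
```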
{"title":"Face recognition as a kronecker product equation","authors":"Martijn Boussé, Nico Vervliet, Otto Debals, L. D. Lathauwer","doi":"10.1109/CAMSAP.2017.8313140","DOIUrl":"https://doi.org/10.1109/CAMSAP.2017.8313140","url":null,"abstract":"Various parameters influence face recognition such as expression, pose, and illumination. In contrast to matrices, tensors can be used to naturally accommodate for the different modes of variation. The multilinear singular value decomposition (MLSVD) then allows one to describe each mode with a factor matrix and the interaction between the modes with a coefficient tensor. In this paper, we show that each image in the tensor satisfying an MLSVD model can be expressed as a structured linear system called a Kronecker Product Equation (KPE). By solving a similar KPE for a new image, we can extract a feature vector that allows us to recognize the person with high performance. Additionally, more robust results can be obtained by using multiple images of the same person under different conditions, leading to a coupled KPE. Finally, our method can be used to update the database with an unknown person using only a few images instead of an image for each combination of conditions. We illustrate our method for the extended Yale Face Database B, achieving better performance than conventional methods such as Eigenfaces and other tensor-based techniques.","PeriodicalId":315977,"journal":{"name":"2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)","volume":"50 s26","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113957070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}