This paper presents a new interactive scatter plot visualization for multi-dimensional data analysis. We apply RST to reduce visual complexity through dimensionality reduction. We use an innovative point-to-region mouse-click concept to enable direct interaction with scatter points that would otherwise be practically impossible to hit. To show the decision trend, we use a virtual Z dimension to display a set of linear flows that approximate it. We conducted a case study on a wine dataset of 4898 samples with 12 attributes to demonstrate the effectiveness and usefulness of the technique for identifying the sources of impact on wine quality through visual analytics.
{"title":"An Interactive Scatter Plot Metrics Visualization for Decision Trend Analysis","authors":"Tze-Haw Huang, M. Huang, Kang Zhang","doi":"10.1109/ICMLA.2012.164","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.164","url":null,"abstract":"This paper presents a new interactive scatter plot visualization for multi-dimensional data analysis. We apply RST to reduce the visual complexity through dimensionality reduction. We use an innovative point-to-region mouse click concept to enable direct interactions with scatter points that are theoretically impossible. To show the decision trend we use a virtual Z dimension to display a set of linear flows showing approximation of the decision trend. We have conducted a case study to demonstrate the effectiveness and usefulness of our new technique for identifying the impact sources of wine quality through the visual analytics of a wine dataset consisting of 12 attributes with 4898 samples.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125996170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We compare classifiers for myoelectric signal classification and show that performance can be improved by using spatial features extracted with independent component analysis. The obtained filters can be interpreted as reflecting the spatial structure of the data source. We find that performance improves for several preprocessing algorithms, but the preprocessing affects the relative performance of the classifiers in different ways. The performance differences are especially pronounced when non-stationary signal regimes during the onset of static contractions are included. Although a practically usable level of performance is reached on the present data set with a suitable combination of preprocessing and classification algorithms, further optimization is needed to maintain this level on more realistic data sets.
{"title":"Spatial Feature Extraction for Classification of Nonstationary Myoelectric Signals","authors":"David Hofmann","doi":"10.1109/ICMLA.2012.222","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.222","url":null,"abstract":"We compare classifiers for the classification of myoelectric signals and show that the performance can be improved by using spatial features that are extracted by independent component analysis. The obtained filters can be interpreted as reflecting the spatial structure of the data source. We find that the performance improves for several preprocessing algorithms, but it affects the relative performance for various classifiers in different ways. A critical performance difference is especially seen when non-stationary signal regimes during the onset of static contractions are included. Although a practically utilizable performance appears to be reached for the present data set by a certain combination of classification and preprocessing algorithms, it remains to be further optimized in order to keep this level for more realistic data sets.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128452052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A trend in machine learning is the application of existing algorithms to ever-larger datasets. Support Vector Machines (SVMs) have been shown to be very effective but have been difficult to scale to large-data problems. Some approaches have sought to scale SVM training by approximating and parallelizing the underlying quadratic optimization problem. This paper pursues a different approach. Our algorithm, which we call Sampled SVM, uses an existing SVM training algorithm as the building block for a new one, applying randomized data sampling to better extend SVMs to large-data applications. Experiments on several datasets show that our method is faster than, and comparable in accuracy to, both the original SVM algorithm it is based on and Cascade SVM, the leading data-organization approach for SVMs in the literature. Further, we show that our approach is more amenable to parallelization than Cascade SVM.
{"title":"Randomized Sampling for Large Data Applications of SVM","authors":"Erik M. Ferragut, J. Laska","doi":"10.1109/ICMLA.2012.65","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.65","url":null,"abstract":"A trend in machine learning is the application of existing algorithms to ever-larger datasets. Support Vector Machines (SVM) have been shown to be very effective, but have been difficult to scale to large-data problems. Some approaches have sought to scale SVM training by approximating and parallelizing the underlying quadratic optimization problem. This paper pursues a different approach. Our algorithm, which we call Sampled SVM, uses an existing SVM training algorithm to create a new SVM training algorithm. It uses randomized data sampling to better extend SVMs to large data applications. Experiments on several datasets show that our method is faster than and comparably accurate to both the original SVM algorithm it is based on and the Cascade SVM, the leading data organization approach for SVMs in the literature. Further, we show that our approach is more amenable to parallelization than Cascade SVM.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128643953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a new centroid-based approach to classifying web pages by genre using character n-grams extracted from different information sources such as the URL, title, headings, and anchors. To deal with the complexity of web pages and the rapid evolution of web genres, our approach implements a multi-label and adaptive classification scheme in which web pages are classified one by one and each web page can be assigned to more than one genre. Depending on the similarity between the new page and each genre centroid, our approach either adapts the genre centroid under consideration or treats the new page as noise and discards it. The experimental results show that our approach is very fast and achieves better results than existing multi-label classifiers.
{"title":"A Multi-label and Adaptive Genre Classification of Web Pages","authors":"Chaker Jebari","doi":"10.1109/ICMLA.2012.106","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.106","url":null,"abstract":"This paper proposes a new centroid-based approach to classify web pages by genre using character ngrams extracted from different information sources such as URL, title, headings and anchors. To deal with the complexity of web pages and the rapid evolution of web genres, our approach implements a multi-label and adaptive classification scheme in which web pages are classified one by one and each web page can affect more than one genre. According to the similarity between the new page and each genre centroid, our approach either adapts the genre centroid under consideration or considers the new page as noise page and discards it. The experiment results show that our approach is very fast and achieves better results than existing multi-label classifiers.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129357226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a neural fuzzy network approach for evolving system modeling. The approach uses neo-fuzzy neurons and a neural fuzzy structure combined with an incremental learning algorithm that includes adaptive feature selection. The feature selection mechanism starts with one or more input variables from a given set and decides whether a new variable should be added and whether an existing variable should be excluded or kept as an input. The decision process uses statistical tests and information about the current model performance. The incremental learning scheme simultaneously selects the input variables and updates the neural network weights. The weights are adjusted using a gradient-based scheme with an optimal learning rate. The performance of the models obtained with this neural fuzzy modeling approach is evaluated on weather temperature forecasting problems. Computational results show that the approach is competitive with alternatives reported in the literature, especially in online modeling situations where processing time and learning are critical.
{"title":"Evolving Neural Fuzzy Network with Adaptive Feature Selection","authors":"Alisson Marques da Silva, W. Caminhas, A. Lemos, F. Gomide","doi":"10.1109/ICMLA.2012.184","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.184","url":null,"abstract":"This paper introduces a neural fuzzy network approach for evolving system modeling. The approach uses neofuzzy neurons and a neural fuzzy structure monished with an incremental learning algorithm that includes adaptive feature selection. The feature selection mechanism starts considering one or more input variables from a given set of variables, and decides if a new variable should be added, or if an existing variable should be excluded or kept as an input. The decision process uses statistical tests and information about the current model performance. The incremental learning scheme simultaneously selects the input variables and updates the neural network weights. The weights are adjusted using a gradient-based scheme with optimal learning rate. The performance of the models obtained with the neural fuzzy modeling approach is evaluated considering weather temperature forecasting problems. Computational results show that the approach is competitive with alternatives reported in the literature, especially in on-line modeling situations where processing time and learning are critical.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127167502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The performance of a classification system depends on various aspects, including the encoding technique. In fact, encoding techniques play a primary role in the process of tuning a classifier/predictor, as choosing the most appropriate encoder may greatly affect its performance. As of now, evaluating the impact of an encoding technique on a classification system typically requires training the system and testing it by means of a performance metric deemed relevant (e.g., precision, recall, or the Matthews correlation coefficient). For this reason, assessing a single encoding technique is a time-consuming activity, and it introduces additional degrees of freedom (e.g., the parameters of the training algorithm) that may be uncorrelated with the encoding technique to be assessed. In this paper, we propose a family of methods for measuring the performance of encoding techniques used in classification tasks, based on the correlation between the encoded input data and the corresponding output. The proposed approach provides correlation-based metrics devised with the primary goal of focusing on the encoding technique while leaving unrelated aspects aside. Notably, the proposed technique saves computational time to a great extent, as it needs only a tiny fraction of the time required by standard methods.
{"title":"Assessing Encoding Techniques through Correlation-Based Metrics","authors":"G. Armano, E. Tamponi","doi":"10.1109/ICMLA.2012.118","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.118","url":null,"abstract":"The performance of a classification system depends on various aspects, including encoding techniques. In fact, encoding techniques play a primary role in the process of tuning a classifier/predictor, as choosing the most appropriate encoder may greatly affect its performance. As of now, evaluating the impact of an encoding technique on a classification system typically requires to train the system and test it by means of a performance metric deemed relevant (e.g., precision, recall, and Matthews correlation coefficients). For this reason, assessing a single encoding technique is a time consuming activity, which introduces some additional degrees of freedom (e.g., parameters of the training algorithm) that may be uncorrelated with the encoding technique to be assessed. In this paper, we propose a family of methods to measure the performance of encoding techniques used in classification tasks, based on the correlation between encoded input data and the corresponding output. The proposed approach provides correlation-based metrics, devised with the primary goal of focusing on the encoding technique, leading other unrelated aspects apart. Notably, the proposed technique allows to save computational time to a great extent, as it needs only a tiny fraction of the time required by standard methods.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132023179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Needle electromyography, in combination with nerve conduction studies, is the gold-standard methodology for assessing the neurophysiologic effects of neuromuscular diseases. Muscle categorization is typically based on visual and auditory assessment of the morphology and activation patterns of a muscle's constituent motor units, a procedure that is highly dependent on the skill and level of experience of the examiner. This motivates the development of automated or semi-automated categorization techniques. This paper describes a 2-stage Gaussian mixture model based approach. In the first stage, a muscle is classified as neurogenic or myopathic. The second stage uses a classifier specific to each disease category to confirm or refute the disease involvement. A total of 2556 motor unit potentials sampled from 48 normal, 30 neurogenic, and 20 myopathic tibialis anterior muscles were used in this study. The proposed approach showed an average accuracy of 91.25%, which is higher than the compared linear and non-linear multi-class schemes. In addition to improved accuracy, the 2-stage approach is more suitable for muscle categorization because it has a hierarchical decision structure similar to current clinical practice and its output can be interpreted as a measure of confidence.
{"title":"Muscle Categorization Using Quantitative Needle Electromyography: A 2-Stage Gaussian Mixture Model Based Approach","authors":"M. Abdelmaseeh, P. Poupart, Benn Smith, D. Stashuk","doi":"10.1109/ICMLA.2012.100","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.100","url":null,"abstract":"Needle Electromyography, in combination with nerve conduction studies, is the gold standard methodology for assessing the neurophysiologic effects of neuromuscular diseases. Muscle categorization is typically based on visual and auditory assessment of the morphology and activation patterns of its constituent motor units. A procedure which is highly dependent on the skills and level of experience of the examiner. This motivates the development of automated or semi-automated categorization techniques. This paper describes a 2-stage Gaussian mixture model based approach. In the first stage, a muscle is classified as neurogenic or myopathic. The second stage uses a classifier specific to each disease category to confirm or refute the disease involvement. A total of 2556 motor unit potentials sampled from 48 normal, 30 neurogenic and 20 myopathic tibialis anterior muscles were utilized for this study. The proposed approach showed an average accuracy of 91.25%, which is higher than the compared linear and non-linear multi-class schemas. In addition to improved accuracy, the 2-stage approach is more suitable for the muscle categorization, because it has a hierarchical decision structure similar to current clinical practice, and its output can be interpreted as a measure of confidence.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132079755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time series shapelets are short, local time series subsequences that are in some sense maximally representative of a class. E. Keogh uses the distance to a shapelet to classify objects. Although shapelet classification is interpretable and can be more accurate than many state-of-the-art classifiers, it has one main limitation: the training process is offline, and even with subsequence early-abandoning and admissible entropy pruning strategies, the time needed to find shapelets is still significant. In this work, we address this problem by introducing a novel algorithm that finds time series shapelets in significantly less time than current methods by extracting infrequent shapelet candidates, since subsequences that are distinguishable are usually infrequent compared with other subsequences. The algorithm, called ISDT (Infrequent Shapelet Decision Tree), uses infrequent shapelet candidate extraction to find shapelets. Experiments on several benchmark time series datasets demonstrate the efficiency of ISDT. The results show that ISDT significantly outperforms the current shapelet algorithm.
{"title":"Fast Time Series Classification Based on Infrequent Shapelets","authors":"Qing He, Zhi Dong, Fuzhen Zhuang, Tianfeng Shang, Zhongzhi Shi","doi":"10.1109/ICMLA.2012.44","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.44","url":null,"abstract":"Time series shapelets are small and local time series subsequences which are in some sense maximally representative of a class. E.Keogh uses distance of the shapelet to classify objects. Even though shapelet classification can be interpretable and more accurate than many state-of-the-art classifiers, there is one main limitation of shapelets, i.e. shapelet classification training process is offline, and uses subsequence early abandon and admissible entropy pruning strategies, the time to compute is still significant. In this work, we address the later problem by introducing a novel algorithm that finds time series shapelet in significantly less time than the current methods by extracting infrequent time series shapelet candidates. Subsequences that are distinguishable are usually infrequent compared to other subsequences. The algorithm called ISDT (Infrequent Shapelet Decision Tree) uses infrequent shapelet candidates extracting to find shapelet. Experiments demonstrate the efficiency of ISDT algorithm on several benchmark time series datasets. The result shows that ISDT significantly outperforms the current shapelet algorithm.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"8 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130833479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probabilistic latent semantic analysis (PLSA) has been widely used in the machine learning community. However, the original PLSA is not capable of modeling real-valued observations and usually suffers from severe overfitting. To address both issues, we propose a novel regularized Gaussian PLSA (RG-PLSA) model that combines Gaussian PLSA and hierarchical Gaussian mixture models (HGMMs). We evaluate our model on supervised human action recognition tasks using two publicly available datasets. Average classification accuracies of 97.69% and 93.72% are achieved on the Weizmann and KTH action datasets, respectively, demonstrating that the RG-PLSA model outperforms Gaussian PLSAs and HGMMs and is comparable to the state of the art.
{"title":"Regularized Probabilistic Latent Semantic Analysis with Continuous Observations","authors":"Hao Zhang, Richard E. Edwards, L. Parker","doi":"10.1109/ICMLA.2012.102","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.102","url":null,"abstract":"Probabilistic latent semantic analysis (PLSA) has been widely used in the machine learning community. However, the original PLSAs are not capable of modeling real-valued observations and usually have severe problems with over fitting. To address both issues, we propose a novel, regularized Gaussian PLSA (RG-PLSA) model that combines Gaussian PLSAs and hierarchical Gaussian mixture models (HGMM). We evaluate our model on supervised human action recognition tasks, using two publicly available datasets. Average classification accuracies of 97.69% and 93.72% are achieved on the Weizmann and KTH Action Datasets, respectively, which demonstrate that the RG-PLSA model outperforms Gaussian PLSAs and HGMMs, and is comparable to the state of the art.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131346155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Even though facial expressions have universal meaning in communication, their appearance shows a large amount of variation due to many factors, such as different image acquisition setups, ages, genders, and cultural backgrounds. Because collecting enough annotated samples for each target domain is impractical, this paper investigates the problem of facial expression recognition in the more challenging situation where the training and testing samples are taken from different domains. To address this problem, and after observing the unsatisfactory performance of the Kernel Mean Matching (KMM) algorithm, we propose a supervised extension that matches the distributions in a class-to-class manner, called Supervised Kernel Mean Matching (SKMM). The new approach stands out by simultaneously matching the distributions and preserving the discriminative information between classes. Extensive experimental studies on four cross-dataset facial expression recognition tasks show promising improvements from the proposed method, in which a small number of labeled samples guide the matching process.
{"title":"Cross-Domain Facial Expression Recognition Using Supervised Kernel Mean Matching","authors":"Yun-Qian Miao, Rodrigo Araujo, M. Kamel","doi":"10.1109/ICMLA.2012.178","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.178","url":null,"abstract":"Even though facial expressions have universal meaning in communications, their appearances show a large amount of variation due to many factors, such as different image acquisition setups, different ages, genders, and cultural backgrounds etc. Collecting enough amounts of annotated samples for each target domain is impractical, this paper investigates the problem of facial expression recognition in the more challenging situation, where the training and testing samples are taken from different domains. To address this problem, after observing the fact of unsatisfactory performance of the Kernel Mean Matching (KMM) algorithm, we propose a supervised extension that matches the distributions in a class-to-class manner, called Supervised Kernel Mean Matching (SKMM). The new approach stands out by taking into consideration both matching the distributions and preserving the discriminative information between classes at the same time. The extensive experimental studies on four cross-dataset facial expression recognition tasks show promising improvements of the proposed method, in which a small number of labeled samples guide the matching process.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125594828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}