An evolutionary confidence measurement for spoken term detection
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972537
Javier Tejedor, A. Echeverría, Dong Wang
We propose a new discriminative confidence measurement approach based on an evolution strategy for spoken term detection (STD). Our evolutionary algorithm, named evolutionary discriminant analysis (EDA), optimizes classification error directly, a salient advantage over conventional discriminative models such as MLPs and SVMs, which optimize objective functions based on a particular class encoding. In addition, the intrinsic randomness of the evolution strategy largely reduces the risk of converging to local minima during model training. This is particularly valuable when the decision boundary is complex, as is the case when dealing with out-of-vocabulary (OOV) terms in STD. Experimental results on English meeting data demonstrate considerable performance improvement with the EDA-based confidence for OOV terms compared with MLP- and SVM-based confidences; for in-vocabulary terms, however, no significant difference is observed among the three models. This confirms our conjecture that EDA offers a greater advantage for tasks with complex decision boundaries.
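The abstract does not give implementation details, but a minimal (1+1) evolution strategy that directly minimizes the 0/1 classification error of detection confidences conveys the core idea. The linear confidence model, feature dimension, and hyperparameters below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def error_rate(w, X, y):
    """0/1 loss of a linear confidence classifier: the quantity the
    evolution strategy minimizes directly, with no surrogate objective."""
    preds = (X @ w[:-1] + w[-1]) > 0
    return float(np.mean(preds != y))

def evolve(X, y, generations=500, sigma=0.1):
    w = rng.normal(size=X.shape[1] + 1)               # weights + bias
    best = error_rate(w, X, y)
    for _ in range(generations):
        child = w + sigma * rng.normal(size=w.shape)  # Gaussian mutation
        err = error_rate(child, X, y)
        if err <= best:                               # (1+1) selection
            w, best = child, err
    return w, best

# Toy usage: 200 detections described by 3 confidence features each.
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.5 * X[:, 1] > 0
w, err = evolve(X, y)
print(f"training error: {err:.3f}")
```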
{"title":"An evolutionary confidence measurement for spoken term detection","authors":"Javier Tejedor, A. Echeverría, Dong Wang","doi":"10.1109/CBMI.2011.5972537","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972537","url":null,"abstract":"We propose a new discriminative confidence measurement approach based on an evolution strategy for spoken term detection (STD). Our evolutionary algorithm, named evolutionary discriminant analysis (EDA), optimizes classification errors directly, which is a salient advantage compared with some conventional discriminative models which optimize objective functions based on certain class encoding, e.g. MLPs and SVMs. In addition, with the intrinsic randomness of the evolution strategy, EDA largely reduces the risk of converging to local minimums in model training. This is particularly valuable when the decision boundary is complex, which is the case when dealing with out-of-vocabulary (OOV) terms in STD. Experimental results on the meeting domain in English demonstrate considerable performance improvement with the EDA-based confidence for OOV terms compared with MLPs- and SVMs-based confidences; for in-vocabulary terms, however, no significant difference is observed with the three models. This confirms our conjecture that EDA exhibits more advantage for tasks with complex decision boundaries.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123205812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time single-view video event recognition in controlled environments
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972527
Juan C. Sanmiguel, Marcos Escudero-Viñolo, J. Sanchez, Jesús Bescós
This paper presents a real-time video event recognition system for controlled environments. It recognizes human activities and interactions with objects in the environment by exploiting cues such as trajectory analysis, skin detection, and people recognition on the foreground blobs of the scene. Time variations of these features are studied and combined using Bayesian inference to detect the events. Contextual information, including fixed object locations, object types, and hierarchical event definitions, is formally included in the system. A corpus of video sequences has been designed and recorded considering different complexity levels for object extraction. Experimental results show that our approach can recognize five kinds of events (two activities and three human-object interactions) with high precision while operating in real time.
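As a rough illustration of combining per-frame cues with Bayesian inference, a naive-Bayes update over independent cue likelihoods is sketched below. The cue names and likelihood values are hypothetical; the paper's actual inference model is not specified in the abstract.

```python
import numpy as np

def posterior(prior, cues):
    """Naive-Bayes update: P(event | cues) is proportional to
    P(event) * product of P(cue | event), with cue observations
    assumed conditionally independent given the event."""
    p_event = prior * np.prod([p_if_event for p_if_event, _ in cues])
    p_none = (1 - prior) * np.prod([p_if_none for _, p_if_none in cues])
    return p_event / (p_event + p_none)

# Each tuple is (P(cue | event), P(cue | no event)) for one frame:
# trajectory near an object, skin detected on the blob, blob is a person.
cues = [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]
print(posterior(prior=0.1, cues=cues))  # ~0.70
```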
{"title":"Real-time single-view video event recognition in controlled environments","authors":"Juan C. Sanmiguel, Marcos Escudero-Viñolo, J. Sanchez, Jesús Bescós","doi":"10.1109/CBMI.2011.5972527","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972527","url":null,"abstract":"This paper presents a real-time video event recognition system for controlled environments. It is able to recognize human activities and interactions with the objects of the environment by exploiting different cues like trajectory analysis, skin detection and people recognition of the foreground blobs of the scene. Time variations of these features are studied and combined using Bayesian inference to detect the events. Contextual information, including fixed objects' location, object types and event hierarchical definitions, is formally included in the system. A corpus of video sequences has been designed and recorded considering different complexity levels for object extraction. Experimental results show that our approach can recognize five kinds of events (two activities and three human-object interactions) with high precision operating at real-time.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125260478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A region-dependent image matching method for image and video annotation
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972532
Golnaz Abdollahian, M. Birinci, F. Díaz-de-María, M. Gabbouj, E. Delp
In this paper we propose an image matching approach that selects the matching method for each region of the image based on the region's properties. This method can be used to find images similar to a query image in a database, which is useful for automatic image and video annotation. In this approach, each image is first divided into large homogeneous areas, identified as “texture areas”, and non-texture areas. Local descriptors are then used to match keypoints in the non-texture areas, while texture regions are matched using low-level visual features. Experimental results show that while excluding texture areas from local descriptor matching increases the efficiency of the whole process, using appropriate measures for different regions also increases overall performance.
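A hedged sketch of the region-dependent dispatch described above: homogeneous regions are compared with low-level features, everything else with local keypoint descriptors. The homogeneity test (gray-level variance), the descriptor choices (color histograms, ORB), and the threshold are our assumptions; the abstract does not specify the paper's segmentation or descriptors.

```python
import cv2

def is_texture_area(patch, var_threshold=50.0):
    """Crude homogeneity proxy: low gray-level variance marks the large
    homogeneous "texture areas" (a hypothetical stand-in test)."""
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    return gray.var() < var_threshold

def region_similarity(patch_a, patch_b):
    if is_texture_area(patch_a) and is_texture_area(patch_b):
        # Texture areas: low-level features (8x8x8 color histograms).
        ha = cv2.calcHist([patch_a], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hb = cv2.calcHist([patch_b], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        return cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)
    # Non-texture areas: local keypoint descriptors (ORB stands in here).
    orb = cv2.ORB_create()
    _, da = orb.detectAndCompute(patch_a, None)
    _, db = orb.detectAndCompute(patch_b, None)
    if da is None or db is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    return len(matches) / max(len(da), len(db))
```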
{"title":"A region-dependent image matching method for image and video annotation","authors":"Golnaz Abdollahian, M. Birinci, F. Díaz-de-María, M. Gabbouj, E. Delp","doi":"10.1109/CBMI.2011.5972532","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972532","url":null,"abstract":"In this paper we propose an image matching approach that selects the method of matching for each region in the image based on the region properties. This method can be used to find images similar to a query image from a database, which is useful for automatic image and video annotation. In this approach, each image is first divided into large homogeneous areas, identified as “texture areas”, and non-texture areas. Local descriptors are then used to match the keypoints in the non-texture areas, while texture regions are matched based on low level visual features. Experimental results prove that while exclusion of texture areas from local descriptor matching increases the efficiency of the whole process, utilization of appropriate measures for different regions can also increase the overall performance.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122105186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
dpikt — Automatic illustration system for media content
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972552
Filipe Coelho, Cristina Ribeiro
Journalists and bloggers need to illustrate news stories and blog entries with high-quality photos. The dpikt text illustration system uses multimedia information retrieval to assist this content-enrichment task. Users query the system with text fragments and get collections of candidate photos. Images in the results can be visually sorted according to a selected photo, or used as a seed for interactive searches over the entire collection. dpikt incorporates a recent visual descriptor, the Joint Composite Descriptor, and an approximate indexing scheme designed for large-scale image collections, the Permutation-Prefix Index. We have used the SAPO-Labs large-scale news photo collection, containing almost two million high-quality photos with short descriptions, as the resource for the illustration task.
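The permutation-prefix idea behind the index mentioned above can be illustrated with a small sketch: each descriptor is represented by the order in which it "sees" a fixed set of reference pivots, and a short prefix of that permutation serves as the index key. The pivot count, prefix length, and toy data are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
pivots = rng.normal(size=(32, 64))   # 32 reference objects in a 64-d space

def permutation_prefix(x, prefix_len=6):
    """IDs of the prefix_len pivots closest to x, nearest first."""
    distances = np.linalg.norm(pivots - x, axis=1)
    return tuple(np.argsort(distances)[:prefix_len])

# Indexing: bucket every descriptor under its permutation prefix.
index = {}
for i, vec in enumerate(rng.normal(size=(1000, 64))):
    index.setdefault(permutation_prefix(vec), []).append(i)

# Querying: candidates share the prefix. This toy lookup requires an
# exact match; real PP-Index implementations instead search a trie for
# the longest shared prefix.
query = rng.normal(size=64)
print(index.get(permutation_prefix(query), []))
```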
{"title":"dpikt — Automatic illustration system for media content","authors":"Filipe Coelho, Cristina Ribeiro","doi":"10.1109/CBMI.2011.5972552","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972552","url":null,"abstract":"Journalists and bloggers need to find useful images to illustrate news stories and blog entries with high quality photos. The dpikt text illustration system uses multimedia information retrieval to assist this content enrichment task. Users query the system with text fragments and get collections of candidate photos. Images in the results can be visually sorted according to a selected photo, or be used as a seed for interactive searches over the entire collection. dpikt incorporates a recent visual descriptor, the Joint Composite Descriptor, and an approximate indexing scheme designed for large-scale image collections, the Permutation-Prefix Index. We have used the SAPO-Labs large-scale news stories photo collection, containing almost two million high quality photos with short descriptions, as the resource for the illustration task.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130428650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combining local and global visual feature similarity using a text search engine
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972519
Giuseppe Amato, Paolo Bolettieri, F. Falchi, C. Gennaro, F. Rabitti
In this paper we propose a novel approach that processes content-based image queries expressed as arbitrary combinations of local and global visual features using a single index realized as an inverted file. The index is implemented on top of the Lucene retrieval engine. This is particularly useful for letting users of content-based retrieval systems efficiently and interactively assess the quality of retrieval results obtained with various feature combinations.
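One common way to realize this idea, shown below under our own token-naming and scoring assumptions, is to map quantized local and global features to distinct text-like tokens so a single inverted file (Lucene in the paper; a plain dictionary here) can answer combined queries.

```python
from collections import defaultdict

inverted = defaultdict(set)   # token -> ids of images containing it

def tokens(local_words, global_words):
    # Prefixing tokens by feature type lets both kinds share one index.
    return ([f"loc_{w}" for w in local_words] +
            [f"glo_{w}" for w in global_words])

def index_image(image_id, local_words, global_words):
    for tok in tokens(local_words, global_words):
        inverted[tok].add(image_id)

def search(local_words, global_words):
    # Naive coordination-level ranking: count matching tokens per image.
    scores = defaultdict(int)
    for tok in tokens(local_words, global_words):
        for image_id in inverted[tok]:
            scores[image_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

index_image("a.jpg", local_words=[3, 17, 99], global_words=[5])
index_image("b.jpg", local_words=[17, 42], global_words=[5, 8])
print(search(local_words=[17], global_words=[5]))  # both match twice (tie)
```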
{"title":"Combining local and global visual feature similarity using a text search engine","authors":"Giuseppe Amato, Paolo Bolettieri, F. Falchi, C. Gennaro, F. Rabitti","doi":"10.1109/CBMI.2011.5972519","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972519","url":null,"abstract":"In this paper we propose a novel approach that allows processing image content based queries expressed as arbitrary combinations of local and global visual features, by using a single index realized as an inverted file. The index was implemented on top of the Lucene retrieval engine. This is particularly useful to allow people to efficiently and interactively check the quality of the retrieval result by exploiting combinations of various features when using content based retrieval systems.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131997459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generic R-transform for invariant pattern representation
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972538
Thai V. Hoang, S. Tabbone
The beneficial properties of the Radon transform make it a useful intermediate representation for extracting invariant features from pattern images for indexing and matching. This paper revisits the problem with a generic view of a popular Radon-based pattern descriptor, the R-signature, introducing a class of descriptors that spatially describe patterns in all directions and at different levels. The domain of this class and the selection of its representative are also discussed. Theoretical arguments validate the robustness of the generic R-signature to additive noise, and experimental results show its effectiveness.
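For reference, the classical R-signature integrates the squared Radon transform over the radial variable; the exponent-m family below, which is our hedged reading of the "different levels" mentioned above, generalizes it:

```latex
% Radon transform of a pattern f, the classical R-signature built on it,
% and an exponent-m family generalizing it (our reading of the abstract).
\[
  \mathcal{R}_f(\theta,\rho)
    = \int_{\mathbb{R}^2} f(x,y)\,
      \delta(x\cos\theta + y\sin\theta - \rho)\,dx\,dy
\]
\[
  R_f(\theta) = \int_{\mathbb{R}} \mathcal{R}_f^{\,2}(\theta,\rho)\,d\rho,
  \qquad
  R_f^{(m)}(\theta) = \int_{\mathbb{R}} \mathcal{R}_f^{\,m}(\theta,\rho)\,d\rho .
\]
```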
{"title":"Generic R-transform for invariant pattern representation","authors":"Thai V. Hoang, S. Tabbone","doi":"10.1109/CBMI.2011.5972538","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972538","url":null,"abstract":"The beneficial properties of the Radon transform make it an useful intermediate representation for the extraction of invariant features from pattern images for the purpose of indexing/matching. This paper revisits the problem with a generic view on a popular Radon-based pattern descriptor, the R-signature, bringing in a class of descriptors spatially describing patterns at all the directions and at different levels. The domain of this class and the selection of its representative are also discussed. Theoretical arguments validate the robustness of the generic R-signature to additive noise and experimental results show its effectiveness.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120916036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On modality classification and its use in text-based image retrieval in medical databases
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972530
Pierre Tirilly, Kun Lu, Xiangming Mu, Tian Zhao, Yu Cao
Medical databases have been a popular application field for image retrieval techniques during the last decade. More recently, much attention has been paid to predicting medical image modality (X-ray, MRI, etc.) and integrating the predicted modality into image retrieval systems. This paper addresses these two issues. On the one hand, we believe it is possible to design specific visual descriptors that determine image modality much more efficiently than the traditional image descriptors currently used for this task. We propose very light image descriptors that better capture modality properties and show promising results. On the other hand, we present a comparison of existing and new modality integration methods. This comprehensive study provides insights into the behavior of these models with respect to the initial classification and retrieval systems. These results can be extended to other applications with a similar framework. All experiments presented in this work are performed on datasets provided during the 2009 and 2010 ImageCLEF medical tracks.
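One simple integration method of the kind such a comparison might include is to re-rank text-retrieval scores by the probability that a hit's predicted modality matches the query's. The linear interpolation and its weight below are our assumptions, not necessarily among the paper's methods.

```python
def rerank(hits, query_modality, alpha=0.7):
    """hits: (doc_id, text_score, {modality: probability}) tuples.
    Interpolates the text score with the modality-match probability."""
    rescored = [
        (doc, alpha * score + (1 - alpha) * probs.get(query_modality, 0.0))
        for doc, score, probs in hits
    ]
    return sorted(rescored, key=lambda item: item[1], reverse=True)

hits = [("img1", 0.9, {"xray": 0.1, "mri": 0.8}),
        ("img2", 0.7, {"xray": 0.9, "mri": 0.05})]
print(rerank(hits, "xray"))  # img2 (0.76) overtakes img1 (0.66)
```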
{"title":"On modality classification and its use in text-based image retrieval in medical databases","authors":"Pierre Tirilly, Kun Lu, Xiangming Mu, Tian Zhao, Yu Cao","doi":"10.1109/CBMI.2011.5972530","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972530","url":null,"abstract":"Medical databases have been a popular application field for image retrieval techniques during the last decade. More recently, much attention has been paid to the prediction of medical image modality (X-rays, MRI…) and the integration of the predicted modality into image retrieval systems. This paper addresses these two issues. On the one hand, we believe it is possible to design specific visual descriptors to determine image modality much more efficiently than the traditional image descriptors currently used for this task. We propose very light image descriptors that better describe the modality properties and show promising results. On the other hand, we present a comparison of different existing or new modality integration methods. This comprehensive study provide insights on the behavior of these models with respect to the initial classification and retrieval systems. These results can be extended to other applications with a similar framework. All the experiments presented in this work are performed using datasets provided during the 2009 and 2010 ImageCLEF medical tracks.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"87 27 Pt 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126305505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Content-based image retrieval for Alzheimer's disease detection
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972513
Mayank Agarwal, Javed Mostafa
This paper describes ViewFinder Medicine (vfM), an application of content-based image retrieval to the domain of Alzheimer's disease and medical imaging in general. The system follows a multi-tier architecture, which provides flexibility in experimenting with different representation, classification, ranking, and feedback techniques. Classification is central to the system: besides estimating which stage of the disease an input query may belong to, it also helps adapt and rank the search results. Using our multi-level approach, classification performance matched the best result reported in the medical imaging literature: up to 87% of patients were correctly classified in their respective classes, leading to an average precision of about 0.8 without any relevance feedback from the user. To encourage engagement and leverage physicians' knowledge, a relevance feedback function was subsequently added, and as a result precision improved to 0.89.
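The abstract does not name the feedback algorithm; a Rocchio-style update is one standard way to realize such a relevance feedback loop. The parameters below are conventional defaults, not the system's.

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward judged-relevant results and away
    from judged-nonrelevant ones (standard Rocchio weights)."""
    updated = alpha * query
    if len(relevant):
        updated = updated + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        updated = updated - gamma * np.mean(nonrelevant, axis=0)
    return updated

query = np.array([0.2, 0.5, 0.1])
relevant = np.array([[0.3, 0.6, 0.0], [0.25, 0.55, 0.05]])
nonrelevant = np.array([[0.9, 0.1, 0.8]])
print(rocchio(query, relevant, nonrelevant))
```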
{"title":"Content-based image retrieval for Alzheimer's disease detection","authors":"Mayank Agarwal, Javed Mostafa","doi":"10.1109/CBMI.2011.5972513","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972513","url":null,"abstract":"This paper describes ViewFinder Medicine (vfM) as an application of content-based image retrieval to the domain of Alzheimer's disease and medical imaging in general. The system follows a multi-tier architecture which provides the flexibility in experimenting with different representation, classification, ranking and feedback techniques. Classification is central to the system because besides providing an estimate of what stage of the disease the input query may belong to, it also helps adapt and rank the search results. It was found that using our multi-level approach, the classification performance matched the best result reported in the medical imaging literature. Up to 87% of patients were correctly classified in their respective classes, leading to an average precision of about 0.8 without any relevance feedback from the user. To encourage engagement and leverage physicians' knowledge, a relevance feedback function was subsequently added and as result precision improved to 0.89.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130354809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying soft links to diversify video recommendations
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972523
D. Vallet, Martin Halvey, J. Jose, P. Castells
In this paper we present a study of exploratory video search tasks and of recommendation techniques based on a graph representation of past user community interactions with the system, a representation used in a number of multimedia retrieval systems. We propose an extension of such graph-based usage representations based on the creation of additional soft links between nodes. We demonstrate how soft links can be incorporated into a graph-based representation and how different state-of-the-art techniques can be adapted to use them. Our evaluation, based on a simulation-oriented technique and real interaction data gathered from users, shows how soft links can help improve the diversity and, in some cases, the accuracy of the studied recommendation techniques.
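A minimal sketch of the soft-link construction, assuming cosine similarity over content feature vectors and a fixed threshold (neither is specified in the abstract): alongside the hard edges from observed interactions, sufficiently similar nodes get weighted soft edges that recommendation walks can traverse.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def add_soft_links(graph, features, threshold=0.8):
    """graph: node -> {neighbor: weight} built from observed interactions
    (hard links); features: node -> feature vector. Adds similarity-
    weighted soft edges between sufficiently similar nodes."""
    nodes = list(features)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            sim = cosine(features[u], features[v])
            if sim >= threshold and v not in graph.get(u, {}):
                graph.setdefault(u, {})[v] = sim
                graph.setdefault(v, {})[u] = sim
    return graph

graph = {"clip1": {"clip2": 1.0}}             # one observed co-view
features = {"clip1": np.array([1.0, 0.0]),
            "clip2": np.array([0.9, 0.1]),
            "clip3": np.array([0.95, 0.05])}  # never co-viewed with clip1
print(add_soft_links(graph, features))        # clip1-clip3 soft link appears
```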
{"title":"Applying soft links to diversify video recommendations","authors":"D. Vallet, Martin Halvey, J. Jose, P. Castells","doi":"10.1109/CBMI.2011.5972523","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972523","url":null,"abstract":"In this paper we present a study of exploratory video search tasks and recommendation techniques based on a graph representation of past user com-munity interactions with the system, which have been used in a number of multimedia retrieval systems. We propose an extension for such graph-based usage representation techniques based on the creation of additional soft links between nodes. It is demonstrated how soft links can be incorporated into a graph-based representation and how different state of the art techniques can be adapted to use soft links. Our evaluation, based on a simulation-oriented technique and real interaction data gathered from users, shows how our soft links can help in improving the diversity and, in some cases, the accuracy of the studied recommendation techniques.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129778331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People indexing in TV-content using lip-activity and unsupervised audio-visual identity verification
Pub Date: 2011-06-13 · DOI: 10.1109/CBMI.2011.5972535
Meriem Bendris, Delphine Charlet, G. Chollet
Our goal is to structure TV content by person, allowing a user to navigate through the sequences featuring the same person. To let a user browse the content without restriction on the people within it, this structuring has to be done without any pre-defined dictionary of people. To this end, most methods index people independently using the audio and visual information, then associate the two indexes to obtain a talking-face index. Unfortunately, this approach combines the clustering errors made in each modality. In this work, we propose a mutual correction scheme for audio and visual clustering errors. First, clustering errors are detected using indicators that suggest the presence of a talking face. Then, the incorrect label is corrected according to an automatic modification scheme. Two modification schemes are proposed and evaluated: one systematically corrects the modality supposed a priori to be less reliable, while the second compares unsupervised audio-visual model scores to determine which modality failed. Experiments on a TV-show database show that the proposed correction schemes yield significant performance improvements, mainly due to an important reduction in missed talking faces.
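The second modification scheme lends itself to a compact sketch: when the audio and face labels for a detected talking face disagree, compare the two unsupervised model scores and relabel the weaker modality. The score semantics and margin below are illustrative assumptions.

```python
def correct_labels(audio_label, face_label, audio_score, face_score,
                   margin=0.1):
    """Mutual correction for one detected talking face: on disagreement,
    trust the modality whose unsupervised model scores clearly higher."""
    if audio_label == face_label:
        return audio_label, face_label       # consistent: nothing to fix
    if audio_score > face_score + margin:
        return audio_label, audio_label      # trust audio, relabel the face
    if face_score > audio_score + margin:
        return face_label, face_label        # trust the face, relabel audio
    return audio_label, face_label           # too close to call: keep both

print(correct_labels("spk1", "spk2", audio_score=0.92, face_score=0.55))
# -> ('spk1', 'spk1'): the visual cluster label is corrected to match audio
```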
{"title":"People indexing in TV-content using lip-activity and unsupervised audio-visual identity verification","authors":"Meriem Bendris, Delphine Charlet, G. Chollet","doi":"10.1109/CBMI.2011.5972535","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972535","url":null,"abstract":"Our goal is to structure TV-content by person allowing a user to navigate through the sequences of the same person. To let a user browse through the content without restriction on people within it, this structuration has to be done without any pre-defined dictionary of people. To this end, most methods propose to index people independently by the audio and visual information, and associate the indexes to obtain the talking-face one. Unfortunately, this approach combines clustering errors provided in each modality. In this work, we propose a mutual correction scheme of audio and visual clustering errors. First, the clustering errors are detected using indicators suspecting a talking-face presence. Then, the incorrect label is corrected according to an automatic modification scheme. Two modification schemes are proposed and evaluated : one based on systematic correction of the a priori supposed less reliable modality while the second proposes to compare unsupervised audio-visual models scores to determine which modality failed. Experiments on a TV-show database show that the proposed correction schemes yield significant improvement in performance, mainly due to an important reduction of missed talking-faces.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122904166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}