Automatic illustration with cross-media retrieval in large-scale collections
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972515
Filipe Coelho, Cristina Ribeiro
In this paper, we approach the task of finding suitable images to illustrate text, from specific news stories to more generic blog entries. We have developed an automatic illustration system supported by multimedia information retrieval that analyzes text and presents a list of candidate images to illustrate it. The system was tested on the SAPO-Labs media collection, containing almost two million images with short descriptions, and the MIRFlickr-25000 collection, with photos and user tags from Flickr. Visual content is described by the Joint Composite Descriptor and indexed by a Permutation-Prefix Index. Illustration is a three-stage process using textual search, score filtering and visual clustering. A preliminary evaluation using exhaustive and approximate visual searches demonstrates the capabilities of the visual descriptor and the approximate indexing scheme used.
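As a rough sketch of the three-stage pipeline described above (textual search, score filtering, visual clustering), the following Python code assumes that image captions and precomputed visual descriptors are already available; the threshold, cluster count and all helper names are illustrative, and the paper's Joint Composite Descriptor and Permutation-Prefix Index are not reproduced here.

```python
# Minimal sketch of a three-stage illustration pipeline (textual search,
# score filtering, visual clustering). Parameters and helper names are
# illustrative; the paper's JCD extraction and Permutation-Prefix Index
# are not reproduced.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

def illustrate(query_text, captions, visual_descriptors,
               score_threshold=0.1, n_clusters=5):
    """Return one representative image index per visual cluster.

    captions: list of short image descriptions.
    visual_descriptors: (n_images, n_dims) numpy array.
    """
    # Stage 1: textual search over the image captions.
    vectorizer = TfidfVectorizer(stop_words="english")
    caption_matrix = vectorizer.fit_transform(captions)
    query_vec = vectorizer.transform([query_text])
    scores = cosine_similarity(query_vec, caption_matrix).ravel()

    # Stage 2: score filtering, keeping only sufficiently relevant candidates.
    candidates = np.where(scores >= score_threshold)[0]
    if len(candidates) == 0:
        return []

    # Stage 3: visual clustering of the candidates' descriptors, then pick
    # the highest-scoring image of each cluster as its representative.
    feats = visual_descriptors[candidates]
    k = min(n_clusters, len(candidates))
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    representatives = []
    for c in range(k):
        members = candidates[labels == c]
        representatives.append(members[np.argmax(scores[members])])
    return representatives
```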
{"title":"Automatic illustration with cross-media retrieval in large-scale collections","authors":"Filipe Coelho, Cristina Ribeiro","doi":"10.1109/CBMI.2011.5972515","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972515","url":null,"abstract":"In this paper, we approach the task of finding suitable images to illustrate text, from specific news stories to more generic blog entries. We have developed an automatic illustration system supported by multimedia information retrieval, that analyzes text and presents a list of candidate images to illustrate it. The system was tested on the SAPO-Labs media collection, containing almost two million images with short descriptions, and the MIRFlickr-25000 collection, with photos and user tags from Flickr. Visual content is described by the Joint Composite Descriptor and indexed by a Permutation-Prefix Index. Illustration is a three-stage process using textual search, score filtering and visual clustering. A preliminary evaluation using exhaustive and approximate visual searches demonstrates the capabilities of the visual descriptor and approximate indexing scheme used.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114799472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audiovisual video context recognition using SVM and genetic algorithm fusion rule weighting
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972541
Mikko Roininen, E. Guldogan, M. Gabbouj
The recognition of the surrounding context from video recordings offers interesting possibilities for context awareness in video-capable mobile devices. Multimodal analysis provides means for improved recognition accuracy and robustness in different use conditions. We present a multimodal video context recognition system fusing audio and video cues with support vector machines (SVM) and simple rules with genetic algorithm (GA) optimized weights. Multimodal recognition is shown to outperform the unimodal approaches in distinguishing between 21 everyday contexts. The highest correct classification rate of 0.844 is achieved with SVM-based fusion.
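The fusion step can be illustrated with a small sketch of late fusion of per-class scores from the audio and video classifiers. The weighted-sum rule and the random search standing in for the genetic algorithm are assumptions for illustration only and do not reproduce the paper's exact fusion scheme.

```python
# Hedged sketch of late fusion of per-modality classifier scores. A simple
# weighted-sum rule is shown, and a random search stands in for the paper's
# genetic algorithm. All weights and trial counts are illustrative.
import numpy as np

def weighted_sum_fusion(audio_scores, video_scores, w_audio, w_video):
    """Combine per-class scores (n_samples, n_classes) from two modalities
    and pick the class with the highest fused score."""
    fused = w_audio * audio_scores + w_video * video_scores
    return np.argmax(fused, axis=1)

def tune_weights(audio_scores, video_scores, labels, n_trials=200, seed=0):
    """Stand-in for GA weight optimization: random search over the audio
    weight, maximizing classification accuracy on held-out labels."""
    rng = np.random.default_rng(seed)
    best_w, best_acc = (0.5, 0.5), -1.0
    for _ in range(n_trials):
        w_a = rng.uniform(0.0, 1.0)
        pred = weighted_sum_fusion(audio_scores, video_scores, w_a, 1.0 - w_a)
        acc = np.mean(pred == labels)
        if acc > best_acc:
            best_w, best_acc = (w_a, 1.0 - w_a), acc
    return best_w, best_acc
```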
{"title":"Audiovisual video context recognition using SVM and genetic algorithm fusion rule weighting","authors":"Mikko Roininen, E. Guldogan, M. Gabbouj","doi":"10.1109/CBMI.2011.5972541","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972541","url":null,"abstract":"The recognition of the surrounding context from video recordings offers interesting possibilities for context awareness of video capable mobile devices. Multimodal analysis provides means for improved recognition accuracy and robustness in different use conditions. We present a mul-timodal video context recognition system fusing audio and video cues with support vector machines (SVM) and simple rules with genetic algorithm (GA) optimized weights. Mul-timodal recognition is shown to outperform the unimodal approaches in recognizing between 21 everyday contexts. The highest correct classification rate of 0.844 is achieved with SVM-based fusion.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116368509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio similarity matrices enhancement in an image processing framework
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972522
Florian Kaiser, Marina Georgia Arvanitidou, T. Sikora
Audio similarity matrices have become a popular tool in the MIR community for their ability to reveal segments of high acoustical self-similarity and repetitive patterns. This is particularly useful for the task of music structure segmentation. The performance of such systems, however, relies on the nature of the studied music pieces, and it is often assumed that harmonic and timbre variations remain low within musical sections. Since this condition is rarely fulfilled, similarity matrices are often too complex, and structural information can hardly be extracted. In this paper we propose an image-oriented pre-processing of similarity matrices to highlight the conveyed musical information and reduce their complexity. The image segmentation step exploits the image characteristics of the matrices to provide meaningful spatial segments and thus enhance the music segmentation. An evaluation of a reference structure segmentation algorithm using the enhanced matrices is provided, and we show that our method strongly improves the segmentation performance.
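For readers unfamiliar with self-similarity matrices, the sketch below builds one from per-frame audio descriptors and applies a generic image-style smoothing step. The median filter is only a stand-in; the paper's actual pre-processing is an image segmentation stage that is not reproduced here.

```python
# Minimal sketch of an audio self-similarity matrix plus a generic
# image-style smoothing step standing in for the paper's pre-processing.
import numpy as np
from scipy.ndimage import median_filter

def self_similarity_matrix(features):
    """features: (n_frames, n_dims) array of per-frame audio descriptors
    (e.g. MFCC or chroma). Returns cosine similarity between all frame pairs."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)
    return unit @ unit.T

def enhance(ssm, size=5):
    """Treat the matrix as an image and suppress pixel-level noise."""
    return median_filter(ssm, size=size)
```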
{"title":"Audio similarity matrices enhancement in an image processing framework","authors":"Florian Kaiser, Marina Georgia Arvanitidou, T. Sikora","doi":"10.1109/CBMI.2011.5972522","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972522","url":null,"abstract":"Audio similarity matrices have become a popular tool in the MIR community for their ability to reveal segments of high acoustical self-similarity and repetitive patterns. This is particularly useful for the task of music structure segmentation. The performance of such systems however relies on the nature of the studied music pieces and it is often assumed that harmonic and timbre variations remain low within musical sections. While this condition is rarely fulfilled, similarity matrices are often too complex and structural information can hardly be extracted. In this paper we propose an image-oriented pre-processing of similarity matrices to highlight the conveyed musical information and reduce their complexity. The image segmentation processing step handles the image characteristics in order to provide us meaningful spatial segments and enhance thus the music segmentation. Evaluation of a reference structure segmentation algorithm using the enhanced matrices is provided, and we show that our method strongly improves the segmentation performances.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125458420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A semantic-based and adaptive architecture for automatic multimedia retrieval composition
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972542
D. Giordano, I. Kavasidis, C. Pino, C. Spampinato
In this paper we present a domain-independent multimedia retrieval (MMR) platform. Currently, the use of MMR systems across different domains suffers from several limitations, mainly related to their poor flexibility and adaptability to different domains and user requirements. We propose a semantic-based platform that uses ontologies to describe not only the application domain but also the processing workflow to be followed for retrieval, according to the user's requirements and the domain characteristics. In detail, an ontological model (domain-processing ontology) that integrates domain peculiarities and processing algorithms allows self-adaptation of the retrieval mechanism to the specified application domain. According to the instances generated for each user request, our platform generates the appropriate interface (GUI) for the specified application domain (e.g. music, sports video, medical images, etc.) through a procedure guided by the defined domain-processing ontology. A use case on content-based music retrieval is presented to show how the proposed platform also facilitates the implementation of multimedia retrieval systems.
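The core idea, composing the retrieval pipeline from a declarative description of the domain rather than hard-coding it, can be hinted at with a very small sketch. A plain dictionary stands in for the paper's domain-processing ontology, and every entry below is hypothetical.

```python
# Purely illustrative sketch: the retrieval pipeline is selected from a
# declarative description of the domain. The paper uses an ontology for
# this; a plain dictionary stands in for it here, and all entries are
# hypothetical.
DOMAIN_PROCESSING = {
    "music":   {"descriptors": ["mfcc", "chroma"],    "matcher": "dtw"},
    "sports":  {"descriptors": ["motion", "color"],   "matcher": "euclidean"},
    "medical": {"descriptors": ["texture", "shape"],  "matcher": "cosine"},
}

def build_pipeline(domain):
    """Select descriptors and a matcher for the requested domain."""
    config = DOMAIN_PROCESSING.get(domain)
    if config is None:
        raise ValueError(f"No processing description for domain '{domain}'")
    return config["descriptors"], config["matcher"]
```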
{"title":"A semantic-based and adaptive architecture for automatic multimedia retrieval composition","authors":"D. Giordano, I. Kavasidis, C. Pino, C. Spampinato","doi":"10.1109/CBMI.2011.5972542","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972542","url":null,"abstract":"In this paper we present a domain-independent multimedia retrieval (MMR) platform. Currently, the use of MMR systems for different domains poses several limitations, mainly related to the poor flexibility and adaptability to different domains and user requirements. A semantic-based platform that uses ontologies for describing not only the application domain but also the processing workflow to be followed for the retrieval, according to user's requirements and domain characteristics is here proposed. In detail, an ontological model (domain-processing ontology) that integrates domain peculiarities and processing algorithms allows self-adaptation of the retrieval mechanism to the specified application domain. According to the instances generated for each user request, our platform generates the appropriate interface (GUI) for the specified application domain (e.g. music, sport video, medical images, etc…) by a procedure guided by the defined domain-processing ontology. A use case on content based music retrieval is here presented in order to show how the proposed platform also facilitates the process of multimedia retrieval system implementation.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125928518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-level event detection in video exploiting discriminant concepts
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972525
Nikolaos Gkalelis, V. Mezaris, Y. Kompatsiaris
In this paper, a new approach to video event detection is presented, combining visual concept detection scores with a new dimensionality reduction technique. Specifically, a video is first decomposed into a sequence of shots, and trained visual concept detectors are used to represent video content with model vector sequences. Subsequently, an improved subclass discriminant analysis method is used to derive a concept subspace for detecting and recognizing high-level events. In this space, the median Hausdorff distance is used to implicitly align and compare event videos of different lengths, and the nearest neighbor rule is used for recognizing the event depicted in the video. Evaluation results obtained through our participation in the Multimedia Event Detection Task of the TRECVID 2010 competition verify the effectiveness of the proposed approach for event detection and recognition in large-scale video collections.
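The comparison step lends itself to a short sketch: the median Hausdorff distance between two videos represented as sequences of model vectors, followed by nearest-neighbour labelling. The symmetric form used below (the maximum of the two directed distances) is one common convention and may differ from the paper's exact definition.

```python
# Hedged sketch of median Hausdorff comparison between two videos, each
# a (n_shots, n_dims) array of per-shot model vectors in the concept
# subspace, followed by nearest-neighbour event labelling.
import numpy as np

def directed_median_hausdorff(A, B):
    """Median over shots of A of the nearest-neighbour distance to B."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return np.median(dists.min(axis=1))

def median_hausdorff(A, B):
    """Symmetric form: maximum of the two directed distances."""
    return max(directed_median_hausdorff(A, B),
               directed_median_hausdorff(B, A))

def nearest_neighbour_event(query, training_videos, training_labels):
    """Assign the event label of the closest training video."""
    d = [median_hausdorff(query, v) for v in training_videos]
    return training_labels[int(np.argmin(d))]
```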
{"title":"High-level event detection in video exploiting discriminant concepts","authors":"Nikolaos Gkalelis, V. Mezaris, Y. Kompatsiaris","doi":"10.1109/CBMI.2011.5972525","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972525","url":null,"abstract":"In this paper a new approach to video event detection is presented, combining visual concept detection scores with a new dimensionality reduction technique. Specifically, a video is first decomposed to a sequence of shots, and trained visual concept detectors are used to represent video content with model vector sequences. Subsequently, an improved subclass discriminant analysis method is used to derive a concept subspace for detecting and recognizing high-level events. In this space, the median Hausdorff distance is used to implicitly align and compare event videos of different lengths, and the nearest neighbor rule is used for recognizing the event depicted in the video. Evaluation results obtained by our participation in the Multimedia Event Detection Task of the TRECVID 2010 competition verify the effectiveness of the proposed approach for event detection and recognition in large scale video collections.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131621262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards ontologies for image interpretation and annotation
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972547
Hichem Bannour, C. Hudelot
Due to the well-known semantic gap problem, a large number of approaches have been proposed during the last decade for automatic image annotation, i.e. the textual description of images. Since these approaches are still not sufficiently effective, a new trend is to use semantic hierarchies of concepts or ontologies to improve the image annotation process. This paper presents an overview and an analysis of the use of semantic hierarchies and ontologies to provide deeper image understanding and better image annotation, in order to furnish better retrieval facilities to users.
{"title":"Towards ontologies for image interpretation and annotation","authors":"Hichem Bannour, C. Hudelot","doi":"10.1109/CBMI.2011.5972547","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972547","url":null,"abstract":"Due to the well-known semantic gap problem, a wide number of approaches have been proposed during the last decade for automatic image annotation, i.e. the textual description of images. Since these approaches are still not sufficiently efficient, a new trend is to use semantic hierarchies of concepts or ontologies to improve the image annotation process. This paper presents an overview and an analysis of the use of semantic hierarchies and ontologies to provide a deeper image understanding and a better image annotation in order to furnish retrieval facilities to users.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131502083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Saliency-aware color moments features for image categorization and retrieval
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972545
Miriam Redi, B. Mérialdo
Traditional window-based color indexing techniques have been widely used in image analysis and retrieval systems. In the existing approaches, all the image regions are treated with equal importance. However, some image areas carry more information about their content (e.g. the scene foreground). Indeed, the human visual system bases the categorization process on such sets of perceptually salient regions. Therefore, in order to improve the discriminative abilities of color features for image recognition, higher importance should be given to the chromatic characteristics of more informative windows. In this paper, we present an informativeness-aware color descriptor based on the Color Moments feature [17]. We first define a saliency-based measure to quantify the amount of information carried by each image window; we then modify the window-based CM feature according to the computed local informativeness. Finally, we show that this new hybrid feature outperforms the traditional Color Moments on a variety of challenging datasets for scene categorization, object recognition and video retrieval.
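A minimal sketch of the idea is given below: per-window color moments (mean, standard deviation and skewness per channel) scaled by the window's saliency. The mean window saliency is used as a stand-in for the paper's informativeness measure, and the grid size is illustrative.

```python
# Minimal sketch of window-based color moments re-weighted by a saliency
# map. The exact informativeness measure and weighting scheme of the paper
# are not reproduced; mean window saliency is a stand-in.
import numpy as np

def color_moments(window):
    """window: (h, w, 3) array. Mean, std and skewness per color channel."""
    pixels = window.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0)
    centred = pixels - mean
    skew = np.cbrt((centred ** 3).mean(axis=0))
    return np.concatenate([mean, std, skew])      # 9-dimensional descriptor

def saliency_weighted_cm(image, saliency, grid=(4, 4)):
    """Split the image into a grid of windows and scale each window's
    color moments by that window's average saliency."""
    h, w = image.shape[:2]
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            ys, ye = i * h // gh, (i + 1) * h // gh
            xs, xe = j * w // gw, (j + 1) * w // gw
            weight = saliency[ys:ye, xs:xe].mean()
            feats.append(weight * color_moments(image[ys:ye, xs:xe]))
    return np.concatenate(feats)
```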
{"title":"Saliency-aware color moments features for image categorization and retrieval","authors":"Miriam Redi, B. Mérialdo","doi":"10.1109/CBMI.2011.5972545","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972545","url":null,"abstract":"Traditional window-based color indexing techniques have been widely used in image analysis and retrieval systems. In the existing approaches, all the image regions are treated with equal importance. However, some image areas carry more information about their content (e.g. the scene foreground). The human visual system bases indeed the categorization process on such set of perceptually salient region. Therefore, in order to improve the discriminative abilities of the color features for image recognition, higher importance should be given to the chromatic characteristics of more informative windows. In this paper, we present an informativeness-aware color descriptor based on the Color Moments feature [17]. We first define a saliency-based measure to quantify the amount of information carried by each image window; we then change the window-based CM feature according to the computed local informativeness. Finally, we show that this new hybrid feature outperforms the traditional Color Moments in a variety of challenging dataset for scene categorization, object recognition and video retrieval.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128818785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-site combination and evaluation of subword spoken term detection systems
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972521
Timo Mertens, R. Wallace, Daniel Schneider
The design and evaluation of subword-based spoken term detection (STD) systems depend on various factors, such as the language, the type of speech to be searched and the application scenario. The choice of subword unit and search approach, however, is often made without regard to these factors. We therefore evaluate two subword STD systems across two data sets with differing properties to investigate the influence of different subword units on STD performance when working with different data types. Results show that on German broadcast news data, constrained search in syllable lattices is effective, whereas fuzzy phone lattice search is superior on the more challenging English conversational telephone speech. By combining the key features of the two systems at an early stage, we achieve improvements in Figure of Merit of up to 13.4% absolute on the German data. We also show that the choice of an appropriate evaluation metric is crucial when comparing retrieval performance across systems.
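As a hedged illustration of what fuzzy phone search means, the sketch below performs approximate substring matching of a query phone sequence against a 1-best phone string using edit distance. The paper searches phone lattices rather than 1-best strings, and the cost threshold here is illustrative.

```python
# Hedged illustration of fuzzy phone matching on a 1-best phone string.
# A query is reported wherever an approximate substring match falls under
# an edit-distance threshold; the threshold is illustrative only.
def fuzzy_phone_search(query_phones, doc_phones, max_cost=1):
    """Approximate substring matching by dynamic programming: the first DP
    row is all zeros, so a match may start at any position in the document."""
    m, n = len(query_phones), len(doc_phones)
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            sub = prev[j - 1] + (query_phones[i - 1] != doc_phones[j - 1])
            curr[j] = min(sub, prev[j] + 1, curr[j - 1] + 1)
        prev = curr
    # End positions (1-based) where the query matches within max_cost edits.
    return [j for j in range(1, n + 1) if prev[j] <= max_cost]

# Example: detect "b eh r l ih n" despite one recognition error in the doc.
hits = fuzzy_phone_search("b eh r l ih n".split(),
                          "sil b eh l l ih n sil".split(), max_cost=1)
```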
{"title":"Cross-site combination and evaluation of subword spoken term detection systems","authors":"Timo Mertens, R. Wallace, Daniel Schneider","doi":"10.1109/CBMI.2011.5972521","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972521","url":null,"abstract":"The design and evaluation of subword-based spoken term detection (STD) systems depends on various factors, such as language, type of the speech to be searched and application scenario. The choice of the subword unit and search approach, however, is oftentimes made regardless of these factors. Therefore, we evaluate two subword STD systems across two data sets with varying properties to investigate the influence of different subword units on STD performance when working with different data types. Results show that on German broadcast news data, constrained search in syllable lattices is effective, whereas fuzzy phone lattice search is superior in more challenging English conversational telephone speech. By combining the key features of the two systems at an early stage, we achieve improvements in Figure of Merit of up to 13.4% absolute on the German data. We also show that the choice of the appropriate evaluation metric is crucial when comparing retrieval performances across systems.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133597897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Game, shot and match: Event-based indexing of tennis
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972528
Damien Connaghan, Philip Kelly, N. O’Connor
Identifying events in sports video offers great potential for advancing visual sports coaching applications. In this paper, we present our results for detecting key events in a tennis match. Our overall goal is to automatically index a complete tennis match into all the main tennis events, so that a match can be recorded using affordable visual sensing equipment and then automatically indexed into key events for retrieval and editing. The tennis events detected in this paper are a tennis game, a change of end and a tennis serve, all of which share temporal commonalities. There are, of course, other tennis events that we aim to index in our overall indexing system, but this paper focuses solely on the aforementioned ones. This paper proposes a novel approach to detecting key events in an instrumented tennis environment by analysing a player's location and visual features.
{"title":"Game, shot and match: Event-based indexing of tennis","authors":"Damien Connaghan, Philip Kelly, N. O’Connor","doi":"10.1109/CBMI.2011.5972528","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972528","url":null,"abstract":"Identifying events in sports video offers great potential for advancing visual sports coaching applications. In this paper, we present our results for detecting key events in a tennis match. Our overall goal is to automatically index a complete tennis match into all the main tennis events, so that a match can be recorded using affordable visual sensing equipment and then be automatically indexed into key events for retrieval and editing. The tennis events detected in this paper are a tennis game, a change of end and a tennis serve — all of which share temporal commonalities. There are of course other events in tennis which we aim to index in our overall indexing system, but this paper focuses solely on the aforementioned tennis events. This paper proposes a novel approach to detect key events in an instrumented tennis environment by analysing a players location and the visual features of a player.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130960010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Activities of daily living indexing by hierarchical HMM for dementia diagnostics
Pub Date: 2011-06-13  DOI: 10.1109/CBMI.2011.5972524
Svebor Karaman, J. Benois-Pineau, R. Mégret, J. Pinquier, Yann Gaëstel, J. Dartigues
This paper presents a method for indexing human activities in videos captured by a wearable camera worn by patients, for studies of the progression of dementia. Our method aims to produce indexes that facilitate navigation through the individual video recordings, which could help doctors search for early signs of the disease in the activities of daily living. The recorded videos exhibit strong motion and sharp lighting changes, which introduce noise into the analysis. The proposed approach is based on a two-step analysis. First, we propose a new approach to segmenting this type of video, based on apparent motion. Each segment is characterized by two original motion descriptors, as well as color and audio descriptors. Second, a Hidden Markov Model formulation is used to merge the multimodal audio and video features and classify the test segments. Experiments show the good properties of the approach on real data.
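The classification step can be illustrated with a small sketch: each segment's multimodal feature sequence is scored under per-activity HMMs with diagonal-Gaussian emissions, and the best log-likelihood gives the label. This shows plain HMM-based classification only; the paper's hierarchical HMM and its training procedure are not reproduced.

```python
# Minimal sketch of labelling a segment by scoring its multimodal feature
# sequence (audio and video descriptors concatenated per frame) under
# per-activity HMMs with diagonal-Gaussian emissions.
import numpy as np
from scipy.special import logsumexp

def gaussian_loglik(x, means, variances):
    """Per-state log N(x | mean, diag(var)); means, variances: (S, D)."""
    diff = x[None, :] - means
    return -0.5 * np.sum(np.log(2 * np.pi * variances) + diff ** 2 / variances,
                         axis=1)

def forward_loglik(obs, log_pi, log_A, means, variances):
    """Log-likelihood of an observation sequence under one HMM
    (forward algorithm in log space)."""
    alpha = log_pi + gaussian_loglik(obs[0], means, variances)
    for t in range(1, len(obs)):
        alpha = (logsumexp(alpha[:, None] + log_A, axis=0)
                 + gaussian_loglik(obs[t], means, variances))
    return logsumexp(alpha)

def classify_segment(obs, activity_models):
    """activity_models: dict name -> (log_pi, log_A, means, variances)."""
    scores = {name: forward_loglik(obs, *params)
              for name, params in activity_models.items()}
    return max(scores, key=scores.get)
```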
{"title":"Activities of daily living indexing by hierarchical HMM for dementia diagnostics","authors":"Svebor Karaman, J. Benois-Pineau, R. Mégret, J. Pinquier, Yann Gaëstel, J. Dartigues","doi":"10.1109/CBMI.2011.5972524","DOIUrl":"https://doi.org/10.1109/CBMI.2011.5972524","url":null,"abstract":"This paper presents a method for indexing human activities in videos captured from a wearable camera being worn by patients, for studies of progression of the dementia diseases. Our method aims to produce indexes to facilitate the navigation throughout the individual video recordings, which could help doctors search for early signs of the disease in the activities of daily living. The recorded videos have strong motion and sharp lighting changes, inducing noise for the analysis. The proposed approach is based on a two steps analysis. First, we propose a new approach to segment this type of video, based on apparent motion. Each segment is characterized by two original motion descriptors, as well as color, and audio descriptors. Second, a Hidden-Markov Model formulation is used to merge the multimodal audio and video features, and classify the test segments. Experiments show the good properties of the approach on real data.","PeriodicalId":358337,"journal":{"name":"2011 9th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116246172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}