J. O. Wallgrün, F. Hardisty, A. MacEachren, M. Karimzadeh, Yiting Ju, Scott Pezanowski
This article presents an approach to building place reference corpora and its application to a Geo-Microblog Corpus intended to foster research and development in microblog/Twitter geoparsing and geographic information retrieval. Our corpus currently consists of 6000 tweets with identified and georeferenced place names; 30% of the tweets contain at least one place name. The corpus is intended to support the evaluation, comparison, and training of geoparsers. We introduce our corpus building framework, which is designed to be generally applicable beyond microblogs, and explain how we use crowdsourcing and geovisual analytics technology to support the construction of relatively large corpora. We then report on the corpus building work and present an analysis of the causes of disagreement between the laypersons performing place identification in our crowdsourcing approach.
"Construction and first analysis of a corpus for the evaluation and training of microblog/twitter geoparsers." Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675701
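The disagreement analysis mentioned in the abstract above presupposes some way of aggregating the place annotations produced by multiple crowd workers. The following sketch is purely illustrative (majority-vote aggregation with a disagreement flag) and is not the authors' pipeline; all names and thresholds are assumptions.

```python
# Hypothetical sketch: aggregate crowdsourced place-name annotations for one
# tweet by majority vote, and flag spans the workers disagree on.

from collections import Counter

def aggregate_annotations(annotations, min_agreement=0.5):
    """annotations: list of sets of place-name spans, one set per worker.
    A span is any hashable tuple, e.g. (surface_form, start, end).
    Returns (accepted_spans, disputed_spans)."""
    n_workers = len(annotations)
    counts = Counter(span for spans in annotations for span in spans)
    accepted = {s for s, c in counts.items() if c / n_workers > min_agreement}
    disputed = {s for s, c in counts.items() if c / n_workers <= min_agreement}
    return accepted, disputed

# Three workers annotate the same tweet; one misses "Center City".
workers = [
    {("Philadelphia", 10, 22), ("Center City", 30, 41)},
    {("Philadelphia", 10, 22), ("Center City", 30, 41)},
    {("Philadelphia", 10, 22)},
]
accepted, disputed = aggregate_annotations(workers)
```

Spans found by a strict majority of workers are accepted; the rest are exactly the disagreement cases a corpus builder would inspect by hand.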
Internet users share large quantities of text and multimedia content that becomes easily accessible to others via hyperlinks and search engine results. However, structured datasets generally lack this level of exposure. One example is the travel itinerary, which many Internet users post online in the form of a spreadsheet or web page table, yet the collection of such itineraries remains difficult to search or browse due to insufficient parsing and indexing by search engines. Enabling interaction with user-uploaded itineraries could provide valuable information to trip planners who are researching travel options and to businesses attempting to understand travel patterns. This work examines the challenges of identifying and extracting itineraries from spreadsheets and web page tables to support such applications, with a focus on differentiating between itineraries and other documents with geographic content.
"Itinerary retrieval: travelers, like traveling salesmen, prefer efficient routes." M. Adelfio, H. Samet. Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675355
The determination of the geographic scope of documents is important for many applications in geographic information retrieval (GIR). Many techniques require the use of gazetteers as a source of reference data. However, creating and maintaining gazetteers is still a complex and demanding task. We propose using linked data sources to put together gazetteer data that can be both broad (e.g., planetary) and deep (e.g., down to urban detail). Linked data sources also allow enriching the resulting gazetteer with a set of geographic and semantic relationships involving place names and other geographic and non-geographic terms, thus expanding the possibilities for solving typical GIR problems such as disambiguation and filtering. This work shows the results of efforts to combine two linked data sources of gazetteer data, namely GeoNames and DBpedia, to populate an integrated and semantically enriched gazetteer. We used evidence contained in attributes, such as Wikipedia URLs, Linked Data predicates that indicate that places in both sources are the same, and some additional criteria. The resulting gazetteer contains 8,729,833 places, of which 426,317 are found in both data sources. This relatively small overlap is analyzed, indicating that GeoNames and DBpedia are complementary, typically covering different classes of places, which suggests that further expansion can be achieved by integrating gazetteer data from additional Linked Data sources.
"Integration of linked data sources for gazetteer expansion." T. Moura, C. Davis. Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675357
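One of the matching signals named in the abstract above, shared Wikipedia URLs, can be sketched as a simple join over the two sources. The record layout, identifiers, and normalization below are illustrative assumptions, not the paper's actual schema.

```python
# Sketch of one matching heuristic: link GeoNames and DBpedia records that
# point to the same Wikipedia article.

def normalize_wiki_url(url):
    # Treat http/https and trailing slashes as equivalent; compare
    # case-insensitively (applied identically to both sources).
    return url.lower().replace("https://", "http://").rstrip("/")

def match_by_wikipedia(geonames, dbpedia):
    """Each source: list of dicts with 'id' and 'wikipedia' keys.
    Returns (geonames_id, dbpedia_id) pairs sharing a Wikipedia article."""
    index = {}
    for rec in dbpedia:
        if rec.get("wikipedia"):
            index[normalize_wiki_url(rec["wikipedia"])] = rec["id"]
    pairs = []
    for rec in geonames:
        url = rec.get("wikipedia")
        if url and normalize_wiki_url(url) in index:
            pairs.append((rec["id"], index[normalize_wiki_url(url)]))
    return pairs

gn = [{"id": "gn:3470127", "wikipedia": "http://en.wikipedia.org/wiki/Belo_Horizonte"}]
db = [{"id": "db:Belo_Horizonte", "wikipedia": "https://en.wikipedia.org/wiki/Belo_Horizonte"}]
pairs = match_by_wikipedia(gn, db)
```

In the paper this evidence is combined with Linked Data sameness predicates and further criteria; the join above shows only the URL-based component.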
Recommending interesting locations to users is a challenge for social and productive networks. Evidence from the content produced by users must be considered in this task, which can be simplified by using the metadata associated with that content, i.e., the categorization supported by the network: descriptive keywords and geographic coordinates. In this paper we present an extension to a productive network representation model originally designed to discover indirect keywords. Our extension adds a spatial dimension to the information that represents user production, enabling indirect location discovery methods that interpret the network as a graph, relying solely on the keywords and locations that categorize or describe productive items. The model and indirect location discovery methods presented in this paper avoid content analysis and are a step towards a generic approach to identifying relevant information otherwise hidden from users. We evaluate the model extension and methods with an experiment that performs a classification analysis over the Twitter network. The results show that we can efficiently recommend locations to users.
"Indirect location recommendation." André Sabino, A. Rodrigues. Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675697
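The "indirect" discovery described in the abstract above can be pictured as a traversal of a keyword-location graph: a location is recommended to a user when it co-occurs, on other users' items, with keywords the user already employs. The following is a minimal sketch under that reading; the data model and scoring are assumptions, not the paper's model.

```python
# Toy indirect location discovery: score locations by how many of the user's
# keywords they co-occur with across the network's items.

from collections import Counter

def recommend_locations(user_keywords, items, top_k=3):
    """items: list of (keywords, location) pairs from the whole network.
    Returns up to top_k locations ranked by keyword overlap."""
    scores = Counter()
    for keywords, location in items:
        overlap = len(user_keywords & set(keywords))
        if overlap:
            scores[location] += overlap
    return [loc for loc, _ in scores.most_common(top_k)]

network_items = [
    ({"surf", "beach"}, "Ericeira"),
    ({"surf", "contest"}, "Nazare"),
    ({"museum", "art"}, "Lisbon"),
]
recs = recommend_locations({"surf"}, network_items)
```

Note that only metadata (keywords, locations) is touched, mirroring the paper's avoidance of content analysis.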
Identifying micro-bloggers who are likely witnesses to events is beneficial in numerous applications, including event detection and credibility assessment. This paper presents research in progress on testing a conceptual model that defines witness and related accounts in micro-blogs about events. The case study events considered have varying spatial and temporal characteristics, and include a shark sighting, a music concert, a protest, and a cyclone. Results indicate that witnessing characteristics are influenced by numerous factors in addition to the spatial and temporal characteristics of the events, including the motivation of the witnesses themselves. Additionally, the results suggest enhancements to the conceptual model to support a more sophisticated generic implementation, as well as insights for future automation approaches.
"Testing a model of witness accounts in social media." M. Truelove, M. Vasardani, S. Winter. Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675699
Various methods for automatically detecting events from social media have been developed in recent years. However, little progress has been made towards extracting structured representations of such events, which severely limits the way in which the resulting event databases can be queried. As a first step to address this issue, we focus on the problem of discovering the semantic type of events. While current methods are almost exclusively based on bag-of-words methods, we show that additionally using location features can substantially improve the results. In particular, we use the tags associated with Flickr photos and the types of the known events near the venue of the event as context information.
"Estimating the semantic type of events using location features from Flickr." Steven Van Canneyt, S. Schockaert, B. Dhoedt. Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675700
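The feature idea in the abstract above, combining bag-of-words tags with the types of known nearby events, can be made concrete with a toy feature builder and a nearest-prototype classifier. Both the feature naming scheme and the classifier are illustrative stand-ins, not the authors' implementation.

```python
# Illustrative combination of tag features and nearby-event-type features
# for semantic event type estimation.

from collections import Counter

def build_features(photo_tags, nearby_event_types):
    feats = Counter(f"tag={t}" for t in photo_tags)
    feats.update(f"nearby={t}" for t in nearby_event_types)
    return feats

def classify(feats, prototypes):
    """Nearest-prototype classifier: pick the event type whose prototype
    shares the most feature mass with the input."""
    def overlap(a, b):
        return sum(min(a[k], b[k]) for k in a)
    return max(prototypes, key=lambda label: overlap(feats, prototypes[label]))

prototypes = {
    "concert": Counter({"tag=music": 2, "tag=stage": 1, "nearby=concert": 1}),
    "sports": Counter({"tag=football": 2, "nearby=sports": 1}),
}
feats = build_features(["music", "crowd"], ["concert"])
label = classify(feats, prototypes)
```

The point is only that the location-derived `nearby=` features add signal beyond the tags alone, which is the abstract's central claim.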
Sergey Nepomnyachiy, Bluma S. Gelley, Wei Jiang, Tehila Minkus
With the adoption of timestamps and geotags on Web data, search engines are increasingly being asked questions of "where" and "when" in addition to the classic "what." In the case of Twitter, many tweets are tagged with location information as well as timestamps, creating a demand for query processors that can search both of these dimensions along with text. We propose 3W, a search framework for geo-temporal stamped documents. It exploits the structure of time-stamped data to dramatically shrink the temporal search space and uses a shallow tree based on the spatial distribution of tweets to allow speedy search over the spatial and text dimensions. Our evaluation on 30 million tweets shows that the prototype system outperforms the baseline approach that uses a monolithic index.
"What, where, and when: keyword search with spatio-temporal ranges." Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675358
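The core 3W idea sketched in the abstract above, shrinking the temporal search space before touching the spatial and text dimensions, can be illustrated with a day-bucketed index. This is a deliberate simplification: the paper's shallow spatial tree is replaced here by a flat bounding-box filter, and all names are assumptions.

```python
# Minimal geo-temporal keyword index: bucket documents by day so a query only
# scans buckets inside its time range, then filter by bounding box and keyword.

from collections import defaultdict

class GeoTemporalIndex:
    def __init__(self):
        self.by_day = defaultdict(list)  # day -> [(lat, lon, text)]

    def add(self, day, lat, lon, text):
        self.by_day[day].append((lat, lon, text))

    def query(self, day_range, bbox, keyword):
        (d0, d1), (lat0, lat1, lon0, lon1) = day_range, bbox
        hits = []
        for day in range(d0, d1 + 1):          # only touch relevant buckets
            for lat, lon, text in self.by_day.get(day, ()):
                if lat0 <= lat <= lat1 and lon0 <= lon <= lon1 and keyword in text:
                    hits.append(text)
        return hits

idx = GeoTemporalIndex()
idx.add(100, 40.7, -74.0, "parade downtown")
idx.add(100, 51.5, -0.1, "parade in london")
idx.add(200, 40.7, -74.0, "parade again")
hits = idx.query((99, 101), (40.0, 41.0, -75.0, -73.0), "parade")
```

A monolithic index (the paper's baseline) would scan all three documents; the bucketed layout never visits day 200 at all.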
Spatial language, despite decades of research, still poses substantial challenges for automated systems, for instance in geographic information retrieval or human-robot interaction. We describe an approach to building a corpus of natural language expressions extracted from web documents for analyzing and modeling spatial relational expressions (SRE). The unique characteristic of this corpus is that it is built around georeferenced triplets, with each triplet containing two entities (including their latitude/longitude coordinates) related by a spatial expression such as near. While the approach is still experimental, our first results are promising, in that we believe they will form the foundation for a comprehensive contextualized model for interpreting spatial natural language expressions. For the time being, we are focusing on a single domain, hotel reviews. This domain restriction allowed us to implement a proof-of-concept that this approach, with advances in natural language technologies, will indeed deliver a comprehensive corpus. The potential to collect larger corpora, and associated challenges, is discussed.
"Building a corpus of spatial relational expressions extracted from web documents." J. O. Wallgrün, A. Klippel, Timothy Baldwin. Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675702
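The triplet structure described in the abstract above (two entities related by a spatial expression such as "near") can be illustrated with a toy extractor. The paper relies on proper NLP tooling; the regex over capitalized names below is only a stand-in, and the relation list is an assumption.

```python
# Toy extraction of (entity, relation, entity) spatial triplets from
# review-like sentences, for a small fixed set of spatial prepositions.

import re

RELATIONS = ("near", "next to", "across from")
PATTERN = re.compile(
    r"([A-Z][\w' ]*?)\s+is\s+(" + "|".join(RELATIONS) + r")\s+(the\s+)?([A-Z][\w' ]*)"
)

def extract_triplets(sentence):
    triplets = []
    for m in PATTERN.finditer(sentence):
        triplets.append((m.group(1).strip(), m.group(2), m.group(4).strip()))
    return triplets

triplets = extract_triplets("Hotel Azul is near Copacabana Beach.")
```

In the corpus each extracted entity would additionally be georeferenced with latitude/longitude coordinates, which this sketch omits.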
Toponyms in texts and search queries are often used figuratively and do not directly refer to the locations they reference in their literal sense. Different kinds of usage and stylistic devices characterize toponym usages in texts. It is thus crucial for a Geographic Information Retrieval (GIR) system to precisely distinguish these different toponym usages at indexing and at query time in order to best address a given information need and the geospatial footprint of a document. For that purpose, we analyze which of the classic stylistic devices, such as allegories, metaphors, or metonymies, are used together with toponyms. We use these categories as the foundation for a systematic approach to characterizing toponym usages in texts, which we believe is necessary to further boost the retrieval effectiveness of future GIR systems. A prototype implements this characterization for texts written in German as an example. We evaluate the effectiveness of our approach against a reference corpus to show its general feasibility. Our approach provides a basis for a wide range of more sophisticated applications, such as text genre detection.
"Characterization of toponym usages in texts." S. Wolf, A. Henrich, Daniel Blank. Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675703
A number of systems have recently been constructed that use a map query interface to access documents by the locations they mention. These mentions are often ambiguous, in the sense that many interpretations exist for locations that are not always expressed with all the necessary qualifiers. In other words, users are assumed to be able to make the appropriate identification based on knowledge of prior queries, the nature of the document containing the references, or knowledge of the target audience. The disambiguation process is known as toponym resolution. The map query interface places icons, with links to the appropriate documents, at the corresponding locations on the map. Assuming that all toponyms have been recognized (i.e., a 100% rate of recall for toponym recognition), it is shown how to achieve an effective 100% rate of recall for toponym resolution for all interpretations of a toponym that the toponym recognition process associates with at least one document. This is done with the aid of a minimap that shows all of these interpretations, which means that a user has access to all documents that mention a specific location, as long as the textual specification of the location has been recognized as a location rather than as the name of another entity such as a person, company, or organization. It also assumes that the user is capable of determining the correct interpretation of each toponym. This is important as it enables the determination of precision and recall.
"Using minimaps to enable toponym resolution with an effective 100% rate of recall." H. Samet. Proceedings of the 8th Workshop on Geographic Information Retrieval, November 2014. doi:10.1145/2675354.2675698
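The minimap mechanism described in the abstract above amounts to retaining every interpretation of a recognized toponym and letting the user choose, instead of having the system resolve it. A minimal sketch under assumed data structures (the gazetteer and document index layouts here are illustrative):

```python
# Toy minimap lookup: for a recognized toponym, return documents grouped by
# every interpretation that has at least one associated document.

def minimap(toponym, gazetteer, doc_index):
    """gazetteer: name -> list of (lat, lon) interpretations.
    doc_index: (name, lat, lon) -> list of document ids."""
    results = {}
    for lat, lon in gazetteer.get(toponym, ()):
        docs = doc_index.get((toponym, lat, lon), [])
        if docs:
            results[(lat, lon)] = docs
    return results

gazetteer = {"Springfield": [(39.8, -89.6), (42.1, -72.6)]}
doc_index = {
    ("Springfield", 39.8, -89.6): ["doc1", "doc3"],
    ("Springfield", 42.1, -72.6): ["doc2"],
}
interps = minimap("Springfield", gazetteer, doc_index)
```

Because no interpretation with a document is ever discarded, no relevant document can be lost to a wrong resolution choice, which is the sense in which the approach achieves an effective 100% recall for toponym resolution.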