Weather is a typical topic of daily conversation, so it is natural to use social data to observe it. Geotagging is key to using social data for weather applications because weather is a highly localized phenomenon. We therefore developed software called GeoNLP for toponym-based geotagging and applied it to the Twitter data stream to find toponyms (place names) in Japanese tweets about precipitation events. We observed that less than 10 percent of the tweets contain toponym information, but that this subset can still capture precipitation events for each place. We also show the temporal relationship between rain events and tweets. A case study shows that the relative number of tweets about rain and snow indicates the state of the weather. Over a few months we collected almost a million tweets about precipitation events with toponyms, but the bias of tweets toward highly populated areas is a major obstacle to applying the method in rural areas. The results indicate that social data streams can be used as a complementary data source to scientific data streams.
A. Kitamoto and T. Sagara. "Toponym-based geotagging for observing precipitation from social and scientific data streams." GeoMM '12, October 29, 2012. doi:10.1145/2390790.2390799.
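The toponym-based geotagging step can be pictured as matching tweet text against a gazetteer and keeping only precipitation-related tweets. The sketch below is illustrative only, not GeoNLP itself: the toy gazetteer, the English keywords (stand-ins for Japanese precipitation terms), and the function name are all assumptions.

```python
# Illustrative sketch (not GeoNLP): match gazetteer toponyms in tweet text
# and attach coordinates, keeping only tweets that mention precipitation.
GAZETTEER = {            # hypothetical toy gazetteer: toponym -> (lat, lon)
    "Tokyo": (35.68, 139.69),
    "Sapporo": (43.06, 141.35),
}
PRECIP_WORDS = {"rain", "snow"}   # stand-ins for Japanese precipitation terms


def geotag_precipitation(tweet: str):
    """Return (toponym, coords) pairs for a precipitation tweet, else []."""
    words = tweet.split()
    if not any(w.lower() in PRECIP_WORDS for w in words):
        return []                 # not about precipitation: discard
    return [(name, GAZETTEER[name]) for name in GAZETTEER if name in tweet]


hits = geotag_precipitation("Heavy snow in Sapporo today")
# hits == [("Sapporo", (43.06, 141.35))]
```

A real system would need morphological analysis for Japanese (tweets have no word boundaries) and disambiguation between identically named places, which is what motivates dedicated software like GeoNLP.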
Assigning location information to online media has grown from a fringe activity into an automated process aided by supportive hardware and software. This has led to the increasing availability of geographic context for the pictures, videos and audio users share on the web. Such data has allowed systems to search through, browse and analyse media according to location without being wholly dependent on expensive manual annotation. However, much current geographic metadata describes only a single pair of coordinates per item. It is difficult to encode the multiple locations pertinent to the content of an image as well as the location of the camera, or to handle the multiple locations that may be associated with sections of a video. Current media retrieval systems use geographic indexing to return results relevant to an area defined as within the radius of a point or within an official bounding box. This ignores the complexity of canonical geographic boundaries, as well as the colloquial nature of many commonly referred-to geographic points and areas, highlighting the relationship between these two complementary geographies. By addressing these limitations, future geographic multimedia retrieval systems will be able to better support user engagement with media.
Adam Rae. "State of the Geotag: where are we?" GeoMM '12, October 29, 2012. doi:10.1145/2390790.2390792.
Online photo-sharing websites such as Flickr not only allow users to share their precious memories with others; they also act as a repository of all kinds of information carried by photos and tags. In this work, we investigate the problem of geographic discovery, particularly land-use classification, by crowdsourcing geographic information from Flickr's geotagged photo collections. Our results show that the visual information contained in these photo collections enables us to distinguish three land-use classes on two university campuses. We also show that the text entries accompanying these photos are informative for geographic discovery.
Daniel Leung and S. Newsam. "Exploring Geotagged images for land-use classification." GeoMM '12, October 29, 2012. doi:10.1145/2390790.2390794.
Most existing approaches to pedestrian detection rely on visual appearance as the main source of information in real-world images. However, visual information cannot always provide reliable guidance, since pedestrians often change pose or wear different clothes under different conditions. In this work, leveraging a vast number of Web images, we first construct a contextual image database in which each image is automatically annotated with geographic location (i.e., latitude and longitude) and environment information (i.e., season, time and weather condition), assisted by image metadata and a few pre-trained classifiers. For subsequent pedestrian detection, we present an annotation scheme that sharply decreases manual labeling effort. We study several properties of the contextual image database, including whether it is authentic and helpful for pedestrian detection. Moreover, we propose a context-based pedestrian detection approach that jointly explores visual and contextual cues in a probabilistic model. Encouraging results are reported on our contextual image database.
Yuan Liu, Zhongchao Shi, G. Wang and Haike Guan. "Find you wherever you are: geographic location and environment context-based pedestrian detection." GeoMM '12, October 29, 2012. doi:10.1145/2390790.2390801.
Geo-tagged content from social media platforms such as Flickr provides large amounts of data about any given location, which can be used to create models of the language used to describe locations. To date, models of location have ignored differences between users. This paper focuses on one aspect of demographics, namely gender, and explores the relationship between gender and location in a large-scale corpus of geo-tagged Flickr images. We find that male users are much more likely to geo-tag their photos than female users, and that the geo-tagged photos of male users have wider geographic coverage than those of female users. We create gender-based language models of location using the Flickr tags describing geo-tagged photos, and find that tags created by male users contain more geographic information than those created by female users and can be located far more accurately. Further, models created exclusively from male users' data are more accurate than those created from female users' data. Although our results suggest some benefit from gender-specific models, the benefit is quite minor and is overwhelmed by the richer location information in the male data. The results also show that gender-based differences in location models matter most at the hyper-local level.
Neil O'Hare and Vanessa Murdock. "Gender-based models of location from flickr." GeoMM '12, October 29, 2012. doi:10.1145/2390790.2390802.
Nowadays, an enormous number of photographs are uploaded to the Internet by casual users. In this study, we consider embedding the geographical identification of locations in images as geotags. We attempt to retrieve images having certain similarities (or identical objects) from a geotagged image dataset, and define images containing identical objects as orthologous images. Using content-based image retrieval (CBIR), we propose a ranking function, the orthologous identity function (OIF), to estimate the degree to which two images contain identical objects; the OIF is a similarity rating that combines the geographic distance and the image distance of photographs. We evaluate the OIF as a ranking function by calculating the mean reciprocal rank (MRR) on our experimental dataset. The results reveal that the OIF retrieves orthologous images more effectively than using geographic distance or image distance alone.
J. Kamahara, Takashi Nagamatsu and Naoki Tanaka. "Conjunctive ranking function using geographic distance and image distance for geotagged image retrieval." GeoMM '12, October 29, 2012. doi:10.1145/2390790.2390795.
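A conjunctive score over geographic and image distance, and the MRR used to evaluate it, can be sketched as follows. The weighted-sum form, the `alpha` weight, and the `geo_scale` normalizer are assumptions for illustration; the paper's actual OIF may combine the two distances differently.

```python
# Sketch in the spirit of the OIF: combine normalized geographic distance
# with image-feature distance into one ranking score (lower is better),
# plus the mean reciprocal rank (MRR) used to evaluate such rankings.
import math


def haversine_km(p, q):
    """Great-circle distance between (lat, lon) pairs, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))


def oif_score(geo_a, geo_b, feat_a, feat_b, alpha=0.5, geo_scale=10.0):
    """Assumed conjunctive form: weighted sum of geographic and image distance."""
    geo_d = haversine_km(geo_a, geo_b) / geo_scale   # bring km onto feature scale
    img_d = math.dist(feat_a, feat_b)                # Euclidean feature distance
    return alpha * geo_d + (1 - alpha) * img_d


def mean_reciprocal_rank(rankings):
    """rankings: list of (ranked_ids, relevant_id); average 1/rank of the hit."""
    return sum(1.0 / (ranked.index(rel) + 1) for ranked, rel in rankings) / len(rankings)
```

Candidates would be sorted by `oif_score` against the query image, and `mean_reciprocal_rank` then measures how high the true orthologous image appears across queries.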
Large-scale, free-to-use, computer-readable geographical databases are now easy to find, and many people are examining how to exploit them by crossing them with other linked data to produce new applications. We present a publicly accessible web application for searching for movies by location, mashing up information from DBpedia and GeoNames. We also describe its architecture and the solutions we developed to address the entity-ambiguity problems we encountered.
Jean-Marc Finsterwald, G. Grefenstette, Julien Law-To, Hugues Bouchard and Amar-Djalil Mezaour. "The movie mashup application MoMa: geolocalizing and finding movies." GeoMM '12, October 29, 2012. doi:10.1145/2390790.2390797.
Users increasingly capture, share and access videos shot from their own perspectives and experiences, and technology increasingly supports this. Video is appreciated for its power to capture and present events and scenarios with great authenticity, realism and emotional impact. 360º video goes a step further towards more immersive experiences by capturing the image all around, while hypervideo supports the structuring and navigation of video and related information; and technology for capturing 360º video and its trajectory is becoming more accessible to users, enabling more powerful video experiences. In this paper, we present Sight Surfers, an interactive web application for visualizing and navigating georeferenced 360º hypervideos, designed to empower users in their immersive video experiences, both accessing other users' videos and sharing their own, as novel forms of entertainment, culture and even art. We identify the main challenges and describe the design and a first user evaluation, with very encouraging results.
Gonçalo Noronha, Carlos Álvares and T. Chambel. "Sight surfers: 360º videos and maps navigation." GeoMM '12, October 29, 2012. doi:10.1145/2390790.2390798.