Analyzing patterns of information cascades based on users' influence and posting behaviors
Geerajit Rattanaritnont, Masashi Toyoda, M. Kitsuregawa
TempWeb '12. https://doi.org/10.1145/2169095.2169097
Nowadays people share useful information on social networking sites such as Facebook and Twitter. Information spreads over these networks when it is forwarded or copied repeatedly from friend to friend. This phenomenon, called an "information cascade", has long been studied because it can have an impact on the real world. Different social activities tend to cascade over social networks in different ways. Our focus in this study is on characterizing cascade patterns according to users' influence and posting behaviors across various topics. Such cascade patterns could help organizations plan their public relations activities. We explore four measures: cascade ratio, tweet ratio, time of tweet, and exposure curve. Our results show that hashtags in different topics exhibit different cascade patterns in terms of these measures; however, some hashtags even within the same topic show different cascade patterns. We discover that this kind of hidden relationship between topics can be surprisingly well revealed using only our four measures, without considering tweet contents. Finally, our results also show that cascade ratio and time of tweet are the most effective measures for distinguishing cascade patterns across topics.
{"title":"Analyzing patterns of information cascades based on users' influence and posting behaviors","authors":"Geerajit Rattanaritnont, Masashi Toyoda, M. Kitsuregawa","doi":"10.1145/2169095.2169097","DOIUrl":"https://doi.org/10.1145/2169095.2169097","url":null,"abstract":"Nowadays people can share useful information on social networking sites such as Facebook and Twitter. The information is spread over the networks when it is forwarded or copied repeatedly from friends to friends. This phenomenon is so called \"information cascade\", and has been studied long time since it sometimes has an impact on the real world. Various social activities tends to have different ways of cascade on the social networks. Our focus in this study is on characterizing the cascade patterns according to users' influence and posting behaviors in various topics. The cascade patterns could be useful for various organizations to consider the strategy of public relations activities. We explore four measures which are cascade ratio, tweet ratio, time of tweet, and exposure curve. Our results show that hashtags in different topics have different cascade patterns in term of these measures. However, some hashtags even in the same topic have different cascade patterns. We discover that such kind of hidden relationship between topics can be surprisingly revealed by using only our four measures rather than considering tweet contents. Finally, our results also show that cascade ratio and time of tweet are the most effective measures to distinguish cascade patterns in different topics.","PeriodicalId":132536,"journal":{"name":"TempWeb '12","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122518358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extraction of temporal facts and events from Wikipedia
Erdal Kuzey, G. Weikum
TempWeb '12. https://doi.org/10.1145/2169095.2169101
Recently, large-scale knowledge bases have been constructed by automatically extracting relational facts from text. Unfortunately, most current knowledge bases focus on static facts and ignore the temporal dimension. However, the vast majority of facts evolve with time or are valid only during a particular time period. Thus, time is a significant dimension that should be included in knowledge bases. In this paper, we introduce a complete information extraction framework that harvests temporal facts and events from the semi-structured data and free text of Wikipedia articles to create a temporal ontology. First, we extend a temporal data representation model by making it aware of events. Second, we develop an information extraction method that harvests temporal facts and events from Wikipedia infoboxes, categories, lists, and article titles in order to build a temporal knowledge base. Third, we show how the system can use its extracted knowledge to further grow the knowledge base. We demonstrate the effectiveness of our proposed methods through several experiments. We extracted more than one million temporal facts, with precision over 90% for extraction from semi-structured data and almost 70% for extraction from text.
{"title":"Extraction of temporal facts and events from Wikipedia","authors":"Erdal Kuzey, G. Weikum","doi":"10.1145/2169095.2169101","DOIUrl":"https://doi.org/10.1145/2169095.2169101","url":null,"abstract":"Recently, large-scale knowledge bases have been constructed by automatically extracting relational facts from text. Unfortunately, most of the current knowledge bases focus on static facts and ignore the temporal dimension. However, the vast majority of facts are evolving with time or are valid only during a particular time period. Thus, time is a significant dimension that should be included in knowledge bases.\u0000 In this paper, we introduce a complete information extraction framework that harvests temporal facts and events from semi-structured data and free text of Wikipedia articles to create a temporal ontology. First, we extend a temporal data representation model by making it aware of events. Second, we develop an information extraction method which harvests temporal facts and events from Wikipedia infoboxes, categories, lists, and article titles in order to build a temporal knowledge base. Third, we show how the system can use its extracted knowledge for further growing the knowledge base.\u0000 We demonstrate the effectiveness of our proposed methods through several experiments. We extracted more than one million temporal facts with precision over 90% for extraction from semi-structured data and almost 70% for extraction from text.","PeriodicalId":132536,"journal":{"name":"TempWeb '12","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132401215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enriching temporal query understanding through date identification: how to tag implicit temporal queries?
Ricardo Campos, G. Dias, A. Jorge, C. Nunes
TempWeb '12. https://doi.org/10.1145/2169095.2169103
Search engines generally fail to understand users' temporal intents when these are expressed as implicit temporal queries. This causes the retrieval of less relevant information and prevents users from being aware of the possible temporal dimension of the query results. In this paper, we aim to develop a language-independent model that tackles the temporal dimension of a query and identifies its most relevant time periods. For this purpose, we propose a temporal similarity measure capable of associating relevant dates with a given query and filtering out irrelevant ones. Our approach exploits temporal information from web content, particularly within the set of top-k web snippets retrieved in response to a query. We focus in particular on extracting years, a kind of temporal information that frequently appears in this type of collection. We evaluate our methodology on a set of real-world temporal text queries that are non-ambiguous in concept and temporal in purpose. Experiments show that, compared to baseline methods, our temporal similarity measure improves the identification of the most relevant dates for a given implicit temporal query.
{"title":"Enriching temporal query understanding through date identification: how to tag implicit temporal queries?","authors":"Ricardo Campos, G. Dias, A. Jorge, C. Nunes","doi":"10.1145/2169095.2169103","DOIUrl":"https://doi.org/10.1145/2169095.2169103","url":null,"abstract":"Generically, search engines fail to understand the user's temporal intents when expressed as implicit temporal queries. This causes the retrieval of less relevant information and prevents users from being aware of the possible temporal dimension of the query results. In this paper, we aim to develop a language-independent model that tackles the temporal dimensions of a query and identifies its most relevant time periods. For this purpose, we propose a temporal similarity measure capable of associating a relevant date(s) to a given query and filtering out irrelevant ones. Our approach is based on the exploitation of temporal information from web content, particularly within the set of k-top retrieved web snippets returned in response to a query. We particularly focus on extracting years, which are a kind of temporal information that often appears in this type of collection. We evaluate our methodology using a set of real-world text temporal queries, which are clear concepts (i.e. queries which are non-ambiguous in concept and temporal in their purpose). Experiments show that when compared to baseline methods, determining the most relevant dates relating to any given implicit temporal query can be improved with a new temporal similarity measure.","PeriodicalId":132536,"journal":{"name":"TempWeb '12","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128659930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of top relevant temporal expressions in documents
Jannik Strotgen, Omar Alonso, Michael Gertz
TempWeb '12. https://doi.org/10.1145/2169095.2169102
Temporal information is very common in textual documents, and thus identifying, normalizing, and organizing temporal expressions is an important task in IR. Although some tools for temporal tagging exist, there is a lack of research focusing on the relevance of temporal expressions. Beyond counting their frequency and verifying whether they satisfy a temporal search query, temporal expressions are usually considered only in isolation. There are no methods to calculate the relevance of temporal expressions, either in general or with respect to a query. In this paper, we present an approach to identify the top relevant temporal expressions in documents using expression-, document-, corpus-, and query-based features. We present two relevance functions: one that calculates relevance scores for temporal expressions in general, and one that does so with respect to a search query consisting of a textual part, a temporal part, or both. Using two evaluation scenarios, we demonstrate the effectiveness of our approach.
{"title":"Identification of top relevant temporal expressions in documents","authors":"Jannik Strotgen, Omar Alonso, Michael Gertz","doi":"10.1145/2169095.2169102","DOIUrl":"https://doi.org/10.1145/2169095.2169102","url":null,"abstract":"Temporal information is very common in textual documents, and thus, identifying, normalizing, and organizing temporal expressions is an important task in IR. Although there are some tools for temporal tagging, there is a lack in research focusing on the relevance of temporal expressions. Besides counting their frequency and verifying whether they satisfy a temporal search query, temporal expressions are often considered in isolation only. There are no methods to calculate the relevance of temporal expressions, neither in general nor with respect to a query.\u0000 In this paper, we present an approach to identify top relevant temporal expressions in documents using expression-, document-, corpus-, and query-based features. We present two relevance functions: one to calculate relevance scores for temporal expressions in general, and one with respect to a search query, which consists of a textual part, a temporal part, or both. Using two evaluation scenarios, we demonstrate the effectiveness of our approach.","PeriodicalId":132536,"journal":{"name":"TempWeb '12","volume":"13 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124193801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noise robust detection of the emergence and spread of topics on the web
Masahiro Inoue, Keishi Tajima
TempWeb '12. https://doi.org/10.1145/2169095.2169098
As the same information appears on many Web pages, we often want to know which page was the first to discuss it, or how the information has spread on the Web over time. In this paper, we develop two methods: one for detecting the first page that discussed given information, and one for generating a graph showing how the number of pages discussing it has changed along the timeline. To extract such information, we need to determine which pages discuss the given topic and when those pages were created. For the former step, we design a metric for estimating the degree of inclusion between the information and a page. For the latter step, we develop a technique for extracting creation timestamps from web pages. Although timestamp extraction is a crucial component of temporal Web analysis, no prior research has described how to do it in detail. Both steps are, however, still error-prone. To improve noise elimination, we examine not only the properties of each page but also the temporal relationships between pages. If the temporal relationships between a candidate page and other pages are unlikely under typical patterns of information spread on the Web, we eliminate the candidate page as noise. Our experimental results show that our methods achieve high precision and are suitable for practical use.
{"title":"Noise robust detection of the emergence and spread of topics on the web","authors":"Masahiro Inoue, Keishi Tajima","doi":"10.1145/2169095.2169098","DOIUrl":"https://doi.org/10.1145/2169095.2169098","url":null,"abstract":"As the same information appears on many Web pages, we often want to know which page is the first one that discussed it, or how the information has spread on the Web as time passes. In this paper, we develop two methods: a method of detecting the first page that discussed the given information, and a method of generating a graph showing how the number of pages discussing it has changed along the timeline. To extract such information, we need to determine which pages discuss the given topic, and also need to determine when these pages were created. For the former step, we design a metric for estimating inclusion degree between information and a page. For the latter step, we develop a technique of extracting creation timestamps on web pages. Although timestamp extraction is a crucial component in temporal Web analysis, no research has shown how to do it in detail. Both steps are, however, still error-prone. In order to improve noise elimination, we examine not only the properties of each page, but also temporal relationship between pages. If temporal relationship between some candidate page and other pages are unlikely in typical patterns of information spread on the Web, we eliminate the candidate page as a noise. Results of our experiments show that our methods achieve high precision and can be used for practical use.","PeriodicalId":132536,"journal":{"name":"TempWeb '12","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131116726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keeping keywords fresh: a BM25 variation for personalized keyword extraction
Margarita Karkali, Vassilis Plachouras, Constantinos Stefanatos, M. Vazirgiannis
TempWeb '12. https://doi.org/10.1145/2169095.2169099
Keyword extraction from web pages is essential to various text mining tasks, including contextual advertising, recommendation selection, user profiling, and personalization. For example, keywords extracted for contextual advertising are used to match advertisements with the web page currently browsed by a user. Most keyword extraction methods rely mainly on the content of a single web page and ignore the user's browsing history, potentially leading to the same advertisements or recommendations being shown repeatedly. In this work, we propose a new feature scoring algorithm for extracting terms from web pages that, given a user's recent browsing history, takes into account the freshness of keywords in the current page as a means of capturing the user's shifting interests. We propose BM25H, a variant of the BM25 scoring function implemented on the client side, which takes the user's browsing history into account and suggests keywords that are relevant to the currently browsed page but also fresh with respect to the user's recent history. In this way, for each web page we obtain a set of keywords representing the time-shifting interests of the user. BM25H avoids repeating keywords that may simply be domain-specific stop words or that would match the same ads or similar recommendations. Our experimental results show that BM25H achieves more than 70% precision at 20 extracted keywords (based on a blind human evaluation) and outperforms our baselines (TF and BM25 scoring functions), while keeping the extracted keywords fresh with respect to the user's recent history.
{"title":"Keeping keywords fresh: a BM25 variation for personalized keyword extraction","authors":"Margarita Karkali, Vassilis Plachouras, Constantinos Stefanatos, M. Vazirgiannis","doi":"10.1145/2169095.2169099","DOIUrl":"https://doi.org/10.1145/2169095.2169099","url":null,"abstract":"Keyword extraction from web pages is essential to various text mining tasks including contextual advertising, recommendation selection, user profiling and personalization. For example, extracted keywords in contextual advertising are used to match advertisements with the web page currently browsed by a user. Most of the keyword extraction methods mainly rely on the content of a single web page, ignoring the browsing history of a user, and hence, potentially leading to the same advertisements or recommendations.\u0000 In this work we propose a new feature scoring algorithm for web page terms extraction that, assuming a recent browsing history per user, takes into account the freshness of keywords in the current page as means of shifting users interests. We propose BM25H, a variant of BM25 scoring function, implemented on the client-side, that takes into account the user browsing history and suggests keywords relevant to the currently browsed page, but also fresh with respect to the user's recent browsing history. In this way, for each web page we obtain a set of keywords, representing the time shifting interests of the user. BM25H avoids repetitions of keywords which may be simply domain specific stop-words, or may result in matching the same ads or similar recommendations. Our experimental results show that BM25H achieves more than 70% in precision at 20 extracted keywords (based on human blind evaluation) and outperforms our baselines (TF and BM25 scoring functions), while it succeeds in keeping extracted keywords fresh compared to recent user history.","PeriodicalId":132536,"journal":{"name":"TempWeb '12","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132859866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}