J. Mahmud, James Caverlee, Jeffrey Nichols, J. O'Donovan, Michelle X. Zhou
Massive amounts of data are being generated on social media sites, such as Twitter and Facebook. This data can be used to better understand people, such as their personality traits, perceptions, and preferences, and predict their behavior. This deeper understanding of users and their behaviors can benefit a wide range of intelligent applications, such as advertising, social recommender systems, and personalized knowledge management. These applications will also benefit individual users themselves by optimizing their experiences across a wide variety of domains, such as retail, healthcare, and education. Since mining and understanding user behavior from social media often requires interdisciplinary effort, including machine learning, text mining, human-computer interaction, and social science, our workshop aims to bring together researchers and practitioners from multiple fields to discuss the creation of deeper models of individual users by mining the content that they publish and the social networking behavior that they exhibit.
{"title":"DUBMMSM'12: international workshop on data-driven user behavioral modeling and mining from social media","authors":"J. Mahmud, James Caverlee, Jeffrey Nichols, J. O'Donovan, Michelle X. Zhou","doi":"10.1145/2396761.2398751","DOIUrl":"https://doi.org/10.1145/2396761.2398751","url":null,"abstract":"Massive amounts of data are being generated on social media sites, such as Twitter and Facebook. This data can be used to better understand people, such as their personality traits, perceptions, and preferences, and predict their behavior. This deeper understanding of users and their behaviors can benefit a wide range of intelligent applications, such as advertising, social recommender systems, and personalized knowledge management. These applications will also benefit individual users themselves by optimizing their experiences across a wide variety of domains, such as retail, healthcare, and education. Since mining and understanding user behavior from social media often requires interdisciplinary effort, including machine learning, text mining, human-computer interaction, and social science, our workshop aims to bring together researchers and practitioners from multiple fields to discuss the creation of deeper models of individual users by mining the content that they publish and the social networking behavior that they exhibit.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114770250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a novel topic model, the Author-Conference Topic-Connection (ACTC) model, for academic network search. ACTC extends the Author-Conference-Topic (ACT) model by adding the subject of the conference and the latent mapping between subjects and topics. It simultaneously models the topical aspects of papers, authors, and conferences with two latent layers: a subject layer corresponding to conference subjects and a topic layer corresponding to word topics. Each author is associated with a multinomial distribution over conference subjects (e.g., KM, DB, and IR for CIKM 2012); the conference (e.g., CIKM 2012) and the topics are each generated from a sampled subject, and the words are then generated from the sampled topics. We conduct experiments on a data set of 8,523 authors, 22,487 papers, and 1,243 conferences from the well-known Arnetminer website, training the model with different numbers of subjects and topics. For a qualitative evaluation, we compare ACTC with three other models, LDA, Author-Topic (AT), and ACT, on academic search services. Experiments show that ACTC effectively captures the semantic connections among different types of information in an academic network and performs well in expert search and conference search.
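To make the described generative process concrete, here is a minimal forward-sampling sketch. The distribution names (theta, psi, pi, phi), the Dirichlet hyperparameters, the toy dimensions, and the one-author-per-paper simplification are our own illustrative assumptions, not the paper's notation, and actual inference (e.g., Gibbs sampling) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
A, S, K, C, V = 5, 3, 4, 6, 100  # authors, subjects, topics, conferences, vocabulary (toy sizes)

theta = rng.dirichlet(np.full(S, 0.1), size=A)   # author -> subject distribution
psi   = rng.dirichlet(np.full(C, 0.1), size=S)   # subject -> conference distribution
pi    = rng.dirichlet(np.full(K, 0.1), size=S)   # subject -> topic distribution
phi   = rng.dirichlet(np.full(V, 0.01), size=K)  # topic -> word distribution

def generate_paper(author_ids, n_words):
    """Sample one paper: pick an author, draw a subject from that author,
    generate the conference and per-token topics from the sampled subject,
    and finally generate the words from the sampled topics."""
    a = rng.choice(author_ids)         # one author per paper (a simplification)
    s = rng.choice(S, p=theta[a])      # subject, e.g. KM, DB, or IR for CIKM
    c = rng.choice(C, p=psi[s])        # conference generated from the subject
    words = [rng.choice(V, p=phi[rng.choice(K, p=pi[s])]) for _ in range(n_words)]
    return a, s, c, words

print(generate_paper([0, 2], n_words=8))
```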
{"title":"Author-conference topic-connection model for academic network search","authors":"Jianwen Wang, Xiaohua Hu, Xinhui Tu, Tingting He","doi":"10.1145/2396761.2398597","DOIUrl":"https://doi.org/10.1145/2396761.2398597","url":null,"abstract":"This paper proposes a novel topic model, Author-Conference Topic-Connection (ACTC) Model for academic network search. The ACTC Model extends the author-conference-topic (ACT) model by adding subject of the conference and the latent mapping information between subjects and topics. It simultaneously models topical aspects of papers, authors and conferences with two latent topic layers: a subject layer corresponding to conference topic, and a topic layer corresponding to the word topic. Each author would be associated with a multinomial distribution over subjects of conference (eg., KM, DB, IR for CIKM 2012), the conference(CIKM 2012), and the topics are respectively generated from a sampled subject. Then the words are generated from the sampled topics. We conduct experiments on a data set with 8,523 authors, 22,487 papers and 1,243 conferences from the well-known Arnetminer website, and train the model with different number of subjects and topics. For a qualitative evaluation, we compare ACTC with three others models LDA, Author-Topic (AT) and ACT in academic search services. Experiments show that ACTC can effectively capture the semantic connection between different types of information in academic network and perform well in expert searching and conference searching.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127770067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a topic model that discovers the correlation patterns in a time-stamped document collection and how these patterns evolve over time. Our proposal, the theme chronicle model (TCM), divides traditional topics into temporal and stable topics to detect how each theme changes over time; previous topic models ignore this distinction and characterize trends merely as bursts of topics. TCM introduces a theme topic (stable topic), a trend topic (temporal topic), timestamps, and a latent switch variable for each token to capture these differences. Its topic layers allow TCM to capture not only the word co-occurrence patterns in each theme, but also the word co-occurrence patterns at any given time within each theme, as trends. Experiments on various data sets show that the proposed model is useful as a generative model for discovering fine-grained, tightly coherent topics, builds on the advantages of previous models, and can assign topic values to new documents.
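As a rough sketch of the token-level switch described above (assuming a Beta prior on the switch and a uniform topic choice within each layer; both are our assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)
E, K_stable, K_trend, T, V = 3, 4, 4, 10, 50  # themes, stable/trend topics, timestamps, vocab

lam    = rng.beta(1.0, 1.0, size=E)                              # per-theme switch probability
phi_st = rng.dirichlet(np.full(V, 0.01), size=(E, K_stable))     # stable (theme) topics
phi_tr = rng.dirichlet(np.full(V, 0.01), size=(E, T, K_trend))   # temporal (trend) topics

def sample_token(e, t):
    """One token of theme e at timestamp t: a latent switch decides between
    a stable theme topic and a timestamp-specific trend topic."""
    if rng.random() < lam[e]:                  # switch chose the stable layer
        return rng.choice(V, p=phi_st[e, rng.integers(K_stable)])
    return rng.choice(V, p=phi_tr[e, t, rng.integers(K_trend)])  # trend layer

print([sample_token(e=0, t=3) for _ in range(5)])
```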
{"title":"Theme chronicle model: chronicle consists of timestamp and topical words over each theme","authors":"N. Kawamae","doi":"10.1145/2396761.2398573","DOIUrl":"https://doi.org/10.1145/2396761.2398573","url":null,"abstract":"This paper presents a topic model that discovers the correlation patterns in a given time-stamped document collection and how these patterns evolve over time. Our proposal, the theme chronicle model (TCM) divides traditional topics into temporal and stable topics to detect the change of each theme over time; previous topic models ignore these differences and characterize trends as merely bursts of topics. TCM introduces a theme topic (stable topic), a trend topic (temporal topic), timestamps, and a latent switch variable in each token to realize these differences. Its topic layers allow TCM to capture not only word co-occurrence patterns in each theme, but also word co-occurrence patterns at any given time in each theme as trends. Experiments on various data sets show that the proposed model is useful as a generative model to discover fine-grained tightly coherent topics, takes advantage of previous models, and then assigns values for new documents.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126532772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ognjen Savkovic, Paramita Mirza, Sergey Paramonov, W. Nutt
MAGIK demonstrates how to use meta-information about the completeness of a database to assess the quality of the answers returned by a query. The system holds so-called table-completeness (TC) statements, by which one can express that a table is partially complete, that is, it contains all facts about some aspect of the domain. Given a query, MAGIK determines from such meta-information whether the database contains sufficient data for the query answer to be complete. If, according to the TC statements, the database content is not sufficient for a complete answer, MAGIK explains which further TC statements are needed to guarantee completeness. MAGIK extends and complements theoretical work on modeling and reasoning about data completeness by providing the first implementation of a reasoner. The reasoner operates by translating completeness reasoning tasks into logic programs, which are executed by an answer set engine.
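A toy rendering of the idea, assuming TC statements that fix attribute bindings. MAGIK's actual reasoner translates such checks into logic programs run by an answer set engine; the selection-subsumption check below is a deliberate simplification.

```python
def subsumes(tc_binding, query_binding):
    # The TC statement covers the query if every attribute the statement
    # fixes is fixed to the same value by the query, i.e. the query selects
    # a subset of the region the table is declared complete for.
    return all(query_binding.get(attr) == val for attr, val in tc_binding.items())

def answer_is_complete(tc_statements, table, query_binding):
    return any(t == table and subsumes(b, query_binding) for t, b in tc_statements)

# Hypothetical TC statement: the pupil table holds all pupils of school "X".
tc = [("pupil", {"school": "X"})]
print(answer_is_complete(tc, "pupil", {"school": "X", "level": "4"}))  # True
print(answer_is_complete(tc, "pupil", {"level": "4"}))  # False: further TC statements needed
```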
{"title":"MAGIK: managing completeness of data","authors":"Ognjen Savkovic, Paramita Mirza, Sergey Paramonov, W. Nutt","doi":"10.1145/2396761.2398741","DOIUrl":"https://doi.org/10.1145/2396761.2398741","url":null,"abstract":"MAGIK demonstrates how to use meta-information about the completeness of a database to assess the quality of the answers returned by a query. The system holds so-called table-completeness (TC) statements, by which one can express that a table is partially complete, that is, it contains all facts about some aspect of the domain. Given a query, MAGIK determines from such meta-information whether the database contains sufficient data for the query answer to be complete. If, according to the TC statements, the database content is not sufficient for a complete answer, MAGIK explains which further TC statements are needed to guarantee completeness. MAGIK extends and complements theoretical work on modeling and reasoning about data completeness by providing the first implementation of a reasoner. The reasoner operates by translating completeness reasoning tasks into logic programs, which are executed by an answer set engine.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127905048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
During the past decade, GPS-equipped devices have generated tremendous amounts of data with time and location information, which we refer to as big spatio-temporal data. In this paper, we present the design and implementation of CloST, a scalable storage system for big spatio-temporal data that supports data analytics using Hadoop. The main objective of CloST is to avoid scanning the whole dataset when a spatio-temporal range is given. To this end, we propose a novel data model that gives special treatment to three core attributes: an object ID, a location, and a time. Based on this data model, CloST hierarchically partitions data using all three core attributes, which enables efficient parallel processing of spatio-temporal range scans. Guided by the data characteristics, we devise a compact storage structure that reduces the storage size by an order of magnitude. In addition, we propose scalable bulk-loading algorithms capable of incrementally adding new data into the system. We conduct experiments on a very large GPS log dataset, and the results show that CloST achieves fast data loading, desirable scalability in query processing, and a high data compression ratio.
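A minimal sketch of the hierarchical partitioning idea, assuming a flat spatial grid and fixed time epochs; the bucket counts and the grid/epoch scheme are illustrative assumptions, and CloST's actual hierarchy and encodings may differ.

```python
def partition_key(oid: int, lon: float, lat: float, ts: int,
                  n_obj_buckets: int = 64, cell_deg: float = 0.5,
                  epoch_secs: int = 7 * 24 * 3600):
    obj_bucket = hash(oid) % n_obj_buckets                # level 1: object id
    cell = (int(lon // cell_deg), int(lat // cell_deg))   # level 2: spatial grid cell
    epoch = ts // epoch_secs                              # level 3: coarse time window
    return (obj_bucket, cell, epoch)

# A spatio-temporal range scan only touches partitions whose cells and epochs
# intersect the query range, instead of scanning the whole dataset.
print(partition_key(42, 114.06, 22.54, 1_350_000_000))
```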
{"title":"CloST: a hadoop-based storage system for big spatio-temporal data analytics","authors":"Haoyu Tan, Wuman Luo, L. Ni","doi":"10.1145/2396761.2398589","DOIUrl":"https://doi.org/10.1145/2396761.2398589","url":null,"abstract":"During the past decade, various GPS-equipped devices have generated a tremendous amount of data with time and location information, which we refer to as big spatio-temporal data. In this paper, we present the design and implementation of CloST, a scalable big spatio-temporal data storage system to support data analytics using Hadoop. The main objective of CloST is to avoid scan the whole dataset when a spatio-temporal range is given. To this end, we propose a novel data model which has special treatments on three core attributes including an object id, a location and a time. Based on this data model, CloST hierarchically partitions data using all core attributes which enables efficient parallel processing of spatio-temporal range scans. According to the data characteristics, we devise a compact storage structure which reduces the storage size by an order of magnitude. In addition, we proposes scalable bulk loading algorithms capable of incrementally adding new data into the system. We conduct our experiments using a very large GPS log dataset and the results show that CloST has fast data loading speed, desirable scalability in query processing, as well as high data compression ratio.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121795093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wei Liu, Andrey Kan, Jeffrey Chan, J. Bailey, C. Leckie, J. Pei, K. Ramamohanarao
Existing graph compression techniques mostly focus on static graphs. However, for many practical graphs, such as social networks, the edge weights frequently change over time. This raises the question of how to compress dynamic graphs while maintaining most of their intrinsic structural patterns at each time snapshot. In this paper, we show that the encoding cost of a dynamic graph is proportional to the heterogeneity of a three-dimensional tensor that represents the dynamic graph. We propose an effective algorithm that compresses a dynamic graph by reducing the heterogeneity of its tensor representation while maintaining a maximum lossy compression error at every timestamp of the dynamic graph. The bounded compression error ensures that compressed graphs retain good approximations of the original edge weights, and hence properties of the original graph (such as shortest paths) are well preserved. To the best of our knowledge, this is the first work that compresses weighted dynamic graphs with a bounded lossy compression error at every time snapshot of the graph.
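To illustrate only the bounded-error property (not the paper's heterogeneity-reduction algorithm), here is a stand-in that quantizes the time-by-node-by-node weight tensor to a uniform grid so that every reconstructed weight is within eps of the original:

```python
import numpy as np

def quantize(tensor: np.ndarray, eps: float) -> np.ndarray:
    step = 2 * eps                    # grid spacing that caps the error at eps
    return np.round(tensor / step) * step

T = np.random.default_rng(2).random((4, 10, 10))  # 4 snapshots of a 10-node graph
Tq = quantize(T, eps=0.05)
assert np.abs(T - Tq).max() <= 0.05 + 1e-12       # bounded error at every timestamp
print(len(np.unique(Tq)), "distinct weights vs", len(np.unique(T)))
```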
{"title":"On compressing weighted time-evolving graphs","authors":"Wei Liu, Andrey Kan, Jeffrey Chan, J. Bailey, C. Leckie, J. Pei, K. Ramamohanarao","doi":"10.1145/2396761.2398630","DOIUrl":"https://doi.org/10.1145/2396761.2398630","url":null,"abstract":"Existing graph compression techniquesmostly focus on static graphs. However for many practical graphs such as social networks the edge weights frequently change over time. This phenomenon raises the question of how to compress dynamic graphs while maintaining most of their intrinsic structural patterns at each time snapshot. In this paper we show that the encoding cost of a dynamic graph is proportional to the heterogeneity of a three dimensional tensor that represents the dynamic graph. We propose an effective algorithm that compresses a dynamic graph by reducing the heterogeneity of its tensor representation, and at the same time also maintains a maximum lossy compression error at any time stamp of the dynamic graph. The bounded compression error benefits compressed graphs in that they retain good approximations of the original edge weights, and hence properties of the original graph (such as shortest paths) are well preserved. To the best of our knowledge, this is the first work that compresses weighted dynamic graphs with bounded lossy compression error at any time snapshot of the graph.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115861310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shahzad Rajput, Matthew Ekstrand-Abueg, Virgil Pavlu, J. Aslam
The goal of a typical information retrieval system is to satisfy a user's information need---e.g., by providing an answer or information "nugget"---while the actual search space of a typical information retrieval system consists of documents---i.e., collections of nuggets. In this paper, we characterize this relationship between nuggets and documents and discuss applications to system evaluation. In particular, for the problem of test collection construction for IR system evaluation, we demonstrate a highly efficient algorithm for simultaneously obtaining both relevant documents and relevant information. Our technique exploits the mutually reinforcing relationship between relevant documents and relevant information, yielding document-based test collections whose efficiency and efficacy exceed those of typical Cranfield-style test collections, while also generating sets of highly relevant information.
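A minimal sketch of the mutually reinforcing relationship, written as a HITS-style iteration over a document-contains-nugget matrix; this is our simplification of the paper's algorithm, which additionally drives the judging process itself:

```python
import numpy as np

def reinforce(contains: np.ndarray, iters: int = 50):
    """contains[d, n] = 1 if document d contains nugget n. Documents score
    highly if they contain relevant nuggets; nuggets score highly if they
    occur in relevant documents."""
    doc = np.ones(contains.shape[0])
    nug = np.ones(contains.shape[1])
    for _ in range(iters):
        nug = contains.T @ doc          # nugget relevance from supporting documents
        nug /= np.linalg.norm(nug)
        doc = contains @ nug            # document relevance from contained nuggets
        doc /= np.linalg.norm(doc)
    return doc, nug

C = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
print(reinforce(C))
```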
{"title":"Constructing test collections by inferring document relevance via extracted relevant information","authors":"Shahzad Rajput, Matthew Ekstrand-Abueg, Virgil Pavlu, J. Aslam","doi":"10.1145/2396761.2396783","DOIUrl":"https://doi.org/10.1145/2396761.2396783","url":null,"abstract":"The goal of a typical information retrieval system is to satisfy a user's information need---e.g., by providing an answer or information \"nugget\"---while the actual search space of a typical information retrieval system consists of documents---i.e., collections of nuggets. In this paper, we characterize this relationship between nuggets and documents and discuss applications to system evaluation. In particular, for the problem of test collection construction for IR system evaluation, we demonstrate a highly efficient algorithm for simultaneously obtaining both relevant documents and relevant information. Our technique exploits the mutually reinforcing relationship between relevant documents and relevant information, yielding document-based test collections whose efficiency and efficacy exceed those of typical Cranfield-style test collections, while also generating sets of highly relevant information.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131356785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xin Xin, Irwin King, Ritesh Agrawal, Michael R. Lyu, Heyan Huang
Traditionally, click models predict the click-through rate (CTR) of an advertisement (ad) independently of other ads. Recent research, however, indicates that the CTR of an ad depends not only on the quality of the ad itself but also on that of its neighboring ads. Using historical click-through data from a commercial ad server, we identify two types of influence (competing and collaborating) among sponsored ads and propose a novel click model, the Full Relation Model (FRM), which explicitly models dependencies between ads. On test data, FRM shows a significant improvement in CTR prediction compared to earlier click models.
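One hedged way to picture the modeling idea (not FRM's actual parameterization) is a logistic CTR model whose input includes neighbor qualities, where the sign of the learned neighbor weight distinguishes competition from collaboration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ctr(own_quality, neighbor_qualities, w_own=2.0, w_nbr=-0.4, bias=-3.0):
    # w_nbr < 0 models competing ads; w_nbr > 0 would model collaboration.
    # The weights here are made-up values, not fitted parameters.
    return sigmoid(bias + w_own * own_quality + w_nbr * sum(neighbor_qualities))

print(ctr(0.9, [0.8, 0.7]))  # a strong ad next to strong competitors
print(ctr(0.9, [0.1, 0.2]))  # the same ad next to weak neighbors clicks more
```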
{"title":"Do ads compete or collaborate?: designing click models with full relationship incorporated","authors":"Xin Xin, Irwin King, Ritesh Agrawal, Michael R. Lyu, Heyan Huang","doi":"10.1145/2396761.2398528","DOIUrl":"https://doi.org/10.1145/2396761.2398528","url":null,"abstract":"Traditionally click models predict click-through rate (CTR) of an advertisement (ad) independent of other ads. Recent researches however indicate that the CTR of an ad is dependent on the quality of the ad itself but also of the neighboring ads. Using historical click-through data of a commercially available ad server, we identify two types (competing and collaborating) of influences among sponsored ads and further propose a novel click-model, Full Relation Model (FRM), which explicitly models dependencies between ads. On a test data, FRM shows significant improvement in CTR prediction as compared to earlier click models.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130000944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A key step in mobile app usage analysis is to classify apps into predefined categories. However, effectively classifying mobile apps is a nontrivial task because of the limited contextual information available for the analysis. To this end, in this paper we propose an approach that first enriches the contextual information of mobile apps by exploiting additional Web knowledge from a Web search engine. Then, inspired by the observation that different types of mobile apps may be relevant to different real-world contexts, we also extract contextual features for mobile apps from the context-rich device logs of mobile users. Finally, we combine all the enriched contextual information in a Maximum Entropy model to train a mobile app classifier. Experimental results on the device logs of 443 mobile users clearly show that our approach outperforms two state-of-the-art benchmark methods by a significant margin.
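A compact sketch of the pipeline, assuming scikit-learn and hypothetical helper functions search_web and usage_context; a Maximum Entropy classifier corresponds to multinomial logistic regression, and the snippets and context strings below are made-up stand-ins for real search results and device logs.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def search_web(app_name):
    # Hypothetical stand-in for a search-engine API: snippets of top results.
    return {"MyFitnessPal": ["track calories diet health"],
            "Spotify": ["stream music playlists"],
            "Runkeeper": ["run tracking fitness health"]}.get(app_name, [app_name])

def usage_context(app_name):
    # Hypothetical stand-in for device-log features (time, place, connectivity).
    return "morning gym wifi" if app_name != "Spotify" else "evening commute"

def enrich(app_name):
    # One pseudo-document per app: name + Web snippets + usage context.
    return " ".join([app_name, *search_web(app_name), usage_context(app_name)])

model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit([enrich(a) for a in ["MyFitnessPal", "Spotify"]], ["health", "music"])
print(model.predict([enrich("Runkeeper")]))  # shared fitness terms -> "health"
```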
{"title":"Exploiting enriched contextual information for mobile app classification","authors":"Hengshu Zhu, Huanhuan Cao, Enhong Chen, Hui Xiong, Jilei Tian","doi":"10.1145/2396761.2398484","DOIUrl":"https://doi.org/10.1145/2396761.2398484","url":null,"abstract":"A key step for the mobile app usage analysis is to classify apps into some predefined categories. However, it is a nontrivial task to effectively classify mobile apps due to the limited contextual information available for the analysis. To this end, in this paper, we propose an approach to first enrich the contextual information of mobile apps by exploiting the additional Web knowledge from the Web search engine. Then, inspired by the observation that different types of mobile apps may be relevant to different real-world contexts, we also extract some contextual features for mobile apps from the context-rich device logs of mobile users. Finally, we combine all the enriched contextual information into a Maximum Entropy model for training a mobile app classifier. The experimental results based on 443 mobile users' device logs clearly show that our approach outperforms two state-of-the-art benchmark methods with a significant margin.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130042412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Existing search engines use the page as the unit of retrieval. They typically return a ranked list of pages, each being a search result containing the query keywords. This within-one-page constraint prevents the use of relationship information that is often available and greatly beneficial. To utilize relationship information and improve search precision, we explore cross-page search, where each answer is a logical page consisting of multiple closely related pages that collectively contain the query keywords. We have implemented a prototype, Cager, that provides cross-page search and visualization over a real dataset.
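As a toy illustration of the collective-coverage constraint only (a greedy sketch of our own over a link graph; Cager's actual grouping and ranking are not described here and are surely richer):

```python
def logical_page(query_terms: set, pages: dict, links: set):
    """pages: id -> set of terms; links: set of frozenset page-id pairs.
    Greedily grow a group of linked pages until it covers all query terms."""
    for seed, terms in pages.items():
        group, covered = [seed], set(terms) & query_terms
        while covered != query_terms:
            # extend the group with a linked page adding the most new terms
            candidates = [(len((pages[p] & query_terms) - covered), p)
                          for p in pages
                          if p not in group
                          and any(frozenset((p, g)) in links for g in group)]
            gain, best = max(candidates, default=(0, None))
            if not gain:
                break
            group.append(best)
            covered |= pages[best] & query_terms
        if covered == query_terms:
            return group   # a logical page: pages jointly containing all keywords
    return None

pages = {1: {"cikm", "2012"}, 2: {"maui"}, 3: {"hotel"}}
links = {frozenset((1, 2)), frozenset((2, 3))}
print(logical_page({"cikm", "maui"}, pages, links))  # -> [1, 2]
```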
{"title":"Cager: a framework for cross-page search","authors":"Zhumin Chen, Byron J. Gao, Qi Kang","doi":"10.1145/2396761.2398733","DOIUrl":"https://doi.org/10.1145/2396761.2398733","url":null,"abstract":"Existing search engines have page as the unit of information of retrieval. They typically return a ranked list of pages, each being a search result containing the query keywords. This within-one-page constraint disallows utilization of relationship information that is often available and greatly beneficial. To utilize relationship information and improve search precision, we explore cross-page search, where each answer is a logical page consisting of multiple closely related pages that collectively contain the query keywords. We have implemented a prototype Cager, providing cross-page search and visualization over real dataset.","PeriodicalId":313414,"journal":{"name":"Proceedings of the 21st ACM international conference on Information and knowledge management","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130143284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}