
Proceedings of the 13th International Conference on Web Search and Data Mining: Latest Publications

WebShapes: Network Visualization with 3D Shapes
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371867
Shengmin Jin, Richard Wituszynski, Max Caiello-Gingold, R. Zafarani
Network visualization has played a critical role in graph analysis, as it not only presents a big picture of a network but also helps reveal its structural information. The most popular visual representation of networks is the node-link diagram. However, visualizing a large network with a node-link diagram can be challenging due to the difficulty of obtaining an optimal graph layout. To address this challenge, a recent advancement in network representation, the network shape, allows one to compactly represent a network and its subgraphs with the distribution of their embeddings. Inspired by this research, we have designed a web platform, WebShapes, that enables researchers and practitioners to visualize their network data as customized 3D shapes (http://b.link/webshapes). Furthermore, we provide a case study on real-world networks to explore the sensitivity of network shapes to different graph sampling, embedding, and fitting methods, and we show examples of understanding networks through their network shapes.
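The core idea of a "network shape" is to summarize a graph by the distribution of its node embeddings rather than by a node-link layout. A minimal numpy sketch of that idea, under the assumption of placeholder random embeddings and the simplest possible fit (a single Gaussian, whose covariance ellipsoid could be rendered as a 3D shape):

```python
import numpy as np

# Illustrative sketch only: stand-in random node embeddings, not real
# graph embeddings, and a Gaussian fit as the simplest distributional
# summary. The actual system supports richer fitting methods.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 3))  # 500 nodes embedded in 3-D

mean = embeddings.mean(axis=0)          # center of the "shape"
cov = np.cov(embeddings, rowvar=False)  # spread/orientation of the "shape"

# The eigen-decomposition of the covariance gives the principal axes,
# i.e. the ellipsoid one could render as a 3-D network shape.
axis_lengths, axes = np.linalg.eigh(cov)
```

Subgraphs sampled from the same network would each contribute their own fitted distribution, letting shapes be compared across sampling, embedding, and fitting choices.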
{"title":"WebShapes: Network Visualization with 3D Shapes","authors":"Shengmin Jin, Richard Wituszynski, Max Caiello-Gingold, R. Zafarani","doi":"10.1145/3336191.3371867","DOIUrl":"https://doi.org/10.1145/3336191.3371867","url":null,"abstract":"Network visualization has played a critical role in graph analysis, as it not only presents a big picture of a network but also helps reveal the structural information of a network. The most popular visual representation of networks is the node-link diagram. However, visualizing a large network with the node-link diagram can be challenging due to the difficulty in obtaining an optimal graph layout. To address this challenge, a recent advancement in network representation: network shape, allows one to compactly represent a network and its subgraphs with the distribution of their embeddings. Inspired by this research, we have designed a web platform WebShapes that enables researchers and practitioners to visualize their network data as customized 3D shapes (http://b.link/webshapes). Furthermore, we provide a case study on real-world networks to explore the sensitivity of network shapes to different graph sampling, embedding, and fitting methods, and we show examples of understanding networks through their network shapes.","PeriodicalId":319008,"journal":{"name":"Proceedings of the 13th International Conference on Web Search and Data Mining","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130668760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adversarial Learning to Compare: Self-Attentive Prospective Customer Recommendation in Location based Social Networks
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371841
Ruirui Li, Xian Wu, Wei Wang
Recommendation systems tend to suffer severely from sparse training data. A large portion of users and items usually have a very limited number of training instances. This data sparsity issue prevents us from accurately understanding users' preferences and items' characteristics, and eventually jeopardizes recommendation performance. In addition, models trained with sparse data lack abundant training support and tend to be vulnerable to adversarial perturbations, which implies possibly large generalization errors. In this work, we investigate the recommendation task in the context of prospective customer recommendation in location based social networks. To comprehensively utilize the training data, we explicitly learn to compare users' historical check-in businesses using self-attention mechanisms. To enhance the robustness of a recommender system and improve its generalization performance, we perform adversarial training. Adversarial perturbations are dynamically constructed during training, and models are trained to be tolerant of such nuisance perturbations. In a nutshell, we introduce a Self-Attentive prospective Customer RecommendAtion framework, SACRA, which learns to recommend by making comparisons among users' historical check-ins with adversarial training. To evaluate the proposed model, we conduct a series of experiments to extensively compare it with 12 existing methods on two real-world datasets. The results demonstrate that SACRA significantly outperforms all baselines.
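The perturbation step of adversarial training can be sketched generically. This is a hedged illustration of one standard construction (an FGSM-style worst-case perturbation under an L-infinity budget), not the authors' exact SACRA procedure:

```python
import numpy as np

# Sketch: perturb an input embedding in the sign of the loss gradient,
# bounded by epsilon; the model is then trained on the perturbed input
# so it becomes tolerant of such nuisance perturbations.
def fgsm_perturb(embedding, grad, epsilon=0.05):
    """Return the worst-case L-infinity-bounded perturbation of `embedding`."""
    return embedding + epsilon * np.sign(grad)

emb = np.array([0.2, -0.5, 1.0])
grad = np.array([0.3, -0.1, 0.0])   # pretend gradient of the loss w.r.t. emb
adv = fgsm_perturb(emb, grad)
```

In the paper's setting the perturbations are constructed dynamically each training step, so the model never overfits to a fixed noise pattern.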
{"title":"Adversarial Learning to Compare: Self-Attentive Prospective Customer Recommendation in Location based Social Networks","authors":"Ruirui Li, Xian Wu, Wei Wang","doi":"10.1145/3336191.3371841","DOIUrl":"https://doi.org/10.1145/3336191.3371841","url":null,"abstract":"Recommendation systems tend to suffer severely from the sparse training data. A large portion of users and items usually have a very limited number of training instances. The data sparsity issue prevents us from accurately understanding users' preferences and items' characteristics and jeopardize the recommendation performance eventually. In addition, models, trained with sparse data, lack abundant training supports and tend to be vulnerable to adversarial perturbations, which implies possibly large errors in generalization. In this work, we investigate the recommendation task in the context of prospective customer recommendation in location based social networks. To comprehensively utilize the training data, we explicitly learn to compare users' historical check-in businesses utilizing self-attention mechanisms. To enhance the robustness of a recommender system and improve its generalization performance, we perform adversarial training. Adversarial perturbations are dynamically constructed during training and models are trained to be tolerant of such nuisance perturbations. In a nutshell, we introduce a Self-Attentive prospective Customer RecommendAtion framework, SACRA, which learns to recommend by making comparisons among users' historical check-ins with adversarial training. To evaluate the proposed model, we conduct a series of experiments to extensively compare with 12 existing methods using two real-world datasets. 
The results demonstrate that SACRA significantly outperforms all baselines.","PeriodicalId":319008,"journal":{"name":"Proceedings of the 13th International Conference on Web Search and Data Mining","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126999163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Learning from Heterogeneous Networks: Methods and Applications
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3372182
Chuxu Zhang
Complex systems in different disciplines are usually modeled as heterogeneous networks. Different from homogeneous networks or attributed networks, heterogeneous networks are associated with complexity in heterogeneous structure, heterogeneous content, or both. The abundant information in heterogeneous networks provides opportunities yet poses challenges for researchers and practitioners developing customized machine learning solutions for different problems in complex systems. We are motivated to do significant work on learning from heterogeneous networks. In this paper, we first introduce the motivation and background of this research. Then, we present our current work, which includes a series of proposed methods and applications. These methods are introduced from the perspectives of personalization in web-based systems and heterogeneous network embedding. In the end, we raise several research directions as a future agenda.
{"title":"Learning from Heterogeneous Networks: Methods and Applications","authors":"Chuxu Zhang","doi":"10.1145/3336191.3372182","DOIUrl":"https://doi.org/10.1145/3336191.3372182","url":null,"abstract":"Complex systems in different disciplines are usually modeled as heterogeneous networks. Different from homogeneous networks or attributed networks, heterogeneous networks are associated with complexity in heterogeneous structure or heterogeneous content or both. The abundant information in heterogeneous networks provide opportunities yet pose challenges for researchers and practitioners to develop customized machine learning solutions for solving different problems in complex systems. We are motivated to do significant work for learning from heterogeneous networks. In this paper, we first introduce the motivation and background of this research. Later, we present our current work which include a series of proposed methods and applications. These methods will be introduced in the perspectives of personalization in web-based systems and heterogeneous network embedding. In the end, we raise several research directions as future agenda.","PeriodicalId":319008,"journal":{"name":"Proceedings of the 13th International Conference on Web Search and Data Mining","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131950483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Deep Bayesian Data Mining
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371870
Jen-Tzung Chien
This tutorial addresses the fundamentals and advances in deep Bayesian mining and learning for natural language, with ubiquitous applications ranging from speech recognition to document summarization, text classification, text segmentation, information extraction, image caption generation, sentence generation, dialogue control, sentiment classification, recommendation systems, question answering and machine translation, to name a few. Traditionally, "deep learning" is taken to be a learning process where the inference or optimization is based on a real-valued deterministic model. The "semantic structure" in words, sentences, entities, actions and documents drawn from a large vocabulary may not be well expressed or correctly optimized in mathematical logic or computer programs. The "distribution function" in a discrete or continuous latent variable model for natural language may not be properly decomposed or estimated. This tutorial addresses the fundamentals of statistical models and neural networks, and focuses on a series of advanced Bayesian models and deep models including the hierarchical Dirichlet process, Chinese restaurant process, hierarchical Pitman-Yor process, Indian buffet process, recurrent neural network (RNN), long short-term memory, sequence-to-sequence model, variational auto-encoder (VAE), generative adversarial network (GAN), attention mechanism, memory-augmented neural network, skip neural network, temporal difference VAE, stochastic neural network, stochastic temporal convolutional network, predictive state neural network, and policy neural network. Enhancing the prior/posterior representation is also addressed. We present how these models are connected and why they work for a variety of applications on symbolic and complex patterns in natural language. Variational inference and sampling methods are formulated to tackle the optimization of complicated models. Word and sentence embeddings, clustering and co-clustering are merged with linguistic and semantic constraints. A series of case studies, tasks and applications are presented to tackle different issues in deep Bayesian mining, searching, learning and understanding. Finally, we point out a number of directions and outlooks for future studies. This tutorial serves to introduce novices to major topics within deep Bayesian learning, motivate and explain a topic of emerging importance for data mining and natural language understanding, and present a novel synthesis combining distinct lines of machine learning work.
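Among the models listed, the VAE's training hinges on the reparameterization trick, which is what makes variational inference compatible with gradient-based deep learning. A minimal numpy sketch of the standard formulation (our illustration, not code from the tutorial):

```python
import numpy as np

# Sketch of the reparameterization trick: draw z = mu + sigma * eps with
# eps ~ N(0, I). The randomness is isolated in eps, so gradients can flow
# through mu and log_var, which a neural encoder would output.
rng = np.random.default_rng(1)

def reparameterize(mu, log_var):
    eps = rng.standard_normal(mu.shape)     # external noise source
    return mu + np.exp(0.5 * log_var) * eps # sigma = exp(log_var / 2)

mu = np.zeros(4)        # posterior mean from the encoder (placeholder)
log_var = np.zeros(4)   # log-variance from the encoder (sigma = 1)
z = reparameterize(mu, log_var)
```

The same trick underlies several of the stochastic networks listed above; discrete latent variables need a different relaxation (e.g. Gumbel-Softmax).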
{"title":"Deep Bayesian Data Mining","authors":"Jen-Tzung Chien","doi":"10.1145/3336191.3371870","DOIUrl":"https://doi.org/10.1145/3336191.3371870","url":null,"abstract":"This tutorial addresses the fundamentals and advances in deep Bayesian mining and learning for natural language with ubiquitous applications ranging from speech recognition to document summarization, text classification, text segmentation, information extraction, image caption generation, sentence generation, dialogue control, sentiment classification, recommendation system, question answering and machine translation, to name a few. Traditionally, \"deep learning\" is taken to be a learning process where the inference or optimization is based on the real-valued deterministic model. The \"semantic structure\" in words, sentences, entities, actions and documents drawn from a large vocabulary may not be well expressed or correctly optimized in mathematical logic or computer programs. The \"distribution function\" in discrete or continuous latent variable model for natural language may not be properly decomposed or estimated. This tutorial addresses the fundamentals of statistical models and neural networks, and focus on a series of advanced Bayesian models and deep models including hierarchical Dirichlet process, Chinese restaurant process, hierarchical Pitman-Yor process, Indian buffet process, recurrent neural network (RNN), long short-term memory, sequence-to-sequence model, variational auto-encoder (VAE), generative adversarial network (GAN), attention mechanism, memory-augmented neural network, skip neural network, temporal difference VAE, stochastic neural network, stochastic temporal convolutional network, predictive state neural network, and policy neural network. Enhancing the prior/posterior representation is addressed. We present how these models are connected and why they work for a variety of applications on symbolic and complex patterns in natural language. 
The variational inference and sampling method are formulated to tackle the optimization for complicated models. The word and sentence embeddings, clustering and co-clustering are merged with linguistic and semantic constraints. A series of case studies, tasks and applications are presented to tackle different issues in deep Bayesian mining, searching, learning and understanding. At last, we will point out a number of directions and outlooks for future studies. This tutorial serves the objectives to introduce novices to major topics within deep Bayesian learning, motivate and explain a topic of emerging importance for data mining and natural language understanding, and present a novel synthesis combining distinct lines of machine learning work.","PeriodicalId":319008,"journal":{"name":"Proceedings of the 13th International Conference on Web Search and Data Mining","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129738102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LARA: Attribute-to-feature Adversarial Learning for New-item Recommendation
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371805
Changfeng Sun, Han Liu, Meng Liu, Z. Ren, Tian Gan, Liqiang Nie
Recommending new items in real-world e-commerce portals is a challenging problem due to the cold-start phenomenon, i.e., the lack of user-item interactions. To address this problem, we propose a novel recommendation model, an adversarial neural network with multiple generators, to generate users from multiple perspectives of items' attributes. Namely, the generated users are represented by attribute-level features. As both users and items have attribute-level representations, we can implicitly obtain user-item attribute-level interaction information. In light of this, a new item can be recommended to users based on attribute-level similarity. Extensive experimental results on two item cold-start scenarios, movie and goods recommendation, verify the effectiveness of our proposed model as compared to state-of-the-art baselines.
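The attribute-to-feature direction can be sketched abstractly: each generator maps an item's attribute vector to a synthetic user representation, and using several generators yields multiple "perspectives" on the same attributes. All dimensions and the linear-plus-tanh mapping below are placeholder assumptions, not the paper's architecture:

```python
import numpy as np

# Hedged sketch: three toy "generators", each a random linear map from a
# 6-dim attribute space to an 8-dim feature space, squashed by tanh to
# keep features bounded. Real generators would be trained adversarially.
rng = np.random.default_rng(2)

n_attrs, feat_dim, n_generators = 6, 8, 3
generators = [rng.normal(size=(n_attrs, feat_dim)) for _ in range(n_generators)]

item_attrs = np.zeros(n_attrs)
item_attrs[[1, 4]] = 1.0                 # a new item with attributes 1 and 4

# One attribute-level virtual user per generator (per "perspective").
virtual_users = [np.tanh(item_attrs @ W) for W in generators]
```

A new item can then be matched to real users by similarity between their attribute-level features and these generated ones, sidestepping the missing interaction history.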
{"title":"LARA: Attribute-to-feature Adversarial Learning for New-item Recommendation","authors":"Changfeng Sun, Han Liu, Meng Liu, Z. Ren, Tian Gan, Liqiang Nie","doi":"10.1145/3336191.3371805","DOIUrl":"https://doi.org/10.1145/3336191.3371805","url":null,"abstract":"Recommending new items in real-world e-commerce portals is a challenging problem as the cold start phenomenon, i.e., lacks of user-item interactions. To address this problem, we propose a novel recommendation model, i.e., adversarial neural network with multiple generators, to generate users from multiple perspectives of items' attributes. Namely, the generated users are represented by attribute-level features. As both users and items are attribute-level representations, we can implicitly obtain user-item attribute-level interaction information. In light of this, the new item can be recommended to users based on attribute-level similarity. Extensive experimental results on two item cold-start scenarios, movie and goods recommendation, verify the effectiveness of our proposed model as compared to state-of-the-art baselines.","PeriodicalId":319008,"journal":{"name":"Proceedings of the 13th International Conference on Web Search and Data Mining","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129970139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
Deep Multi-Graph Clustering via Attentive Cross-Graph Association
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371806
Dongsheng Luo, Jingchao Ni, Suhang Wang, Yuchen Bian, Xiong Yu, Xiang Zhang
Multi-graph clustering aims to improve clustering accuracy by leveraging information from different domains, which has been shown to be extremely effective for achieving better clustering results than single-graph based clustering algorithms. Despite this previous success, existing multi-graph clustering methods mostly use shallow models, which are incapable of capturing the highly non-linear structures and the complex cluster associations in multi-graph data, thus leading to sub-optimal results. Inspired by the powerful representation learning capability of neural networks, in this paper we propose an end-to-end deep learning model to simultaneously infer cluster assignments and cluster associations in multi-graph data. Specifically, we use autoencoding networks to learn node embeddings. Meanwhile, we propose a minimum-entropy based clustering strategy to cluster nodes in the embedding space of each graph. We introduce two regularizers to leverage both within-graph and cross-graph dependencies. An attentive mechanism is further developed to learn cross-graph cluster associations. Through extensive experiments on a variety of datasets, we observe that our method outperforms state-of-the-art baselines by a large margin.
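A minimum-entropy clustering signal can be illustrated generically: soft-assign embedded nodes to centroids, then measure the entropy of the assignments, where lower entropy means sharper, more confident clusters. This is our simplified reading of the idea, not the paper's exact loss:

```python
import numpy as np

# Sketch: distance-based soft assignments plus an entropy measure that a
# minimum-entropy strategy would drive down during training.
def soft_assign(embeddings, centroids, temperature=1.0):
    d = ((embeddings[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    logits = -d / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)       # rows sum to 1

def mean_entropy(p, eps=1e-12):
    return float(-(p * np.log(p + eps)).sum(axis=1).mean())

emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])  # toy node embeddings
cents = np.array([[0.0, 0.0], [5.0, 5.0]])            # two cluster centroids
p = soft_assign(emb, cents)
h = mean_entropy(p)   # near zero here: every node sits close to one centroid
```

In the full model, such per-graph assignments would be regularized jointly with cross-graph cluster associations.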
{"title":"Deep Multi-Graph Clustering via Attentive Cross-Graph Association","authors":"Dongsheng Luo, Jingchao Ni, Suhang Wang, Yuchen Bian, Xiong Yu, Xiang Zhang","doi":"10.1145/3336191.3371806","DOIUrl":"https://doi.org/10.1145/3336191.3371806","url":null,"abstract":"Multi-graph clustering aims to improve clustering accuracy by leveraging information from different domains, which has been shown to be extremely effective for achieving better clustering results than single graph based clustering algorithms. Despite the previous success, existing multi-graph clustering methods mostly use shallow models, which are incapable to capture the highly non-linear structures and the complex cluster associations in multi-graph, thus result in sub-optimal results. Inspired by the powerful representation learning capability of neural networks, in this paper, we propose an end-to-end deep learning model to simultaneously infer cluster assignments and cluster associations in multi-graph. Specifically, we use autoencoding networks to learn node embeddings. Meanwhile, we propose a minimum-entropy based clustering strategy to cluster nodes in the embedding space for each graph. We introduce two regularizers to leverage both within-graph and cross-graph dependencies. An attentive mechanism is further developed to learn cross-graph cluster associations. 
Through extensive experiments on a variety of datasets, we observe that our method outperforms state-of-the-art baselines by a large margin.","PeriodicalId":319008,"journal":{"name":"Proceedings of the 13th International Conference on Web Search and Data Mining","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130346176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Recurrent Memory Reasoning Network for Expert Finding in Community Question Answering
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371817
Jinlan Fu, Yi Li, Qi Zhang, Qinzhuo Wu, Renfeng Ma, Xuanjing Huang, Yu-Gang Jiang
Expert finding is a task designed to enable recommendation of the right person who can provide high-quality answers to a requester's question. Most previous works involve content-based recommendation, which only superficially comprehends the relevance between a requester's question and the expertise of candidate experts by exploring the content or topic similarity between the requester's question and the candidate experts' historical answers. However, if a candidate expert has never answered a question similar to the requester's question, then existing methods have difficulty making a correct recommendation. Therefore, exploring the implicit relevance between a requester's question and a candidate expert's historical records through perception and reasoning should be taken into consideration. In this study, we propose a novel recurrent memory reasoning network (RMRN) to perform this task. The method focuses on different parts of a question and accordingly retrieves information from the candidate expert's histories. Since only a small percentage of historical records are relevant to any requester's question, we introduce a Gumbel-Softmax-based mechanism to select relevant historical records from candidate experts' answering histories. To evaluate the proposed method, we constructed two large-scale datasets drawn from Stack Overflow and Yahoo! Answers. Experimental results on the constructed datasets demonstrate that the proposed method achieves better performance than existing state-of-the-art methods.
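The Gumbel-Softmax mechanism mentioned above is a standard trick for differentiable discrete selection; a minimal numpy sketch of the usual formulation (the paper's use over answering histories is more involved):

```python
import numpy as np

# Sketch: add Gumbel(0, 1) noise to relevance logits and apply a
# temperature-controlled softmax. Low temperature tau makes the result
# near-one-hot, approximating a hard pick of one historical record while
# remaining differentiable with respect to the logits.
rng = np.random.default_rng(3)

def gumbel_softmax(logits, tau=0.5):
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
    y = (logits + g) / tau
    y -= y.max()                                          # numerical stability
    e = np.exp(y)
    return e / e.sum()

relevance_logits = np.array([2.0, 0.1, -1.0, 0.5])  # one score per record
weights = gumbel_softmax(relevance_logits)          # soft selection weights
```

During training, tau can be annealed downward so that selections sharpen as the relevance scores become reliable.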
{"title":"Recurrent Memory Reasoning Network for Expert Finding in Community Question Answering","authors":"Jinlan Fu, Yi Li, Qi Zhang, Qinzhuo Wu, Renfeng Ma, Xuanjing Huang, Yu-Gang Jiang","doi":"10.1145/3336191.3371817","DOIUrl":"https://doi.org/10.1145/3336191.3371817","url":null,"abstract":"Expert finding is a task designed to enable recommendation of the right person who can provide high-quality answers to a requester's question. Most previous works try to involve a content-based recommendation, which only superficially comprehends the relevance between a requester's question and the expertise of candidate experts by exploring the content or topic similarity between the requester's question and the candidate experts' historical answers. However, if a candidate expert has never answered a question similar to the requester's question, then existing methods have difficulty making a correct recommendation. Therefore, exploring the implicit relevance between a requester's question and a candidate expert's historical records by perception and reasoning should be taken into consideration. In this study, we propose a novel textslrecurrent memory reasoning network (RMRN) to perform this task. This method focuses on different parts of a question, and accordingly retrieves information from the histories of the candidate expert.Since only a small percentage of historical records are relevant to any requester's question, we introduce a Gumbel-Softmax-based mechanism to select relevant historical records from candidate experts' answering histories. To evaluate the proposed method, we constructed two large-scale datasets drawn from Stack Overflow and Yahoo! Answer. 
Experimental results on the constructed datasets demonstrate that the proposed method could achieve better performance than existing state-of-the-art methods.","PeriodicalId":319008,"journal":{"name":"Proceedings of the 13th International Conference on Web Search and Data Mining","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114212491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Interpretable Click-Through Rate Prediction through Hierarchical Attention
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371785
Zeyu Li, Wei Cheng, Yang Chen, Haifeng Chen, Wei Wang
Click-through rate (CTR) prediction is a critical task in online advertising and marketing. For this problem, existing approaches, with shallow or deep architectures, have three major drawbacks. First, they typically lack persuasive rationales to explain the outcomes of the models. Unexplainable predictions and recommendations may be difficult to validate and are thus unreliable and untrustworthy. In many applications, inappropriate suggestions may even bring severe consequences. Second, existing approaches are inefficient in analyzing high-order feature interactions. Third, the polysemy of feature interactions in different semantic subspaces is largely ignored. In this paper, we propose InterHAt, which employs a Transformer with multi-head self-attention for feature learning. On top of that, hierarchical attention layers are utilized to predict CTR while simultaneously providing interpretable insights into the prediction results. InterHAt captures high-order feature interactions through an efficient attentional aggregation strategy with low computational complexity. Extensive experiments on four public real datasets and one synthetic dataset demonstrate the effectiveness and efficiency of InterHAt.
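Attentional aggregation in general scores each field embedding against a query vector and takes the weighted sum, so the weights double as an interpretation of which features drove the prediction. A generic scaled dot-product sketch with placeholder dimensions (not InterHAt's actual layers):

```python
import numpy as np

# Sketch: pool 5 field embeddings into one vector. The softmax weights
# are the interpretable part: they say how much each field contributed.
rng = np.random.default_rng(4)

def attentive_pool(features, context):
    scores = features @ context / np.sqrt(features.shape[1])  # scaled dot-product
    scores -= scores.max()                                    # numerical stability
    w = np.exp(scores)
    w /= w.sum()
    return w, w @ features     # (attention weights, pooled representation)

feats = rng.normal(size=(5, 8))  # 5 field embeddings of dimension 8
ctx = rng.normal(size=8)         # stand-in for a learned query/context vector
weights, pooled = attentive_pool(feats, ctx)
```

Stacking such layers, with each level attending over interactions of the previous level's outputs, is what makes the aggregation hierarchical and keeps high-order interactions cheap.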
Interpretable Click-Through Rate Prediction through Hierarchical Attention
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371785
Zeyu Li, Wei Cheng, Yang Chen, Haifeng Chen, Wei Wang
Citations: 89
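The attentional aggregation the abstract describes can be illustrated with a minimal NumPy sketch. This is not InterHAt's implementation — the single learnable context vector, the field count, and the embedding dimension are all illustrative assumptions — but it shows how one set of attention weights both pools feature-field embeddings and doubles as per-field importance scores, which is the basis of the interpretability claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(field_emb, context):
    """Aggregate feature-field embeddings with attention.

    field_emb: (num_fields, dim) embeddings of the feature fields
    context:   (dim,) context vector (learnable in a real model; random here)
    """
    weights = softmax(field_emb @ context)  # (num_fields,) importance per field
    pooled = weights @ field_emb            # (dim,) attention-weighted sum
    return pooled, weights

fields = rng.normal(size=(4, 8))   # 4 feature fields, 8-dim embeddings (assumed)
context = rng.normal(size=8)
pooled, weights = attention_pool(fields, context)
print("field importances:", weights.round(3))
print("pooled shape:", pooled.shape)
```

Because the weights are a softmax over the fields, they sum to 1 and can be read off directly as an importance distribution over the input features, rather than requiring a post-hoc explanation step.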
Context-aware Deep Model for Joint Mobility and Time Prediction
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371837
Yile Chen, Cheng Long, G. Cong, Chenliang Li
Mobility prediction, the task of predicting where a user will arrive based on the user's historical mobility records, has attracted much attention. We argue that in many scenarios, such as targeted advertising and taxi service, it is more useful to know not only where but also when a user will arrive next. In this paper, we propose a novel context-aware deep model called DeepJMT for jointly performing mobility prediction (to know where) and time prediction (to know when). The DeepJMT model consists of (1) a hierarchical recurrent neural network (RNN) based sequential dependency encoder, which is more capable of capturing a user's mobility regularities and temporal patterns than vanilla RNN based models; (2) a spatial context extractor and a periodicity context extractor to extract location semantics and the user's periodicity, respectively; and (3) a co-attention based social & temporal context extractor which extracts mobility and temporal evidence from social relationships. Experiments conducted on three real-world datasets show that DeepJMT outperforms the state-of-the-art mobility prediction and time prediction methods.
Citations: 40
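The "where and when" framing can be made concrete with a small sketch: a shared sequence representation feeding two softmax heads, one over candidate locations and one over time slots. This is a deliberate simplification under assumed shapes, not DeepJMT's architecture (which uses a hierarchical RNN and several context extractors); it only illustrates how a single encoding can serve both prediction tasks.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# stand-in for the hidden state a sequence encoder would produce
# from a user's historical mobility records (16 dims, assumed)
hidden = rng.normal(size=16)

# two task-specific heads over the shared representation
W_loc = rng.normal(size=(50, 16))    # 50 candidate locations (assumed)
W_time = rng.normal(size=(24, 16))   # 24 hourly time slots (assumed)

p_location = softmax(W_loc @ hidden)  # where the user will go next
p_time = softmax(W_time @ hidden)     # when they will arrive

next_loc = int(p_location.argmax())
next_hour = int(p_time.argmax())
print(f"predicted location {next_loc} at hour {next_hour}")
```

In a joint model of this shape, training would minimize the sum of the two cross-entropy losses, so the shared encoder is pushed to carry evidence for both the location and the time of the next visit.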
Parameter Tuning in Personal Search Systems
Pub Date : 2020-01-20 DOI: 10.1145/3336191.3371820
S. Chen, Xuanhui Wang, Zhen Qin, Donald Metzler
Retrieval effectiveness in information retrieval systems is heavily dependent on how various parameters are tuned. One option to find these parameters is to run multiple online experiments, using a parameter sweep approach to optimize the search system. This approach has multiple downsides, the main one being that it may lead to a poor experience for users. Another option is to do offline evaluation, which can act as a safeguard against potential quality issues. Offline evaluation requires a validation set of data that can be benchmarked against different parameter settings. However, for search over personal corpora, e.g. email and file search, it is impractical and often impossible to obtain a complete representative validation set, due to the inability to save raw queries and document information. In this work, we show how to do offline parameter tuning with only a partial validation set. In addition, we demonstrate how to do parameter tuning both when we have complete knowledge of the internal implementation of the search system (white-box tuning) and when we have only partial knowledge (grey-box tuning). This has allowed us to do offline parameter tuning in a privacy-sensitive manner.
Citations: 1
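The offline parameter-sweep idea can be sketched in a few lines. The two-weight scoring function, the tiny validation set, and mean reciprocal rank (MRR) as the quality metric are all illustrative assumptions rather than the paper's setup; the point is the shape of the loop — score every parameter setting against held-out relevance judgments and keep the best — which is the grey-box case where only the final document scores are observable.

```python
from itertools import product

# toy validation set: per query, a list of (feature_a, feature_b) document
# feature pairs plus the index of the document the user actually clicked
validation = [
    ([(3.0, 0.2), (1.0, 0.9)], 0),
    ([(0.5, 0.1), (2.0, 0.8)], 1),
    ([(1.2, 0.7), (1.1, 0.2)], 0),
]

def mrr(w_a, w_b):
    """Mean reciprocal rank of the clicked document under weights (w_a, w_b)."""
    rr = 0.0
    for docs, clicked in validation:
        scores = [w_a * a + w_b * b for a, b in docs]
        rank = sorted(range(len(docs)), key=lambda i: -scores[i]).index(clicked) + 1
        rr += 1.0 / rank
    return rr / len(validation)

# exhaustive sweep over a small grid of candidate weights
grid = [0.0, 0.5, 1.0, 2.0]
best = max(product(grid, grid), key=lambda wb: mrr(*wb))
print("best weights:", best, "MRR:", round(mrr(*best), 3))
```

With a partial validation set, the same loop runs unchanged; the risk is only that the metric estimate is noisier, which is precisely the gap the paper's techniques are meant to address.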