
The World Wide Web Conference: Latest Publications

Augmenting Knowledge Tracing by Considering Forgetting Behavior
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3313565
Koki Nagatani, Qian Zhang, Masahiro Sato, Yan-Ying Chen, Francine Chen, T. Ohkuma
Computer-aided education systems are now seeking to provide each student with personalized materials based on the student's individual knowledge. To provide suitable learning materials, tracing each student's knowledge over a period of time is important. However, predicting each student's knowledge is difficult because students tend to forget. Forgetting is driven mainly by two factors: the lag time since the previous interaction and the number of past trials on a question. Although a few studies consider forgetting while modeling a student's knowledge, some models use only partial information about forgetting, whereas others use multiple forgetting-related features but ignore the student's learning sequence. In this paper, we focus on modeling and predicting a student's knowledge by considering their forgetting behavior. We extend the deep knowledge tracing model [17], a state-of-the-art sequential model for knowledge tracing, to account for forgetting by incorporating multiple types of forgetting-related information. Experiments on knowledge tracing datasets show that our proposed model improves predictive performance compared to baselines. Moreover, we show that combining multiple types of forgetting-related information yields further performance gains.
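As a rough illustration of the idea, the sketch below augments the standard DKT one-hot input for each interaction with two log-scaled forgetting features, lag time and past trial count; the feature layout, scaling, and the way the paper integrates these signals into the model are assumptions here and may differ from the published method.

```python
import numpy as np

NUM_SKILLS = 100  # assumed number of distinct questions/skills

def interaction_features(skill_id, correct, lag_seconds, past_trials):
    """Build one augmented DKT input step.

    The first 2*NUM_SKILLS entries are the usual DKT one-hot encoding of
    (skill, correctness); the two extra entries are log-scaled forgetting
    features: lag time since the last attempt and the number of past trials.
    """
    x = np.zeros(2 * NUM_SKILLS + 2)
    x[skill_id + (NUM_SKILLS if correct else 0)] = 1.0
    x[-2] = np.log1p(lag_seconds)   # lag time since the previous interaction
    x[-1] = np.log1p(past_trials)   # how often the question was tried before
    return x

# One student's sequence: (skill, correct, lag since last attempt [s], past trials)
sequence = [(3, 1, 0, 0), (3, 0, 3600, 1), (7, 1, 86400, 0)]
X = np.stack([interaction_features(*step) for step in sequence])
print(X.shape)  # (3, 202) -- ready to feed a recurrent model such as the one used by DKT
```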
Citations: 115
Can You Give Me a Reason?: Argument-inducing Online Forum by Argument Mining
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3314127
Makiko Ida, Gaku Morio, Kosui Iwasa, Tomoyuki Tatsumi, Takaki Yasui, K. Fujita
This demonstration paper presents an argument-inducing online forum that prompts participants who lack premises for their claims in online discussions. The proposed forum provides its participants with the following two subsystems: (1) The argument estimator for online discussions automatically generates a visualization of the argument structures in posts based on argument mining. The forum indicates structures such as claim-premise relations in real time by exploiting a state-of-the-art deep learning model. (2) The argument-inducing agent for online discussion (AIAD) automatically generates a reply post, based on the argument estimator, requesting further reasons to improve the participants' argumentation. Our experimental discussion demonstrates that the argument estimator can detect argument structures in online discussions and that AIAD can induce premises from participants. To the best of our knowledge, our argument-inducing online forum is the first approach to either visualize or request real-time arguments in online discussions. Our forum can be used to collect and induce claim-reason pairs, rather than only opinions, to understand the various lines of reasoning in online arguments such as civic discussions, online debates, and education. The argument estimator code is available at https://github.com/EdoFrank/EMNLP2018-ArgMining-Morio and the demonstration video is available at https://youtu.be/T9fNJfneQV8.
Citations: 3
Knowledge-aware Assessment of Severity of Suicide Risk for Early Intervention
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3313698
Manas Gaur, Amanuel Alambo, Joy Prakash Sain, Ugur Kursuncu, K. Thirunarayan, Ramakanth Kavuluru, A. Sheth, R. Welton, Jyotishman Pathak
Mental illnesses such as depression are a significant risk factor for suicidal ideation, behaviors, and attempts. A report by the Substance Abuse and Mental Health Services Administration (SAMHSA) shows that 80% of patients suffering from Borderline Personality Disorder (BPD) have suicidal behavior, 5-10% of whom commit suicide. While multiple initiatives have been developed and implemented for suicide prevention, a key challenge has been the social stigma associated with mental disorders, which deters patients from seeking help or sharing their experiences directly with others, including clinicians. This is particularly true for teenagers and young adults, among whom suicide is the second-leading cause of death in the US. Prior research involving surveys and questionnaires (e.g., PHQ-9) for suicide risk prediction has failed to provide a quantitative assessment of risk that can inform timely clinical decision-making for intervention. Our interdisciplinary study concerns the use of Reddit as an unobtrusive data source for gleaning information about suicidal tendencies and other related mental health conditions afflicting depressed users. We provide details of our learning framework, which incorporates domain-specific knowledge to predict the severity of suicide risk for an individual. Our approach involves developing a suicide risk severity lexicon, using medical knowledge bases and a suicide ontology, to detect cues relevant to suicidal thoughts and actions. We also use language modeling, medical entity recognition and normalization, and negation detection to create a dataset of 2181 redditors who have discussed or implied suicidal ideation, behavior, or attempts. Given the importance of clinical knowledge, our gold-standard dataset of 500 redditors (out of 2181) was developed by four practicing psychiatrists following the guidelines outlined in the Columbia Suicide Severity Rating Scale (C-SSRS), with a pairwise annotator agreement of 0.79 and group-wise agreement of 0.73. Compared to the existing four-label classification scheme (no risk, low risk, moderate risk, and high risk), our proposed C-SSRS-based 5-label classification scheme distinguishes people who are supportive from those who show different severities of suicidal tendency. Our 5-label classification scheme outperforms state-of-the-art schemes by improving graded recall by 4.2% and reducing the perceived risk measure by 12.5%. A convolutional neural network (CNN) provided the best performance in our scheme, owing to its discriminative features and use of domain-specific knowledge resources, compared to the SVM-L used in state-of-the-art tools on a similar dataset.
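The lexicon-driven cue detection the abstract mentions can be illustrated with a minimal sketch; the cue phrases and severity labels below are invented stand-ins loosely modelled on C-SSRS levels, not the lexicon actually built from medical knowledge bases in the paper.

```python
import re

# Illustrative, hand-made lexicon keyed by C-SSRS-inspired severity labels.
LEXICON = {
    "ideation": ["wish i were dead", "no reason to live"],
    "behavior": ["bought pills", "wrote a note"],
    "attempt":  ["tried to end my life", "overdosed"],
}
SEVERITY_ORDER = ["supportive", "indicator", "ideation", "behavior", "attempt"]

def severity_of(post: str) -> str:
    """Return the highest-severity label whose cues appear in the post."""
    text = post.lower()
    matched = [label for label, cues in LEXICON.items()
               if any(re.search(re.escape(cue), text) for cue in cues)]
    if not matched:
        return "supportive"  # default bucket when no risk cue is found
    return max(matched, key=SEVERITY_ORDER.index)

print(severity_of("I feel there is no reason to live anymore"))  # ideation
```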
Citations: 107
How Representative Is a SPARQL Benchmark? An Analysis of RDF Triplestore Benchmarks
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3313556
Muhammad Saleem, Gábor Szárnyas, Felix Conrads, Syed Ahmad Chan Bukhari, Qaiser Mehmood, A. N. Ngomo
Triplestores are data management systems for storing and querying RDF data. Over recent years, various benchmarks have been proposed to assess the performance of triplestores across different performance measures. However, choosing the most suitable benchmark for evaluating triplestores in practical settings is not a trivial task. This is because triplestores experience varying workloads when deployed in real applications. We address the problem of determining an appropriate benchmark for a given real-life workload by providing a fine-grained comparative analysis of existing triplestore benchmarks. In particular, we analyze the data and queries provided with the existing triplestore benchmarks in addition to several real-world datasets. Furthermore, we measure the correlation between the query execution time and various SPARQL query features and rank those features based on their significance levels. Our experiments reveal several interesting insights about the design of such benchmarks. With this fine-grained evaluation, we aim to support the design and implementation of more diverse benchmarks. Application developers can use our result to analyze their data and queries and choose a data management system.
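A minimal sketch of the correlation analysis the abstract describes: Spearman correlation between per-query runtimes and structural SPARQL query features, with the features ranked by significance. The feature names and numbers are toy stand-ins, not values from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy stand-in data: per-query runtimes (ms) and a few structural features.
runtimes = np.array([12.0, 480.0, 35.0, 900.0, 60.0, 150.0])
features = {
    "triple_patterns": np.array([2, 9, 3, 12, 4, 6]),
    "join_vertices":   np.array([1, 5, 2, 7, 2, 3]),
    "result_size":     np.array([10, 5000, 40, 20000, 100, 700]),
}

# Rank features by the significance of their correlation with runtime.
ranked = sorted(
    ((name, *spearmanr(vals, runtimes)) for name, vals in features.items()),
    key=lambda t: t[2],  # sort by p-value, most significant first
)
for name, rho, p in ranked:
    print(f"{name:16s} rho={rho:+.2f} p={p:.3f}")
```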
Citations: 48
QAnswer: A Question Answering prototype bridging the gap between a considerable part of the LOD cloud and end-users
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3314124
Dennis Diefenbach, Pedro Henrique Migliatti, Omar Qawasmeh, Vincent Lully, K. Singh, P. Maret
We present QAnswer, a Question Answering system that simultaneously queries three core Semantic Web datasets relevant to end-users: Wikidata with Lexemes, LinkedGeoData, and MusicBrainz. These datasets can be queried in English, German, French, Italian, Spanish, Portuguese, Arabic, and Chinese. Moreover, QAnswer falls back to the search engine Qwant when the answer to a question cannot be found in the datasets mentioned above. These features make QAnswer the first prototype of a Question Answering system over a considerable part of the LOD cloud.
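A minimal sketch of the query-then-fall-back behaviour, assuming the public Wikidata SPARQL endpoint and a hand-written query; QAnswer generates the SPARQL itself, and the Qwant fallback below is only a placeholder search URL rather than a call to any real Qwant API.

```python
from urllib.parse import quote_plus
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def ask_wikidata(sparql: str):
    """Run a SPARQL query against the public Wikidata endpoint."""
    r = requests.get(
        WIKIDATA_SPARQL,
        params={"query": sparql, "format": "json"},
        headers={"User-Agent": "qa-demo/0.1 (example)"},
    )
    r.raise_for_status()
    return r.json()["results"]["bindings"]

def answer(question_sparql: str, question_text: str):
    """Return SPARQL results, or a search-engine fallback when nothing is found."""
    rows = ask_wikidata(question_sparql)
    if rows:
        return rows
    # Placeholder fallback: hand the original question to a web search engine.
    return [{"fallback": "https://www.qwant.com/?q=" + quote_plus(question_text)}]

# Hand-written example query: the capital of France (wd:Q142, property wdt:P36).
q = """SELECT ?capitalLabel WHERE {
  wd:Q142 wdt:P36 ?capital .
  ?capital rdfs:label ?capitalLabel .
  FILTER(lang(?capitalLabel) = "en")
}"""
print(answer(q, "What is the capital of France?"))
```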
Citations: 31
Efficient Interaction-based Neural Ranking with Locality Sensitive Hashing
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3313576
Shiyu Ji, Jinjin Shao, Tao Yang
Interaction-based neural ranking has been shown to be effective for document search using distributed word representations. However, the time and space required for online query processing with neural ranking are very expensive. This paper investigates fast approximation of three interaction-based neural ranking algorithms using Locality Sensitive Hashing (LSH). It accelerates query-document interaction computation by using a runtime cache with precomputed term vectors, and speeds up kernel calculation by taking advantage of limited integer similarity values. This paper presents the design choices with a cost analysis, and an evaluation that assesses the efficiency benefits and relevance tradeoffs on the tested datasets.
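One common way to realise such an approximation is SimHash-style random-hyperplane LSH; the sketch below is an assumption in that spirit, not the paper's exact scheme: term vectors are reduced once to bit signatures (the runtime cache), and cosine similarity is then estimated from cheap Hamming distances.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, BITS = 300, 64                        # word-vector dimension, signature length
planes = rng.standard_normal((BITS, DIM))  # random hyperplanes shared by all terms

def signature(vec: np.ndarray) -> np.ndarray:
    """SimHash-style bit signature of a dense term vector."""
    return (planes @ vec > 0).astype(np.uint8)

def approx_cosine(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Estimate cos(a, b) from the Hamming distance between signatures."""
    hamming = np.count_nonzero(sig_a != sig_b)
    return float(np.cos(np.pi * hamming / BITS))

# Precompute signatures for the vocabulary once (the "runtime cache"); the
# query-document interaction then only touches small integer signatures.
vocab_vectors = {w: rng.standard_normal(DIM) for w in ["web", "search", "ranking"]}
cache = {w: signature(v) for w, v in vocab_vectors.items()}
print(approx_cosine(cache["web"], cache["search"]))
```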
Citations: 16
A Scalable Hybrid Research Paper Recommender System for Microsoft Academic
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3313700
Anshul Kanakia, Zhihong Shen, Darrin Eide, Kuansan Wang
We present the design and methodology for the large-scale hybrid paper recommender system used by Microsoft Academic. The system provides recommendations for approximately 160 million English research papers and patents. Our approach handles incomplete citation information while also alleviating the cold-start problem that often affects other recommender systems. We use the Microsoft Academic Graph (MAG), titles, and available abstracts of research papers to build a recommendation list for all documents, thereby combining co-citation and content-based approaches. Tuning system parameters allows blending and prioritization of each approach, which in turn lets us balance paper novelty against authority in the recommendation results. We evaluate the generated recommendations via a user study with 40 participants, in which over 2400 recommendation pairs were graded, and discuss the quality of the results using P@10 and nDCG scores. We see a strong correlation between participant scores and the similarity rankings produced by our system, but additional focus needs to be put toward improving recommender precision, particularly for content-based recommendations. The results of the user survey and the associated analysis scripts are made available via GitHub, and the recommendations produced by our system are available as part of the MAG on Azure to facilitate further research and light up novel research paper recommendation applications.
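A minimal sketch of how such a blending parameter might work, assuming min-max-normalised co-citation and content similarity scores and a single weight alpha; the production system's actual combination logic is not described in this abstract, so the details below are illustrative only.

```python
import numpy as np

def hybrid_scores(cocitation: np.ndarray, content: np.ndarray, alpha: float = 0.5):
    """Blend co-citation and content-based similarity for one query paper.

    Higher alpha favours the citation signal (authority); lower alpha favours
    text similarity (novelty). Both inputs are min-max normalised first so the
    two signals are on a comparable scale.
    """
    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return alpha * norm(cocitation) + (1 - alpha) * norm(content)

cocite = np.array([12, 0, 3, 7], dtype=float)  # co-citation counts with candidates
text   = np.array([0.91, 0.40, 0.75, 0.10])    # e.g. cosine over abstract embeddings
scores = hybrid_scores(cocite, text, alpha=0.6)
print(np.argsort(-scores))  # candidate indices, best recommendation first
```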
Citations: 58
Securing the Deep Fraud Detector in Large-Scale E-Commerce Platform via Adversarial Machine Learning Approach
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3313533
Qingyu Guo, Z. Li, Bo An, Pengrui Hui, Jiaming Huang, Long Zhang, Mengchen Zhao
Fraud transactions are one of the major threats faced by online e-commerce platforms. Recently, deep learning based classifiers have been deployed to detect fraud transactions. Inspired by findings on adversarial examples, this paper is the first to analyze the vulnerability of a deep fraud detector to slight perturbations on input transactions, which is very challenging since the sparsity and discretization of transaction data result in a non-convex discrete optimization problem. Inspired by the iterative Fast Gradient Sign Method (FGSM) for the L∞ attack, we first propose the Iterative Fast Coordinate Method (IFCM) for discrete L1 and L2 attacks, which efficiently generates large numbers of instances with satisfactory effectiveness. We then provide two novel attack algorithms to solve the discrete optimization. The first is the Augmented Iterative Search (AIS) algorithm, which repeatedly searches for effective “simple” perturbations. The second is called Rounded Relaxation with Reparameterization (R3), which rounds the solution obtained by solving a relaxed and unconstrained optimization problem with reparameterization tricks. Finally, we conduct an extensive experimental evaluation on the deployed fraud detector in Taobao, one of the largest e-commerce platforms in the world, with millions of real-world transactions. Results show that (i) the deployed detector is highly vulnerable to attacks, as the average precision is decreased from nearly 90% to as low as 20% with small perturbations; (ii) our proposed attacks significantly outperform adaptations of state-of-the-art attacks; and (iii) a model trained with an adversarial training process is significantly more robust against attacks and performs well on unperturbed data.
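For context, the iterative FGSM that the abstract cites as inspiration can be sketched as follows; grad_fn is a hypothetical stand-in for the detector's loss gradient, and this L∞ sketch does not reproduce the paper's IFCM, AIS, or R3 handling of discrete, sparse transaction features.

```python
import numpy as np

def iterative_fgsm(x, grad_fn, epsilon=0.1, alpha=0.01, steps=20):
    """Plain iterative FGSM under an L-infinity perturbation budget.

    grad_fn(x) must return the gradient of the detector's loss w.r.t. the
    input; each step moves along the gradient sign and projects back into
    the epsilon-ball around the original input.
    """
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project into budget
    return x_adv

# Toy example: the "loss" gradient pushes the input toward a target point,
# so the attack moves there as far as the budget allows.
target = np.array([1.0, -1.0, 0.5])
x0 = np.zeros(3)
print(iterative_fgsm(x0, grad_fn=lambda z: target - z, epsilon=0.05))
```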
Citations: 31
A Multi-modal Neural Embeddings Approach for Detecting Mobile Counterfeit Apps
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3313427
Jathushan Rajasegaran, Naveen Karunanayake, Ashanie Gunathillake, Suranga Seneviratne, Guillaume Jourjon
Counterfeit apps impersonate existing popular apps in an attempt to mislead users. Many counterfeits can be identified once installed; however, even a tech-savvy user may struggle to detect them before installation. In this paper, we propose a novel approach that combines content embeddings and style embeddings generated from pre-trained convolutional neural networks to detect counterfeit apps. We present an analysis of approximately 1.2 million apps from the Google Play Store and identify a set of potential counterfeits for the top 10,000 apps. Under conservative assumptions, we were able to find 2,040 potential counterfeits that contain malware in a set of 49,608 apps that showed high similarity to one of the top 10,000 popular apps in the Google Play Store. We also find 1,565 potential counterfeits asking for at least five more dangerous permissions than the original app and 1,407 potential counterfeits that include at least five extra third-party advertisement libraries.
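A plausible construction of the two embeddings, assuming feature maps taken from a pre-trained CNN: global average pooling for the content signal and a Gram matrix for the style signal (the usual style signature in neural style work). The paper's exact layers and combination may differ; random arrays stand in for real CNN activations of app icons.

```python
import numpy as np

def content_embedding(feature_map: np.ndarray) -> np.ndarray:
    """Global average pooling over spatial dims: (C, H, W) -> (C,)."""
    return feature_map.mean(axis=(1, 2))

def style_embedding(feature_map: np.ndarray) -> np.ndarray:
    """Flattened Gram matrix of channel activations."""
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w)
    return (flat @ flat.T / (h * w)).ravel()

def app_embedding(feature_map: np.ndarray) -> np.ndarray:
    return np.concatenate([content_embedding(feature_map), style_embedding(feature_map)])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for conv feature maps of two app icons (e.g. from a pretrained VGG layer).
icon_a = np.random.rand(64, 14, 14)
icon_b = np.random.rand(64, 14, 14)
print(cosine(app_embedding(icon_a), app_embedding(icon_b)))
```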
Citations: 5
TaxVis: a Visual System for Detecting Tax Evasion Group
Pub Date : 2019-05-13 DOI: 10.1145/3308558.3314144
Hongchao Yu, Huan He, Q. Zheng, Bo Dong
This demo presents TaxVis, a visual detection system for tax auditors. The system supports tax evasion group detection based on a two-phase detection approach. Unlike pattern-matching-based methods, this two-phase method can analyze suspicious groups automatically, without manual extraction of tax evasion patterns. In the first phase, we use the network embedding method node2vec to learn representations of corporations from a Corporation Associated Network (CANet), and use LightGBM to calculate a suspicious score for each corporation. In the second phase, the system uses three detection rules to analyze transaction anomalies around the suspicious corporations. From these transaction anomalies, we can discover potentially suspicious tax evasion groups. We demonstrate TaxVis on tax data from Shaanxi province, China, to verify the usefulness of the system.
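A minimal sketch of the first phase only, with random vectors standing in for node2vec embeddings of corporations and random labels standing in for known evasion cases; it shows just the LightGBM scoring step, not the graph embedding or the rule-based second phase.

```python
import numpy as np
from lightgbm import LGBMClassifier

rng = np.random.default_rng(7)

# Stand-in for node2vec output: a 64-dim embedding per corporation in the CANet.
n_corporations, dim = 200, 64
embeddings = rng.standard_normal((n_corporations, dim))
labels = rng.integers(0, 2, size=n_corporations)  # 1 = known tax-evasion case

# Phase 1: score every corporation's suspiciousness with a LightGBM classifier.
clf = LGBMClassifier(n_estimators=50)
clf.fit(embeddings, labels)
suspicious_score = clf.predict_proba(embeddings)[:, 1]

# The most suspicious corporations are passed to phase 2, where transaction-level
# rules look for anomalies around them.
top = np.argsort(-suspicious_score)[:5]
print(top, suspicious_score[top])
```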
Citations: 1