Efficient Crowd-Powered Active Learning for Reliable Review Evaluation
Xinping Min, Yuliang Shi, Li-zhen Cui, Han Yu, Yuan Miao
International Conference on Crowd Science and Engineering, 2017. DOI: 10.1145/3126973.3129307
Citations: 2
Abstract
To mitigate uncertainty about the quality of online purchases (e.g., in e-commerce), many people rely on review comments from others when making decisions. The key challenge is identifying useful comments within a large corpus of candidate reviews of varying usefulness. In this paper, we propose the Reliable Review Evaluation Framework (RREF), which combines crowdsourcing with machine learning to address this problem. To improve crowdsourcing quality control, we propose a novel review query crowdsourcing approach that jointly considers workers' track records in review provision and their current workloads when allocating review comments for workers to rate. Using the ratings crowdsourced from workers, RREF then enhances the adaptive topic classification model selection and weighting functions of AdaBoost with dynamic keyword list reconstruction. RREF has been compared with state-of-the-art related frameworks on a large-scale real-world dataset and reduced average classification error by more than 50%.
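The abstract describes two mechanisms: allocating review comments to crowd workers based on track record and current workload, and a boosting-style classifier whose keyword features are rebuilt between rounds. The following is a minimal, hypothetical sketch of the allocation idea; the field names, Laplace smoothing, and multiplicative combination of reliability and capacity are illustrative assumptions, not the paper's actual allocation rule.

```python
# Hypothetical sketch: allocate review comments to crowd workers by jointly
# scoring each worker's track record (past rating reliability) and current
# workload. All specifics here are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class Worker:
    worker_id: str
    correct_ratings: int = 0   # past ratings that agreed with the final consensus
    total_ratings: int = 0     # past ratings submitted
    pending_tasks: int = 0     # review comments currently assigned


def reliability(w: Worker) -> float:
    """Laplace-smoothed accuracy, so new workers start near 0.5."""
    return (w.correct_ratings + 1) / (w.total_ratings + 2)


def allocation_score(w: Worker, max_load: int = 10) -> float:
    """Higher is better; a fully loaded worker scores 0 regardless of reliability."""
    remaining_capacity = max(0.0, 1.0 - w.pending_tasks / max_load)
    return reliability(w) * remaining_capacity


def assign_review(workers: list[Worker]) -> Worker:
    """Give the next review comment to the best-scoring available worker."""
    best = max(workers, key=allocation_score)
    best.pending_tasks += 1
    return best
```

For the learning side, a plausible reading of "enhancing AdaBoost with dynamic keyword list reconstruction" is a boosting loop over keyword-presence features in which the keyword list is rebuilt from the currently highly weighted (i.e., hard) examples before each round. The sketch below follows that reading; the keyword-selection heuristic and the decision-stump base learner are assumptions, as the abstract does not specify RREF's actual model selection or weighting functions.

```python
# Hedged sketch: binary AdaBoost over keyword features, with the keyword
# list reconstructed from sample-weighted token statistics each round.

import math
from collections import Counter

import numpy as np
from sklearn.tree import DecisionTreeClassifier


def top_keywords(texts, weights, labels, k=50):
    """Pick tokens whose sample-weighted document frequency differs most between classes."""
    pos, neg = Counter(), Counter()
    for text, w, y in zip(texts, weights, labels):
        bucket = pos if y == 1 else neg
        for token in set(text.lower().split()):
            bucket[token] += w
    vocab = set(pos) | set(neg)
    return sorted(vocab, key=lambda t: abs(pos[t] - neg[t]), reverse=True)[:k]


def featurize(texts, keywords):
    return np.array([[1.0 if kw in t.lower() else 0.0 for kw in keywords] for t in texts])


def boost_with_keyword_reconstruction(texts, labels, rounds=10):
    """AdaBoost-style loop where the keyword feature list is rebuilt each round."""
    y = np.asarray(labels)               # labels in {0, 1}: 1 = useful review
    w = np.full(len(y), 1.0 / len(y))    # uniform initial sample weights
    ensemble = []                        # (alpha, keywords, stump) per round
    for _ in range(rounds):
        keywords = top_keywords(texts, w, y)   # dynamic keyword list reconstruction
        X = featurize(texts, keywords)
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = float(np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10))
        alpha = 0.5 * math.log((1 - err) / err)
        w = w * np.exp(np.where(pred == y, -alpha, alpha))  # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, keywords, stump))
    return ensemble
```

At prediction time such an ensemble would re-featurize a new comment with each round's keyword list and combine the stumps' votes weighted by their alpha values; in RREF itself the crowdsourced ratings would supply the training labels for this loop.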