{"title":"Enriching Recommendation Models with Logic Conditions","authors":"Lihang Fan, Wenfei Fan, Ping Lu, Chao Tian, Qiang Yin","doi":"10.1145/3617330","DOIUrl":null,"url":null,"abstract":"This paper proposes RecLogic, a framework for improving the accuracy of machine learning (ML) models for recommendation. It aims to enhance existing ML models with logic conditions to reduce false positives and false negatives, without training a new model. Underlying RecLogic are (a) a class of prediction rules on graphs, denoted by TIEs, (b) a new approach to learning TIEs, and (c) a new paradigm for recommendation with TIEs. TIEs may embed ML recommendation models as predicates; as opposed to prior graph rules, it is tractable to decide whether a graph satisfies a set of TIEs. To enrich ML models, RecLogic iteratively trains a generator with feedback from each round, to learn TIEs with a probabilistic bound. RecLogic also provides a PTIME parallel algorithm for making recommendations with the learned TIEs. Using real-life data, we empirically verify that RecLogic improves the accuracy of ML predictions by 22.89% on average in an area where the prediction strength is neither sufficiently large nor sufficiently small, up to 33.10%.","PeriodicalId":498157,"journal":{"name":"Proceedings of the ACM on Management of Data","volume":"33 7","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM on Management of Data","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3617330","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
This paper proposes RecLogic, a framework for improving the accuracy of machine learning (ML) models for recommendation. It aims to enhance existing ML models with logic conditions to reduce false positives and false negatives, without training a new model. Underlying RecLogic are (a) a class of prediction rules on graphs, denoted by TIEs, (b) a new approach to learning TIEs, and (c) a new paradigm for recommendation with TIEs. TIEs may embed ML recommendation models as predicates; unlike prior graph rules, it is tractable to decide whether a graph satisfies a set of TIEs. To enrich ML models, RecLogic iteratively trains a generator with feedback from each round to learn TIEs with a probabilistic bound. RecLogic also provides a PTIME parallel algorithm for making recommendations with the learned TIEs. Using real-life data, we empirically verify that RecLogic improves the accuracy of ML predictions by 22.89% on average, and by up to 33.10%, in the area where the prediction strength is neither sufficiently large nor sufficiently small.
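To make the paradigm concrete, below is a minimal Python sketch of how logic conditions might enrich an existing recommender only where its prediction strength is neither sufficiently large nor sufficiently small. It is an illustration under stated assumptions, not RecLogic's actual TIE syntax, rule-learning procedure, or PTIME algorithm; the function names, rule representation, graph layout, and thresholds (`low`, `high`) are all hypothetical.

```python
# Hypothetical sketch: an existing ML model scores (user, item) pairs, and
# logic rules are consulted only when the score falls in an uncertain band.
# Nothing here reflects RecLogic's actual TIE formalism; it only illustrates
# the "enrich ML predictions with logic conditions" paradigm from the abstract.
from typing import Callable, List, Optional, Tuple

# A "rule" here is a predicate over a graph and a candidate (user, item) edge,
# returning True (recommend), False (do not recommend), or None (no opinion).
Rule = Callable[[dict, Tuple[str, str]], Optional[bool]]


def enriched_predict(
    graph: dict,
    pair: Tuple[str, str],
    ml_score: Callable[[dict, Tuple[str, str]], float],
    rules: List[Rule],
    low: float = 0.3,
    high: float = 0.7,
) -> bool:
    """Decide whether to recommend `pair`, letting rules override the ML
    model only in the uncertain band (low, high) where it is least reliable."""
    score = ml_score(graph, pair)
    # Outside the uncertain band, trust the ML model as-is.
    if score >= high:
        return True
    if score <= low:
        return False
    # Inside the band, logic conditions can cut false positives/negatives.
    for rule in rules:
        verdict = rule(graph, pair)
        if verdict is not None:
            return verdict
    # No rule fires: fall back to thresholding the ML score.
    return score >= 0.5


# Example of a hypothetical rule: recommend an item that at least two of the
# user's friends already bought. The graph layout is an illustrative assumption.
def friends_bought_rule(graph: dict, pair: Tuple[str, str]) -> Optional[bool]:
    user, item = pair
    friends = graph.get("friends", {}).get(user, set())
    buyers = graph.get("bought", {}).get(item, set())
    return True if len(friends & buyers) >= 2 else None


if __name__ == "__main__":
    toy_graph = {
        "friends": {"alice": {"bob", "carol"}},
        "bought": {"book": {"bob", "carol"}},
    }
    # A stand-in ML score inside the uncertain band, so the rule decides.
    print(enriched_predict(toy_graph, ("alice", "book"),
                           ml_score=lambda g, p: 0.55,
                           rules=[friends_bought_rule]))  # -> True
```

In this toy run the ML score (0.55) is too weak to trust on its own, so the rule, which finds two friends of the user who bought the item, supplies the decision; with a score of 0.9 or 0.1 the rules would never be consulted.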