Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.

IF 3.0 · Q2 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Frontiers in Artificial Intelligence · Pub Date: 2024-09-05 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1410790
Jaime Govea, Rommel Gutierrez, William Villegas-Ch
Citations: 0

Abstract

In today's information age, recommender systems have become essential tools for filtering and personalizing the massive flow of data to users. However, the increasing complexity and opaque nature of these systems have raised concerns about transparency and user trust. A lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these systems. Our study addresses this problem by integrating explainability techniques into recommendation systems, improving both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods such as LIME and SHAP to explain the models' decisions. The results indicated significant improvements in recommendation precision, along with a notable increase in users' ability to understand and trust the suggestions provided by the system. For example, we observed a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value both for performance and for the user experience.
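The precision gains reported in the abstract are typically measured with a precision@k metric over a user's top-k recommendations. As a minimal sketch only (the paper does not publish its evaluation code; the item lists here are hypothetical), the computation looks like this in plain Python:

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant.

    recommended: ranked list of item ids produced by the recommender
    relevant:    set of item ids the user actually engaged with
    """
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(top_k)

# Hypothetical example: 2 of the top 3 recommendations are relevant
score = precision_at_k(["a", "b", "c", "d"], {"a", "c"}, k=3)  # 2/3
```

In practice this is averaged over all users; a "3% increase in precision" would then refer to the change in that average between the baseline and the explainability-enhanced model.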
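SHAP, one of the explainability methods named above, attributes a model's prediction to individual features via Shapley values. As an illustration of the underlying idea only — not the authors' implementation and not the optimized `shap` library — here is an exact brute-force Shapley computation, feasible for a handful of features:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f at point x relative to a baseline.

    Enumerates every feature coalition, so this is only practical
    for a small number of features; libraries like shap approximate
    the same quantity efficiently.
    """
    n = len(x)

    def v(subset):
        # Coalition value: features in `subset` take their values from x,
        # all remaining features stay at the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                s = set(subset)
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (v(s | {i}) - v(s))
        phi.append(total)
    return phi
```

For a linear model the attributions reduce to `w_i * (x_i - baseline_i)`, which makes the function easy to sanity-check; the attributions always sum to `f(x) - f(baseline)`.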

Source journal metrics:
CiteScore: 6.10
Self-citation rate: 2.50%
Articles published: 272
Review time: 13 weeks