FairUMAP 2019 Chairs' Welcome Overview

Bettina Berendt, Veronika Bogina, R. Burke, Michael D. Ekstrand, Alan Hartman, S. Kleanthous, T. Kuflik, B. Mobasher, Jahna Otterbacher
{"title":"FairUMAP 2019 Chairs' Welcome Overview","authors":"Bettina Berendt, Veronika Bogina, R. Burke, Michael D. Ekstrand, Alan Hartman, S. Kleanthous, T. Kuflik, B. Mobasher, Jahna Otterbacher","doi":"10.1145/3314183.3323842","DOIUrl":null,"url":null,"abstract":"It is our great pleasure to welcome you to the Second FairUMAP workshop at UMAP 2019. This full-day workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on one hand, and bias, fairness and transparency in algorithmic systems on the other hand. The workshop was motivated by the observation that these two fields increasingly impact one another. Personalization has become a ubiquitous and essential part of systems that help users find relevant information in today's highly complex, information-rich online environments. Machine learning techniques applied to big data, as done by recommender systems, and user modeling in general, are key enabling technologies that allow intelligent systems to learn from users and adapt their output to users' needs and preferences. However, there has been a growing recognition that these underlying technologies raise novel ethical, legal, and policy challenges. It has become apparent that a single-minded focus on user characteristics has obscured other important and beneficial outcomes such systems must be able to deliver. System properties such as fairness, transparency, balance, and other social welfare considerations are not captured by typical metrics based on which data-driven personalized models are optimized. Indeed, widely-used personalization systems in popular sites such as Facebook, Google News and YouTube have been heavily criticized for personalizing information delivery too heavily at the cost of these other objectives.","PeriodicalId":240482,"journal":{"name":"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization","volume":"103 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3314183.3323842","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

It is our great pleasure to welcome you to the Second FairUMAP workshop at UMAP 2019. This full-day workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness, and transparency in algorithmic systems on the other. The workshop was motivated by the observation that these two fields increasingly impact one another. Personalization has become a ubiquitous and essential part of systems that help users find relevant information in today's highly complex, information-rich online environments. Machine learning techniques applied to big data, as used in recommender systems and user modeling more generally, are key enabling technologies that allow intelligent systems to learn from users and adapt their output to users' needs and preferences. However, there is growing recognition that these underlying technologies raise novel ethical, legal, and policy challenges. It has become apparent that a single-minded focus on user characteristics has obscured other important and beneficial outcomes that such systems must be able to deliver. System properties such as fairness, transparency, balance, and other social welfare considerations are not captured by the metrics against which data-driven personalized models are typically optimized. Indeed, widely used personalization systems on popular sites such as Facebook, Google News, and YouTube have been heavily criticized for tailoring information delivery too aggressively at the cost of these other objectives.