Algorithmic Harm in Consumer Markets

Journal of Legal Analysis · IF 3.0 · Tier 1 (Sociology) · JCR Q1 (Law) · Pub Date: 2023-08-21 · DOI: 10.1093/jla/laad003
Oren Bar-Gill, Cass R. Sunstein, Inbal Talgam-Cohen
{"title":"消费者市场中的算法危害","authors":"Oren Bar-Gill, Cass R Sunstein, Inbal Talgam-Cohen","doi":"10.1093/jla/laad003","DOIUrl":null,"url":null,"abstract":"Machine learning algorithms are increasingly able to predict what goods and services particular people will buy, and at what price. It is possible to imagine a situation in which relatively uniform, or coarsely set, prices and product characteristics are replaced by far more in the way of individualization. Companies might, for example, offer people shirts and shoes that are particularly suited to their situations, that fit with their particular tastes, and that have prices that fit their personal valuations. In many cases, the use of algorithms promises to increase efficiency and to promote social welfare; it might also promote fair distribution. But when consumers suffer from an absence of information or from behavioral biases, algorithms can cause serious harm. Companies might, for example, exploit such biases in order to lead people to purchase products that have little or no value for them or to pay too much for products that do have value for them. Algorithmic harm, understood as the exploitation of an absence of information or of behavioral biases, can disproportionately affect members of identifiable groups, including women and people of color. Since algorithms exacerbate the harm caused to imperfectly informed and imperfectly rational consumers, their increasing use provides fresh support for existing efforts to reduce information and rationality deficits, especially through optimally designed disclosure mandates. In addition, there is a more particular need for algorithm-centered policy responses. Specifically, algorithmic transparency—transparency about the nature, uses, and consequences of algorithms—is both crucial and challenging; novel methods designed to open the algorithmic “black box” and “interpret” the algorithm’s decision-making process should play a key role. In appropriate cases, regulators should also police the design and implementation of algorithms, with a particular emphasis on the exploitation of an absence of information or of behavioral biases.","PeriodicalId":45189,"journal":{"name":"Journal of Legal Analysis","volume":"114 18","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Algorithmic Harm in Consumer Markets\",\"authors\":\"Oren Bar-Gill, Cass R Sunstein, Inbal Talgam-Cohen\",\"doi\":\"10.1093/jla/laad003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning algorithms are increasingly able to predict what goods and services particular people will buy, and at what price. It is possible to imagine a situation in which relatively uniform, or coarsely set, prices and product characteristics are replaced by far more in the way of individualization. Companies might, for example, offer people shirts and shoes that are particularly suited to their situations, that fit with their particular tastes, and that have prices that fit their personal valuations. In many cases, the use of algorithms promises to increase efficiency and to promote social welfare; it might also promote fair distribution. But when consumers suffer from an absence of information or from behavioral biases, algorithms can cause serious harm. 
Companies might, for example, exploit such biases in order to lead people to purchase products that have little or no value for them or to pay too much for products that do have value for them. Algorithmic harm, understood as the exploitation of an absence of information or of behavioral biases, can disproportionately affect members of identifiable groups, including women and people of color. Since algorithms exacerbate the harm caused to imperfectly informed and imperfectly rational consumers, their increasing use provides fresh support for existing efforts to reduce information and rationality deficits, especially through optimally designed disclosure mandates. In addition, there is a more particular need for algorithm-centered policy responses. Specifically, algorithmic transparency—transparency about the nature, uses, and consequences of algorithms—is both crucial and challenging; novel methods designed to open the algorithmic “black box” and “interpret” the algorithm’s decision-making process should play a key role. In appropriate cases, regulators should also police the design and implementation of algorithms, with a particular emphasis on the exploitation of an absence of information or of behavioral biases.\",\"PeriodicalId\":45189,\"journal\":{\"name\":\"Journal of Legal Analysis\",\"volume\":\"114 18\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2023-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Legal Analysis\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1093/jla/laad003\",\"RegionNum\":1,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Legal Analysis","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1093/jla/laad003","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

Machine learning algorithms are increasingly able to predict what goods and services particular people will buy, and at what price. It is possible to imagine a situation in which relatively uniform, or coarsely set, prices and product characteristics are replaced by far more in the way of individualization. Companies might, for example, offer people shirts and shoes that are particularly suited to their situations, that fit with their particular tastes, and that have prices that fit their personal valuations. In many cases, the use of algorithms promises to increase efficiency and to promote social welfare; it might also promote fair distribution. But when consumers suffer from an absence of information or from behavioral biases, algorithms can cause serious harm. Companies might, for example, exploit such biases in order to lead people to purchase products that have little or no value for them or to pay too much for products that do have value for them. Algorithmic harm, understood as the exploitation of an absence of information or of behavioral biases, can disproportionately affect members of identifiable groups, including women and people of color. Since algorithms exacerbate the harm caused to imperfectly informed and imperfectly rational consumers, their increasing use provides fresh support for existing efforts to reduce information and rationality deficits, especially through optimally designed disclosure mandates. In addition, there is a more particular need for algorithm-centered policy responses. Specifically, algorithmic transparency—transparency about the nature, uses, and consequences of algorithms—is both crucial and challenging; novel methods designed to open the algorithmic “black box” and “interpret” the algorithm’s decision-making process should play a key role. In appropriate cases, regulators should also police the design and implementation of algorithms, with a particular emphasis on the exploitation of an absence of information or of behavioral biases.
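The mechanism at the heart of the abstract, a model that predicts what a particular consumer is willing to pay and then replaces a uniform price with an individualized one, can be made concrete with a minimal sketch. Everything below (the features, the synthetic data, the pricing rule) is an illustrative assumption, not a method described in the paper:

```python
# Minimal sketch of algorithmic personalized pricing: a model predicts each
# consumer's willingness to pay (WTP) from observed features, and the seller
# prices at the prediction instead of charging one uniform price.
# All data, features, and the pricing rule here are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: past consumers' features and revealed valuations.
n = 5000
income = rng.normal(50, 15, n)         # proxy feature, e.g., income in $1k
past_spend = rng.normal(200, 60, n)    # proxy feature, e.g., past spending
wtp = 20 + 0.3 * income + 0.1 * past_spend + rng.normal(0, 5, n)

X = np.column_stack([income, past_spend])
model = GradientBoostingRegressor().fit(X, wtp)

# Uniform pricing charges everyone the same price; individualized pricing
# charges each new consumer (close to) their predicted valuation.
uniform_price = np.median(wtp)
new_consumer = np.array([[65.0, 260.0]])
personal_price = model.predict(new_consumer)[0]

print(f"uniform price:      {uniform_price:6.2f}")
print(f"personalized price: {personal_price:6.2f}")
```

With well-informed, rational consumers this machinery can be efficiency-enhancing, as the abstract notes; the harm the authors target arises when the same predictions are aimed at consumers' ignorance or biases rather than at their true valuations.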
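The call to open the algorithmic “black box” likewise maps onto existing model-inspection tooling. As one standard example (a technique chosen here for illustration, not one named by the authors), permutation importance measures how much each input drives the model's price predictions, the kind of diagnostic a regulator might use to check whether a pricing model leans on a proxy for a protected characteristic. The sketch below continues the one above, reusing its model, X, and wtp:

```python
# One concrete way to "interpret" the black box: permutation importance
# measures how much predictions degrade when each feature is shuffled,
# exposing which inputs actually drive the individualized prices.
# Continues the hypothetical sketch above (reuses model, X, wtp).
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X, wtp, n_repeats=10, random_state=0)
for name, score in zip(["income", "past_spend"], result.importances_mean):
    print(f"{name:12s} importance: {score:.3f}")
```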
Source journal: Journal of Legal Analysis
CiteScore: 4.10
Self-citation rate: 0.00%
Articles published: 3
Review time: 16 weeks