Martin Hilbert, Arti Thakur, Pablo M. Flores, Xiaoya Zhang, Jee Young Bhan, Patrick Bernhard, Feng Ji
{"title":"8%-10%的算法建议是 \"糟糕的\",但......一项探索性风险效用荟萃分析及其对监管的影响","authors":"Martin Hilbert , Arti Thakur , Pablo M. Flores , Xiaoya Zhang , Jee Young Bhan , Patrick Bernhard , Feng Ji","doi":"10.1016/j.ijinfomgt.2023.102743","DOIUrl":null,"url":null,"abstract":"<div><p>We conducted a quantitatively coarse-grained, but wide-ranging evaluation of the frequency recommender algorithms provide ‘good’ and ‘bad’ recommendations, with a focus on the latter. We found 151 algorithmic audits from 33 studies that report fitting risk-utility statistics from YouTube, Google Search, Twitter, Facebook, TikTok, Amazon, and others. Our findings indicate that roughly 8–10% of algorithmic recommendations are ‘bad’, while about a quarter actively protect users from self-induced harm (‘do good’). This average is remarkably consistent across the audits, irrespective of the platform nor on the kind of risk (bias/ discrimination, mental health and child harm, misinformation, or political extremism). Algorithmic audits find negative feedback loops that can ensnare users into spirals of ‘bad’ recommendations (or being ‘dragged down the rabbit hole’), but also highlight an even larger likelihood of positive spirals of ‘good recommendations’. While our analysis refrains from any judgment of the causal consequences and severity of risks, the detected levels surpass those associated with many other consumer products. They are comparable to the risk levels of generic food defects monitored by public authorities such as the FDA or FSIS in the United States. Consequently, our findings inform the ongoing discussion regarding regulatory oversight of the potential risks posed by recommender algorithms.</p></div>","PeriodicalId":48422,"journal":{"name":"International Journal of Information Management","volume":null,"pages":null},"PeriodicalIF":20.1000,"publicationDate":"2023-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S026840122300124X/pdfft?md5=9e64a3c50fa338d21d87fc17bfb9b986&pid=1-s2.0-S026840122300124X-main.pdf","citationCount":"0","resultStr":"{\"title\":\"8–10% of algorithmic recommendations are ‘bad’, but… an exploratory risk-utility meta-analysis and its regulatory implications\",\"authors\":\"Martin Hilbert , Arti Thakur , Pablo M. Flores , Xiaoya Zhang , Jee Young Bhan , Patrick Bernhard , Feng Ji\",\"doi\":\"10.1016/j.ijinfomgt.2023.102743\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>We conducted a quantitatively coarse-grained, but wide-ranging evaluation of the frequency recommender algorithms provide ‘good’ and ‘bad’ recommendations, with a focus on the latter. We found 151 algorithmic audits from 33 studies that report fitting risk-utility statistics from YouTube, Google Search, Twitter, Facebook, TikTok, Amazon, and others. Our findings indicate that roughly 8–10% of algorithmic recommendations are ‘bad’, while about a quarter actively protect users from self-induced harm (‘do good’). This average is remarkably consistent across the audits, irrespective of the platform nor on the kind of risk (bias/ discrimination, mental health and child harm, misinformation, or political extremism). Algorithmic audits find negative feedback loops that can ensnare users into spirals of ‘bad’ recommendations (or being ‘dragged down the rabbit hole’), but also highlight an even larger likelihood of positive spirals of ‘good recommendations’. 
While our analysis refrains from any judgment of the causal consequences and severity of risks, the detected levels surpass those associated with many other consumer products. They are comparable to the risk levels of generic food defects monitored by public authorities such as the FDA or FSIS in the United States. Consequently, our findings inform the ongoing discussion regarding regulatory oversight of the potential risks posed by recommender algorithms.</p></div>\",\"PeriodicalId\":48422,\"journal\":{\"name\":\"International Journal of Information Management\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":20.1000,\"publicationDate\":\"2023-12-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S026840122300124X/pdfft?md5=9e64a3c50fa338d21d87fc17bfb9b986&pid=1-s2.0-S026840122300124X-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Information Management\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S026840122300124X\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Management","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S026840122300124X","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
8–10% of algorithmic recommendations are ‘bad’, but… an exploratory risk-utility meta-analysis and its regulatory implications
We conducted a quantitatively coarse-grained but wide-ranging evaluation of how frequently recommender algorithms provide ‘good’ and ‘bad’ recommendations, with a focus on the latter. We found 151 algorithmic audits from 33 studies that report fitting risk-utility statistics from YouTube, Google Search, Twitter, Facebook, TikTok, Amazon, and others. Our findings indicate that roughly 8–10% of algorithmic recommendations are ‘bad’, while about a quarter actively protect users from self-induced harm (‘do good’). This average is remarkably consistent across the audits, irrespective of the platform or the kind of risk (bias/discrimination, mental health and child harm, misinformation, or political extremism). Algorithmic audits find negative feedback loops that can ensnare users in spirals of ‘bad’ recommendations (being ‘dragged down the rabbit hole’), but they also highlight an even larger likelihood of positive spirals of ‘good’ recommendations. While our analysis refrains from any judgment of the causal consequences and severity of these risks, the detected levels surpass those associated with many other consumer products. They are comparable to the risk levels of generic food defects monitored by public authorities such as the FDA or FSIS in the United States. Consequently, our findings inform the ongoing discussion regarding regulatory oversight of the potential risks posed by recommender algorithms.
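The headline 8–10% figure is an average pooled across per-audit proportions. As a minimal sketch of how such pooling can work, the Python snippet below aggregates per-audit counts of ‘bad’ recommendations into one rate using inverse-variance weighting on the logit scale, a common approach in meta-analyses of proportions. The counts and the fixed-effect logit model are illustrative assumptions, not the paper’s actual data or method.

```python
# Illustrative sketch only: pooling per-audit 'bad recommendation' rates.
# The audit counts below are hypothetical, and the paper's own aggregation
# procedure may differ from this fixed-effect logit approach.
import math

# (bad_recommendations, total_recommendations) for each hypothetical audit
audits = [(82, 1000), (45, 500), (190, 2000), (33, 300), (110, 1200)]

logits, weights = [], []
for bad, total in audits:
    p = bad / total
    logits.append(math.log(p / (1 - p)))          # logit-transformed proportion
    # Approximate variance of a logit proportion is 1/bad + 1/(total - bad);
    # the inverse of that variance serves as the audit's weight.
    weights.append(1.0 / (1.0 / bad + 1.0 / (total - bad)))

pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
pooled_p = 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to a proportion
print(f"Pooled share of 'bad' recommendations: {pooled_p:.1%}")
```

Run on these hypothetical counts, the pooled estimate lands near 9%, i.e. within the 8–10% band the meta-analysis reports; larger audits dominate the average because their logit estimates carry smaller variance.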
Journal introduction:
The International Journal of Information Management (IJIM) is a distinguished, international, and peer-reviewed journal dedicated to providing its readers with top-notch analysis and discussions within the evolving field of information management. Key features of the journal include:
Comprehensive Coverage:
IJIM keeps readers informed with major papers, reports, and reviews.
Topical Relevance:
The journal remains current and relevant through Viewpoint articles and regular features like Research Notes, Case Studies, and a Reviews section, ensuring readers are updated on contemporary issues.
Focus on Quality:
IJIM prioritizes high-quality papers that address contemporary issues in information management.