{"title":"谷歌自动完成的伦理维度","authors":"Rosie Graham","doi":"10.1177/20539517231156518","DOIUrl":null,"url":null,"abstract":"What questions should we ask of Google’s Autocomplete suggestions? This article highlights some of the key ethical issues raised by Google’s automated suggestion tool that provides potential queries below a user’s search box. Much of the discourse surrounding Google’s suggestions has been framed through legal cases in which complex issues can become distilled into black-and-white questions of the law. For example, do Google have to remove a particular suggestion and do they have to pay a settlement for damages? This commentary argues that shaping this discourse along primarily legal lines obscures many of these other moral dimensions raised by Google Autocomplete. Building from existing typologies, this commentary first outlines the legal discourse before exploring five additional ethical challenges, each framed around a particular moral question in which all users have a stake. Written in the form of a commentary, the purpose of this article is not to conclusively answer the ethical questions raised, but rather to give an account of why these particular questions are worth debating. Autocomplete’s suggestions are not simply a mirror of what users are typing into Google’s search bar. Google’s official statement is that “Autocomplete is a time-saving but complex feature. It doesn’t simply display the most common queries on a given topic” but “also predict[s] individual words and phrases that are based on both real searches as well as word patterns found across the web” (Google, 2022). Both its underlying methods and associated terminology have changed throughout time, shifting between providing completions, suggestions, and predictions. In doing so, the grounds for potential critique are ever-changing, which means that Google’s approach to Autocomplete deserves significant scrutiny.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":6.5000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"The ethical dimensions of Google autocomplete\",\"authors\":\"Rosie Graham\",\"doi\":\"10.1177/20539517231156518\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"What questions should we ask of Google’s Autocomplete suggestions? This article highlights some of the key ethical issues raised by Google’s automated suggestion tool that provides potential queries below a user’s search box. Much of the discourse surrounding Google’s suggestions has been framed through legal cases in which complex issues can become distilled into black-and-white questions of the law. For example, do Google have to remove a particular suggestion and do they have to pay a settlement for damages? This commentary argues that shaping this discourse along primarily legal lines obscures many of these other moral dimensions raised by Google Autocomplete. Building from existing typologies, this commentary first outlines the legal discourse before exploring five additional ethical challenges, each framed around a particular moral question in which all users have a stake. Written in the form of a commentary, the purpose of this article is not to conclusively answer the ethical questions raised, but rather to give an account of why these particular questions are worth debating. 
Autocomplete’s suggestions are not simply a mirror of what users are typing into Google’s search bar. Google’s official statement is that “Autocomplete is a time-saving but complex feature. It doesn’t simply display the most common queries on a given topic” but “also predict[s] individual words and phrases that are based on both real searches as well as word patterns found across the web” (Google, 2022). Both its underlying methods and associated terminology have changed throughout time, shifting between providing completions, suggestions, and predictions. In doing so, the grounds for potential critique are ever-changing, which means that Google’s approach to Autocomplete deserves significant scrutiny.\",\"PeriodicalId\":47834,\"journal\":{\"name\":\"Big Data & Society\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Big Data & Society\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/20539517231156518\",\"RegionNum\":1,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"SOCIAL SCIENCES, INTERDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Big Data & Society","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/20539517231156518","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
What questions should we ask of Google’s Autocomplete suggestions? This article highlights some of the key ethical issues raised by Google’s automated suggestion tool, which offers potential queries below a user’s search box. Much of the discourse surrounding Google’s suggestions has been framed through legal cases, in which complex issues are distilled into black-and-white questions of law. For example, does Google have to remove a particular suggestion, and does it have to pay damages? This commentary argues that shaping the discourse along primarily legal lines obscures many of the other moral dimensions raised by Google Autocomplete. Building from existing typologies, it first outlines the legal discourse before exploring five additional ethical challenges, each framed around a particular moral question in which all users have a stake. Written as a commentary, this article does not aim to conclusively answer the ethical questions raised, but rather to explain why these particular questions are worth debating.

Autocomplete’s suggestions are not simply a mirror of what users are typing into Google’s search bar. Google’s official statement is that “Autocomplete is a time-saving but complex feature. It doesn’t simply display the most common queries on a given topic” but “also predict[s] individual words and phrases that are based on both real searches as well as word patterns found across the web” (Google, 2022). Both its underlying methods and the associated terminology have changed over time, shifting between providing completions, suggestions, and predictions. As a result, the grounds for potential critique are ever-shifting, which means that Google’s approach to Autocomplete deserves significant scrutiny.
Journal overview:
Big Data & Society (BD&S) is an open access, peer-reviewed scholarly journal that publishes interdisciplinary work principally in the social sciences, humanities, and computing and their intersections with the arts and natural sciences. The journal focuses on the implications of Big Data for societies and aims to connect debates about Big Data practices and their effects on various sectors such as academia, social life, industry, business, and government.
BD&S considers Big Data as an emerging field of practices, not solely defined by but generative of unique data qualities such as high volume, granularity, data linking, and mining. The journal pays attention to digital content generated both online and offline, encompassing social media, search engines, closed networks (e.g., commercial or government transactions), and open networks like digital archives, open government, and crowdsourced data. Rather than providing a fixed definition of Big Data, BD&S encourages interdisciplinary inquiries, debates, and studies on various topics and themes related to Big Data practices.
BD&S seeks contributions that analyze Big Data practices, involve empirical engagements and experiments with innovative methods, and reflect on the consequences of these practices for the representation, realization, and governance of societies. As a digital-only journal, BD&S's platform can accommodate multimedia formats such as complex images, dynamic visualizations, videos, and audio content. The contents of the journal encompass peer-reviewed research articles, colloquia, bookcasts, think pieces, state-of-the-art methods, and work by early career researchers.