The Photographic Pipeline of Machine Vision; or, Machine Vision's Latent Photographic Theory
Nicolas Malevé, Katrina Sluis
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734066
Abstract: Despite computer vision's extensive mobilization of cameras, photographers, and viewing subjects, photography's place in machine vision remains undertheorized. This article illuminates an operative theory of photography that exists in a latent form, embedded in the tools, practices, and discourses of machine vision research and enabling the methodological imperatives of dataset production. Focusing on the development of the canonical object recognition dataset ImageNet, the article analyzes how the dataset pipeline translates the radical polysemy of the photographic image into a stable and transparent form of data that can be portrayed as a proxy of human vision. Reflecting on the prominence of the photographic snapshot in machine vision discourse, the article traces the path that made this popular cultural practice amenable to the dataset. Following the evolution from nineteenth-century scientific photography to the acquisition of massive sets of online photos, the article shows how dataset creators inherit and transform a form of “instrumental realism,” a photographic enterprise that aims to establish a generalized look from contingent instances in the pursuit of statistical truth. The article concludes with a reflection on how the latent photographic theory of machine vision we have advanced relates to the large image models built for generative AI today.
Viral Justice: How We Grow the World We Want
Josh Simons, Eli Frankel
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734977
Scrapism: A Manifesto
Sam Lavigne
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734046
Abstract: Web scraping is a technique for automatically downloading and processing web content or converting online text and other media into structured data. This article describes the role that web scraping plays for web businesses and machine learning systems and the fundamental tension between the openness of the web and the interests of private corporations. It then goes on to sketch an outline for “scrapism,” the practice of using web scraping for artistic, critical, and political ends.
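The abstract's definition of web scraping, converting online text into structured data, can be illustrated with a minimal sketch. The HTML snippet and field names below are purely illustrative and not drawn from the article; a real scraper would fetch the page over the network (e.g., with urllib.request) rather than parse an inline string.

```python
from html.parser import HTMLParser

# Illustrative HTML standing in for a downloaded page.
PAGE = """
<ul>
  <li><a href="/post/1">First post</a></li>
  <li><a href="/post/2">Second post</a></li>
</ul>
"""

class LinkScraper(HTMLParser):
    """Convert anchor tags into structured records (url + title)."""

    def __init__(self):
        super().__init__()
        self.records = []
        self._href = None  # href of the <a> tag currently open, if any

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        # Pair the link text with the pending href to form one record.
        if self._href is not None and data.strip():
            self.records.append({"url": self._href, "title": data.strip()})
            self._href = None

scraper = LinkScraper()
scraper.feed(PAGE)
print(scraper.records)
# [{'url': '/post/1', 'title': 'First post'},
#  {'url': '/post/2', 'title': 'Second post'}]
```

The same pattern, walk the markup, emit one record per item of interest, scales from this toy list to the large-scale acquisition of web content the abstract describes.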
The Shame Machine: Who Profits in the New Age of Humiliation
Heather Love
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734086
Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition
James Smithies
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734106
Technology of the Oppressed: Inequity and the Digital Mundane in Favelas in Brazil
France Winddance Twine
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734116
How to Make “AI” Intelligent; or, The Question of Epistemic Equality
Christopher Newfield
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734076
Abstract: Critics have identified a set of operational flaws in the machine learning and deep learning systems now discussed under the “AI” banner. Five of the most discussed are social biases, particularly racism; opacity, such that users cannot assess how results were generated; coercion, in that architectures, datasets, algorithms, and the like are controlled by designers and platforms rather than users; systemic privacy violations; and the absence of academic freedom covering corporation-based research, such that results can be hyped in accordance with business objectives or suppressed and distorted if not. This article focuses on a sixth problem with AI, which is that the term intelligence misstates the actual status and effects of the technologies in question. To help fill the gap in rigorous uses of “intelligence” in public discussion, it analyzes Brian Cantwell Smith's The Promise of Artificial Intelligence (2019), noting that humanities disciplines routinely operate with Smith's demanding notion of “genuine intelligence.” To get this notion into circulation among technologists, the article calls for replacement of the Two Cultures hierarchy codified by C. P. Snow in the 1950s with a system in which humanities scholars participate from the start in the construction and evaluation of “AI” research programs on a basis of epistemic equality between qualitative and quantitative disciplines.
Critical AI and Design Justice: An Interview with Sasha Costanza-Chock
Kristin Rose, Kate Henne, Sabelo Mhlambi, Anand Sarwate, Sasha Costanza-Chock
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734036
Artificial Life after Frankenstein
Seth Perlow
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734096
Thick Description for Critical AI: Generating Data Capitalism and Provocations for a Multisensory Approach
Caroline E. Schuster, Kristen M. Schuster
Pub Date: 2023-10-01 | DOI: 10.1215/2834703x-10734056
Abstract: This article argues that critical AI studies should make a methodological investment in “thick description” to counteract the tendency both within computational design and business settings to presume (or, in the case of start-ups, hope for) a seamless and inevitable journey from data to monetizable domain knowledge and useful services. Perhaps the classic application of that critical data-studies framework is Marion Fourcade and Kieran Healy's influential 2017 essay, “Seeing Like a Market,” which advances a comprehensive account of how value is extracted from data-collection processes. As important as these critiques have been, the apparent inevitability of this assemblage of power, knowledge, and profit arises in part through the metaphor of “sight.” Thick description—especially when combined with a feminist and queer attention to embodiment, materiality, and multisensory experience—can in this respect supplement Fourcade and Healy's critique by revealing unexpected imaginative possibilities built out of social materialities.