Author: N. Malevé
Journal: Photographies
DOI: 10.1080/17540763.2023.2189159
Publication date: 2023-05-04
Publication type: Journal Article
PRACTICES OF BENCHMARKING, VULNERABILITY IN THE COMPUTER VISION PIPELINE
Computer vision datasets have proved to be key instruments for interpreting visual data. This article concentrates on benchmark datasets, which are used to define a technical problem and provide a common referent against which different solutions can be compared. Through three case studies (ImageNet, the Pilot Parliaments Benchmark dataset, and VizWiz), the article analyzes how photography is mobilised to conceive interventions in computer vision. The benchmark is a privileged site where photographic curation is tactically performed to change the scale of visual perception, oppose racial and gendered discrimination, or rethink image interpretation for visually impaired users. Through the elaboration of benchmarks, engineers create curatorial pipelines involving large chains of heterogeneous actors exploiting various photographic practices, from amateur snapshots to political portraiture or photography made by blind users. The article contends that the mobilisation of photography in the benchmark goes together with a multifaceted notion of vulnerability. It analyzes how various forms of vulnerability and insecurity pertaining to users, software companies, or vision systems are framed, and how benchmarks are conceived in response to them. Following the alliances that form around vulnerabilities, the text explores the potential and limits of the practices of benchmarking in computer vision.