{"title":"How to Govern Visibility?: Legitimizations and Contestations of Visual Data Practices after the 2017 G20 Summit in Hamburg","authors":"Rebecca Venema","doi":"10.24908/ss.v18i4.13535","DOIUrl":null,"url":null,"abstract":"Technological changes shift how visibility can be established, governed, and used. Ubiquitous visual technologies, the possibility to distribute and use images from heterogeneous sources across different social contexts and publics, and increasingly powerful facial recognition tools afford new avenues for law enforcement. Concurrently, these changes also trigger fundamental concerns about privacy violations and all-encompassing surveillance. Using the example of police investigations after the 2017 G20 summit in Hamburg, the present article provides insights into how different actors in the political and public realm in Germany deal with these potentials and tensions in handling visual data. Based on a qualitative content analysis of newspaper articles (n=42), tweets (n=267), experts’ reports (n=3), and minutes of parliamentary debates and committee hearings (n=8), this study examines how visual data were collected, analyzed, and published and how different actors legitimated and contested these practices. The findings show that combined state, corporate, and privately produced visual data and the use of facial recognition tools allowed the police to cover and track public life in large parts of the inner city of Hamburg during the summit days. Police authorities characterized visual data and algorithmic tools as objective, trustworthy, and indispensable evidence-providing tools but black-boxed the heterogeneity of sources, the analytical steps, and their potential implications. Critics, in turn, expressed concerns about infringements of civic rights, the trustworthiness of police authorities, and the extensive police surveillance capacities. Based on these findings, this article discusses three topics that remained blind spots in the debates but merit further attention in discussions on norms for visual data management and for governing visibility: (1) collective responsibilities in visibility management, (2) trust in visual data and facial recognition technologies, and (3) social consequences of encompassing visual data collection and registered faceprints.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2020-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Surveillance & Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.24908/ss.v18i4.13535","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Abstract
Technological changes shift how visibility can be established, governed, and used. Ubiquitous visual technologies, the ability to distribute and use images from heterogeneous sources across different social contexts and publics, and increasingly powerful facial recognition tools afford new avenues for law enforcement. Concurrently, these changes trigger fundamental concerns about privacy violations and all-encompassing surveillance. Using the example of police investigations after the 2017 G20 summit in Hamburg, the present article provides insights into how different actors in the political and public realm in Germany deal with these potentials and tensions in handling visual data. Based on a qualitative content analysis of newspaper articles (n=42), tweets (n=267), experts’ reports (n=3), and minutes of parliamentary debates and committee hearings (n=8), this study examines how visual data were collected, analyzed, and published and how different actors legitimated and contested these practices. The findings show that the combination of state, corporate, and privately produced visual data with facial recognition tools allowed the police to monitor and track public life in large parts of Hamburg's inner city during the summit days. Police authorities characterized visual data and algorithmic tools as objective, trustworthy, and indispensable sources of evidence but black-boxed the heterogeneity of sources, the analytical steps, and their potential implications. Critics, in turn, expressed concerns about infringements of civil rights, the trustworthiness of police authorities, and the extent of police surveillance capacities. Based on these findings, the article discusses three topics that remained blind spots in the debates but merit further attention in discussions on norms for visual data management and for governing visibility: (1) collective responsibilities in visibility management, (2) trust in visual data and facial recognition technologies, and (3) the social consequences of comprehensive visual data collection and registered faceprints.