Editorial: Visibilities and New Models of Policing
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.13239
Keith Spiller, X. L’Hoiry
This special issue seeks to add new enquiries and greater depth to discussions of the developments that have enabled and empowered citizens to carry out modes of surveillance and so become more engaged with the task of policing.
{"title":"Editorial: Visibilities and New Models of Policing","authors":"Keith Spiller, X. L’Hoiry","doi":"10.24908/ss.v17i3/4.13239","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.13239","url":null,"abstract":"This special issue seeks to add new enquiries and greater depth to discussions that have enabled and empowered citizens to carry out modes of surveillance in order to become more engaged with the task of policing.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49508874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertain Archives: Approaching the Unknowns, Errors, and Vulnerabilities of Big Data through Cultural Theories of the Archive
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.12330
Daniela Agostinho, C. D’Ignazio, Annie Ring, N. Thylstrup, Kristin Veel
From global search engines to local smart cities, from public health monitoring to personal self-tracking technologies, digital technologies continuously capture, process, and archive social, material, and affective information in the form of big data. Although the use of big data emerged from the human desire to acquire more knowledge and master more information and to eliminate human error in large-scale information management, it has become clear in recent years that big data technologies, and the archives of data they accrue, bring with them new and important uncertainties in the form of new biases, systemic errors, and, as a result, new ethical challenges that require urgent attention and analysis. This collaboratively written article outlines the conceptual framework of the Uncertain Archives research collective to show how cultural theories of the archive can be meaningfully applied to the empirical field of big data. More specifically, the article argues that this approach grounded in cultural theory can help research going forward to attune to and address the uncertainties present in the storage and analysis of large amounts of information. By focusing on the notions of the unknown, error, and vulnerability, we reveal a set of different, albeit intertwined, configurations of archival uncertainty that emerge along with the phenomenon of big data use. We regard these configurations as central to understanding the conditions of the digitally networked data archives that are a crucial component of today’s cultures of surveillance and governmentality.
{"title":"Uncertain Archives: Approaching the Unknowns, Errors, and Vulnerabilities of Big Data through Cultural Theories of the Archive","authors":"Daniela Agostinho, C. D’Ignazio, Annie Ring, N. Thylstrup, Kristin Veel","doi":"10.24908/ss.v17i3/4.12330","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.12330","url":null,"abstract":"From global search engines to local smart cities, from public health monitoring to personal self-tracking technologies, digital technologies continuously capture, process, and archive social, material, and affective information in the form of big data. Although the use of big data emerged from the human desire to acquire more knowledge and master more information and to eliminate human error in large-scale information management, it has become clear in recent years that big data technologies, and the archives of data they accrue, bring with them new and important uncertainties in the form of new biases, systemic errors, and, as a result, new ethical challenges that require urgent attention and analysis. This collaboratively written article outlines the conceptual framework of the Uncertain Archives research collective to show how cultural theories of the archive can be meaningfully applied to the empirical field of big data. More specifically, the article argues that this approach grounded in cultural theory can help research going forward to attune to and address the uncertainties present in the storage and analysis of large amounts of information. By focusing on the notions of the unknown, error, and vulnerability, we reveal a set of different, albeit intertwined, configurations of archival uncertainty that emerge along with the phenomenon of big data use. We regard these configurations as central to understanding the conditions of the digitally networked data archives that are a crucial component of today’s cultures of surveillance and governmentality.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.24908/ss.v17i3/4.12330","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47837090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When Citizens Are “Actually Doing Police Work”: The Blurring of Boundaries in WhatsApp Neighbourhood Crime Prevention Groups in The Netherlands
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.8664
A. Mols, J. Pridmore
Neighbourhood watch messaging groups have become a pervasive phenomenon in the Netherlands, despite having only recently emerged. In many neighbourhoods, street signs have been installed to make passers-by aware of active neighbourhood surveillance. In messaging groups (using WhatsApp or similar communication apps), neighbours exchange warnings, concerns, and information about incidents, emergencies, and (allegedly) suspicious situations. These exchanges often lead to neighbours actively protecting and monitoring their streets, sending messages about suspicious activities, and using camera-phones to record events. While citizen-initiated participatory policing practices in the neighbourhood can increase (experiences of) safety and social cohesion, they often default to lateral surveillance, ethnic profiling, risky vigilantism, and distrust towards neighbours and strangers. Although the use of messaging apps is central to all of them, WhatsApp neighbourhood crime prevention (WNCP) groups are heterogeneous: they vary from independent self-organised policing networks to neighbours working with and alongside community police. As suggested by one of our interviewees, this can lead to citizens “actually doing police work,” which complicates relationships between police and citizens. This paper draws on interviews and focus groups in order to examine participatory policing practices and the responsibilisation of citizens for their neighbourhood safety and security. This exploration of actual practices shows that these often diverge from the intended process and that the blurring of boundaries between police and citizens complicates issues of accountability and normalises suspicion and the responsibilisation of citizens.
{"title":"When Citizens Are “Actually Doing Police Work”: The Blurring of Boundaries in WhatsApp Neighbourhood Crime Prevention Groups in The Netherlands","authors":"A. Mols, J. Pridmore","doi":"10.24908/ss.v17i3/4.8664","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.8664","url":null,"abstract":"Neighbourhood watch messaging groups are part of an already pervasive phenomenon in The Netherlands, despite having only recently emerged. In many neighbourhoods, street signs have been installed to make passers-by aware of active neighbourhood surveillance. In messaging groups (using WhatsApp or similar communication apps), neighbours exchange warnings, concerns, and information about incidents, emergencies, and (allegedly) suspicious situations. These exchanges often lead to neighbours actively protecting and monitoring their streets, sending messages about suspicious activities, and using camera-phones to record events. While citizen-initiated participatory policing practices in the neighbourhood can increase (experiences of) safety and social cohesion, they often default to lateral surveillance, ethnic profiling, risky vigilantism, and distrust towards neighbours and strangers. Whereas the use of messaging apps is central, WhatsApp neighbourhood crime prevention (WNCP) groups are heterogeneous: they vary from independent self-organised policing networks to neighbours working with and alongside community police. As suggested by one of our interviewees, this can lead to citizens “actually doing police work,” which complicates relationships between police and citizens. This paper draws on interviews and focus groups in order to examine participatory policing practices and the responsibilisation of citizens for their neighbourhood safety and security. This exploration of actual practices shows that these often diverge from the intended process and that the blurring of boundaries between police and citizens complicates issues of accountability and normalises suspicion and the responsibilisation of citizens.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.24908/ss.v17i3/4.8664","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45097542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data and Obstacle: Police (Non)Visibility in Research on Protest Policing
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.8517
P. Ullrich
The police, and the riot police in particular, can be a rather inaccessible object of investigation. Their reservations towards research are analysed here with reference to five barriers: 1) police control of access to the field, 2) the doubly asymmetric research relationship, 3) attempts by the police to steer the research process, 4) the sceptical attitude of (potential) interviewees, and 5) interviewees’ restrained discussion behaviour. However, what appears as a hurdle from a researcher’s perspective allows structures of the object itself to be reconstructed. These include a prevalence of narratives of police “innocence” and “powerlessness” with which resistance against external aspirations for control is buttressed. The police view themselves as constantly under public scrutiny and unjustly criticised in public, and the predominant attitude towards research is therefore reserved if not hostile. The police’s definitional power in their fields of action is thus partially transferred to research on the police. However, police interference has its limits, and counterstrategies are set forth. Most of the data used come from a grounded theory methodology (GTM) project on video surveillance and countersurveillance of demonstrations, based primarily on group discussions and expert interviews with riot police.
{"title":"Data and Obstacle: Police (Non)Visibility in Research on Protest Policing","authors":"P. Ullrich","doi":"10.24908/ss.v17i3/4.8517","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.8517","url":null,"abstract":"The police, in particular the riot police, can be a rather inaccessible object of investigation, whose reservations towards research are analysed with reference to five barriers: 1) police control of access to the field, 2) the doubly asymmetric research relationship, 3) attempts by the police to steer the process, 4) the sceptical attitude of (potential) interviewees, and 5) the restrained discussion behaviour. However, what appears as a hurdle from a researcher’s perspective allows structures of the object itself to be reconstructed. These include a prevalence of narratives of police “innocence” and “powerlessness” with which resistance against external aspirations for control is buttressed. The police view themselves as constantly being under public scrutiny and being unjustly publicly criticised. In this manner the predominant attitude towards research is reserved if not hostile. The police definitional power in its fields of action is thus partially transferred to research on the police. However, police interference has its limits, and counterstrategies will be set forth. Most data used are from a grounded theory methodology (GTM) project on video surveillance and countersurveillance of demonstrations, based primarily on group discussions and expert interviews with riot police.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43468861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cat-and-Mouse Games: Dataveillance and Performativity in Urban Schools
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.7098
Roderic N. Crooks
This paper focuses on the responses of teachers and students in a South Los Angeles public high school to dataveillance regimes that were meant to control specific behaviors. Over a period of two years, a newly deployed one-to-one tablet computer program supported the integration of dataveillance regimes with previously established modes of pursuing teacher and student accountability. As tablet computers achieved ubiquity, students, teachers, and administrators challenged the ambiguous relationship between digital data and the behavior of subjects putatively described by these data. Conflicts over digital data—what data could mean, what they could stand in for, and what could be deemed normal or aberrant—emerged between school authorities and targets of dataveillance. Where school authorities often depicted their own surveillance capabilities as immediate, inescapable, and predictive, contests over the interpretation of data attenuated this power, showing it to be partial, negotiated, and retroactive, a dynamic this study refers to as interpretive resistance. This study uses a theoretical framework based on performativity of digital data to think through the implications of observed contestations around representation. Performativity conceptualizes digital data not as a set of objective, value-neutral observations but as the ability to produce statuses of norm and deviance.
{"title":"Cat-and-Mouse Games: Dataveillance and Performativity in Urban Schools","authors":"Roderic N. Crooks","doi":"10.24908/ss.v17i3/4.7098","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.7098","url":null,"abstract":"This paper focuses on the responses of teachers and students in a South Los Angeles public high school to dataveillance regimes that were meant to control specific behaviors. Over a period of two years, a newly deployed one-to-one tablet computer program supported the integration of dataveillance regimes with previously established modes of pursuing teacher and student accountability. As tablet computers achieved ubiquity, students, teachers, and administrators challenged the ambiguous relationship between digital data and the behavior of subjects putatively described by these data. Conflicts over digital data—what data could mean, what they could stand in for, and what could be deemed normal or aberrant—emerged between school authorities and targets of dataveilleance. Where school authorities often depicted their own surveillance capabilities as immediate, inescapable, and predictive, contests over the interpretation of data attenuated this power, showing it to be partial, negotiated, and retroactive, a dynamic this study refers to as interpretive resistance. This study uses a theoretical framework based on performativity of digital data to think through the implications of observed contestations around representation. Performativity conceptualizes digital data not as a set of objective, value-neutral observations but as the ability to produce statuses of norm and deviance.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44315097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive Policing for Reform? Indeterminacy and Intervention in Big Data Policing
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.10410
Aaron Shapiro
Predictive analytics and artificial intelligence are applied widely across law enforcement agencies and the criminal justice system. Despite criticism that such tools reinforce inequality and structural discrimination, proponents insist that they will nonetheless improve the equality and fairness of outcomes by countering humans’ biased or capricious decision-making. How can predictive analytics be understood simultaneously as a source of, and solution to, discrimination and bias in criminal justice and law enforcement? The article provides a framework for understanding the techno-political gambit of predictive policing as a mechanism of police reform—a discourse that I call “predictive policing for reform.” Focusing specifically on geospatial predictive policing systems, I argue that “predictive policing for reform” should be seen as a flawed attempt to rationalize police patrols through an algorithmic remediation of patrol geographies. The attempt is flawed because predictive systems operate on the sociotechnical practices of police patrols, which are themselves contradictory enactments of the state’s power to distribute safety and harm. The ambiguities and contradictions of the patrol are not resolved through algorithmic remediation. Instead, they lead to new indeterminacies, trade-offs, and experimentations based on unfalsifiable claims. I detail these through a discussion of predictive policing firm HunchLab’s use of predictive analytics to rationalize patrols and mitigate bias. Understanding how the “predictive policing for reform” discourse is operationalized as a series of technical fixes that rely on the production of indeterminacies allows for a more nuanced critique of predictive policing.
{"title":"Predictive Policing for Reform? Indeterminacy and Intervention in Big Data Policing","authors":"Aaron Shapiro","doi":"10.24908/ss.v17i3/4.10410","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.10410","url":null,"abstract":"Predictive analytics and artificial intelligence are applied widely across law enforcement agencies and the criminal justice system. Despite criticism that such tools reinforce inequality and structural discrimination, proponents insist that they will nonetheless improve the equality and fairness of outcomes by countering humans’ biased or capricious decision-making. How can predictive analytics be understood simultaneously as a source of, and solution to, discrimination and bias in criminal justice and law enforcement? The article provides a framework for understanding the techno-political gambit of predictive policing as a mechanism of police reform—a discourse that I call “predictive policing for reform.” Focusing specifically on geospatial predictive policing systems, I argue that “predictive policing for reform” should be seen as a flawed attempt to rationalize police patrols through an algorithmic remediation of patrol geographies. The attempt is flawed because predictive systems operate on the sociotechnical practices of police patrols, which are themselves contradictory enactments of the state’s power to distribute safety and harm. The ambiguities and contradictions of the patrol are not resolved through algorithmic remediation. Instead, they lead to new indeterminacies, trade-offs, and experimentations based on unfalsifiable claims. I detail these through a discussion of predictive policing firm HunchLab’s use of predictive analytics to rationalize patrols and mitigate bias. Understanding how the “predictive policing for reform” discourse is operationalized as a series of technical fixes that rely on the production of indeterminacies allows for a more nuanced critique of predictive policing.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.24908/ss.v17i3/4.10410","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47327397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monitoring Mogadishu
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.8604
A. Hills
Technology-based surveillance practices have changed the modes of policing found in the global North but have yet to influence police–citizen engagement in Southern cities such as Mogadishu, the capital of Somalia. Based on the role played by monitoring in Mogadishu’s formal security plan and in an informal neighbourhood watch scheme in Waberi district, this article uses a policy-oriented approach to generate insight into surveillance and policing in a fragile and seemingly dysfunctional environment. It shows that while watching is an integral aspect of everyday life, sophisticated technologies capable of digitally capturing real-time events play no part in crime reporting or in the monitoring of terrorist threats, and information is delivered by using basic and inclusive methods such as word of mouth, rather than by mobile telephones or social media. Indeed, the availability of technologies such as CCTV has actually resulted in the reproduction and reinforcement of older models of policing; even when the need to monitor security threats encourages residents to engage with the task of policing, their responses reflect local preferences and legacy issues dating from the 1970s and 2000s. In other words, policing practice has not been reconfigured. In Mogadishu, as in most of the world, the policing task is shaped as much by residents’ expectations as by the technologies available.
{"title":"Monitoring Mogadishu","authors":"A. Hills","doi":"10.24908/ss.v17i3/4.8604","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.8604","url":null,"abstract":"Technology-based surveillance practices have changed the modes of policing found in the global North but have yet to influence police–citizen engagement in Southern cities such as Mogadishu, the capital of Somalia. Based on the role played by monitoring in Mogadishu’s formal security plan and in an informal neighbourhood watch scheme in Waberi district, this article uses a policy-oriented approach to generate insight into surveillance and policing in a fragile and seemingly dysfunctional environment. It shows that while watching is an integral aspect of everyday life, sophisticated technologies capable of digitally capturing real-time events play no part in crime reporting or in the monitoring of terrorist threats, and information is delivered by using basic and inclusive methods such as word of mouth, rather than by mobile telephones or social media. Indeed, the availability of technologies such as CCTV has actually resulted in the reproduction and reinforcement of older models of policing; even when the need to monitor security threats encourages residents to engage with the task of policing, their responses reflect local preferences and legacy issues dating from the 1970s and 2000s. In other words, policing practice has not been reconfigured. In Mogadishu, as in most of the world, the policing task is shaped as much by residents’ expectations as by the technologies available.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44813162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Work of Art in the Age of Artificial Intelligence: What Artists Can Teach Us About the Ethics of Data Practice
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.10821
Luke Stark, K. Crawford
Problematic use of data, patterns of bias emerging in AI systems, and the role of platforms like Facebook and Twitter during elections have thrown the issue of data ethics into sharp relief. Yet the focus of conversations about data ethics has centered on computer scientists, engineers, and designers, with far less attention paid to the digital practices of artists and others in the cultural sector. Artists have historically deployed new technologies in unexpected and often prescient ways, making them a community able to speak directly to the changing and nuanced ethical questions faced by those who use data and machine learning systems. We conducted interviews with thirty-three artists working with digital data, with a focus on how artists prefigure and commonly challenge data practices and ethical concerns of computer scientists, researchers, and the wider population. We found artists were frequently working to produce a sense of defamiliarization and critical distance from contemporary digital technologies in their audiences. The ethics of using large-scale data and AI systems for these artists were generally developed in ongoing conversations with other practitioners in their communities and in relation to a longer history of art practice.
{"title":"The Work of Art in the Age of Artificial Intelligence: What Artists Can Teach Us About the Ethics of Data Practice","authors":"Luke Stark, K. Crawford","doi":"10.24908/ss.v17i3/4.10821","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.10821","url":null,"abstract":"Problematic use of data, patterns of bias emerging in AI systems, and the role of platforms like Facebook and Twitter during elections have thrown the issue of data ethics into sharp relief. Yet the focus of conversations about data ethics has centered on computer scientists, engineers, and designers, with far less attention paid to the digital practices of artists and others in the cultural sector. Artists have historically deployed new technologies in unexpected and often prescient ways, making them a community able to speak directly to the changing and nuanced ethical questions faced by those who use data and machine learning systems. We conducted interviews with thirty-three artists working with digital data, with a focus on how artists prefigure and commonly challenge data practices and ethical concerns of computer scientists, researchers, and the wider population. We found artists were frequently working to produce a sense of defamiliarization and critical distance from contemporary digital technologies in their audiences. The ethics of using large-scale data and AI systems for these artists were generally developed in ongoing conversations with other practitioners in their communities and in relation to a longer history of art practice.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.24908/ss.v17i3/4.10821","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43184893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resisting Digital Surveillance Reform: The Arguments and Tactics of Communications Service Providers
Pub Date: 2019-09-07 | DOI: 10.24908/ss.v17i3/4.10836
W. Chivers
Communications surveillance in the UK has been an increasingly contentious issue since the early 2000s. The Investigatory Powers Act 2016 is the result of a long series of attempts by the UK government to reform communications surveillance legislation. The consultations on this legislation—and on its precursor, the Draft Communications Data Bill 2012—offer unique insight into how such efforts generate resistance to surveillance. This article draws attention to the role of communications service providers (CSPs)—who are increasingly being responsibilised to collect and retain communications data—within a multi-actor network of resistance. It also identifies the reasons CSPs gave for resisting these proposed reforms. Content analysis of the consultation documents reveals three themes that were central to the CSPs’ arguments: technology, territory, and trust. The article concludes by considering the implications for understanding resistance to contemporary digital surveillance.
{"title":"Resisting Digital Surveillance Reform: The Arguments and Tactics of Communications Service Providers","authors":"W. Chivers","doi":"10.24908/ss.v17i3/4.10836","DOIUrl":"https://doi.org/10.24908/ss.v17i3/4.10836","url":null,"abstract":"Communications surveillance in the UK has been an increasingly contentious issue since the early 2000s. The Investigatory Powers Act 2016 is the result of a long series of attempts by the UK government to reform communications surveillance legislation. The consultations on this legislation—and on its precursor, the Draft Communications Data Bill 2012—offer unique insight into how such efforts generate resistance to surveillance. This article draws attention to the role of communications service providers (CSPs)—who are increasingly being responsibilised to collect and retain communications data—within a multi-actor network of resistance. It also identifies the reasons CSPs gave for resisting these proposed reforms. Content analysis of the consultation documents reveals three themes that were central to the CSPs’ arguments: technology, territory, and trust. The article concludes by considering the implications for understanding resistance to contemporary digital surveillance.","PeriodicalId":47078,"journal":{"name":"Surveillance & Society","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2019-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.24908/ss.v17i3/4.10836","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48467971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}