Big Brother is Watching and Controlling You
Pub Date: 2021-02-03 | DOI: 10.2307/j.ctv1c9hmnq.24
Rob Kitchin
This chapter examines how data-driven technologies are deployed for mass surveillance and social credit scoring in China, and the threat they pose to democracy. Over the last decade, China has put in place a state-sponsored system of mass automated surveillance. It has successfully limited the Internet to state-approved websites, apps, and social media, corralling users into a monitored, non-anonymous environment and preventing access to overseas media and information. Since December 2019, all mobile phone users registering new SIM cards must agree to a facial recognition scan to prove their identity. The state has also facilitated the transition from anonymous cash to traceable digital transactions. Most significantly, the state has created a social credit scoring system that pulls together various forms of data into a historical archive and uses it to assign each citizen and company a set of scores that affect their lifestyle and ability to trade. On the one hand, this is about making the credit information publicly accessible, so that those who are deemed untrustworthy are publicly shamed and lose their reputation. On the other hand, it is about guilt-by-association and administering collective punishment. This sociality works to minimize protest and unrest and reinforce the logic of the system.
{"title":"Big Brother is Watching and Controlling You","authors":"Rob Kitchin","doi":"10.2307/j.ctv1c9hmnq.24","DOIUrl":"https://doi.org/10.2307/j.ctv1c9hmnq.24","url":null,"abstract":"This chapter examines how data-driven technologies are deployed as mass surveillance and social credit scoring in China and their threat to democracy. Over the last decade, China has put in place a state-sponsored system of mass automated surveillance. It has successfully managed to limit the Internet to state-approved websites, apps, and social media, corralling users into a monitored, non-anonymous environment and preventing access to overseas media and information. From December of 2019, all mobile phone users registering new SIM cards must agree to a facial recognition scan to prove their identity. The state has also facilitated the transition from anonymous cash to traceable digital transactions. Most significantly, the state has created a social credit scoring system that pulls together various forms of data into a historical archive and uses it to assign each citizen and company a set of scores that affects their lifestyles and ability to trade. On the one hand, this is about making the credit information publicly accessible, so that those who are deemed untrustworthy are publicly shamed and lose their reputation. On the other hand, it is about guilt-by-association and administering collective punishment. This sociality works to minimize protest and unrest and reinforce the logic of the system.","PeriodicalId":446623,"journal":{"name":"Data Lives","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125066208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Matter of Life and Death
Pub Date: 2021-02-03 | DOI: 10.1332/policypress/9781529215144.003.0026
Rob Kitchin
This chapter addresses the life of COVID-19 data: how it has been used to reshape our daily lives by directing intervention measures, and how new data-driven technologies have been deployed to help tackle the spread of the coronavirus. Specifically, it examines infection and death rates and the use of surveillance technologies designed to trace contacts, monitor movement, and regulate people's behaviour. The use of these technologies raised questions and active debate concerning the data life cycle and their effects on civil liberties and governmentality. Indeed, most of the critical analysis of contact tracing apps focused on their potential infringement of civil liberties, particularly privacy, since they require fine-grained knowledge about social networks and health status and, for some, location. The concern was that intimate details about a person's life would be shared with the state without sufficient data protection measures that would foreclose data re/misuse and ensure that data would be deleted after 14 days (at which point it becomes redundant) rather than stored indefinitely.
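The 14-day retention point lends itself to a brief illustration. The following Python sketch is purely hypothetical (the ExposureRecord fields and the purge_expired function are invented for illustration and are not drawn from any actual contact tracing app); it shows the kind of automatic deletion rule that critics argued should be guaranteed and auditable:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

RETENTION_DAYS = 14  # the retention window discussed above


@dataclass
class ExposureRecord:
    """Hypothetical minimal record a contact tracing app might hold."""
    contact_id: str          # pseudonymous identifier of the encountered device
    recorded_at: datetime    # when the proximity event was logged
    duration_minutes: float  # length of the contact event


def purge_expired(records: List[ExposureRecord],
                  now: Optional[datetime] = None) -> List[ExposureRecord]:
    """Drop records older than the retention window; keep the rest."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r.recorded_at >= cutoff]


if __name__ == "__main__":
    demo = [
        ExposureRecord("abc123", datetime.utcnow() - timedelta(days=20), 12.0),
        ExposureRecord("def456", datetime.utcnow() - timedelta(days=3), 25.0),
    ]
    kept = purge_expired(demo)
    print(f"{len(demo) - len(kept)} record(s) purged, {len(kept)} retained")
```

The sketch simply makes explicit the kind of deletion guarantee that the debate over civil liberties turned on.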
{"title":"A Matter of Life and Death","authors":"Rob Kitchin","doi":"10.1332/policypress/9781529215144.003.0026","DOIUrl":"https://doi.org/10.1332/policypress/9781529215144.003.0026","url":null,"abstract":"This chapter addresses the life of COVID-19 data, how it has been used to reshape our daily lives by directing intervention measures, and how new data-driven technologies have been deployed to try and help tackle the spread of the coronavirus. Specifically, it examines infection and death rates and the use of surveillance technologies designed to trace contacts, monitor movement, and regulate people's behaviour. The use of these technologies raised questions and active debate concerning the data life cycle and their effects on civil liberties and governmentality. Indeed, most of the critical analysis of contact tracing apps focused on its potential infringement of civil liberties, particularly privacy, since they require fine-grained knowledge about social networks and health status and, for some, location. The concern was that intimate details about a person's life would be shared with the state without sufficient data protection measures that would foreclose data re/misuse and ensure that data would be deleted after 14 days (at which point it becomes redundant) or stored indefinitely.","PeriodicalId":446623,"journal":{"name":"Data Lives","volume":"773 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132986448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In Data We Trust
Pub Date: 2021-02-03 | DOI: 10.2307/j.ctv1c9hmnq.9
Rob Kitchin
This chapter discusses issues of data quality and veracity in open datasets, using a variety of examples from the Irish data system. These examples include the Residential Property Price Register (RPPR), the Dublin Dashboard project, the TRIPS database, and Irish crime data. There are a number of issues with Irish crime data, such as crimes being recorded against the police stations that handle them rather than the locations where they were committed. There are also issues with the standardization of crime categorization, with some police officers recording the same crimes in slightly different ways, and with the timeliness of recording. Moreover, there are difficulties in retrieving data from the crime management software system. In addition to errors, every dataset has issues of representativeness — that is, the extent to which the data faithfully represents that which it seeks to measure. In generating data, processes of extraction, abstraction, generalization and sampling can introduce measurement error, noise, imprecision and bias. Yet internationally, much work has been expended on formulating data-quality guidelines and standards, on getting those generating and sharing data to adhere to them, and on promoting the importance of reporting this information to users.
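By way of illustration, the following Python sketch shows the sort of basic data-quality report such guidelines call for, assuming a hypothetical open dataset loaded into a pandas DataFrame with date_occurred, date_recorded and category columns (the column names and checks are invented for illustration and do not describe the actual Irish crime data):

```python
import pandas as pd


def quality_report(df: pd.DataFrame) -> dict:
    """Summarise common veracity issues: completeness, timeliness, consistency."""
    report = {}

    # Completeness: share of missing values in each column.
    report["missing_share"] = df.isna().mean().round(3).to_dict()

    # Timeliness: lag between an event occurring and it being recorded.
    lag_days = (df["date_recorded"] - df["date_occurred"]).dt.days
    report["median_recording_lag_days"] = float(lag_days.median())

    # Consistency: category labels that differ only by case or spacing.
    raw_labels = df["category"].dropna().unique()
    normalised = {label.strip().lower() for label in raw_labels}
    report["raw_category_labels"] = len(raw_labels)
    report["normalised_category_labels"] = len(normalised)

    return report


# Example usage with a small synthetic dataset.
df = pd.DataFrame({
    "date_occurred": pd.to_datetime(["2020-01-01", "2020-01-05", "2020-01-07"]),
    "date_recorded": pd.to_datetime(["2020-01-03", "2020-01-20", None]),
    "category": ["Burglary", "burglary ", "Theft"],
})
print(quality_report(df))
```

Even a report this simple surfaces the kinds of issue described above: missing records, recording lags and inconsistent categorization.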
{"title":"In Data We Trust","authors":"Rob Kitchin","doi":"10.2307/j.ctv1c9hmnq.9","DOIUrl":"https://doi.org/10.2307/j.ctv1c9hmnq.9","url":null,"abstract":"This chapter discusses issues of data quality and veracity in open datasets, using a variety of examples from the Irish data system. These examples include the Residential Property Price Register (RPPR), the Dublin Dashboard project, the TRIPS database, and Irish crime data. There are a number of issues with Irish crime data, such as crimes being recorded in relation to the police stations that handle them, rather than the location they are committed. There are also issues in the standardization of crime categorization, with some police officers recording the same crimes in slightly different ways, and also in timeliness of recording. Moreover, there are difficulties of retrieving data from the crime management software system. In addition to errors, every dataset has issues of representativeness — that is, the extent to which the data faithfully represents that which it seeks to measure. In generating data, processes of extraction, abstraction, generalization and sampling can introduce measurement error, noise, imprecision and bias. Yet internationally, there has been much work expended on formulating data-quality guidelines and standards, trying to get those generating and sharing data to adhere to them, and promoting the importance of reporting this information to users.","PeriodicalId":446623,"journal":{"name":"Data Lives","volume":"358 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121637601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How to Lose (and Regain) 3.6 Billion Euros
Pub Date: 2021-02-03 | DOI: 10.2307/j.ctv1c9hmnq.10
Rob Kitchin
This chapter imagines a conversation between two senior civil servants when they realize that the Irish government has lost 3.6 billion euros through a spreadsheet error. The Assistant Secretary of the Department of Finance reports to the General Secretary that the accountant was not sure how to classify a loan to the Housing Finance Agency (HFA) from the National Treasury Management Agency (NTMA). They had assumed that it might be adjusted for elsewhere in the General Government Debt calculations, but it was not. As such, the loan appears twice in the national accounts, once as an asset of the NTMA and once as a liability of the HFA. The General Secretary then asks why the data entry error was not picked up. The Assistant Secretary answers that everybody assumed that somebody else had dealt with it. The accounts were returned, nobody spotted the mistake, and everyone moved on to other tasks.
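The underlying arithmetic can be made explicit with a small sketch. The figures and function names below are hypothetical (only the 3.6 billion euro loan mirrors the chapter); the point is that an intra-government loan counted without being netted out inflates the headline debt by exactly its value:

```python
# All figures in millions of euros; purely illustrative.
INTRA_GOVERNMENT_LOAN = 3_600  # NTMA loan to the Housing Finance Agency


def gross_debt(liabilities: dict) -> int:
    """Naive total: sum every body's reported liabilities."""
    return sum(liabilities.values())


def consolidated_debt(liabilities: dict, intra_government: int) -> int:
    """Correct total: net out amounts owed to other parts of government."""
    return gross_debt(liabilities) - intra_government


liabilities = {
    "HFA": 5_000 + INTRA_GOVERNMENT_LOAN,  # includes the loan owed to the NTMA
    "rest_of_government": 160_000,         # placeholder figure
}

print(gross_debt(liabilities))                                # 168600 -- overstated
print(consolidated_debt(liabilities, INTRA_GOVERNMENT_LOAN))  # 165000 -- loan netted out
```

The difference between the two totals is exactly the unadjusted loan, which is how a single misclassified cell flows straight into the headline figure.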
{"title":"How to Lose (and Regain) 3.6 Billion Euros","authors":"Rob Kitchin","doi":"10.2307/j.ctv1c9hmnq.10","DOIUrl":"https://doi.org/10.2307/j.ctv1c9hmnq.10","url":null,"abstract":"This chapter imagines a conversation between two senior civil servants when they realize that the Irish government has lost 3.6 billion euros through a spreadsheet error. The Assistant Secretary of the Department of Finance reports to the General Secretary that the accountant was not sure how to classify a loan to the Housing Finance Agency (HFA) from the National Treasury Management Agency (NTMA). They had assumed that it might be adjusted for elsewhere in the General Government Debt calculations, but it was not. As such, the government debt appears twice in the national accounts, once as an asset for the NTMA and once as a liability for the HFA. The General Secretary then asks why the data entry error was not picked up. The Assistant Secretary answers that everybody assumed that somebody else had dealt with it. The accounts got returned, nobody spotted the mistake, and everyone moved onto to other tasks.","PeriodicalId":446623,"journal":{"name":"Data Lives","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126958214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Secret Science of Formulas
Pub Date: 2021-02-03 | DOI: 10.2307/j.ctv1c9hmnq.16
Rob Kitchin
This chapter reveals how choices and decisions concerning the analytics applied to data shape outcomes, through an account of a working session between academics and a government minister to devise and implement an 'objective' method for allocating government funding. The nub of the problem was that the Minister had a very particular outcome in mind. He wanted the investment from his new scheme to be spread across as many constituencies as possible, and certainly across the ones that traditionally voted for his party or those that might swing away from the government. However, he did not want to be seen to allocate the funding on political grounds, nor to run the scheme on a competitive basis. Instead, he wanted to be able to say that the monies had been apportioned using a statistical formula that assessed need objectively. Creating a formula that produced a map which pleased the Minister proved to be trickier than anticipated. In part, this was because he had his own ideas about which variables were good indicators of relative deprivation and need.
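A minimal sketch of the kind of formula under negotiation is given below. The indicators, weights and constituency figures are entirely hypothetical; they are, however, precisely the levers the chapter shows being adjusted until the resulting map looked right to the Minister:

```python
# Hypothetical deprivation indicators per constituency, each rescaled to 0-1
# (higher values indicate greater need).
constituencies = {
    "Northfield": {"unemployment": 0.70, "low_education": 0.55, "lone_parents": 0.40},
    "Southbrook": {"unemployment": 0.30, "low_education": 0.25, "lone_parents": 0.20},
    "Westvale":   {"unemployment": 0.50, "low_education": 0.60, "lone_parents": 0.65},
}

# The contested part: which indicators count, and how heavily they are weighted.
weights = {"unemployment": 0.5, "low_education": 0.3, "lone_parents": 0.2}


def need_index(indicators: dict, weights: dict) -> float:
    """Weighted composite 'need' score for a single constituency."""
    return sum(weights[name] * indicators[name] for name in weights)


def allocate(budget: float) -> dict:
    """Split the budget in proportion to each constituency's need score."""
    scores = {name: need_index(ind, weights) for name, ind in constituencies.items()}
    total = sum(scores.values())
    return {name: round(budget * score / total, 2) for name, score in scores.items()}


# Changing a single weight or swapping an indicator reshuffles who gets what.
print(allocate(10_000_000))
```

The allocation is 'objective' only once the indicators and weights have been chosen, which is where the politics re-enters.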
{"title":"The Secret Science of Formulas","authors":"Rob Kitchin","doi":"10.2307/j.ctv1c9hmnq.16","DOIUrl":"https://doi.org/10.2307/j.ctv1c9hmnq.16","url":null,"abstract":"This chapter reveals how choices and decisions concerning the analytics applied to data shapes outcomes, through an account of a working session between academics and a government minister to devise and implement an 'objective' method for allocating government funding. The nub of the problem was the Minister had a very particular outcome in mind. He wanted the investment from his new scheme to be spread across as many constituencies as possible, and certainly the ones that traditionally voted for his party or those that might swing away from the government. However, he did not want to be seen to allocate the funding on political grounds, nor run the scheme on a competitive basis. Instead, he wanted to be able to say that the monies had been apportioned using a statistical formula that assessed need objectively. Creating a formula for producing a map that pleased the Minister proved to be trickier than anticipated. In part, this was because he had his own ideas about which variables were good indicators of relative deprivation and need.","PeriodicalId":446623,"journal":{"name":"Data Lives","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125136836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guinea Pigs
Pub Date: 2021-02-03 | DOI: 10.2307/j.ctv1c9hmnq.23
Rob Kitchin
This chapter discusses the implications for citizens of data-driven management by charting the issues of living in a smart city testbed area, demonstrated through a walking tour for local residents led by a public official. It was clear to the recently hired community liaison officer for the city's smart docklands team that the key expected outcome was to convince local residents that there was nothing to fear from the trialling of new technologies in their area and to get their buy-in. However, interaction with the local community had been a secondary concern to those establishing the initiative. They had been much more focused on the technical and business aspects of building the testbed and securing investment than on how it related to those who lived and worked there. Nevertheless, the community liaison officer tries to convince the citizens that the initiative does not collect personal data and that it provides job opportunities.
{"title":"Guinea Pigs","authors":"Rob Kitchin","doi":"10.2307/j.ctv1c9hmnq.23","DOIUrl":"https://doi.org/10.2307/j.ctv1c9hmnq.23","url":null,"abstract":"This chapter discusses the implications for citizens of data-driven management by charting the issues of living in a smart city testbed area, demonstrated through a walking tour for local residents, led by a public official. It was clear to the recently hired community liaison officer for the city's smart docklands team that the key expected outcome was to convince local residents that there was nothing to fear from the trialling of new technologies in their area and to get their buy-in. However, interaction with the local community had been a secondary concern to those establishing initiative. They had been much more focused on the technical and business aspects of building the testbed and securing investment than how it related to those that lived and worked there. Nevertheless, the community liaison officer tries to convince the citizens that they do not collect personal data and that the initiative provides job opportunities.","PeriodicalId":446623,"journal":{"name":"Data Lives","volume":"436 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122678444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Security Theatre
Pub Date: 2021-02-03 | DOI: 10.2307/j.ctv1c9hmnq.25
Rob Kitchin
This chapter investigates the Kafkaesque procedures involved in data-driven airport security. It considers the experience of a passenger who keeps being selected for a special security check. While waiting in line for the check, he has a conversation with another passenger who has been suffering the same drama for four years. That passenger recounts how he has been trying to find a way to get off the list, but the Transportation Security Administration officers do not seem to know why a passenger is on it. Moreover, there is no process for getting off it. A man once told the passenger that not even the people who programmed the system know why a person is selected; they do not know what the critical data points are because the system self-learns.
{"title":"Security Theatre","authors":"Rob Kitchin","doi":"10.2307/j.ctv1c9hmnq.25","DOIUrl":"https://doi.org/10.2307/j.ctv1c9hmnq.25","url":null,"abstract":"This chapter investigates the Kafkaesque procedures involved in data-driven airport security. It considers the experience of a passenger who keeps being selected for a special security check. While waiting in line for the security check, he has a conversation with another passenger who had been suffering the same drama for four years. The passenger recounts how he has been trying to find a way to get off the list, but the Transportation Security Administration officers do not seem to know why a passenger is on it. Moreover, there is no process to get off it. A man told the passenger once that not even the people that programmed the system know why a person is selected; they do not know what the critical data points are because the system self-learns.","PeriodicalId":446623,"journal":{"name":"Data Lives","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131108503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}