Pub Date: 2023-01-01 | DOI: 10.1177/20539517231163174
A. Bernier, Maili Raven-Adams, D. Zaccagnini, B. Knoppers
Health organisations use numerous different mechanisms to collect biomedical data, to determine the applicable ethical, legal and institutional conditions of use, and to reutilise the data in accordance with the relevant rules. These methods and mechanisms differ from one organisation to another, and involve considerable specialised human labour, including record-keeping functions and decision-making committees. In reutilising data at scale, however, organisations struggle to meet demands for data interoperability and for rapid inter-organisational data exchange due to reliance on legacy paper-based records and on the human-initiated administration of the permissions accompanying data. The adoption of permissions-recording and permissions-administration tools that can be implemented at scale across numerous organisations is imperative. Further, these must be implemented in a manner that does not compromise the nuanced and contextual adjudicative processes of research ethics committees, data access committees, and biomedical research organisations. The tools required to implement a streamlined system of biomedical data exchange have in great part been developed. Indeed, there remains but a small core of functions that must further be standardised and automated to enable the recording and administration of permissions in biomedical research data with minimal human effort. Recording ethical provenance in this manner would enable biomedical data exchange to be performed at scale, in full respect of the ethical, legal, and institutional rules applicable to different datasets. This holds despite foundational differences between the distinct legal and normative frameworks applicable to the distinct communities and organisations that share data with one another.
Title: Recording the ethical provenance of data and automating data stewardship
Journal: Big Data & Society
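The permissions-recording the abstract calls for can be pictured as a small machine-readable record attached to each dataset, against which a proposed reuse is checked before any committee is involved. A minimal sketch, assuming hypothetical field names and purpose categories (loosely in the spirit of machine-readable data-use codes, not the authors' actual system):

```python
from dataclasses import dataclass

@dataclass
class PermissionRecord:
    """Machine-readable ethical provenance for one dataset (illustrative)."""
    dataset_id: str
    permitted_purposes: set      # purposes consent/approvals already cover
    prohibited_uses: set         # uses ruled out by consent or law
    approvals_required: list     # bodies that must sign off, e.g. a DAC

def check_use(record: PermissionRecord, purpose: str, obtained_approvals: list) -> str:
    """Automate the routine decisions; refer everything else to a human committee."""
    if purpose in record.prohibited_uses:
        return "denied"
    if purpose not in record.permitted_purposes:
        return "refer-to-committee"   # nuanced adjudication stays with humans
    missing = [a for a in record.approvals_required if a not in obtained_approvals]
    return "approved" if not missing else "refer-to-committee"
```

The point of the sketch is the division of labour the abstract argues for: unambiguous cases resolve automatically and at scale, while anything outside the recorded permissions is routed back to the contextual judgement of ethics and data access committees.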
Pub Date: 2023-01-01 | DOI: 10.1177/20539517231159629
Jathan Sadowski, Kaitlin Beegle
The self-proclaimed usurper of Web 2.0, Web3 quickly became the center of attention. Not long ago, the public discourse was saturated with projects, promises, and peculiarities of Web3. Now the spotlight has swung around to focus on the many faults, failures, and frauds of Web3. The cycles of technological trends and investment bubbles seem to be accelerating in such a way as to escape any attempt at observing them in motion before they crash, and then everybody moves on to the next thing. Importantly, Web3 was not an anomaly or curiosity in the broader tech industry. It articulates patterns that existed before Web3 and will exist after. Web3 should be understood as a case study of innovation within the dominant model of Silicon Valley venture capitalism. Our focus in this article is on understanding how the movement around Web3 formed through an interplay between (1) normative concepts and contestations related to ideas of “decentralization” and (2) political economic interests and operations related to the dynamics of fictitious capital. By offering a critical analysis of Web3, our goal is also to show how any potentially progressive (or, as we call them, “expansive”) forms of Web3 development struggle for success, recognition, and attention amid the wild excesses of hype and investment devoted to “extractive” forms of Web3. In the process, these expansive forms provide us with a better view of how different arrangements of technopolitics can exist at the same time, side-by-side, in complicated ways.
Title: Expansive and extractive networks of Web3
We collect and analyze a corpus of more than 300,000 political emails sent during the 2020 US election cycle. These emails were sent by over 3000 political campaigns and organizations including federal and state level candidates as well as Political Action Committees. We find that in this corpus, manipulative tactics—techniques using some level of deception or clickbait—are the norm, not the exception. We measure six specific tactics senders use to nudge recipients to open emails. Three of these tactics—“dark patterns”—actively deceive recipients through the email user interface, for example, by formatting “from:” fields so that they create the false impression the message is a continuation of an ongoing conversation. The median active sender uses such tactics 5% of the time. The other three tactics, like sensationalistic clickbait—used by the median active sender 37% of the time—are not directly deceptive, but instead, exploit recipients’ curiosity gap and impose pressure to open emails. This can further expose recipients to deception in the email body, such as misleading claims of matching donations. Furthermore, by collecting emails from different locations in the US, we show that senders refine these tactics through A/B testing. Finally, we document disclosures of email addresses between senders in violation of privacy policies and recipients’ expectations. Cumulatively, these tactics undermine voters’ autonomy and welfare, exacting a particularly acute cost for those with low digital literacy. We offer the complete corpus of emails at https://electionemails2020.org for journalists and academics, which we hope will support future work.
Pub Date: 2023-01-01 | DOI: 10.1177/20539517221145371
Authors: Arunesh Mathur, Angelina Wang, Carsten Schwemmer, Maia Hamin, Brandon M Stewart, Arvind Narayanan
Title: Manipulative tactics are the norm in political emails: Evidence from 300K emails from the 2020 US election cycle
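The “from:”-field dark pattern the abstract describes, formatting a message so it reads as the continuation of an ongoing conversation, lends itself to a simple detection heuristic. The sketch below is my own illustration of such a check, not the authors' measurement pipeline; the function name and threshold logic are assumptions:

```python
import re

# Reply/forward markers ("RE:", "Re :", "Fwd:", "FW:") at the start of a subject
# line are the visual cue of an existing thread.
FAKE_THREAD = re.compile(r"^\s*(re|fwd?)\s*:", re.IGNORECASE)

def looks_like_fake_thread(subject: str, has_prior_thread: bool) -> bool:
    """Flag a subject styled as a reply when no prior conversation exists.

    `has_prior_thread` would come from the recipient's mailbox history;
    here it is just a boolean input for illustration.
    """
    return bool(FAKE_THREAD.match(subject)) and not has_prior_thread
```

A genuine reply to an existing thread passes the check; only the deceptive case, a reply marker with no conversation behind it, is flagged. A real study would of course need to reconstruct thread history from mailbox data rather than take it as a flag.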
Pub Date: 2023-01-01 | DOI: 10.1177/20539517231158996
Rolien Hoyng
The corporate discourse on the circular economy holds that the growth of the electronics industry, driven by continuous innovation, does not imperil ecological sustainability. To achieve sustainable growth, its advocates propose optimizing recycling by means of artificial intelligence and sets of interrelated datacentric and algorithmic technologies. Drawing on critical data and algorithm studies, theories of waste, and empirical research, this paper investigates ecological ethics in the context of the datacentric and algorithmically mediated circular economy. It foregrounds the indeterminate and fickle material nature of waste as well as the uncertainties inherent in, and stemming from, datafication and computation. My question is: how do the rationalities, affordances, and dispositions of datacentric and algorithmic technologies perform and displace notions of corporate responsibility and transparency? In order to answer this question, I compare the smart circular economy to the informal recycling practices that it claims to replace, and I analyze relations between waste matter and data as well as distributions of agency. Specifically, I consider transitions and slippages between response-ability and responsibility. Conceptually, I bring process-relational or immanence-based philosophies such as Bergson's and Deleuze's into a debate about relations between waste matter and data and the ambition of algorithmic control over waste. My aim is not to demand heightened corporate responsibility enacted through control but to rethink responsibility in the smart circular economy along the lines of Amoore's cloud ethics, carving out a position of critique beyond either a deontological perspective that reinforces corporate agency or a new-materialist denunciation of the concept.
Title: Ecological ethics and the smart circular economy
Pub Date: 2023-01-01 | DOI: 10.1177/20539517231164115
Thomas Walsh
To better understand the COVID-19 pandemic, public health researchers turned to “big mobility data”—location data collected from mobile devices by companies engaged in surveillance capitalism. Publishing formerly private big mobility datasets, firms trumpeted their efforts to “fight” COVID-19, and researchers highlighted the potential of big mobility data to improve infectious disease models tracking the pandemic. However, these collaborations are defined by asymmetries in information, access, and power. The release of data is characterized by a lack of obligation on the part of the data provider towards public health goals, particularly those committed to a community-based, participatory model. There is a lack of appropriate reciprocities between data company, data subject, researcher, and community. People are de-centered and surveillance is de-linked from action, while the agendas of public health and surveillance capitalism grow closer. This article argues that the current use of big mobility data in the COVID-19 pandemic represents a poor approach with respect to community and person-centered frameworks.
Title: Modeling COVID-19 with big mobility data: Surveillance and reaffirming the people in the data
Pub Date: 2023-01-01 | DOI: 10.1177/20539517231153808
Petter Törnberg
The rise of digital platforms has in recent years redefined contemporary capitalism—provoking discussions on whether platformization should be understood as bringing an altogether new form of capitalism, or as merely a continuation and intensification of existing neoliberal trends. This paper draws on regulation theory to examine social regulation in digital capitalism, arguing for understanding digital capitalism as continuities of existing capitalist trends coming to produce discontinuities. The paper makes three main arguments. First, it situates digital capitalism as a continuation of longer-running post-Fordist trends of financialization, digitalization, and privatization—converging in the emergence of digital proprietary markets, owned and regulated by transnational platform companies. Second, as the platform model is founded on monopolizing regulation, platforms come into direct competition with states and public institutions, a competition they pursue through a set of distinct technopolitical strategies to claim the power to govern—resulting in a geographically variegated process of institutional transformation. Third, while the digital proprietary markets are continuities of existing trends, they bring new pressures and affordances, thus producing discontinuities in social regulation. We examine such discontinuities in relation to three aspects of social regulation: (a) from neoliberalism to techno-feudalism; (b) from Taylorist hierarchies toward algorithmic herds and technoliberal subjectivity; and (c) from postmodernity toward an automated consumer culture.
Title: How platforms govern: Social regulation in digital capitalism
Pub Date: 2023-01-01 | DOI: 10.1177/20539517221146122
Edward B. Kang
There is a gap in existing critical scholarship that engages with the ways in which current “machine listening” or voice analytics/biometric systems intersect with the technical specificities of machine learning. This article examines the sociotechnical assemblage of machine learning techniques, practices, and cultures that underlie these technologies. After engaging with various practitioners working in companies that develop machine listening systems, ranging from CEOs, machine learning engineers, data scientists, and business analysts, among others, I bring attention to the centrality of “learnability” as a malleable conceptual framework that bends according to various “ground-truthing” practices in formalizing certain listening-based prediction tasks for machine learning. In response, I introduce a process I call Ground Truth Tracings to examine the various ontological translations that occur in training a machine to “learn to listen.” Ultimately, by further examining this notion of learnability through the aperture of power, I take insights acquired through my fieldwork in the machine listening industry and propose a strategically reductive heuristic through which the epistemological and ethical soundness of machine learning, writ large, can be contemplated.
Title: Ground truth tracings (GTT): On the epistemic limits of machine learning
Pub Date: 2023-01-01 | DOI: 10.1177/20539517231168100
Jonathan Gruber, E. Hargittai
Using the Internet means encountering algorithmic processes that influence what information a user sees or hears. Existing research has shown that people's algorithm skills vary considerably, that they develop individual theories to explain these processes, and that their online behavior can reflect these understandings. Yet, there is little research on how algorithm skills enable people to use algorithms to their own benefit and to avoid harms they may elicit. To fill this gap in the literature, we explore the extent to which people understand how the online systems and services they use may be influenced by personal data that algorithms know about them, and whether users change their behavior based on this understanding. Analyzing 83 in-depth interviews from five countries about people's experiences with researching and searching for products and services online, we show how being aware of personal data collection helps people understand algorithmic processes. However, this does not necessarily enable users to influence algorithmic output, because currently, options that help users control the level of customization they encounter online are limited. Besides the empirical contributions, we discuss research design implications based on the diversity of the sample and our findings for studying algorithm skills.
Title: The importance of algorithm skills for informed Internet use
Pub Date: 2023-01-01 | DOI: 10.1177/20539517231158994
A. Pasek, Hunter Vaughan, Nicole Starosielski
The climate impacts of the information and communications technology sector—and Big Data especially—are a topic of growing public and industry concern, though attempts to quantify its carbon footprint have produced contradictory results. Some studies argue that information and communications technology's global carbon footprint is set to rise dramatically in the coming years, requiring urgent regulation and sectoral degrowth. Others argue that information and communications technology's growth is largely decoupled from its carbon emissions, and so provides valuable climate solutions and a model for other industries. This article assesses these debates, arguing that, due to data frictions and incommensurate study designs, the question is likely to remain irresolvable at the global scale. We present six methodological factors that drive this impasse: fraught access to industry data, bottom-up vs. top-down assessments, system boundaries, geographic averaging, functional units, and energy efficiencies. In response, we propose an alternative approach that reframes the question in spatial and situated terms: a relational footprinting that demarcates particular relationships between elements—geographic, technical, and social—within broader information and communications technology infrastructures. Illustrating this model with one of the global Internet's most overlooked components—subsea telecommunication cables—we propose that information and communications technology futures would be best charted not only in terms of quantified total energy use, but in specifying the geographical and technical parts of the network that are the least carbon-intensive, and which can therefore provide opportunities for both carbon reductions and a renewed infrastructural politics. In parallel to the politics of (de)growth, we must also consider different network forms.
Title: The world wide web of carbon: Toward a relational footprinting of information and communications technology's climate impacts
Pub Date: 2023-01-01  DOI: 10.1177/20539517231164118
Sun-ha Hong
Post-truth tells the story of a public descending into unreason, aided and abetted by platforms and other data-driven systems. But this apparent collapse of epistemic consensus is, I argue, also dominated by loud and aggressive commitment to the idea of facts and Reason – a site where an imagined modern past is being pillaged for vestigial legitimacy. This article identifies two common practices of such reappropriation and mythologisation. (1) Fact signalling involves performative invocations of facts and Reason, which are then weaponised to discredit communicative rivals and establish affective solidarity. This is often closely tied to (2) fact nostalgia: the cultivation of an imagined past when ‘facts were facts’ and we, the good liberal subjects, could recognise facts when we saw them. Both tendencies are underwritten by a myth of connection: the still enduring narrative that maximising the circulation of information regardless of provenance or meaning will eventually yield a more rational public – even as data-driven systems tend to undermine the very conditions for such a public. Drawing on examples from YouTube-amplified ‘alternative influencers’ in the American right, and the normative discourses around fact-checking practices, I argue that this continued reliance on the vestigial authority of the modern past is a pernicious obstacle in normative debates around data-driven publics, keeping us stuck on the same dead-end scripts of heroically suspicious individuals and ignorant, irrational masses.
Title: Fact signalling and fact nostalgia in the data-driven society