{"title":"数据促进发展的关键框架:将数据关系和人工智能历史化","authors":"Alexander Martin Mussgnug, Sabina Leonelli","doi":"10.1111/dech.12857","DOIUrl":null,"url":null,"abstract":"<p><b>Nick Couldry and Ulises A. Mejias, <i>The Costs of Connection: How Data Is Colonizing Human Life and Appropriating it for Capitalism</i>. Redwood City, CA: Stanford University Press, 2019. 352 pp. £ 15.50 paperback</b>. <b>Matteo Pasquinelli, <i>The Eye of The Master: A Social History of Artificial Intelligence</i>. London: Verso Books, 2023. 272 pp. £ 13.85 paperback</b>.</p><p>Recent years have witnessed increasing efforts to leverage emerging data sources and digital technologies in the design and delivery of international development programmes. Today, big data and artificial intelligence (AI) in particular have become a formative part of development work. This is evidenced by the establishment of intergovernmental innovation labs such as the UN Global Pulse, academic research centres such as the University of California Berkeley's Global Policy Lab, and a plethora of industry-driven initiatives. Under the banner of ‘data for development’, large-scale data integration for logistical, managerial and administrative purposes is heralded as revolutionizing capacity-building efforts in low-resourced nations and territories. Besides others, novel data technologies promise to transform access to social services and legal systems, the efficient use of natural resources, logistical efforts towards distributing food and medical care, educational programmes to improve literacy and computational skills, and effective coordination between local, national and transnational agencies.</p><p>In the face of much hype and enthusiasm for such applications, some have expressed concerns regarding the increasing datafication of development work, starting from the very umbrella term of ‘development’ under which these initiatives often sit (e.g. Dirlik, <span>2014</span>). The emphasis on ‘development’ may reflect an implicit evaluation of social contexts as being more or less ‘adequate’ depending on the extent to which they offer access to digital technologies. This, however, may not reflect other criteria for whether or not a given context is underdeveloped, which include access to social welfare, medical services and free trade among other possible options, nor may it acknowledge the very different impact that digitalization and AI-powered technologies may have depending on local socio-cultural norms and preferences. Relatedly, Laura Mann (<span>2018</span>) has criticized the almost exclusive focus of data for development applications on humanitarian aid at the expense of economic and socio-ecological development. All too often, public‒private partnerships in the design and deployment of these technologies contribute to the annexation of communities into existing economic, epistemic and technical infrastructures in a manner that ultimately benefits the Global North rather than allowing for the building of capacity in the Global South. 
For instance, agricultural development initiatives pushing toward greater data collection and openness might extract information from local contexts in a manner that ultimately most benefits multinational agribusinesses rather than local farms, especially given the large budgets needed to analyse the data and transform the resulting insights into usable products, and the lack of consultation with farming communities over which products to develop in the first place (Bronson, <span>2022</span>; <span>Leonelli</span>, forthcoming; Rotz et al., <span>2019</span>). Others have expressed concerns over private and public actors ‘ethics dumping’ the risks and harms associated with the testing of emerging socio-technical systems in the Global South under the umbrella of development (see Mohamed et al., <span>2020</span>). Communities that resist such datafication risk paying a high price — directly, by being excluded from aid programmes, and indirectly, as knowledge systems are increasingly shaped in a way that is biased toward highly datafied contexts.</p><p>Emerging applications of big data and machine learning to poverty estimation and targeting present one example through which to concretize these critiques. Effective poverty relief requires up-to-date poverty statistics at a level of granularity and accuracy that enables targeted interventions, policy making and programme monitoring. Traditional poverty statistics are drawn from survey measurements, census data or social registries, which, respectively, are commonly unavailable at the needed resolution, outdated, or reliant on highly inaccurate proxies (Jerven, <span>2013</span>; Kidd et al., <span>2021</span>). In pursuit of cheap, timely and granular poverty statistics, researchers have begun to train machine-learning models on mobile network data and satellite imagery to predict poverty metrics in the Global South (Blumenstock et al., <span>2015</span>; Yeh et al., <span>2020</span>). Recently, these efforts have moved from the proof-of-concept stage to real-world application. An example of such emerging implementations is a collaboration between academic researchers, communication providers, the NGO GiveDirectly and the government of Togo. In response to the COVID-19 crisis, the Togolese government implemented an emergency cash transfer programme targeting informal workers in urban settings. In a second phase, GiveDirectly expanded the emergency relief to rural areas. To determine programme eligibility, researchers at the University of California Berkeley's Global Policy Lab employed machine learning and big data for geographic and individual-level targeting. Using call detail records obtained from mobile phone providers, the developers estimated a proxy of consumption for each mobile phone subscriber in the poorest cantons. Registered subscribers with the lowest predicted proxy of consumption automatically received aid in the form of mobile cash transfers (Aiken et al., <span>2022</span>).</p>
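<p>To make the mechanics of such a pipeline concrete, the following minimal sketch illustrates the general approach: train a supervised model on phone-derived features for a survey-labelled subsample, predict a consumption proxy for all registered subscribers, and enrol the lowest-ranked share. This is a schematic illustration under simplifying assumptions, not the actual Aiken et al. (2022) system; the simulated data, feature names and targeting share are all hypothetical.</p><pre><code># Hypothetical sketch of consumption-proxy targeting (simulated data;
# NOT the Aiken et al. pipeline).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(seed=0)

# Simulated per-subscriber features derived from call detail records
# (e.g. call volume, top-up amounts, mobility radius).
n_subscribers = 10_000
features = rng.normal(size=(n_subscribers, 3))

# Survey-measured consumption exists only for a small labelled
# subsample; it serves as the training target and is itself a proxy.
labelled = rng.choice(n_subscribers, size=1_000, replace=False)
survey_consumption = features[labelled] @ np.array([0.5, -0.3, 0.2])
survey_consumption += rng.normal(scale=0.5, size=labelled.size)

model = GradientBoostingRegressor().fit(features[labelled], survey_consumption)

# Predict a consumption proxy for every registered subscriber and
# enrol the lowest-ranked share in mobile cash transfers.
predicted = model.predict(features)
n_targeted = int(0.25 * n_subscribers)  # illustrative targeting share
eligible = np.argsort(predicted)[:n_targeted]
print(f"{eligible.size} of {n_subscribers} subscribers enrolled")
</code></pre><p>The final ranking-and-cutoff step makes plain that such a system operates, in practice, as a rationing mechanism over whoever appears in the subscriber registry, a point taken up below.</p>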
<p>Such uses of machine learning and alternative data sources in the automated targeting of poverty relief raise numerous ethical issues. Amongst these are privacy concerns, as communication records that are simultaneously sensitive and challenging to anonymize effectively (de Montjoye et al., <span>2019</span>) are made available without the informed consent of subscribers. The stark contrast in public and judicial considerations given to such emergency-response data grabs in lower- versus higher-resourced countries illustrates the disparate treatment of privacy rights in already historically disenfranchised settings (Bradford et al., <span>2020</span>; Ienca and Vayena, <span>2020</span>; Lubell, <span>2020</span>). Potential privacy harms must also be regarded in the light of the risk of function creep — that is, the use of technical systems beyond the purpose they were initially engineered or authorized for. The use of mobile network data in the individual-level targeting of poverty aid, for instance, enables the building of socio-technical systems and the acquisition of technical know-how that can also be used in the microtargeting of commercial and administrative services, in ways that portend new forms of corporate and state surveillance.1 In contrast to these potential harms, the immediate benefits of machine learning applications to poverty relief are often much less apparent, continuing a long tradition of targeting efforts that rely on rather inaccurate proxy means tests (Brown et al., <span>2018</span>), now taken as ‘ground truth’ for the evaluation of machine learning predictions. In emerging machine learning applications to poverty aid, problematic dynamics of targeting in international development intersect with questionable epistemic and ethical norms of applied machine learning practice. For instance, a focus on the targeting methodology itself at the expense of exploring broader questions of policy design and their ideological underpinnings (Kidd, <span>2013</span>) echoes the often-narrow prioritization of predictive accuracy in the evaluation of machine learning applications (Mussgnug, <span>2022</span>). In the same vein, perspectives on targeting as first and foremost a rationing mechanism (Kidd, <span>2017</span>) resonate strongly with the exclusion of citizens who lack mobile phone access, or who resist datafication, from targeting and delivery systems relying on mobile network records and mobile cash respectively.</p>
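<p>The epistemic worry about proxy means tests serving as ‘ground truth’ can likewise be made concrete. In the hypothetical sketch below, a model is trained and scored against a proxy that captures only observable assets, so the reported accuracy is near perfect even though the predictions track actual consumption far more poorly. The simulated data, weights and noise structure are illustrative assumptions only.</p><pre><code># Hypothetical illustration: scoring predictions against a proxy
# 'ground truth' can overstate how well a model tracks the welfare
# quantity of actual interest. All data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(seed=1)

n = 5_000
assets = rng.normal(size=(n, 3))  # observable asset indicators
unobserved = rng.normal(size=n)   # welfare shocks that no registry records

# True consumption depends on both; the proxy means test scores
# households on observable assets only.
weights = np.array([0.5, 0.3, 0.2])
true_consumption = assets @ weights + unobserved
proxy = assets @ weights

model = LinearRegression().fit(assets, proxy)
predicted = model.predict(assets)

# Near-perfect against the proxy, far weaker against true consumption.
print("R2 vs proxy:", round(r2_score(proxy, predicted), 2))
print("R2 vs true consumption:", round(r2_score(true_consumption, predicted), 2))
</code></pre><p>The design point is simply that a high score against a flawed proxy says little about whether aid reaches those who are actually poor.</p>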
<p>Machine learning-based poverty targeting is representative of many donor-driven development projects promoting the use of big data and AI in public health, agriculture, education and poverty relief. Such data for development applications continue a prolonged trend of shifting funds from state institutions and ministries to project-based interventions headed by non-state actors (Lipton, <span>1992</span>; Mkandawire, <span>2005</span>). Development discourse often contrasts such targeted interventions both with structural economic reforms and with long-term transformational development. At a time when the use of big data and AI has moved from an ambitious promise to a formative part of development work, this article reviews foundational scholarship on the historical trajectory and contemporary implications of datafication and AI. These perspectives can help us not only to understand data for development applications narrowly as targeted interventions but also to situate them firmly as part of broader structural social and economic transformations.</p><p>Our motivation is twofold. First, a better understanding of the <i>longue durée</i> of datafication and development can attune us to the complex entanglements that must be considered in navigating the responsible design and implementation of data for development applications. Second, overarching historical and analytical frameworks can connect domains of scholarship in a manner that supports much-needed collaboration and exchange between social scientists, practitioners, policy makers and scholars from the humanities invested in the responsible infrastructuring, design and deployment of emerging technologies in international development. To this end, we also consider the extent to which broader framings, in turn, might benefit from closer consideration of the practical, technical and social challenges of research practices on the ground. In doing so, we seek to foster a two-way exchange between technical and socio-cultural scholarship, as well as close engagement with groups with experience in and/or expertise of relevance to the deployment of the technologies in question. In what follows, we introduce, synthesize and expand on two timely publications that offer overarching analytical perspectives by historically situating data platforms and AI. We first discuss Nick Couldry and Ulises A. Mejias's (<span>2019</span>) landmark study <i>The Costs of Connection: How Data Is Colonizing Human Life and Appropriating it for Capitalism</i>, which inquires into the entanglement of capitalism, coloniality and datafication. We then connect these ideas to Matteo Pasquinelli's (<span>2023</span>) labour theory of AI, as developed in his recent work <i>The Eye of The Master: A Social History of Artificial Intelligence</i>.</p><p><i>The Costs of Connection</i> has played an important role over the last five years as a key scholarly analysis of the socio-political effects of the increasing datafication of human life (see Couldry and Mejias, <span>2023</span>). Looking specifically at the intersection between data work, sociology of labour, global history and political economy, the authors argue that the nature and implications of this datafication can only be understood by delineating its foundation in a newly emerging form of capitalism. In this type of capitalism, human life becomes increasingly intertwined with digital technologies in a manner that renders it extractable for capitalist gain. Their ‘data colonialism’ is set apart from parallel and complementary accounts such as Shoshana Zuboff's (<span>2019</span>) <i>Surveillance Capitalism</i>, Nick Srnicek's (<span>2017</span>) <i>Platform Capitalism</i> or Sarah Myers West's (<span>2019</span>) <i>Data Capitalism</i> through its emphasis on the ways in which capitalism itself rests upon centuries of historical colonialism and continues to be entangled with colonialist legacies.</p><p>The book is structured in three parts. It begins with the authors identifying current datafication's dual foundations in both colonialism and capitalism. Akin to how historical colonialism appropriated and extracted the natural resources of conquered territories, data colonialism identifies human life itself as the new ‘raw material’. To be prepared for extraction and commodification, however, human life must first be transformed into data relations — that is, rendered into means of social interaction and self-reflection facilitated by digital tools. Reorganized as data relations, human life can then be abstracted into data, analogous to industrial capitalism's abstraction of work as labour. Convincingly, if briefly, Couldry and Mejias link this quest for a new input for capitalism to the dwindling purchasing power of the lower and middle classes in the face of growing inequality and the depletion of natural resources.
In more detail, the book outlines how this expansion of capitalist production is accompanied by a radical transformation of political and economic dynamics that the authors label the ‘Cloud Empire’. The Cloud Empire denotes the reconfiguration of resources and imaginations around the data colonialist agenda. Big platform businesses concentrated in the USA and China, such as Google, Facebook, Tencent and Baidu, are the key players of this emerging economic order. Operating increasingly as monopolies-monopsonies, they not only concentrate economic power but increasingly shape their own regulatory spaces (i.e., platform governance) and actively seek to collaborate with and steer state authorities in pursuit of an increasingly seamless appropriation and extraction of social life. The authors link this rendering of social life as ready for capitalist exploitation to colonialism's appropriation of new territories and of the people within them. Here, their argument draws substantially on comparisons and analogies between historical colonialism and current developments. For instance, the authors illustrate how data colonialism relies on distinct doctrines, from an emphasis on a ‘digital community’ and ‘personalization’ to the misnomer of ‘raw data’, in a manner that mirrors colonial ideologies such as <i>terra nullius</i> and the ‘civilized world’. In the same vein, the book compares the end user license agreements of digital services, which strip users of their data ownership, to the Spanish Requerimiento of 1513.2</p><p>The second part of the book shifts focus to the implications of data colonialism for human life and our epistemology of the social. Couldry and Mejias situate the effects of data colonialism within a broader historical transformation of social knowledge, beginning with the emergence of quantification and statistical thinking. These ‘technologies of distance’ (Porter, <span>1995</span>) come to institutionalize a new framing of ‘normality’ (Hacking, <span>1990</span>) and new means of social control. The restructuring of life around data relations builds on this process, making human activity increasingly legible and governable not only for capitalist exploitation but also for social scientific research. The authors contend that, in restructuring their epistemic landscapes in a manner aligned with the increasing abstraction of human life as data, computational social scientists become complicit in data colonialism, slowly eroding their capacity for critical engagement with its character and implications as their methods come to rely on the very platforms and tools they study. These implications not only shape the study of social life in aggregate but extend to the domain of individuals and their (lack of) agency over datafication processes. As many of today's critical data scholars have also argued, data colonialism threatens to erode the personal autonomy of its data subjects. Couldry and Mejias note how ‘a continuously trackable life is a dispossessed life, whose space is continuously invaded and subjected to extraction by external power’ (p. 157).
As data colonialism entrenches itself and progressively infiltrates our thinking with its ideologies, we risk unlearning the norms and freedoms associated with our autonomy that render the effective safeguarding of our rights and the identification of harms possible in the first place — a situation where ‘we’ embraces the vast majority of countries and social realities around the globe, regardless of widely differing existing understandings of autonomy, social agency and rights.</p><p>This leads the reader into the final part of the book. Part three reiterates the main threads of their argument, emphasizing how data colonialism renews colonial dynamics and augments capitalism's domain in a manner that renders human life itself the direct input of economic extraction. The book shines in its postscript, articulating possible means of resistance. Here, Couldry and Mejias advocate for the imagination of alternative counterpresents, the challenging of data colonialism's underlying ideologies through critical data literacy programmes, and the purposeful construction of ‘seamful’ technologies that oppose the seamless extraction of human life pursued by data colonialism.</p><p>Data colonialism presents a forceful framing (and labelling) that can help contextualize critical scholarship on data for development and position it within the <i>longue durée</i> of colonialism and capitalism's interrelations. It is not surprising that Couldry and Mejias’ book, and related scholarship, has strongly influenced the direction of critical data studies since its publication in 2019. Its dual focus on capitalism and colonialism provides a unique perspective from which to analyse the entanglement of the commercial interests of the data economy with the persistent colonial heritage of international development activities that is at the very core of the data for development movement. Moreover, data colonialism brings to the fore the strategic groundwork underlying current datafication. The authors convincingly anatomize how datafication is not only superimposed upon social life but also relies on its reconfiguration and on the adoption of suitable ideologies that promote its aims and obscure its implications. Uses of social data for development purposes, in ways too close for comfort to commercial applications and speculative investment by big tech companies in the Global North and, increasingly, China, ultimately rest upon this reorganization of social life. A more recent book tackling the history of AI, rather than data platforms, ends up further developing and strengthening this insight, while also connecting it to current developments in the ways data are put to work through machine learning algorithms, predictive engines, large language models and neural networks, among others. In <i>The Eye of the Master</i>, Pasquinelli seems, prima facie, concerned with a very different set of problems to those motivating Couldry and Mejias. Pasquinelli's goal is to use historical analysis to show how the development of algorithmic thinking has been predicated on transforming the very ways in which such thinking is conceptualized and operationalized. In his view, algorithmic thinking has very deep historical roots, emerging early in human history ‘as a material abstraction, through the interaction of mind with tools, in order to change the world and solve mostly economic and social problems’ (p. 16).
In fact, he argues that labour itself may constitute a primitive form of algorithm: an attempt not only to act towards the production of specific outputs, but also to evaluate the logistics and implications of such production, thereby enabling the social organization of work. In Pasquinelli's interpretation, the automation of labour, and particularly of mental labour, should be regarded as overlapping with the automation of knowledge production. To support this argument, the book examines several salient episodes in the history of computing, including Charles Babbage's effort to develop the Analytical Engine and Frank Rosenblatt's invention of the first artificial neural network (the ‘perceptron’), showing ‘computation emerging not only as a means for augmenting labour but also as an instrument (and implicit metrics) for measuring it’ (p. 17).</p><p>In Pasquinelli's view, therefore, AI was not born as a way to automate human thinking and various forms of reasoning, but rather as a way to measure and evaluate efforts to carry out cognitive tasks and organize social hierarchies (p. 21). Building on this reading of the history of AI, Pasquinelli ends on a note that closely resonates with Couldry and Mejias’ rendition of data colonialism. In his words: ‘the replacement of traditional jobs by AI should be studied together with the displacement and multiplication of precarious, underpaid, and marginalised jobs across a global economy’ (p. 21), with ‘AI and ghost work appearing to be the two sides of the one and same mechanism of labour automaton and social psychometrics’ (p. 22). AI, grounded on the immense advancements in data accumulation and analysis characterizing this era of human history, thereby threatens to foster and expand existing social, digital and economic divides, particularly between groups with the skills and opportunities to take advantage of these tools and those without the relevant abilities, capacity and/or living conditions.</p><p>Pasquinelli's book pioneers the application of historical and political epistemology, along with the study of the evolution of mechanical thinking and automation, to the study of algorithmic thinking. His argument intersects with — and advances — Couldry and Mejias’ defence of the enormous significance of long-standing efforts to develop methods for social quantification, in ways that explain the explosive growth of quantification throughout the 20th century. Pasquinelli's analysis also illuminates the extent to which this phenomenon extended across the globe, rather than being restricted to Western countries — for instance, by highlighting the importance of social mathematics in Hindu culture.</p><p>What can we learn from this kind of scholarship on the history of data platforms, colonialism and AI, as many critical examiners and practitioners of international development alike scramble to responsibly and sustainably innovate development efforts? Interdisciplinary works such as <i>The Costs of Connection</i> and <i>The Eye of the Master</i> attune us to the complex entanglements surrounding data for development applications by situating them firmly within broader structural social and economic transformations. We advocate for the relevance of such scholarship not only as critical perspectives <i>on</i> data for development pursuits but as analytical frameworks <i>in</i> the design and implementation of responsible digital innovation itself.
To this end, broader critical framings can connect domains of scholarship in a manner that supports collaboration and exchange between social scientists, practitioners, policy makers and scholars from the humanities invested in the responsible infrastructuring, design and deployment of emerging technologies in international development. Consider, for instance, the use of machine learning in the targeting of poverty aid in Togo and its ethical and social implications, as sketched briefly at the beginning of this article.</p><p>Arguably, the biggest challenge in the responsible design of such data for development applications is not the identification of potential ethical implications in the absolute and abstract, but rather their assessment within a given social and economic context relative to the limited available alternatives. Here, off-the-shelf principles or recommendations can only provide limited guidance, owing to the highly contextual, local and technology-specific nature of the trade-offs faced. Instead, navigating and fostering the sustainable and responsible integration of AI and big data in international development requires: technical expertise; familiarity with circumstances on the ground; consultative mechanisms whereby users can provide feedback and direction to technology development; and a sensitivity towards the complex historical, social and epistemological dynamics that surround these applications, cultivated through regular consultation both with contributors and with existing research on local conditions. The latter becomes central once we acknowledge that responsible innovation demands not a narrow perspective on the most immediate consequences of adopting a specific technological solution, but further consideration of its role in reinforcing or shifting existing power dynamics and its situatedness within broader socio-economic and ecological transformations.</p><p>It is here, in particular, that critical historical scholarship as exemplified by the works of Pasquinelli, Couldry and Mejias can serve a vital function for this emerging strand of international development research. By examining contemporary datafication and the proliferation of AI within the <i>longue durée</i> of technological and economic transformations, <i>The Costs of Connection</i> and <i>The Eye of the Master</i> illuminate and attune us to the complex entanglements of data for development efforts with broader social, technological, political, ecological and economic dynamics. Moreover, overarching historical and conceptual frameworks such as those surveyed here can provide much-needed connective tissue and a foundation for interdisciplinary scholarship <i>on and within</i> the datafication of international development. Pasquinelli, Mejias and Couldry, for instance, bring into dialogue research from domains such as labour theory, critical data and media studies, AI ethics, decolonialism, neocolonialism, postcolonialism and international development. Building upon such work by connecting and integrating a wide range of disciplinary perspectives in researching data for development applications is central to better understanding the existing, and anticipating the future, implications of the ongoing transformations of international development efforts. Pioneering efforts, such as Mirca Madianou's (<span>2019</span>) critical empirical investigation of data practices in humanitarian responses to the refugee crisis, exemplify the fruitfulness of this approach.
More, however, can and must be done, not only in critically examining data for development applications post hoc but in bringing interdisciplinary perspectives to bear on their responsible design and implementation itself.</p><p>To this end and in closing, we want to highlight three ways in which the accounts discussed here could be further developed into an even more powerful perspective from which to scrutinize the dynamics surrounding the use of big data for development. First, we advocate for a greater emphasis on the essential coloniality of datafication. We echo Densua Mumford's (<span>2021</span>) insightful critique, which stresses the ways in which colonialism often appears as a motif rather than the main theme of <i>The Costs of Connection</i>, and the book's only cursory treatment of postcolonial and decolonial scholarship.3 Central to understanding the ongoing legacies of colonialism, also in data for development applications, are the ways in which it shaped and continues to influence our systems of knowledge, i.e., the coloniality of knowledge (Quijano, <span>2000</span>). Here, research building on Couldry and Mejias’ rendition of data colonialism needs to engage more extensively with the ways in which our current reliance on and understanding of data constitutes an extension, if not the culmination, of a problematic universalism and conception of objectivity and epistemic authority (Ricaurte, <span>2019</span>). This becomes particularly relevant in the context of data for development applications, which often involve the imposition of a particular epistemology on historically colonized communities by the Global North and, increasingly, China (e.g. Lynsey, <span>2018</span>; see also Gravett, <span>2021</span>). Understanding both Western-led data for development projects and framings of Sino-African data relations as forms of postcolonial resistance (Eisenman and Shinn, <span>2023</span>: 237, 336) requires us to go beyond articulating parallels between colonial and data extractivism by tracing consequential epistemological, political and socio-economic linkages between colonialism and current data for development practice.</p><p>Second, and taking our cue from Pasquinelli's call for ‘deconnectivism’, this work could be taken as motivation and inspiration to undo some of the foundational fabric — including the underpinning philosophy and belief in the progressive power of digital technologies — that constitutes contemporary enactments of AI governance and data-intensive practices in today's world and the neoliberal market. As Couldry, Mejias and Pasquinelli all point out, it is high time to re-appropriate this space as one engaged with bottom-up, community-grounded social action, and mindful of the enormous divides and disparities that characterize social interactions with technology in every corner of the globe, whether in high-resourced or low-resourced countries. Computing, mathematics, data infrastructures, modelling, statistics and related technical domains and forms of intervention constitute a space of political struggle, even if often presented as a neutral, objective, dehumanized and thereby apolitical terrain (Beaulieu and Leonelli, <span>2021</span>).
Reclaiming human agency in the midst of these developments, and ensuring that ideas of public interest and the common good are continuously debated and remain central to technological applications, is ever more urgent as AI accelerates and takes over a greater fraction of human activities in the coming months and years.</p><p>Last but not least, it is crucial to remind ourselves that humans are unavoidably enmeshed in a complex ecosystem and an increasingly fragile web of life, and that data affects all organic forms on the planet in ways that provide both unrivalled opportunities and existential risks for our species. We thus suggest a less anthropocentric framing and a more substantial engagement with how our relationship with the nonhuman environment — including plants, animals, microorganisms, insects and fungi — is both foundational to and shaped by datafication today. Long-standing colonial doctrines of nature as disconnected from human life and primed for boundless appropriation lie at the core of today's data-intensive practices of ‘knowledge extraction’. In turn, this datafication and the underlying manufacturing of new data relations not only abstract and reorganize social life but also transform human relations with nature. Further expanding data colonialism's focus along these lines would render it a more powerful framework for analysing data for development, for at least three reasons. First, applications, for instance in agricultural development, rely heavily on biological and environmental data. Much can be gained from understanding uses not only of human but also of non-human data in light of their colonial and capitalist genealogy. Second, a thesis so developed would provide an analytic lens from which to scrutinize the often-anthropocentric epistemology of development work that isolates economic deprivation from issues such as climate justice and environmental sustainability.</p><p>This tendency becomes evident when the promised benefits of data for development are eulogized without acknowledging the devastating environmental impact of ever-growing data and computational infrastructures (Dhar, <span>2020</span>). Third, tackling the climate crisis — and the related decline in planetary health — is arguably critical to all aspects of development work, given its well-documented links with ever more frequent natural disasters, mass migration, expanding social inequities and deepening failures in public health and food security. Unless this fundamental insight informs the responsible creation and use of data-intensive technologies for development, thereby giving centre stage to the interconnections between inhabitants of our planet, there is no hope of technology fostering sustainable and effective remedies to human suffering.</p>","PeriodicalId":48194,"journal":{"name":"Development and Change","volume":"55 5","pages":"1109-1121"},"PeriodicalIF":3.0000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/dech.12857","citationCount":"0"}
Long-standing colonial doctrines of nature as disconnected from human life and primed for boundless appropriation lie at the core of today's data-intensive practices of ‘knowledge extraction’. In turn, this datafication and the underlying manufacturing of new data relations not only abstract and reorganize social life but also transforms human relations with nature. Further expanding data colonialism's focus along these lines would render it a more powerful framework for analysing data for development for at least three reasons. First, applications, for instance in agricultural development, rely heavily on biological and environmental data. Much can be gained from understanding not only uses of human but also non-human data in light of their colonial and capitalist genealogy. Second, a so-developed thesis would provide an analytic lens from which to scrutinize the often-anthropocentric epistemology of development work that isolates economic deprivation from issues such as climate justice and environmental sustainability.</p><p>This tendency becomes evident when the promised benefits of data for development are eulogized without acknowledging the devastating environmental impact of ever-growing data and computational infrastructures (Dhar, <span>2020</span>). Third, tackling the climate crisis — and related decline in planetary health — is arguably critical to all aspects of development work, given its well-documented links with ever more frequent natural disasters, mass migration, expanding social inequities and deepening failures in public health and food security. Unless this fundamental insight informs the responsible creation and use of data-intensive technologies for development, thereby giving central stage to the interconnections between inhabitants of our planet, there is no hope of technology fostering sustainable and effective remedies to human suffering.</p>\",\"PeriodicalId\":48194,\"journal\":{\"name\":\"Development and Change\",\"volume\":\"55 5\",\"pages\":\"1109-1121\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/dech.12857\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Development and Change\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/dech.12857\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"DEVELOPMENT STUDIES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Development and Change","FirstCategoryId":"90","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/dech.12857","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DEVELOPMENT STUDIES","Score":null,"Total":0}
A Critical Framing of Data for Development: Historicizing Data Relations and AI
Nick Couldry and Ulises A. Mejias, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating it for Capitalism. Redwood City, CA: Stanford University Press, 2019. 352 pp. £ 15.50 paperback. Matteo Pasquinelli, The Eye of The Master: A Social History of Artificial Intelligence. London: Verso Books, 2023. 272 pp. £ 13.85 paperback.
Recent years have witnessed increasing efforts to leverage emerging data sources and digital technologies in the design and delivery of international development programmes. Today, big data and artificial intelligence (AI) in particular have become a formative part of development work. This is evidenced by the establishment of intergovernmental innovation labs such as the UN Global Pulse, academic research centres such as the University of California Berkeley's Global Policy Lab, and a plethora of industry-driven initiatives. Under the banner of ‘data for development’, large-scale data integration for logistical, managerial and administrative purposes is heralded as revolutionizing capacity-building efforts in low-resourced nations and territories. Among other things, novel data technologies promise to transform access to social services and legal systems, the efficient use of natural resources, logistical efforts towards distributing food and medical care, educational programmes to improve literacy and computational skills, and effective coordination between local, national and transnational agencies.
In the face of much hype and enthusiasm for such applications, some have expressed concerns regarding the increasing datafication of development work, starting from the very umbrella term of ‘development’ under which these initiatives often sit (e.g. Dirlik, 2014). The emphasis on ‘development’ may reflect an implicit evaluation of social contexts as being more or less ‘adequate’ depending on the extent to which they offer access to digital technologies. This, however, may not reflect other criteria for whether or not a given context is underdeveloped, which include access to social welfare, medical services and free trade among other possible options, nor may it acknowledge the very different impact that digitalization and AI-powered technologies may have depending on local socio-cultural norms and preferences. Relatedly, Laura Mann (2018) has criticized the almost exclusive focus of data for development applications on humanitarian aid at the expense of economic and socio-ecological development. All too often, public‒private partnerships in the design and deployment of these technologies contribute to the annexation of communities into existing economic, epistemic and technical infrastructures in a manner that ultimately benefits the Global North rather than allowing for the building of capacity in the Global South. For instance, agricultural development initiatives pushing toward greater data collection and openness might extract information from local contexts in a manner that ultimately most benefits multinational agribusinesses rather than local farms, especially given the large budgets needed to analyse the data and transform the resulting insights into useable products and the lack of consultation with farming communities over which products to develop in the first place (Bronson, 2022; Leonelli, forthcoming; Rotz et al., 2019). Others have expressed concerns over private and public actors ‘ethics dumping’ risks and harms associated with the testing of emerging socio-technical systems in the Global South under the umbrella of development (see Mohamed et al., 2020). Communities that resist such datafication risk paying a high price — directly, by being excluded from aid programmes, and indirectly, as knowledge systems are increasingly shaped in a way that is biased toward highly datafied contexts.
Emerging applications of big data and machine learning to poverty estimation and targeting present one example through which to concretize these critiques. Effective poverty relief requires up-to-date poverty statistics at a level of granularity and accuracy that enables targeted interventions, policy making and programme monitoring. Traditional poverty statistics are taken from survey measurements, census data or social registries, which are, respectively, commonly unavailable at the needed resolution, outdated, or reliant on highly inaccurate proxies (Jerven, 2013; Kidd et al., 2021). In pursuit of cheap, timely and granular poverty statistics, researchers have begun to train machine-learning models on mobile network data and satellite imagery to predict poverty metrics in the Global South (Blumenstock et al., 2015; Yeh et al., 2020). Recently, these efforts have moved from the proof-of-concept stage to real-world application. An example of such emerging implementations is a collaboration between academic researchers, communication providers, the NGO GiveDirectly and the government of Togo. In response to the COVID-19 crisis, the Togolese government implemented an emergency cash transfer programme targeting informal workers in urban settings. In a second phase, GiveDirectly expanded the emergency relief to rural areas. To determine programme eligibility, researchers at the University of California Berkeley's Global Policy Lab employed machine learning and big data for geographic and individual-level targeting. Using call detail records obtained from mobile phone providers, the developers estimated a proxy of consumption for each mobile phone subscriber in the poorest cantons. Registered subscribers with the lowest predicted proxy of consumption automatically received aid in the form of mobile cash transfers (Aiken et al., 2022).
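To make the pipeline just described more concrete, the following minimal sketch (in Python, using synthetic data and hypothetical feature names, not the Togo programme's actual code) illustrates the basic logic: a model is trained on a survey sample for which both phone-usage features and a measured consumption proxy are available, predictions are generated for all registered subscribers, and a quantile cut-off determines eligibility.

```python
# Minimal, illustrative sketch of ML-based poverty targeting.
# Synthetic data and hypothetical features; not the deployed pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(seed=0)

# Hypothetical call-detail-record features per subscriber, e.g. calls per
# day, unique contacts, night-time activity share, average top-up value.
n_survey, n_subscribers, n_features = 1_000, 50_000, 4
X_survey = rng.normal(size=(n_survey, n_features))

# Synthetic consumption proxy for surveyed households: the 'ground truth'
# that the model is trained against.
true_weights = np.array([0.5, 0.3, -0.2, 0.4])
y_survey = X_survey @ true_weights + rng.normal(scale=0.5, size=n_survey)

# Train on the survey sample, then predict for all registered subscribers.
model = GradientBoostingRegressor().fit(X_survey, y_survey)
X_all = rng.normal(size=(n_subscribers, n_features))
predicted_consumption = model.predict(X_all)

# Eligibility is a simple quantile cut-off: the bottom 10% of predicted
# consumption automatically receives mobile cash transfers.
cutoff = np.quantile(predicted_consumption, 0.10)
eligible = predicted_consumption <= cutoff
print(f"{eligible.sum()} of {n_subscribers} subscribers flagged as eligible")
```

Worth noting in this design is that eligibility rests entirely on a threshold over model predictions, so any systematic prediction error translates directly into who does and does not receive aid.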
Such uses of machine learning and alternative data sources in the automated targeting of poverty relief raise numerous ethical issues. Amongst these are privacy concerns, as communication records that are simultaneously sensitive and challenging to anonymize effectively (de Montjoye et al., 2019) are made available without the informed consent of subscribers. The stark contrast in public and judicial considerations given to such emergency-response data grabs in lower- versus higher-resourced countries illustrates the disparate treatment of privacy rights in already historically disenfranchised settings (Bradford et al., 2020; Ienca and Vayena, 2020; Lubell, 2020). Potential privacy harms must also be regarded in the light of the risk of function creep — that is, the use of technical systems beyond the purpose they were initially engineered or authorized for. The use of mobile network data in the individual-level targeting of poverty aid, for instance, enables the building of socio-technical systems and the acquisition of technical know-how that can also be used in the microtargeting of commercial and administrative services in ways that portend new forms of corporate and state surveillance. In contrast to these potential harms, the immediate benefits of machine learning applications to poverty relief are often much less apparent, continuing a long tradition of targeting efforts that rely on rather inaccurate proxy means tests (Brown et al., 2018), now taken as ‘ground truth’ for the evaluation of machine learning predictions. In emerging machine learning applications to poverty aid, problematic dynamics of targeting in international development intersect with questionable epistemic and ethical norms of applied machine learning practice. For instance, a focus on the targeting methodology itself at the expense of broader questions of policy design and their ideological underpinnings (Kidd, 2013) echoes the often-narrow prioritization of predictive accuracy in the evaluation of machine learning applications (Mussgnug, 2022). In the same vein, perspectives on targeting as first and foremost a rationing mechanism (Kidd, 2017) resonate strongly with the exclusion of citizens who lack mobile phone access or who resist datafication from targeting and delivery systems that rely on mobile network records and mobile cash, respectively.
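The worry about a narrow prioritization of predictive accuracy can be made concrete with a toy calculation. In the following sketch (again synthetic data and hypothetical numbers), a targeting rule attains a respectable headline accuracy while still failing to reach a large share of the genuinely poor:

```python
# Toy illustration: overall accuracy can look acceptable even when the
# targeting rule excludes many of the genuinely poor. Synthetic data.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000
truly_poor = rng.random(n) < 0.2                 # assume a 20% poverty rate

# A noisy poverty score, correlated with true status but imperfect,
# standing in for a proxy means test or a model prediction.
score = truly_poor.astype(float) + rng.normal(scale=1.2, size=n)
targeted = score > np.quantile(score, 0.80)      # aid the top-scoring 20%

accuracy = (targeted == truly_poor).mean()
exclusion_error = (truly_poor & ~targeted).sum() / truly_poor.sum()
print(f"overall accuracy: {accuracy:.1%}")
print(f"exclusion error:  {exclusion_error:.1%} of the poor receive no aid")
```

Evaluations that report exclusion and inclusion errors alongside aggregate accuracy make such trade-offs visible, which is precisely why the choice of evaluation metric is itself an ethical decision.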
Machine learning-based poverty targeting is representative of many donor-driven development projects promoting the use of big data and AI in public health, agriculture, education and poverty relief. Such data for development applications continue a prolonged trend of shifting funds from state institutions and ministries to project-based interventions headed by non-state actors (Lipton, 1992; Mkandawire, 2005). Development discourse often contrasts such targeted interventions both with structural economic reforms and with long-term transformational development. At a time when the use of big data and AI has moved from an ambitious promise to a formative part of development work, this article reviews foundational scholarship on the historical trajectory and contemporary implications of datafication and AI. These perspectives can help us to understand data for development applications not only narrowly as targeted interventions but also as part of broader structural social and economic transformations themselves.
Our motivation is twofold. First, a better understanding of the longue durée of datafication and development can attune us to the complex entanglements that must be considered in navigating the responsible design and implementation of data for development applications. Second, overarching historical and analytical frameworks can connect domains of scholarship in a manner that supports much-needed collaboration and exchange between social scientists, practitioners, policy makers and scholars from the humanities invested in the responsible infrastructuring, design and deployment of emerging technologies in international development. To this end, we also consider the extent to which broader framings, in turn, might benefit from closer consideration of the practical, technical and social challenges of research practices on the ground. In doing so, we seek to foster a two-way exchange between technical and socio-cultural scholarship, as well as close engagement with groups with experience in and/or expertise of relevance to the deployment of the technologies in question. In what follows, we introduce, synthesize and expand on two timely publications that offer overarching analytical perspectives by historically situating data platforms and AI. We first discuss Nick Couldry and Ulises A. Mejias's (2019) landmark study The Costs of Connection: How Data Is Colonizing Human Life and Appropriating it for Capitalism, which inquires into the entanglement of capitalism, coloniality and datafication. We then connect these ideas to Matteo Pasquinelli's (2023) labour theory of AI in his recent work The Eye of The Master: A Social History of Artificial Intelligence.
The Costs of Connection has played an important role over the last five years as a key scholarly analysis of the socio-political effects of the increasing datafication of human life (see Couldry and Mejias, 2023). Looking specifically at the intersection between data work, sociology of labour, global history and political economy, the authors argue that the nature and implications of this datafication can only be understood by delineating its foundation in an emerging form of capitalism. In this type of capitalism, human life becomes increasingly implicated with digital technologies in a manner that renders it extractable for capitalist gain. Their ‘data colonialism’ is set apart from parallel and complementary accounts such as Shoshana Zuboff's (2019) Surveillance Capitalism, Nick Srnicek's (2017) Platform Capitalism or Sarah Myers West's (2019) Data Capitalism by its emphasis on the ways in which capitalism itself rests upon centuries of historical colonialism and continues to be entangled with colonialist legacies.
The book is structured in three parts. It begins with the authors identifying the current datafication's dual foundations in both colonialism and capitalism. Akin to how historical colonialism appropriated and extracted the natural resources of conquered territories, data colonialism identifies human life itself as the new ‘raw material’. To be prepared for extraction and commodification, however, human life must first be transformed into data relations — that is, rendered into means of social interaction and self-reflection facilitated by digital tools. Reorganized as data relations, human life can then be abstracted into data, analogous to industrial capitalism's abstraction of work as labour. Convincingly, if briefly, Couldry and Mejias link this quest for a new input for capitalism to the dwindling purchasing power of the lower and middle classes in the face of growing inequality and the depletion of natural resources. In more detail, the book outlines how this expansion of capitalist production is accompanied by a radical transformation of political and economic dynamics that the authors label the ‘Cloud Empire’. The Cloud Empire denotes the reconfiguration of resources and imaginations around the data colonialist agenda. Big platform businesses concentrated in the USA and China, such as Google, Facebook, Tencent and Baidu, are the key players of this emerging economic order. Operating increasingly as monopolies-monopsonies, they not only concentrate economic power but increasingly shape their own regulatory spaces (i.e., platform governance) and actively seek to collaborate with and steer state authorities in pursuit of an increasingly seamless appropriation and extraction of social life. The authors link this rendering of social life as ready for capitalist exploitation to colonialism's appropriation of new territories and the people within them. Here, their argument draws substantially on comparisons and analogies between historical colonialism and current developments. For instance, the authors illustrate how data colonialism relies on distinct doctrines, from an emphasis on a ‘digital community’ and ‘personalization’ to the misnomer of ‘raw data’, in a manner that mirrors colonial ideologies such as terra nullius and the ‘civilized world’. In the same vein, the book compares the end user license agreements of digital services, which strip users of their data ownership, to the Spanish Requerimiento of 1513.
The second part of the book shifts focus to the implications of data colonialism for human life and our epistemology of the social. Couldry and Mejias situate the effects of data colonialism within a broader historical transformation of social knowledge, beginning with the emergence of quantification and statistical thinking. These ‘technologies of distance’ (Porter, 1995) come to institutionalize a new framing of ‘normality’ (Hacking, 1990) and new means of social control. The restructuring of life around data relations builds on this process, making human activity increasingly legible and governable not only for capitalist exploitation but also for social scientific research. In restructuring their epistemic landscapes in a manner aligned with the increasing abstraction of human life as data, the authors contend, computational social scientists become complicit in data colonialism, slowly eroding their capacity for critical engagement with its character and implications as their methods become reliant on the very platforms and tools they are studying. These implications not only shape the study of social life in aggregate but extend to the domain of individuals and their (lack of) agency over datafication processes. As many of today's critical data scholars have also argued, data colonialism threatens to erode the personal autonomy of its data subjects. Couldry and Mejias note how ‘a continuously trackable life is a dispossessed life, whose space is continuously invaded and subjected to extraction by external power’ (p. 157). As data colonialism entrenches itself and progressively infiltrates our thinking with its ideologies, we risk unlearning the norms and freedoms associated with our autonomy that render the effective safeguarding of our rights and identification of harms possible in the first place — a situation where ‘we’ embraces the vast majority of countries and social realities around the globe, regardless of the widely different existing understandings of autonomy, social agency and rights.
This leads the reader into the final part of the book. Part three reiterates the main threads of their argument, emphasizing how data colonialism renews colonial dynamics and augments capitalism's domain in a manner that renders human life itself the direct input of economic extraction. The book shines in its postscript, articulating possible means of resistance. Here, Couldry and Mejias advocate for the imagination of alternative counterpresents, the challenging of data colonialism's underlying ideologies through critical data literacy programmes, and the purposeful construction of ‘seamfull’ technologies that oppose the seamless extraction of human life pursued by data colonialism.
Data colonialism presents a forceful framing (and labelling) that can help contextualize critical scholarship on data for development and position it within the longue durée of colonialism and capitalism's interrelations. It is not surprising that Couldry and Mejias’ book, and related scholarship, has strongly influenced the direction of critical data studies since its publication in 2019. Its dual focus on capitalism and colonialism provides a unique perspective from which to analyse the entanglement of commercial interests of the data economy with the persistent colonial heritage of international development activities that is at the very core of the data for development movement. Moreover, data colonialism brings to the fore the strategic groundwork underlying the current datafication. The authors convincingly anatomize how datafication is not only superimposed upon social life but relies on its reconfiguration and adoption of suitable ideologies in promoting its aims and obscuring its implications. Uses of social data for development purposes, in ways too close for comfort to commercial applications and speculative investment by big tech companies in the Global North and increasingly China, ultimately rest upon this reorganization of social life. A more recent book tackling the history of AI, rather than data platforms, ends up further developing and strengthening this insight, while also connecting it to current developments in the ways data are put to work through machine learning algorithms, predictive engines, large language models and neural networks, among others. In The Eye of the Master, Pasquinelli seems, prima facie, concerned with a very different set of problems to those motivating Couldry and Mejias. Pasquinelli's goal is to use historical analysis to show how the development of algorithmic thinking has been predicated on transforming the very ways in which such thinking is conceptualized and operationalized. In his view, algorithmic thinking has very deep historical roots, emerging early in human history ‘as a material abstraction, through the interaction of mind with tools, in order to change the world and solve mostly economic and social problems’ (p. 16). In fact, he argues that labour itself may constitute a primitive form of algorithm: an attempt to not only act towards the production of specific outputs, but also to evaluate the logistics and implications of such production, thereby enabling the social organization of work. In Pasquinelli's interpretation, the automation of labour, and particularly mental labour, should be regarded as overlapping with the automation of knowledge production. To support this argument, the book examines several salient episodes in the history of computing, including, for instance, Charles Babbage's effort to develop his analytical engine and Frank Rosenblatt's invention of the first artificial neural network (the ‘perceptron’), as ‘computation emerging not only as a means for augmenting labour but also as an instrument (and implicit metrics) for measuring it’ (p. 17).
In Pasquinelli's view, therefore, AI was not born as a way to automate human thinking and various forms of reasoning, but rather as a way to measure and evaluate efforts to carry out cognitive tasks and organize social hierarchies (p. 21). Building on this reading of the history of AI, Pasquinelli ends on a note that closely resonates with Couldry and Mejias’ rendition of data colonialism. In his words: ‘the replacement of traditional jobs by AI should be studied together with the displacement and multiplication of precarious, underpaid, and marginalised jobs across a global economy’ (p. 21), with ‘AI and ghost work appearing to be the two sides of the one and same mechanism of labour automaton and social psychometrics’ (p. 22). AI, grounded in the immense advancements in data accumulation and analysis characterizing this era of human history, thereby threatens to foster and expand existing social, digital and economic divides, particularly between groups with the skills and opportunities to take advantage of these tools and those without the relevant abilities, capacity or living conditions.
Pasquinelli's book pioneers the application of historical and political epistemology, along with the history of mechanical thinking and automation, to the study of algorithmic thinking. His argument intersects with — and advances — Couldry and Mejias’ defence of the enormous significance of long-standing efforts to develop methods for social quantification, in ways that explain the explosive growth of quantification throughout the 20th century. Pasquinelli's analysis also illuminates the extent to which this phenomenon extended across the globe, rather than being restricted to Western countries — for instance, by highlighting the importance of social mathematics in Hindu culture.
What can we learn from this kind of scholarship on the history of data platforms, colonialism and AI, as many critical examiners and practitioners of international development alike scramble to responsibly and sustainably innovate development efforts? Interdisciplinary works such as The Costs of Connection and The Eye of the Master attune us to the complex entanglements surrounding data for development applications by situating them firmly within broader structural social and economic transformations. We advocate for the relevance of such scholarship not only as critical perspectives on data for development pursuits but also as analytical frameworks in the design and implementation of responsible digital innovation itself. To this end, broader critical framings can connect domains of scholarship in a manner that supports collaboration and exchange between social scientists, practitioners, policy makers and scholars from the humanities invested in the responsible infrastructuring, design and deployment of emerging technologies in international development. Consider, for instance, the use of machine learning in the targeting of poverty aid in Togo and its ethical and social implications, as sketched briefly at the beginning of this article.
Arguably, the biggest challenge in the responsible design of such data for development applications is not the identification of potential ethical implications in the absolute and abstract, but rather their assessment within a given social and economic context relative to limited available alternatives. Here, off-the-shelf principles or recommendations can only provide limited guidance due to the highly contextual, local and technology-specific nature of the trade-offs faced. Instead, navigating and fostering the sustainable and responsible integration of AI and big data in international development requires: technical expertise; familiarity with circumstances on the ground; consultative mechanisms whereby users can provide feedback and direction to technology development; and a sensitivity towards the complex historical, social and epistemological dynamics that surround these applications, cultivated through regular consultation both with contributors and with existing research on local conditions. The latter becomes central once we acknowledge that responsible innovation demands more than a narrow focus on the most immediate consequences of adopting a specific technological solution: it further requires considering the solution's role in reinforcing or shifting existing power dynamics and its situatedness within broader socio-economic and ecological transformations.
It is here, in particular, that critical historical scholarship as exemplified by the works of Pasquinelli, Couldry and Mejias can serve a vital function for this emerging strand of international development research. By examining the contemporary datafication and proliferation of AI within the longue durée of technological and economic transformations, The Costs of Connection and The Eye of the Master illuminate and attune us to complex entanglements of data for development efforts with broader social, technological, political, ecological and economic dynamics. Moreover, overarching historical and conceptual frameworks such as those surveyed here can provide much-needed connective tissue and a foundation for interdisciplinary scholarship on and within the datafication of international development. Pasquinelli, Mejias and Couldry, for instance, bring into dialogue research from domains such as labour theory, critical data and media studies, AI ethics and decolonialism, neocolonialism, postcolonialism, and international development. Building upon such work by connecting and integrating a wide range of disciplinary perspectives in researching data for development applications is central to better understanding existing and anticipating future implications of the ongoing transformations of international development efforts. Pioneering efforts, such as Mirca Madianou's (2019) critical empirical investigation of data practices in humanitarian responses to the refugee crisis, exemplify the fruitfulness of this approach. More, however, can and must be done not only in critically examining data for development applications post-hoc but in bringing interdisciplinary perspectives to bear on their responsible design and implementation themselves.
To this end and in closing, we want to highlight three ways in which the accounts here could be further developed as an even more powerful perspective from which to scrutinize dynamics surrounding the use of big data for development. First, we advocate for a greater emphasis on the essential coloniality of datafication. We echo Densua Mumford's (2021) insightful critique, stressing the ways in which colonialism often appears as a motif rather than the main theme of The Costs of Connection, as well as the book's only cursory treatment of postcolonial and decolonial scholarship. Central to understanding the ongoing legacies of colonialism, including in data for development applications, are the ways in which it shaped and continues to influence our systems of knowledge, i.e., the coloniality of knowledge (Quijano, 2000). Here, research building on Couldry and Mejias’ rendition of data colonialism needs to more extensively engage with the ways in which our current reliance on and understanding of data constitutes an extension, if not culmination, of a problematic universalism and conception of objectivity and epistemic authority (Ricaurte, 2019). This becomes particularly relevant in the context of data for development applications, which often involve the imposition of a particular epistemology on historically colonized communities by the Global North and, increasingly, China (e.g. Lynsey, 2018; see also Gravett, 2021). Understanding both Western-led data for development projects and framings of Sino-African data relations as forms of postcolonial resistance (Eisenman and Shinn, 2023: 237, 336) requires us to go beyond articulating parallels between colonial and data extractivism by tracing consequential epistemological, political and socio-economic linkages between colonialism and current data for development practice.
Second, and taking our cue from Pasquinelli's call for ‘deconnectivism’, this work could be taken as motivation and inspiration to undo some of the foundational fabric — including the underpinning philosophy and belief in the progressive power of digital technologies — that constitutes contemporary enactments of AI governance and data-intensive practices within the neoliberal market. As Couldry, Mejias and Pasquinelli all point out, it is high time to re-appropriate this space as one engaged with bottom-up, community-grounded social action, and mindful of the enormous divides and disparities that characterize social interactions with technology in every corner of the globe, whether in high-resourced or low-resourced countries. Computing, mathematics, data infrastructures, modelling, statistics and related technical domains and forms of intervention are a space of political struggle, even if often presented as a neutral, objective, dehumanized and thereby apolitical terrain (Beaulieu and Leonelli, 2021). Reclaiming human agency in the midst of these developments, and ensuring that ideas of public interest and common good are continuously debated and central to technological applications, is ever more urgent as AI accelerates and takes over a greater fraction of human activities in the coming months and years.
Last but not least, it is crucial to remind ourselves that humans are unavoidably enmeshed in a complex ecosystem and an increasingly fragile web of life, and that data affects all organic forms on the planet in ways that provide both unrivalled opportunities and existential risks for our species. We thus suggest a less anthropocentric framing and more substantial engagement with how our relationship with the nonhuman environment — including plants, animals, microorganisms, insects and fungi — is both foundational to and shaped by datafication today. Long-standing colonial doctrines of nature as disconnected from human life and primed for boundless appropriation lie at the core of today's data-intensive practices of ‘knowledge extraction’. In turn, this datafication and the underlying manufacturing of new data relations not only abstract and reorganize social life but also transform human relations with nature. Further expanding data colonialism's focus along these lines would render it a more powerful framework for analysing data for development for at least three reasons. First, such applications, for instance in agricultural development, rely heavily on biological and environmental data. Much can be gained from understanding not only uses of human but also non-human data in light of their colonial and capitalist genealogy. Second, such an expanded thesis would provide an analytic lens through which to scrutinize the often-anthropocentric epistemology of development work that isolates economic deprivation from issues such as climate justice and environmental sustainability.
This tendency becomes evident when the promised benefits of data for development are extolled without acknowledging the devastating environmental impact of ever-growing data and computational infrastructures (Dhar, 2020). Third, tackling the climate crisis — and the related decline in planetary health — is arguably critical to all aspects of development work, given its well-documented links with ever more frequent natural disasters, mass migration, expanding social inequities and deepening failures in public health and food security. Unless this fundamental insight informs the responsible creation and use of data-intensive technologies for development, thereby giving centre stage to the interconnections between inhabitants of our planet, there is no hope of technology fostering sustainable and effective remedies to human suffering.