Pub Date: 2023-12-01 | Epub Date: 2020-07-30 | DOI: 10.1177/0306312720945033
Rafaela Granja, Helena Machado
Forensic DNA Phenotyping (FDP) is a set of techniques that aim to infer externally visible characteristics in humans - such as eye, hair and skin color - and biogeographical ancestry of an unknown person, based on biological material. FDP has been applied in various jurisdictions in a limited number of high-profile cases to provide intelligence for criminal investigations. There are ongoing controversies about the reliability and validity of FDP, which come together with debates about the ethical challenges emerging from the use of this technology in the criminal justice system. Our study explores how, in the context of complex politics of legitimation of and contestation over the use of FDP, forensic geneticists in Europe perceive this technology's potential applications, utility and risks. Forensic geneticists perform several forms of discursive boundary work, making distinctions between science and the criminal justice system, experts and non-experts, and good and bad science. Such forms of boundary work reconstruct their complex positioning vis-à-vis legal and scientific realities. In particular, while mobilizing interest in FDP, forensic geneticists simultaneously carve out notions of risk, accountability and scientific conduct that perform distance from FDP's implications in the criminal justice system.
"Forensic DNA phenotyping and its politics of legitimation and contestation: Views of forensic geneticists in Europe." Social Studies of Science, pp. 850-868. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10696903/pdf/
Pub Date: 2023-12-01 | Epub Date: 2021-08-02 | DOI: 10.1177/03063127211035562
Roos Hopman
Forensic DNA phenotyping (FDP) encompasses a set of technologies aimed at predicting phenotypic characteristics from genotypes. Advocates of FDP present it as the future of forensics, with an ultimate goal of producing complete, individualised facial composites based on DNA. With a focus on individuals and promised advances in technology comes the assumption that modern methods are steadily moving away from racial science. Yet in the quantification of physical differences, FDP builds upon some nineteenth- and twentieth-century scientific practices that measured and categorised human variation in terms of race. In this article I complicate the linear temporal approach to scientific progress by building on the notion of the folded object. Drawing on ethnographic fieldwork conducted in various genetic laboratories, I show how nineteenth- and early twentieth-century anthropological measuring and data-collection practices and statistical averaging techniques are folded into the ordering of measurements of skin color data taken with a spectrophotometer, the analysis of facial shape based on computational landmarks and the collection of iris photographs. Attending to the historicity of FDP facial renderings, I bring into focus how race comes about as a consequence of temporal folds.
"The face as folded object: Race and the problems with 'progress' in forensic DNA phenotyping." Social Studies of Science, pp. 869-890. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10696901/pdf/
Pub Date: 2023-12-01 | Epub Date: 2023-11-02 | DOI: 10.1177/03063127231201178
Amade M'charek
What is race? And how does it figure in different scientific practices? To answer these questions, I suggest that we need to know race differently. Rather than defining race or looking for one conclusive answer to what it is, I propose methods that are open-ended, that allow us to follow race around, while remaining curious as to what it is. I suggest that we pursue generous methods. Drawing on empirical examples of forensic identification technologies, I argue that the slipperiness of race-the way race and its politics inexorably shift and change-cannot be fully grasped as an 'object multiple'. Race, I show, is not race: The same word refers to different phenomena. To grasp this, I introduce the notion of the affinity concept. Drawing on the history of race, along with contemporary work in forensic genetics, the affinity concept helps us articulate how race indexes three different scientific realities: race as object, race as method, and race as theory. These three different, yet interconnected realities, contribute to race's slipperiness as well as its virulence.
"Curious about race: Generous methods and modes of knowing in practice." Social Studies of Science, pp. 826-849.
Pub Date: 2023-10-01 | Epub Date: 2023-08-31 | DOI: 10.1177/03063127231192857
Florian Jaton
This article expands on recent studies of machine learning or artificial intelligence (AI) algorithms that crucially depend on benchmark datasets, often called 'ground truths.' These ground-truth datasets gather input-data and output-targets, thereby establishing what can be retrieved computationally and evaluated statistically. I explore the case of the Tumor nEoantigen SeLection Alliance (TESLA), a consortium-based ground-truthing project in personalized cancer immunotherapy, where the 'truth' of the targets-immunogenic neoantigens-to be retrieved by the would-be AI algorithms depended on a broad technoscientific network whose establishment required substantial organizational and material infrastructures. The study shows that instead of grounding an indisputable 'truth', the TESLA endeavor ended up establishing a contestable reference, as the biology of neoantigens and how to measure their immunogenicity evolved slightly over the course of this four-year project. However, even if this controversy played down the scope of the TESLA ground truth, it did not discredit the whole undertaking. The magnitude of the technoscientific efforts that the TESLA project set into motion, and the needs it ultimately succeeded in filling for the scientific and industrial community, counterbalanced its metrological uncertainties, effectively instituting its contestable representation of 'true' neoantigens within the field of personalized cancer immunotherapy (at least temporarily). More generally, this case study indicates that the enforcement of ground truths, and what it leaves out, is a necessary condition to enable AI technologies in personalized medicine.
"Groundwork for AI: Enforcing a benchmark for neoantigen prediction in personalized cancer immunotherapy." Social Studies of Science, pp. 787-810. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/1f/46/10.1177_03063127231192857.PMC10543129.pdf
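The epistemic role the abstract above assigns to ground-truth datasets, fixing what can be retrieved computationally and evaluated statistically, can be illustrated with a minimal sketch. All names and data below are hypothetical, not TESLA's actual benchmark; the point is only that whatever the benchmark designates as 'true' targets determines every algorithm's score.

```python
# Minimal sketch with hypothetical data: a ground-truth benchmark pairs
# inputs with validated output-targets, and thereby fixes the statistics
# by which any prediction algorithm is judged.

# Hypothetical set of targets a consortium has validated as 'true'
# (e.g. peptides experimentally confirmed to be immunogenic).
ground_truth = {"pep_A", "pep_C", "pep_F"}

# Hypothetical candidates returned by a prediction algorithm.
predicted = ["pep_A", "pep_B", "pep_C", "pep_D"]

hits = [p for p in predicted if p in ground_truth]
precision = len(hits) / len(predicted)     # share of predictions counted as 'true'
recall = len(hits) / len(ground_truth)     # share of 'true' targets retrieved

print(f"precision={precision:.2f} recall={recall:.2f}")  # prints "precision=0.50 recall=0.67"
```

If the consortium later revises which targets count as 'true', every score computed against the benchmark shifts with it, which is the metrological instability the article traces.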
Pub Date: 2023-10-01 | Epub Date: 2023-09-27 | DOI: 10.1177/03063127231191284
Philippe Sormani
For decades, playing Go at a professional level has counted among those things that, in Dreyfus's words, 'computers still can't do'. This changed dramatically in early March 2016, at the Four Seasons Hotel in Seoul, South Korea, when AlphaGo, the most sophisticated Go program at the time, beat Lee Sedol, an internationally top-ranked Go professional, by four games to one. A documentary movie has captured and crafted the unfolding drama and, since AlphaGo's momentous win, the drama has been retold in myriad variations. Yet the exhibition match as a technology demonstration-in short, the 'AlphaGo show'-has not received much scrutiny in STS, notwithstanding or precisely because of all the media frenzy, game commentary, and 'AI' expertise in its wake. This article therefore revisits the second game's 'move 37', its surprise delivery by AlphaGo on stage, and the subsequent line of commentary by the attending experts, initiated by the news-receipt token 'ooh'. Drawing upon a reflexive video analysis, the article explicates the Go move's scenic intelligibility-its embodied delivery as part of the technology demonstration-as the contingent result of intricate 'human/machine interfacing'. For mainstream media to report on AlphaGo's 'superhuman intelligence', it both relied upon and effaced such interfacing work. In turn, the article describes and discusses how 'object agency' and 'algorithmic drama' both trade on skillfully embodied play as a pivotal interfacing practice, informing the exhibition match from within its livestream broadcast.
"Interfacing AlphaGo: Embodied play, object agency, and algorithmic drama." Social Studies of Science, 53(5): 686-711.
Pub Date: 2023-10-01 | Epub Date: 2023-05-08 | DOI: 10.1177/03063127231167589
Chiara Carboni, Rik Wehrens, Romke van der Veen, Antoinette de Bont
Artificial Intelligence (AI) tools are being developed to assist with increasingly complex diagnostic tasks in medicine. This produces epistemic disruption in diagnostic processes, even in the absence of AI itself, through the datafication and digitalization encouraged by the promissory discourses around AI. In this study of the digitization of an academic pathology department, we mobilize Barad's agential realist framework to examine these epistemic disruptions. Narratives and expectations around AI-assisted diagnostics-which are inextricable from material changes-enact specific types of organizational change, and produce epistemic objects that facilitate the emergence of some epistemic practices and subjects, but hinder others. Agential realism allows us to simultaneously study epistemic, ethical, and ontological changes enacted through digitization efforts, while keeping a close eye on the attendant organizational changes. Based on ethnographic analysis of pathologists' changing work processes, we identify three different types of uncertainty produced by digitization: sensorial, intra-active, and fauxtomated uncertainty. Sensorial and intra-active uncertainty stem from the ontological otherness of digital objects, materialized in their affordances, and result in digital slides' partial illegibility. Fauxtomated uncertainty stems from the quasi-automated digital slide-making, which complicates the question of responsibility for epistemic objects and related knowledge by marginalizing the human.
"Eye for an AI: More-than-seeing, fauxtomation, and the enactment of uncertain data in digital pathology." Social Studies of Science, pp. 712-737. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/59/c5/10.1177_03063127231167589.PMC10543128.pdf
Pub Date: 2023-10-01 | Epub Date: 2023-09-13 | DOI: 10.1177/03063127231194591
Florian Jaton, Philippe Sormani
How can we examine so-called 'artificial intelligence' ('AI') without turning our backs on the STS tradition that questions the notions of both artificiality and intelligence? This special issue attempts a step to the side: Instead of considering 'AI' as something that does or does not exist (and then taking a position on its benefits or harms), its ambition is to document, in an empirical and agnostic way, the performances that sometimes make 'AI' appear or disappear in situ. From this perspective, 'AI' can be considered a vast commensuration undertaking.
"Enabling 'AI'? The situated production of commensurabilities." Social Studies of Science, pp. 625-634. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/3c/c8/10.1177_03063127231194591.PMC10543127.pdf
Pub Date: 2023-10-01 | Epub Date: 2022-04-07 | DOI: 10.1177/03063127221081446
Benjamin Lipp
Care robots promise to assist older people in an ageing society. This article investigates the socio-material conditions of care with robots by focusing on the usually invisible practices of human-machine interfacing. I define human-machine interfacing as the activities by roboticists and others to render interaction between robots and people possible in the first place. This includes efforts to render prototypical arrangements of care 'robot-friendly'. In my video-assisted ethnography of human-robot interaction (HRI) experiments, I identify four types of interfacing practices, where care comes to matter: integrating the ephemeral entity that is 'a robot', helping it by way of mundane courtesies, making users 'fit' for interacting with it, and establishing corridors of interaction between the robot and people's bodies. I show that robots do not so much care for (older) people but rather, the other way around - people need to care for robots. Hence, care robots are not simply agents of care but also objects of care, rendering necessary a symmetrical analysis of human-machine interfacing. Furthermore, these practices do not merely reflect the prototypical state of the art in robotics. Rather, they indicate a more general mode of how robots and people interface. I argue that care with robots requires us to re-consider the exclusive focus on the human and at least complement it with care for the non-human and, incidentally, the robotic, too.
"Caring for robots: How care comes to matter in human-machine interfacing." Social Studies of Science, 53(5): 660-685.
Pub Date: 2023-10-01 | Epub Date: 2023-05-08 | DOI: 10.1177/03063127231163756
Anne Henriksen, Lasse Blond
Recent policies and research articles call for turning AI into a form of IA ('intelligence augmentation'), by envisioning systems that center on and enhance humans. Based on a field study at an AI company, this article studies how AI is performed as developers enact two predictive systems along with stakeholders in public sector accounting and public sector healthcare. Inspired by STS theories about values in design, we analyze our empirical data focusing especially on how objectives, structured performances, and divisions of labor are built into the two systems and at whose expense. Our findings reveal that the development of the two AI systems is informed by politically motivated managerial interests in cost-efficiency. This results in AI systems that are (1) designed as managerial tools meant to enable efficiency improvements and cost reductions, and (2) enforced on professionals on the 'shop floor' in a top-down manner. Based on our findings and a discussion drawing on literature on the original visions of human-centered systems design from the 1960s, we argue that turning AI into IA seems dubious, and ask what human-centered AI really means and whether it remains an ideal not easily realizable in practice. More work should be done to rethink human-machine relationships in the age of big data and AI, in this way making the call for ethical and responsible AI more genuine and trustworthy.
"Executive-centered AI? Designing predictive systems for the public sector." Social Studies of Science, pp. 738-760.
Pub Date : 2023-08-01 DOI: 10.1177/03063127231177455
Sonja van Wichelen
Increasingly, countries in the Global South (notably South Africa, Brazil, and Indonesia) are introducing material transfer agreements (MTAs) into their domestic laws for the exchange of scientific material. The MTA is a contract securing the legal transfer of tangible research material between organizations such as laboratories, pharmaceutical companies, or universities. Critical commentators argue that these agreements in the Global North have come to fulfill an important role in the expansion of dominant intellectual property regimes. Taking Indonesia as a case, this article examines how MTAs are enacted and implemented differently in the context of research involving the Global South. Against the conventionally understood forms of contract that commodify and commercialize materials and knowledge, the MTA in the South can be understood as a legal technology appropriated to translate a formerly relational economy of the scientific gift into a market system of science. As a way of gaining leverage in the uneven space of the global bioeconomy, the MTA functions as a technology for 'reverse appropriation': a reworking of its usage and meaning as a way of countering some of the global power inequalities experienced by Global South countries. The operation of this reverse appropriation, however, is hybrid, and reveals a complex reconfiguration of scientific exchange amidst a growing push for 'open science'.
"After biosovereignty: The material transfer agreement as technology of relations." Social Studies of Science, 53(4), pp. 599-621.