Infrastructural justice for responsible software engineering
Pub Date: 2024-06-04 | DOI: 10.1016/j.jrt.2024.100087
Sarah Robinson , Jim Buckley , Luigina Ciolfi , Conor Linehan , Clare McInerney , Bashar Nuseibeh , John Twomey , Irum Rauf , John McCarthy
In recent years, we have seen many examples of software products unintentionally causing demonstrable harm. Many guidelines for ethical and responsible computing have been developed in response. Dominant approaches typically attribute liability and blame to individual companies or actors, rather than understanding how the working practices, norms, and cultural understandings in the software industry contribute to such outcomes. In this paper, we propose an understanding of responsibility that is infrastructural, relational, and cultural, thus providing a foundation to better enable responsible software engineering into the future. Our approach draws on Young's (2006) social connection model of responsibility and Star and Ruhleder's (1994) concept of infrastructure. By bringing these theories together, we introduce a concept called infrastructural injustice, which offers a new way for software engineers to consider their opportunities for responsible action with respect to society and the planet. We illustrate the utility of this approach by applying it to an Open-Source software community's development of Deepfake technology, to find key leverage points of responsibility that are relevant both to Deepfake technology and to software engineering more broadly.
{"title":"Infrastructural justice for responsible software engineering,","authors":"Sarah Robinson , Jim Buckley , Luigina Ciolfi , Conor Linehan , Clare McInerney , Bashar Nuseibeh , John Twomey , Irum Rauf , John McCarthy","doi":"10.1016/j.jrt.2024.100087","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100087","url":null,"abstract":"<div><p>In recent years, we have seen many examples of software products unintentionally causing demonstrable harm. Many guidelines for ethical and responsible computing have been developed in response. Dominant approaches typically attribute liability and blame to individual companies or actors, rather than understanding how the working practices, norms, and cultural understandings in the software industry contribute to such outcomes. In this paper, we propose an understanding of responsibility that is infrastructural, relational, and cultural; thus, providing a foundation to better enable responsible software engineering into the future. Our approach draws on Young's (2006) social connection model of responsibility and Star and Ruhleder's (1994) concept of infrastructure. By bringing these theories together we introduce a concept called infrastructural injustice, which offers a new way for software engineers to consider their opportunities for responsible action with respect to society and the planet. We illustrate the utility of this approach by applying it to an Open-Source software communities’ development of Deepfake technology, to find key leverage points of responsibility that are relevant to both Deepfake technology and software engineering more broadly.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"19 ","pages":"Article 100087"},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000131/pdfft?md5=129d725094c45ad3f08ea3d866a85b49&pid=1-s2.0-S2666659624000131-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141307885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
European technological protectionism and the risk of moral isolationism: The case of quantum technology development
Pub Date: 2024-06-01 | DOI: 10.1016/j.jrt.2024.100084
Clare Shelley-Egan, Pieter Vermaas
In this editorial, we engage with the European Commission's 2023 recommendation calling for risk assessment with Member States on four critical technology areas, including quantum technology. Particular emphasis is placed on the risks associated with technology security and technology leakage. Such risks may lead to protectionist measures. Mobilising European normative anchor points that inform the “right impacts” of research and innovation, we argue that a protectionist approach on the part of the European Union can lead to moral isolationism. This, in turn, can limit Europe's contribution to global development with respect to technological advances, sustainable development, and quality of life. We contend that decisions on protectionism around quantum technology should not be made with a protectionist mindset about European values.
{"title":"European technological protectionism and the risk of moral isolationism: The case of quantum technology development","authors":"Clare Shelley-Egan, Pieter Vermaas","doi":"10.1016/j.jrt.2024.100084","DOIUrl":"10.1016/j.jrt.2024.100084","url":null,"abstract":"<div><p>In this editorial, we engage with the European Commission's 2023 recommendation calling for risk assessment with Member States on four critical technology areas, including quantum technology. A particular emphasis is put on the risks associated with technology security and technology leakage. Such risks may lead to protectionist measures. Mobilising European normative anchor points that inform the “right impacts” of research and innovation, we argue that a protectionist approach on the part of the European Union can lead to moral isolationism. This, in turn, can limit Europe's contribution to global development with respect to technological advances, sustainable development and quality of life. We contend that decisions on protectionism around quantum technology should not be made with a protectionist mindset about European values.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100084"},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000106/pdfft?md5=faecb48e04356c91ce7d914c60d69aa6&pid=1-s2.0-S2666659624000106-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141037141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling affordances for AI Governance
Pub Date: 2024-05-15 | DOI: 10.1016/j.jrt.2024.100086
Siri Padmanabhan Poti, Christopher J Stanton
Organizations dealing with mission-critical AI-based autonomous systems may need to provide continuous risk management controls and establish means for their governance. To achieve this, organizations are required to embed trustworthiness and transparency in these systems, with human oversight and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market challenges during the software development lifecycle (SDLC) of autonomous systems can lead to compromises in their quality, maintainability, interpretability, and explainability. This paper conceptually models a transformation of the SDLC to enable affordances for assurance, explanations, interpretations, and overall governance in autonomous systems. We argue that opportunities for transforming the SDLC are available through concerted interventions such as technical debt management, a shift-left approach, and non-ephemeral artifacts. This paper contributes to the theory and practice of governance of autonomous systems, and to building trustworthiness incrementally and hierarchically.
{"title":"Enabling affordances for AI Governance","authors":"Siri Padmanabhan Poti, Christopher J Stanton","doi":"10.1016/j.jrt.2024.100086","DOIUrl":"10.1016/j.jrt.2024.100086","url":null,"abstract":"<div><p>Organizations dealing with mission-critical AI based autonomous systems may need to provide continuous risk management controls and establish means for their governance. To achieve this, organizations are required to embed trustworthiness and transparency in these systems, with human overseeing and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market challenges during the software development lifecycle (SDLC) of autonomous systems can lead to compromises in their quality, maintainability, interpretability and explainability. This paper conceptually models transformation of SDLC to enable affordances for assurance, explanations, interpretations, and overall governance in autonomous systems. We argue that opportunities for transformation of SDLC are available through concerted interventions such as technical debt management, shift-left approach and non-ephemeral artifacts. This paper contributes to the theory and practice of governance of autonomous systems, and in building trustworthiness incrementally and hierarchically.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100086"},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665962400012X/pdfft?md5=9bf6cc548743ad7d2d5c0830773f5145&pid=1-s2.0-S266665962400012X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141058232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A place where “You can be who you've always wanted to be…” Examining the ethics of intelligent virtual environments
Pub Date: 2024-05-15 | DOI: 10.1016/j.jrt.2024.100085
Danielle Shanley, Darian Meacham
The rapid development of interactive virtual reality (VR) spaces like VRChat has been made possible by continuing increases in computer processing power, advances in artificial intelligence (AI) technologies such as natural language processing (NLP), and advances in 3D modelling and spatial and edge computing. Perhaps because these spaces rely on new ways of integrating different forms of advanced computing, such as AI and VR, little is yet known about their potential ethical implications. In this contribution, we provide an overview of key themes frequently discussed in the context of these so-called Intelligent Virtual Environments (IVEs). We highlight different ethical questions and the ways in which they are typically taken up in the literature. We first map how common concerns tend to revolve around technological feasibility and psychological impacts. We then ask how shifting the focus towards more philosophical perspectives might reorient discussions surrounding IVEs, opening up important avenues for future research. Our contribution in this review is to highlight that, as active mediators of experience, these technologies require critical reflection and should not be evaluated solely in terms of their functionality.
{"title":"A place where “You can be who you've always wanted to be…” Examining the ethics of intelligent virtual environments","authors":"Danielle Shanley, Darian Meacham","doi":"10.1016/j.jrt.2024.100085","DOIUrl":"10.1016/j.jrt.2024.100085","url":null,"abstract":"<div><p>The rapid development of interactive virtual reality (VR) spaces like VRChat has been made possible due to continuing increases in computer processing power, advances in artificial intelligence (AI) technologies such as natural language processing (NLP), and advances in 3D modelling and spatial and edge computing. Perhaps because these spaces rely on new ways of integrating different forms of advanced computing, such as AI and VR, little is yet known about their potential ethical implications. In this contribution, we provide an overview of key themes frequently discussed in the context of these so-called <em>Intelligent Virtual Environments</em> (IVEs). We highlight different ethical questions and the ways in which they are typically taken up in the literature. We first map how common concerns tend to revolve around technological feasibility and psychological impacts. We then ask how shifting the focus towards more philosophical perspectives might reorient discussions surrounding IVEs, opening up important avenues for future research. Our contribution in this review is to highlight how as active mediators of experience these technologies require critical reflection and should not be evaluated solely in terms of their functionality.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100085"},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000118/pdfft?md5=07ae452d1cff9888973af4ceb889ddc6&pid=1-s2.0-S2666659624000118-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141047393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital humanism as a bottom-up ethics
Pub Date: 2024-03-29 | DOI: 10.1016/j.jrt.2024.100082
Gemma Serrano , Francesco Striano , Steven Umbrello
In this paper, we explore a new perspective on digital humanism, emphasizing the centrality of multi-stakeholder dialogues and a bottom-up approach to surfacing stakeholder values. This approach starkly contrasts with existing frameworks, such as the Vienna Manifesto's top-down digital humanism, which hinges on pre-established first principles. Our approach provides a more flexible, inclusive framework that captures a broader spectrum of ethical considerations, particularly those pertinent to the digital realm. We apply our model to two case studies, comparing the insights generated with those derived from a utilitarian perspective and the Vienna Manifesto's approach. The findings underscore the enhanced effectiveness of our approach in revealing additional, often overlooked stakeholder values not typically encapsulated by traditional top-down methodologies. Furthermore, this paper positions our digital humanism approach as a powerful tool for framing ethics-by-design by promoting a narrative that empowers stakeholders and places them at the centre. As a result, it paves the way for more nuanced, comprehensive ethical considerations in the design and implementation of digital technologies, thereby enriching the existing literature on digital ethics and setting a promising agenda for future research.
{"title":"Digital humanism as a bottom-up ethics","authors":"Gemma Serrano , Francesco Striano , Steven Umbrello","doi":"10.1016/j.jrt.2024.100082","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100082","url":null,"abstract":"<div><p>In this paper, we explore a new perspective on digital humanism, emphasizing the centrality of multi-stakeholder dialogues and a bottom-up approach to surfacing stakeholder values. This approach starkly contrasts with existing frameworks, such as the Vienna Manifesto's top-down digital humanism, which hinges on pre-established first principles. Our approach provides a more flexible, inclusive framework that captures a broader spectrum of ethical considerations, particularly those pertinent to the digital realm. We apply our model to two case studies, comparing the insights generated with those derived from a utilitarian perspective and the Vienna Manifesto's approach. The findings underscore the enhanced effectiveness of our approach in revealing additional, often overlooked stakeholder values, not typically encapsulated by traditional top-down methodologies. Furthermore, this paper positions our digital humanism approach as a powerful tool for framing ethics-by-design, by promoting a narrative that empowers and centralizes stakeholders. As a result, it paves the way for more nuanced, comprehensive ethical considerations in the design and implementation of digital technologies, thereby enriching the existing literature on digital ethics and setting a promising agenda for future research.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100082"},"PeriodicalIF":0.0,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000088/pdfft?md5=a40431af04a93c455298d3e1eacfeb46&pid=1-s2.0-S2666659624000088-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140330847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do we really need a “Digital Humanism”? A critique based on post-human philosophy of technology and socio-legal techniques
Pub Date: 2024-03-12 | DOI: 10.1016/j.jrt.2024.100080
Federica Buongiorno , Xenia Chiaramonte
Few concepts have been subjected to as intense scrutiny in contemporary discourse as that of “humanism.” While these critiques have acknowledged the importance of retaining certain key aspects of humanism, such as rights, freedom, and human dignity, the term has assumed an ambivalence, especially in light of post-colonial and gender studies, that cannot be ignored. The “Vienna Manifesto on Digital Humanism,” as well as the recent volume Perspectives on Digital Humanism (2022), bears a complex imprint of this ambivalence. In this contribution, we aim to bring this underlying trace to the forefront and decipher it by considering alternative (non-humanistic) ways to understand human-technology relations, beyond the dominant neoliberal paradigm (paragraphs 1 and 2); we then analyse those relations within the specific context of legal studies (paragraphs 3 and 4), one in which the interdependency of humans and non-humans shows a specific and complex form of “fundamental ambivalence.”
{"title":"Do we really need a “Digital Humanism”? A critique based on post-human philosophy of technology and socio-legal techniques","authors":"Federica Buongiorno , Xenia Chiaramonte","doi":"10.1016/j.jrt.2024.100080","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100080","url":null,"abstract":"<div><p>Few concepts have been subjected to as intense scrutiny in contemporary discourse as that of “humanism.” While these critiques have acknowledged the importance of retaining certain key aspects of humanism, such as rights, freedom, and human dignity, the term has assumed ambivalence, especially in light of post-colonial and gender studies, that cannot be ignored. The “Vienna Manifesto on Digital Humanism,” as well as the recent volume (2022) titled <em>Perspectives on Digital Humanism</em>, bear a complex imprint of this ambivalence. In this contribution, we aim to bring to the forefront and decipher this underlying trace, by considering alternative (non-humanistic) ways to understand human-technologies relations, beyond the dominant neoliberal paradigm (paragraphs 1 and 2); we then analyse those relations within the specific context of legal studies (paragraphs 3 and 4), one in which the interdependency of humans and non-humans shows a specific and complex form of “fundamental ambivalence.”</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000064/pdfft?md5=a83279cb48841b221775aa3aa2b0256f&pid=1-s2.0-S2666659624000064-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140187836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligence as a human life form
Pub Date: 2024-03-11 | DOI: 10.1016/j.jrt.2024.100081
Maurizio Ferraris
This text aims to counter the anxieties generated by the recent emergence of AI, and the criticisms leveled at it that demand its moralization. It does so by demonstrating that AI is neither new nor true intelligence, but rather a tool, akin to many others that have long served human intelligence and its objectives. In what follows, I offer a broader reflection on technology that aims to contextualize the novelty and singularity attributed to AI within the history of technological developments. My ultimate goal is to relativize the novelty of AI, seeking to alleviate the moral anxieties it currently elicits and encouraging a calmer, more optimistic view of it. The first step in understanding AI is indeed to realize that its novelty is only relative, and that AI has many ancestors that, upon closer examination, turn out to be closely related.
{"title":"Intelligence as a human life form","authors":"Maurizio Ferraris","doi":"10.1016/j.jrt.2024.100081","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100081","url":null,"abstract":"<div><p>This text aims to counter the anxieties generated by the recent emergence of AI and the criticisms leveled at it, demanding its moralization. It does so by demonstrating that AI is neither new nor is it true intelligence but rather a tool, akin to many others that have long been serving human intelligence and its objectives. In what follows, I offer a broader reflection on technology that aims to contextualize the novelty and singularity attributed to AI within the history of technological developments. My ultimate goal is to relativize the novelty of AI, seeking to alleviate the moral anxieties it currently elicits and encouraging a more normal, optimistic view of it. The first step in understanding AI is indeed to realize that its novelty is only relative, and that AI has many ancestors that, upon closer examination, turn out to be closely related.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000076/pdfft?md5=1b728ab83e058b5709581507a0c2ecfb&pid=1-s2.0-S2666659624000076-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140187837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inherently privacy-preserving vision for trustworthy autonomous systems: Needs and solutions
Pub Date: 2024-03-01 | DOI: 10.1016/j.jrt.2024.100079
Adam K. Taras , Niko Sünderhauf , Peter Corke , Donald G. Dansereau
Vision is an effective sensor for robotics from which we can derive rich information about the environment: the geometry and semantics of the scene, as well as the age, identity, and activity of humans within that scene. This raises important questions about the reach, lifespan, and misuse of this information. This paper is a call to action to consider privacy in robotic vision. We propose a specific form of inherent privacy preservation in which no images are captured or could be reconstructed by an attacker, even with full remote access. We present a set of principles by which such systems could be designed, employing data-destroying operations and obfuscation in the optical and analogue domains. These cameras never see a full scene. Our localisation case study demonstrates, in simulation, four implementations that all fulfil this task. The design space of such systems is vast despite the constraints of optical-analogue processing. We hope to inspire future works that expand the range of applications open to sighted robotic systems.
{"title":"Inherently privacy-preserving vision for trustworthy autonomous systems: Needs and solutions","authors":"Adam K. Taras , Niko Sünderhauf , Peter Corke , Donald G. Dansereau","doi":"10.1016/j.jrt.2024.100079","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100079","url":null,"abstract":"<div><p>Vision is an effective sensor for robotics from which we can derive rich information about the environment: the geometry and semantics of the scene, as well as the age, identity, and activity of humans within that scene. This raises important questions about the reach, lifespan, and misuse of this information. This paper is a call to action to consider privacy in robotic vision. We propose a specific form of inherent privacy preservation in which no images are captured or could be reconstructed by an attacker, even with full remote access. We present a set of principles by which such systems could be designed, employing data-destroying operations and obfuscation in the optical and analogue domains. These cameras <em>never</em> see a full scene. Our localisation case study demonstrates in simulation four implementations that all fulfil this task. The design space of such systems is vast despite the constraints of optical-analogue processing. We hope to inspire future works that expand the range of applications open to sighted robotic systems.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100079"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000052/pdfft?md5=4bc01eda85dc3576e713b1aa99ec1739&pid=1-s2.0-S2666659624000052-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139999894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exit (digital) humanity: Critical notes on the anthropological foundations of “digital humanism”
Pub Date: 2024-02-13 | DOI: 10.1016/j.jrt.2024.100077
Antonio Lucci , Andrea Osti
This paper evaluates the historical-anthropological and ethical underpinnings of the concept of “digital humanism.” Our inquiry begins with a reconstructive analysis (§1), focusing on three pivotal works defining digital humanism. The objective is to expose shared characteristics shaping the notions of “human being” and “humanity.” Moving forward, our investigation employs anthropological-evolutionary (§2) and individual-cognitive (§3) perspectives to discern how cultural-historical contingencies shape the implicit understanding of the “human being” that forms the foundation for digital humanism. As an illustrative case study, we delve into Luddism (§4) to illuminate the potential and limitations of adopting a critical stance towards digital humanism. Through a thorough analysis, encompassing both efficacy and implicit anthropological elements, our goal is to extract ethical implications (§5) pertinent to our broader objective. This examination reveals the interplay between cultural-historical contingencies and anthropological constants in shaping assumptions about the “human being” within the context of digital humanism. In conclusion, our paper contributes to a nuanced understanding of the implicit assumptions permeating the digital humanism discourse. We advocate for a more critical and reflective engagement with the foundational concepts of digital humanism, urging scholars and practitioners to navigate the complexities of its historical-anthropological and ethical dimensions.
{"title":"Exit (digital) humanity: Critical notes on the anthropological foundations of “digital humanism”","authors":"Antonio Lucci , Andrea Osti","doi":"10.1016/j.jrt.2024.100077","DOIUrl":"10.1016/j.jrt.2024.100077","url":null,"abstract":"<div><p>This paper evaluates the historical-anthropological and ethical underpinnings of the concept of “digital humanism.” Our inquiry begins with a reconstructive analysis (§1), focusing on three pivotal works defining digital humanism. The objective is to expose shared characteristics shaping the notions of “human being” and “humanity.” Moving forward, our investigation employs anthropological-evolutionary (§2) and individual-cognitive (§3) perspectives to discern how cultural-historical contingencies shape the implicit understanding of the “human being” that forms the foundation for digital humanism. As an illustrative case study, we delve into Luddism (§4) to illuminate the potential and limitations of adopting a critical stance towards digital humanism. Through a thorough analysis, encompassing both efficacy and implicit anthropological elements, our goal is to extract ethical implications (§5) pertinent to our broader objective. This examination reveals the interplay between cultural-historical contingencies and anthropological constants in shaping assumptions about the “human being” within the context of digital humanism. In conclusion, our paper contributes to a nuanced understanding of the implicit assumptions permeating the digital humanism discourse. We advocate for a more critical and reflective engagement with the foundational concepts of digital humanism, urging scholars and practitioners to navigate the complexities of its historical-anthropological and ethical dimensions.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100077"},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000039/pdfft?md5=1f267005df7fc3992c4564d52def1b64&pid=1-s2.0-S2666659624000039-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139891840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Are we done with (Wordy) manifestos? Towards an introverted digital humanism
Pub Date: 2024-02-13 | DOI: 10.1016/j.jrt.2024.100078
Giacomo Pezzano
Beginning with a reconstruction of the anthropological paradigms underlying The Vienna Manifesto and The Onlife Manifesto (§ 1.1), this paper distinguishes between two possible approaches to digital humanism: an extroverted one, principally engaged in finding a way to humanize digital technologies, and an introverted one, directing attention instead to how digital technologies can re-humanize us, particularly our “mindframe” (§ 1.2). On this basis, I stress that if we take seriously the consequences of the “mediatic turn”, according to which human reason is finally recognized as mediatically contingent (§ 2.1), then we should accept that just as the book created the poietic context for the development of traditional humanism and its “bookish” idea of private and public reason, so too digital psycho-technologies today provide the conditions for the rise of a new humanism (§ 2.2). I then discuss the possible humanizing potential of digitally simulated worlds: I compare the symbolic-reconstructive mindset to the sensorimotor mindset (§ 3.1), and I highlight their respective mediological associations with the book and the video game, advocating for the peculiar thinking and reasoning affordances now offered by the new digital psycho-technologies (§ 3.2).
{"title":"Are we done with (Wordy) manifestos? Towards an introverted digital humanism","authors":"Giacomo Pezzano","doi":"10.1016/j.jrt.2024.100078","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100078","url":null,"abstract":"<div><p>Beginning with a reconstruction of the anthropological paradigms underlying <em>The Vienna Manifesto</em> and <em>The Onlife Manifesto</em> (§ 1.1), this paper distinguishes between two possible approaches to digital humanism: an <em>extroverted</em> one, principally engaged in finding a way to humanize digital technologies, and an <em>introverted</em> one, pointing instead attention to how digital technologies can re-humanize us, particularly our “mindframe” (§ 1.2). On this basis, I stress that if we take seriously the consequences of the “mediatic turn”, according to which human reason is finally recognized as mediatically contingent (§ 2.1), then we should accept that just as the book created the poietic context for the development of traditional humanism and its “bookish” idea of private and public reason, so too digital psycho-technologies today provide the conditions for the rise of a new humanism (§ 2.2). I then discuss the possible humanizing potential of digital simulated worlds: I compare the symbolic-reconstructive mindset to the sensorimotor mindset (§ 3.1), and I highlight their respective mediological association with the book and the video game, advocating for the peculiar thinking and reasoning affordances now offered by the new digital psycho-technologies (§ 3.2).</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100078"},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000040/pdfft?md5=bba4bc77d24cfec45f135507bd575f96&pid=1-s2.0-S2666659624000040-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139737874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}