Artificial worlds and artificial minds: Authenticity and language learning in digital lifeworlds
Pub Date: 2025-07-29 | DOI: 10.1016/j.jrt.2025.100131 | Journal of Responsible Technology, Vol. 23, Article 100131
Blair Matthews
Language learning is increasingly being extended into digital and online spaces that have been enhanced by simulated reality and augmented with data and artificial intelligence. While this may expand opportunities for language learning, some critics argue that digital spaces may represent a pastiche or a parody of reality. However, while there are genuine issues, such criticisms may often fall back on naïve or essentialist views of authenticity, in particular by narrowing language learning scenarios to real-life or genuine communication. I argue that research undersocialises authenticity by not taking social relations into sufficient consideration, which denies or elides the ways that authenticity is achieved. In this conceptual paper, I offer a relational account of authenticity, conceiving digital environments within a stratified ontological framework in which authenticity is not inherent in individuals or texts but instead emerges from complex social contexts. Authenticity, then, does not refer to the authenticity of texts or to “being oneself”, but to authenticity in relation to others. A stratified ontology provides opportunities to extend relations with others, offering what is described as a “submersion into a temporary agency”, where language learners can experiment with the social order in order to achieve authenticity of themselves in the target language. Finally, I present a relational pedagogy based on responsiveness, where feedback is distributed among disparate human and technical actors that facilitate, problematise or endorse authenticity.
{"title":"Artificial worlds and artificial minds: Authenticity and language learning in digital lifeworlds","authors":"Blair Matthews","doi":"10.1016/j.jrt.2025.100131","DOIUrl":"10.1016/j.jrt.2025.100131","url":null,"abstract":"<div><div>Language learning is increasingly being extended into digital and online spaces that have been enhanced by simulated reality and augmented with data and artificial intelligence. While this may expand opportunities for language learning, some critics argue that digital spaces may represent a pastiche or a parody of reality. However, while there are genuine issues, such criticisms may often fall back on naïve or essentialist views of authenticity, in particular by narrowing language learning scenarios to real-life or genuine communication. I argue that research undersocialises authenticity by not taking social relations into sufficient consideration, which denies or elides the ways that authenticity is achieved. In this conceptual paper, I offer a relational account of authenticity, where I conceive digital environments within a stratified ontological framework, where authenticity is not inherent in individuals or texts, but instead emerges from complex social contexts. Authenticity, then, does not refer to authenticity of texts or “being oneself”, but authenticity in relation to others. A stratified ontology provides opportunities to extend relations with others, offering what is described as a “submersion into a temporary agency”, where language learners can experiment with the social order in order to achieve authenticity of themselves in the target language. Finally, I present a relational pedagogy based on responsiveness, where feedback is distributed among disparate human and technical actors which facilitate, problematise or endorse authenticity.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100131"},"PeriodicalIF":0.0,"publicationDate":"2025-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144757674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward a responsible and ethical authorization to operate: A case study in AI consulting
Pub Date: 2025-07-24 | DOI: 10.1016/j.jrt.2025.100130 | Journal of Responsible Technology, Vol. 23, Article 100130
Jason M. Pittman , Geoff Schaefer
The US federal government mandates all technologies receive an Authorization to Operate (ATO). The ATO serves as a testament to the technology's security compliance. This process underscores a fundamental belief: technologies must conform to established security norms. Yet, the security-centric view does not include ethical and responsible AI. Unlike security parameters, ethical and responsible AI lacks a standardized framework for evaluation. This leaves a critical gap in AI governance. This paper presents our consulting experiences in addressing such a gap and introduces a pioneering ATO assessment instrument. The instrument integrates ethical and responsible AI principles into assessment decision-making. We delve into the instrument's design, shedding light on unique attributes and features. Furthermore, we discuss emergent best practices related to this ATO instrument. These include potential decision pitfalls of interest to practitioners and policymakers alike. Looking ahead, we envision an evolved version of this ethical and responsible ATO. This future iteration incorporates continuous monitoring capabilities and novel ethical measures. Finally, we offer insights for the AI community to evaluate their AI decision-making.
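To give a concrete sense of what folding ethical criteria into an authorization decision might look like, here is a minimal Python sketch. The criteria, categories, and pass threshold below are invented for illustration; they are not the instrument described in the paper.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    category: str      # e.g. "security" or "ethics"
    satisfied: bool

def authorization_decision(criteria: list[Criterion], required_ratio: float = 1.0) -> str:
    """Grant the (hypothetical) authorization only if every category clears the bar."""
    by_category: dict[str, list[bool]] = {}
    for c in criteria:
        by_category.setdefault(c.category, []).append(c.satisfied)
    ok = all(sum(flags) / len(flags) >= required_ratio for flags in by_category.values())
    return "authorize" if ok else "remediate and reassess"

# Illustrative checklist: security criteria pass, one ethics criterion fails.
checklist = [
    Criterion("encryption at rest", "security", True),
    Criterion("access control review", "security", True),
    Criterion("documented training-data provenance", "ethics", True),
    Criterion("disparate-impact testing", "ethics", False),
]
print(authorization_decision(checklist))  # -> remediate and reassess
```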
{"title":"Toward a responsible and ethical authorization to operate: A case study in AI consulting","authors":"Jason M. Pittman , Geoff Schaefer","doi":"10.1016/j.jrt.2025.100130","DOIUrl":"10.1016/j.jrt.2025.100130","url":null,"abstract":"<div><div>The US federal government mandates all technologies receive an Authorization to Operate (ATO). The ATO serves as a testament to the technology's security compliance. This process underscores a fundamental belief: technologies must conform to established security norms. Yet, the security-centric view does not include ethical and responsible AI. Unlike security parameters, ethical and responsible AI lacks a standardized framework for evaluation. This leaves a critical gap in AI governance. This paper presents our consulting experiences in addressing such a gap and introduces a pioneering ATO assessment instrument. The instrument integrates ethical and responsible AI principles into assessment decision-making. We delve into the instrument's design, shedding light on unique attributes and features. Furthermore, we discuss emergent best practices related to this ATO instrument. These include potential decision pitfalls of interest to practitioners and policymakers alike. Looking ahead, we envision an evolved version of this ethical and responsible ATO. This future iteration incorporates continuous monitoring capabilities and novel ethical measures. Finally, we offer insights for the AI community to evaluate their AI decision-making.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100130"},"PeriodicalIF":0.0,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144721687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unravelling responsibility for AI
Pub Date: 2025-07-23 | DOI: 10.1016/j.jrt.2025.100124 | Journal of Responsible Technology, Vol. 23, Article 100124
Zoe Porter , Philippa Ryan , Phillip Morgan , Joanna Al-Qaddoumi , Bernard Twomey , Paul Noordhof , John McDermid , Ibrahim Habli
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what ‘responsibility’ means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied by a graphical notation and a general methodology for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation ‘Actor A is responsible for Occurrence O,’ the framework unravels the concept of responsibility to clarify that there are different possibilities of who is responsible for AI, senses in which they are responsible, and aspects of events they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.
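As a reading aid for the three-part formulation, the following sketch encodes responsibility relations as typed records that could back a network visualisation. The senses, aspects, and actors listed are invented placeholders, not the paper's own taxonomy or notation.

```python
from dataclasses import dataclass
from enum import Enum

class Sense(Enum):
    CAUSAL = "causal"
    ROLE = "role"
    MORAL = "moral"
    LEGAL = "legal"

class Aspect(Enum):
    DECISION = "decision"
    OUTPUT = "output"
    IMPACT = "impact"

@dataclass(frozen=True)
class ResponsibilityRelation:
    """One edge in a responsibility network: Actor A is responsible,
    in some sense, for some aspect of Occurrence O."""
    actor: str
    sense: Sense
    occurrence: str
    aspect: Aspect

# A toy network for a collision scenario; the actors and attributions are invented.
network = [
    ResponsibilityRelation("vessel operator", Sense.ROLE, "collision", Aspect.DECISION),
    ResponsibilityRelation("software vendor", Sense.CAUSAL, "collision", Aspect.OUTPUT),
    ResponsibilityRelation("flag-state regulator", Sense.MORAL, "collision", Aspect.IMPACT),
]

for r in network:
    print(f"{r.actor} bears {r.sense.value} responsibility for the "
          f"{r.aspect.value} aspect of '{r.occurrence}'")
```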
{"title":"Unravelling responsibility for AI","authors":"Zoe Porter , Philippa Ryan , Phillip Morgan , Joanna Al-Qaddoumi , Bernard Twomey , Paul Noordhof , John McDermid , Ibrahim Habli","doi":"10.1016/j.jrt.2025.100124","DOIUrl":"10.1016/j.jrt.2025.100124","url":null,"abstract":"<div><div>It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what ‘responsibility’ means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied with a graphical notation and general methodology for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation ‘Actor A is responsible for Occurrence O,’ the framework unravels the concept of responsibility to clarify that there are different possibilities of <em>who</em> is responsible for AI, <em>senses</em> in which they are responsible, and <em>aspects of events</em> they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100124"},"PeriodicalIF":0.0,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144739109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intersecting social identity and drone use in humanitarian contexts: Psychological insights for legal decisions and responsible innovation
Pub Date: 2025-07-23 | DOI: 10.1016/j.jrt.2025.100129 | Journal of Responsible Technology, Vol. 23, Article 100129
Anastasia Kordoni , Mark Levine , Amel Bennaceur , Carlos Gavidia-Calderon , Bashar Nuseibeh
While the technical and ethical challenges of using drones in Search-and-Rescue operations for transnationally displaced individuals have been explored, how drone footage can shape the psychological processes at play and impact post-rescue legal decision-making has been overlooked. This paper investigates how transnationally displaced individuals' social identities are portrayed in court and the role of drone footage in reinforcing these identities. We conducted a discourse analysis of 11 open-access asylum and deportation cases following drone-assisted Search-and-Rescue operations at sea (2015–2021). Our results suggest two primary identity constructions: as victims and as traffickers, each underpinned by conflicting psychological processes. The defence portrayed the defendants through the lens of vulnerability, while the prosecution portrayed them through the lens of unlawfulness. Psychological attributions of drone footage contributed differently to identity portrayal, influencing legal decisions regarding the status and entitlements of transnationally displaced individuals. We discuss the socio-ethical implications of these findings and propose a psychosocial account for responsible innovation in technology-mediated humanitarian contexts.
{"title":"Intersecting social identity and drone use in humanitarian contexts: Psychological insights for legal decisions and responsible innovation","authors":"Anastasia Kordoni , Mark Levine , Amel Bennaceur , Carlos Gavidia-Calderon , Bashar Nuseibeh","doi":"10.1016/j.jrt.2025.100129","DOIUrl":"10.1016/j.jrt.2025.100129","url":null,"abstract":"<div><div>While the technical and ethical challenges of using drones in Search-and-Rescue operations for transnationally displaced individuals have been explored, how drone footage can shape psychological processes at play and impact post-rescue legal decision-making has been overlooked. This paper investigates how transnationally displaced individuals' social identities are portrayed in court and the role of drone footage in reinforcing these identities. We conducted a discourse analysis of 11 open-access asylum and deportation cases following drone-assisted Search-and-Rescue operations at sea (2015–2021). Our results suggest two primary identity constructions: as victims and as traffickers, each underpinned by conflicting psychological processes. The defence portrayed the defendants through the lens of vulnerability, while the prosecution through unlawfulness. Psychological attributions of drone footage contributed differently to identity portrayal, influencing legal decisions regarding the status and entitlements of transnationally displaced individuals. We discuss the socio-ethical implications of these findings and propose a psychosocial account for responsible innovation in technology mediated humanitarian contexts.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100129"},"PeriodicalIF":0.0,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144724925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigating the complexities of AI and digital governance: the 5W1H framework
Pub Date: 2025-07-18 | DOI: 10.1016/j.jrt.2025.100127 | Journal of Responsible Technology, Vol. 23, Article 100127
S. Matthew Liao , Iskandar Haykel , Katherine Cheung , Taylor Matalon
As AI and digital technologies advance rapidly, governance frameworks struggle to keep pace with emerging applications and risks. This paper introduces a "5W1H" framework to systematically analyze AI governance proposals through six key questions: What should be regulated (data, algorithms, sectors, or risk levels), Why regulate (ethics, legal compliance, market failures, or national interests), Who should regulate (industry, government, or public stakeholders), When regulation should occur (upstream, downstream, or lifecycle approaches), Where it should take place (local, national, or international levels), and How it should be enacted (hard versus soft regulation). The framework is applied to compare the European Union's AI Act with the current U.S. regulatory landscape, revealing the EU's comprehensive, risk-based approach versus America's fragmented, sector-specific strategy. By providing a structured analytical tool, the 5W1H framework helps policymakers, researchers, and stakeholders navigate complex AI governance decisions and identify areas for improvement in existing regulatory approaches.
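As a rough illustration of how the six questions can organise a side-by-side comparison, the sketch below tabulates the characterisation given in the abstract. The short entries are loose paraphrases for illustration, not an authoritative summary of either regime.

```python
# The six 5W1H dimensions, with illustrative (paraphrased) entries for the two regimes.
FIVE_W_ONE_H = ["what", "why", "who", "when", "where", "how"]

eu_ai_act = {
    "what": "risk levels (comprehensive, risk-based)",
    "why": "fundamental rights and a harmonised market",
    "who": "government (EU institutions and national authorities)",
    "when": "lifecycle (pre- and post-market obligations)",
    "where": "supranational (EU-wide)",
    "how": "mostly hard regulation",
}

us_landscape = {
    "what": "sector-specific applications",
    "why": "mixed market, legal and national-interest drivers",
    "who": "fragmented: federal agencies, states, industry",
    "when": "largely downstream, after deployment",
    "where": "national and state level",
    "how": "largely soft regulation and guidance",
}

for q in FIVE_W_ONE_H:
    print(f"{q.upper():5}  EU: {eu_ai_act[q]:55}  US: {us_landscape[q]}")
```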
{"title":"Navigating the complexities of AI and digital governance: the 5W1H framework","authors":"S. Matthew Liao , Iskandar Haykel , Katherine Cheung , Taylor Matalon","doi":"10.1016/j.jrt.2025.100127","DOIUrl":"10.1016/j.jrt.2025.100127","url":null,"abstract":"<div><div>As AI and digital technologies advance rapidly, governance frameworks struggle to keep pace with emerging applications and risks. This paper introduces a \"5W1H\" framework to systematically analyze AI governance proposals through six key questions: <em>What</em> should be regulated (data, algorithms, sectors, or risk levels), <em>Why</em> regulate (ethics, legal compliance, market failures, or national interests), <em>Who</em> should regulate (industry, government, or public stakeholders), <em>When</em> regulation should occur (upstream, downstream, or lifecycle approaches), <em>Where</em> it should take place (local, national, or international levels), and <em>How</em> it should be enacted (hard versus soft regulation). The framework is applied to compare the European Union's AI Act with the current U.S. regulatory landscape, revealing the EU's comprehensive, risk-based approach versus America's fragmented, sector-specific strategy. By providing a structured analytical tool, the 5W1H framework helps policymakers, researchers, and stakeholders navigate complex AI governance decisions and identify areas for improvement in existing regulatory approaches.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100127"},"PeriodicalIF":0.0,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144696721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A turning point in AI: Europe's human-centric approach to technology regulation
Pub Date: 2025-07-17 | DOI: 10.1016/j.jrt.2025.100128 | Journal of Responsible Technology, Vol. 23, Article 100128
Yavuz Selim Balcioğlu , Ahmet Alkan Çelik , Erkut Altindağ
This article examines the European Union's Artificial Intelligence Act, a landmark piece of legislation that sets forth comprehensive rules for the development, deployment, and governance of artificial intelligence technologies within the EU. Emphasizing a human-centric approach, the Act aims to ensure AI's safe use, protect fundamental rights, and foster innovation within a framework that supports economic growth. Through a detailed analysis, the article explores the Act's key provisions, including its risk-based approach, bans and restrictions on certain AI practices, and measures for safeguarding fundamental rights. It also discusses the potential impact on SMEs, the importance of balancing regulation with innovation, and the need for the Act to adapt in response to technological advancements. The role of stakeholders in ensuring the Act's successful implementation and the significance of this legislative milestone for the future of AI are highlighted. The article concludes with reflections on the opportunities the Act presents for ethical AI development and the challenges ahead in maintaining its relevance and efficacy in a rapidly evolving technological landscape.
{"title":"A turning point in AI: Europe's human-centric approach to technology regulation","authors":"Yavuz Selim Balcioğlu , Ahmet Alkan Çelik , Erkut Altindağ","doi":"10.1016/j.jrt.2025.100128","DOIUrl":"10.1016/j.jrt.2025.100128","url":null,"abstract":"<div><div>This article examines the European Union's Artificial Intelligence Act, a landmark legislation that sets forth comprehensive rules for the development, deployment, and governance of artificial intelligence technologies within the EU. Emphasizing a human-centric approach, the Act aims to ensure AI's safe use, protect fundamental rights, and foster innovation within a framework that supports economic growth. Through a detailed analysis, the article explores the Act's key provisions, including its risk-based approach, bans and restrictions on certain AI practices, and measures for safeguarding fundamental rights. It also discusses the potential impact on SMEs, the importance of balancing regulation with innovation, and the need for the Act to adapt in response to technological advancements. The role of stakeholders in ensuring the Act's successful implementation and the significance of this legislative milestone for the future of AI are highlighted. The article concludes with reflections on the opportunities the Act presents for ethical AI development and the challenges ahead in maintaining its relevance and efficacy in a rapidly evolving technological landscape.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100128"},"PeriodicalIF":0.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft law for unintentional empathy: addressing the governance gap in emotion-recognition AI technologies
Pub Date: 2025-07-16 | DOI: 10.1016/j.jrt.2025.100126 | Journal of Responsible Technology, Vol. 23, Article 100126
Andrew McStay , Vian Bakir
Despite regulatory efforts, there is a significant governance gap in managing emotion recognition AI technologies and those that emulate empathy. This paper asks: should international soft law mechanisms, such as ethical standards, complement hard law in addressing governance gaps in emotion recognition and empathy-emulating AI technologies? To argue that soft law can provide detailed guidance, particularly for research ethics committees and related boards advising on these technologies, the paper first explores how legal definitions of emotion recognition, especially in the EU AI Act, rest on reductive and physiognomic criticism of emotion recognition. It progresses to detail that systems may be designed to intentionally empathise with their users, but also that empathy may be unintentional – or effectively incidental to how these systems work. Approaches that are non-reductive and avoid the labelling of emotion as conceived in the EU AI Act raise novel governance questions and a physiognomic critique of a more dynamic nature. The paper finds that international soft law can complement hard law, especially when critique is subtle but significant, when guidance is anticipatory in nature, and when detailed recommendations for developers are required.
{"title":"Soft law for unintentional empathy: addressing the governance gap in emotion-recognition AI technologies","authors":"Andrew McStay , Vian Bakir","doi":"10.1016/j.jrt.2025.100126","DOIUrl":"10.1016/j.jrt.2025.100126","url":null,"abstract":"<div><div>Despite regulatory efforts, there is a significant governance gap in managing emotion recognition AI technologies and those that emulate empathy. This paper asks: should international soft law mechanisms, such as ethical standards, complement hard law in addressing governance gaps in emotion recognition and empathy-emulating AI technologies? To argue that soft law can provide detailed guidance, particularly for research ethics committees and related boards advising on these technologies, the paper first explores how legal definitions of emotion recognition, especially in the EU AI Act, rest on reductive and physiognomic criticism of emotion recognition. It progresses to detail that systems may be designed to intentionally empathise with their users, but also that empathy may be unintentional – or effectively incidental to how these systems work. Approaches that are non-reductive and avoid labelling of emotion as conceived in the EU AI Act raises novel governance questions and physiognomic critique of a more dynamic nature. The paper finds that international soft law can complement hard law, especially when critique is subtle but significant, when guidance is anticipatory in nature, and when detailed recommendations for developers are required.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100126"},"PeriodicalIF":0.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144662247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ten simple guidelines for decolonising algorithmic systems
Pub Date: 2025-07-15 | DOI: 10.1016/j.jrt.2025.100125 | Journal of Responsible Technology, Vol. 23, Article 100125
Dion R.J. O’Neale , Daniel Wilson , Paul T. Brown , Pascarn Dickinson , Manakore Rikus-Graham , Asia Ropeti
As the scope and prevalence of algorithmic systems and artificial intelligence for decision making expand, there is a growing understanding of the need for approaches to help with anticipating adverse consequences and to support the development and deployment of algorithmic systems that are socially responsible and ethically aware. This has led to increasing interest in "decolonising" algorithmic systems as a method of managing and mitigating harms and biases from algorithms and for supporting social benefits from algorithmic decision making for Indigenous peoples.
This article presents ten simple guidelines for giving practical effect to foundational Māori (the Indigenous people of Aotearoa New Zealand) principles in the design, deployment, and operation of algorithmic systems. The guidelines are based on previously established literature regarding ethical use of Māori data. Where possible we have related these guidelines and recommendations to other development practices, for example, to open-source software.
While not intended to be exhaustive, we hope that these guidelines encourage those who work with Māori data in algorithmic systems to engage with processes and practices that support culturally appropriate and ethical approaches to algorithmic systems.
{"title":"Ten simple guidelines for decolonising algorithmic systems","authors":"Dion R.J. O’Neale , Daniel Wilson , Paul T. Brown , Pascarn Dickinson , Manakore Rikus-Graham , Asia Ropeti","doi":"10.1016/j.jrt.2025.100125","DOIUrl":"10.1016/j.jrt.2025.100125","url":null,"abstract":"<div><div>As the scope and prevalence of algorithmic systems and artificial intelligence for decision making expand, there is a growing understanding of the need for approaches to help with anticipating adverse consequences and to support the development and deployment of algorithmic systems that are socially responsible and ethically aware. This has led to increasing interest in \"decolonising\" algorithmic systems as a method of managing and mitigating harms and biases from algorithms and for supporting social benefits from algorithmic decision making for Indigenous peoples.</div><div>This article presents ten simple guidelines for giving practical effect to foundational Māori (the Indigenous people of Aotearoa New Zealand) principles in the design, deployment, and operation of algorithmic systems. The guidelines are based on previously established literature regarding ethical use of Māori data. Where possible we have related these guidelines and recommendations to other development practices, for example, to open-source software.</div><div>While not intended to be exhaustive or extensive, we hope that these guidelines are able to facilitate and encourage those who work with Māori data in algorithmic systems to engage with processes and practices that support culturally appropriate and ethical approaches for algorithmic systems.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100125"},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144662246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Participatory research in low resource settings - Endeavours in epistemic justice at the Banyan, India
Pub Date: 2025-06-24 | DOI: 10.1016/j.jrt.2025.100123 | Journal of Responsible Technology, Vol. 23, Article 100123
Mrinalini Ravi , Swarna Tyagi , Vandana Gopikumar , Emma Emily de Wit , Joske Bunders , Deborah Padgett , Barbara Regeer
Involving persons with lived experience in knowledge generation through participatory research (PR) has become increasingly important to challenge power structures in knowledge production and research. In the case of persons with lived experiences of mental illness, participatory research has gained popularity since the early 1970s, but there is little empirical work from countries like India on how PR can be implemented in psychiatric settings.
This study explores how persons with lived experiences of mental illness can be engaged as peer researchers in a service utilisation audit of The Banyan's inpatient, outpatient and inclusive living facilities. The audit was an attempt by The Banyan to co-opt clients as peer-researchers, thereby enhancing participatory approaches to care planning and provision. Notes and transcripts of research process activities (three meetings for training purposes), 180 interviews conducted as part of the audit, and follow-up Focus Group Discussions (n = 4) with 18 peer researchers were used to document their experiences and gather feedback on the training and research process.
We found that, reflected against the lack of formal education in the past, the opportunity and support received to be part of a research endeavour elicited a sense of pride, relief, and liberation in peer researchers. Additionally, actualising the role of an academic and researcher, and not just being passive responders to people in positions of intellectual and systemic power, engendered in peer researchers a sense of responsibility and accountability to the mental health system. Thirdly, supporting persons with experiences of mental illness in participatory research activities, especially in low resource settings, requires specific consideration of the practical conditions and adjustments needed to avoid tokenism. Finally, both peer and staff researchers spoke about persisting hierarchies between them, which deserve attention.
We conclude that participatory research has significant scope amongst clients from disadvantaged communities in low-resource settings. Respondents repeatedly expressed an urgency for persons with lived experience to contribute to mental health pedagogy and, in so doing, disrupt archaic treatment approaches. Experiences from this enquiry also call for a rethink of how training in research can be developed for individuals without formal education and with cognitive difficulties, with the help of auditory support systems, such that key concepts are available and accessible, and long-term memory becomes less of a deterrent in the pursuit of knowledge and truth.
{"title":"Participatory research in low resource settings - Endeavours in epistemic justice at the Banyan, India","authors":"Mrinalini Ravi , Swarna Tyagi , Vandana Gopikumar , Emma Emily de Wit , Joske Bunders , Deborah Padgett , Barbara Regeer","doi":"10.1016/j.jrt.2025.100123","DOIUrl":"10.1016/j.jrt.2025.100123","url":null,"abstract":"<div><div>Involving persons with lived experience in knowledge generation through participatory research (PR) has become increasingly important to challenge power structures in knowledge production and research. In the case of persons with lived experiences of mental illness, participatory research has gained popularity since the early 70 s, but there is little empirical work from countries like India on how PR can be implemented in psychiatric settings.</div><div>This study focuses on exploring the way persons with lived experiences of mental illness can be engaged as peer researchers in a service utilisation audit of The Banyan’s inpatient, outpatient and inclusive living facilities. The audit was an attempt by The Banyan to co-opt clients as peer-researchers, thereby enhancing participatory approaches to care planning and provision. Notes and transcripts of research process activities (three meetings for training purposes), 180 interviews as part of the audit, as well as follow up Focus Group Discussions (<em>n</em> = 4) conducted with 18 peer researchers, were used to document their experiences and gather feedback on the training and research process.</div><div>We foundthat, reflected against the lack of formal education in the past, the opportunity and support received to be part of a research endeavour, elicited a sense of pride, relief, and liberation in peer researchers. Additionally, actualising the role of an academic and researcher, and not just being passive responders to people in positions of intellectual and systemic power, engendered a sense of responsibility and accountability to peer researchers, and to the mental health system. Thirdly, supporting persons with experiences of mental illness in participatory research activities, especially in the context of low resource settings, requires specific consideration of practical conditions and adjustments needed to avoid tokenism. Finally, both peer- and staff researchers spoke about persisting hierarchies between them which deserve attention.</div><div>We conclude that participatory research has a significant scope amongst clients from disadvantaged communities in low-resource settings. Respondents repeatedly expressed an urgency for persons with lived experience to contribute to mental health pedagogy, and, in so doing, disrupt archaic treatment approaches.. 
Experiences from this enquiry also call for a rethink on how training in research can be developed for individuals without formal education and with cognitive difficulties, with the help of auditory support systemssuch that key concepts are available and accessible, and long-term memory becomes less of a deterrent in the pursuit of knowledge and truth.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100123"},"PeriodicalIF":0.0,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144679387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A capability approach to ethical development and internal auditing of AI technology
Pub Date: 2025-06-01 | DOI: 10.1016/j.jrt.2025.100121 | Journal of Responsible Technology, Vol. 22, Article 100121
Mark Graves , Emanuele Ratti
Responsible artificial intelligence (AI) requires integrating ethical awareness into the full process of designing and developing AI, including ethics-based auditing of AI technology. We claim the Capability Approach (CA) of Sen and Nussbaum grounds AI ethics in essential human freedoms and can increase awareness of the moral dimension in the technical decision making of developers and data scientists constructing data-centric AI systems. Our use of CA focuses awareness on the ethical impact that day-to-day technical decisions have on the freedom of data subjects to make choices and live meaningful lives according to their own values. For internal auditing of AI technology development, we design and develop a light-weight ethical auditing tool (LEAT) that uses simple natural language processing (NLP) techniques to search design and development documents for relevant ethical characterizations. We describe how CA guides our design, demonstrate LEAT on both principle- and capabilities-based use cases, and characterize its limitations.
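To give a flavour of what lightweight, keyword-level auditing of design documents can look like, here is a small Python sketch that counts capability-related terms across a folder of documents. The term list, folder name, and flagging rule are invented for illustration and do not reproduce LEAT itself.

```python
import re
from pathlib import Path

# Illustrative capability-related terms to look for in design and development documents.
CAPABILITY_TERMS = {
    "choice": "freedom of data subjects to make choices",
    "consent": "informed agency of data subjects",
    "privacy": "control over personal information",
    "well-being": "living a life one has reason to value",
}

def audit_document(path: Path) -> dict[str, int]:
    """Count occurrences of each capability-related term in one document."""
    text = path.read_text(encoding="utf-8").lower()
    return {term: len(re.findall(rf"\b{re.escape(term)}\b", text)) for term in CAPABILITY_TERMS}

if __name__ == "__main__":
    # "design_docs" is a hypothetical folder of design and development notes.
    for doc in sorted(Path("design_docs").glob("*.md")):
        hits = audit_document(doc)
        missing = sorted(term for term, count in hits.items() if count == 0)
        print(f"{doc.name}: terms never mentioned -> {missing or 'none'}")
```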
{"title":"A capability approach to ethical development and internal auditing of AI technology","authors":"Mark Graves , Emanuele Ratti","doi":"10.1016/j.jrt.2025.100121","DOIUrl":"10.1016/j.jrt.2025.100121","url":null,"abstract":"<div><div>Responsible artificial intelligence (AI) requires integrating ethical awareness into the full process of designing and developing AI, including ethics-based auditing of AI technology. We claim the Capability Approach (CA) of Sen and Nussbaum grounds AI ethics in essential human freedoms and can increase awareness of the moral dimension in the technical decision making of developers and data scientists constructing data-centric AI systems. Our use of CA focuses awareness on the ethical impact that day-to-day technical decisions have on the freedom of data subjects to make choices and live meaningful lives according to their own values. For internal auditing of AI technology development, we design and develop a light-weight ethical auditing tool (LEAT) that uses simple natural language processing (NLP) techniques to search design and development documents for relevant ethical characterizations. We describe how CA guides our design, demonstrate LEAT on both principle- and capabilities-based use cases, and characterize its limitations.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"22 ","pages":"Article 100121"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144243259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}