Pub Date: 2026-03-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.jrt.2025.100147
Maira Klyshbekova, Gisela Reyes Cruz, Caitlin Bentley, Stef Garasto, Amy Aisha Brown, Christine Aicardi, Brian Ball, Mohammad Naiseh, Oana Andrei
Responsible Artificial Intelligence (RAI) education has emerged as a way of approaching the field of AI to address a host of concerns (Bentley et al., 2023). Many education providers have been releasing new RAI-related online courses, programmes, or toolkits. When combined with the issues emerging from the development, deployment, and use of AI, the expansion of RAI education and the proliferation of resources raise two critical questions. First, what can we learn about RAI from examining both the content and structure of publicly available RAI educational resources? Second, how might we understand the quality and impact of these RAI resources? We conducted a systematic search of UK RAI educational resources found online. We first present a descriptive analysis of 211 resources collected, including their type, format, cost, sector, audience, and type of provider. Furthermore, we describe our collaborative approach to analysing four pre-selected resources in-depth, from which we outlined an evaluation framework that we then employed for assessing the content of a subset of 47 resources. The five crucial areas of our framework could guide both learners and developers when approaching RAI resources.
A UK perspective on responsible education for responsible AI: a multidisciplinary review and evaluation framework. Journal of Responsible Technology, 25, Article 100147. DOI: 10.1016/j.jrt.2025.100147
This reflective case study examines how the responsible use of social media can help raise awareness of electoral participation among immigrants in Norway. Through a structured, four-phase design process, the project combined semi-structured interviews, workshops with political representatives, and design probes as a new method to investigate barriers to participation and to test communication strategies. Findings revealed that the primary challenge was not the lack of information, but rather the ineffective distribution and visibility of existing resources. The project highlights the role of researchers and designers as facilitators of access, emphasising that responsible social media use requires careful attention to who is reached, how messages are interpreted, and what barriers remain. It also emphasises the need for participatory, community-driven approaches and the importance of integrating offline channels to reach diverse audiences. The reflections of the case study offer deeper insights into the ethical and strategic responsibilities of social media design that can be transferable to other cases in civic communication.
Christodoulos Christodoulou, Arild Skarsfjord Berg. Responsible design and use of social media technology: A reflective case study on raising awareness towards social sustainability. Journal of Responsible Technology, 25, Article 100158. Pub Date: 2026-03-01. DOI: 10.1016/j.jrt.2026.100158
Pub Date: 2026-03-01 | Epub Date: 2026-03-04 | DOI: 10.1016/j.jrt.2026.100162
Kais Allkivi
Using NLP to analyze authentic learner language helps to build automated assessment and feedback tools. It also offers new and extensive insights into the development of second language production. However, there is a lack of research explicitly combining these aspects. This study aimed to classify Estonian proficiency examination writings (levels A2–C1), assuming that careful feature selection can lead to more explainable and generalizable machine learning models for language testing. Various linguistic properties of the training data were analyzed to identify relevant proficiency predictors associated with increasing complexity and correctness, rather than the writing task. Such lexical, morphological, surface, and error features were used to train classification models, which were compared to models that also allowed for other features. The pre-selected features yielded a similar test accuracy but reduced variation in the classification of different text types. The best classifiers achieved an accuracy of around 0.9. Additional evaluation on an earlier exam sample revealed that the writings have become more complex over a 7–10-year period, while accuracy still reached 0.8 with some feature sets. The results have been implemented in the writing evaluation module of an Estonian open-source language learning environment.
Towards interpretable models for language proficiency assessment: Predicting the CEFR level of Estonian learner texts. Journal of Responsible Technology, 25, Article 100162. DOI: 10.1016/j.jrt.2026.100162
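The study above trains classifiers on pre-selected lexical, morphological, surface, and error features. The abstract does not give the actual feature set or model, so the following is only an illustrative sketch, with hypothetical stand-in features (average word length, type-token ratio, errors per word) and a simple nearest-centroid rule in plain Python:

```python
# Illustrative sketch only: the paper's real features and classifiers are not
# specified in the abstract. These three features and the nearest-centroid
# rule are hypothetical stand-ins for the lexical/surface/error features.
from collections import Counter
import math

def features(text, error_count=0):
    """Map a text to a small, interpretable feature vector."""
    words = text.split()
    n = max(len(words), 1)
    avg_word_len = sum(len(w) for w in words) / n              # surface feature
    type_token_ratio = len(set(w.lower() for w in words)) / n  # lexical feature
    errors_per_word = error_count / n                          # error feature
    return (avg_word_len, type_token_ratio, errors_per_word)

def centroids(labelled):
    """Average the feature vectors per proficiency level (e.g. A2-C1)."""
    sums, counts = {}, Counter()
    for vec, level in labelled:
        acc = sums.setdefault(level, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[level] += 1
    return {lvl: tuple(s / counts[lvl] for s in acc)
            for lvl, acc in sums.items()}

def classify(vec, cents):
    """Assign the level whose centroid is nearest in Euclidean distance."""
    return min(cents, key=lambda lvl: math.dist(vec, cents[lvl]))
```

Because each feature has a direct linguistic reading, a prediction can be explained by comparing the text's vector to the level centroids, which is the kind of interpretability the study argues for.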
Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.jrt.2026.100152
Ferdinand Griesdoorn, Maarten Kroesen, Pieter Vermaas
This exploratory thematic review examines the emerging landscape of Responsible Research and Innovation (RRI) education. It reviews 17 peer-reviewed studies published over the past two decades, using the PRISMA methodology. These studies were categorized into four themes to identify recurring successes and obstacles. The review highlights several successful practices, including the contextualization of RRI, promotion of reflexivity, participatory methods, interdisciplinary collaboration, and instances of institutional integration. Simultaneously, it uncovers persistent obstacles such as conceptual ambiguity, institutional resistance, scalability limitations, and the difficulty of translating abstract RRI principles into measurable competencies. The relations between some of these obstacles suggest a vicious reinforcing cycle that hinders progress in RRI education. We argue that resolving conceptual and definitional ambiguities could foster a more coherent and sustainable RRI education.
An exploratory thematic review of the emerging field of RRI education. Journal of Responsible Technology, 25, Article 100152. DOI: 10.1016/j.jrt.2026.100152
Pub Date: 2026-03-01 | Epub Date: 2026-02-12 | DOI: 10.1016/j.jrt.2026.100159
Abhishek Thommandru, Varda Mone
This paper explores how digital technologies such as artificial intelligence, blockchain archives, and platform-specific memorial systems are changing the experience of grieving. It examines how datafication processes, which transform emotional labour into content managed by algorithms, redefine grief expression and remembrance and raise ethical issues of commodification, privacy, and consent. The discussion traces the historical continuities of mediated grief, showing how traditional memorial artefacts have been replaced by interactive digital settings, and addresses the psychosocial consequences of long-term digital engagement with grieving. The most significant concerns are ownership and control of digital legacies, algorithmic bias, representation and visibility, and the politics of collective mourning online. Drawing on the principles of Responsible Research and Innovation (RRI), the paper promotes interdisciplinary methods that combine human-centred design, ethics, sociocultural understanding, and legal reform to prevent exploitation and inequality in digital afterlife technologies.
Datafication of mourning: Emotional labour, memory politics, and ethical innovation in digital grief technologies. Journal of Responsible Technology, 25, Article 100159. DOI: 10.1016/j.jrt.2026.100159
Pub Date: 2026-03-01 | Epub Date: 2026-01-07 | DOI: 10.1016/j.jrt.2026.100149
Pascalle Paumen, Katleen Gabriels
Artificial Intelligence (AI) based grief technologies are being offered as solutions to grief: AI griefbots are trained on the departed’s or dying person’s digital footprints to simulate them. Ongoing research questions whether they help or hinder ‘healthy’ grieving, neglecting underlying assumptions about ‘normal’ grieving. Using multimodal critical discourse analysis, this article analyses AI grief technology framings and portrayals in documentaries and six services’ websites (Seance AI; Eternos; You, Only Virtual; HereAfter AI; Project December; re;memory). We demonstrate AI grief technologies’ contribution to renegotiating grief as a technical problem to be solved. The techno-solutions are rooted in existing psychological discourses of ‘normal’ grief and position AI as band-aids or cures for grief and death. Documentaries on AI grief technologies play a significant part in reinforcing boundary-setting between normality and abnormality. Grief shifts from a human experience to something that can and should be made more efficient or avoided altogether through AI.
Never say goodbye: assumptions of ‘normal’ grief in framings and portrayals of AI grief technologies. Journal of Responsible Technology, 25, Article 100149. DOI: 10.1016/j.jrt.2026.100149
Pub Date: 2026-03-01 | Epub Date: 2026-01-07 | DOI: 10.1016/j.jrt.2026.100150
Khadiza Laskor, Richard Owen, Andrew Charlesworth
Innovations regarding the digital afterlife, underpinned by rapid advances in generative AI and synthetic media, enable the creation of interactive, posthumous personas – digital re-creations – based on the digital remains of the dead. Current regulations do not extend to these, resulting in a governance void. We present findings from 69 stakeholder interviews that explored whether such re-creations should be governed and, if so, how. Respondents widely agreed that governance was necessary and proposed several governance options, although there was little consensus as to which of these should be taken forward. Stakeholders acknowledged the various motivations and purposes of digital re-creations, to which governance should be sensitive. Our findings suggest governance principles that include proportionality (in relation to purpose and use), dignity (of the deceased) and protection from harm for those interacting with digital re-creations, particularly for the vulnerable, e.g. minors and those who may be grieving. Given the nascent stage of these innovations, initiatives aimed at developing a common understanding of terms (such as the digital afterlife), education and awareness programmes, and convening broadly configured, policy-oriented governance working groups are important first steps towards responsible development.
Multi-stakeholder perspectives on governing innovation in the digital afterlife. Journal of Responsible Technology, 25, Article 100150. DOI: 10.1016/j.jrt.2026.100150
Pub Date: 2026-03-01 | Epub Date: 2025-11-29 | DOI: 10.1016/j.jrt.2025.100143
Siri Padmanabhan Poti, Christopher J. Stanton, Catherine J. Stevens
In the global context of a ‘new social contract’ and a ‘flourishing world’, ethics mechanisms such as principles, guidelines, recommendations, standards, frameworks, and checklists are being established by public and private organisations and governments for the governance of ‘algorithmic artificial persons’ (ALAP), autonomous artificial intelligence (AAI) systems, and emerging information and communication technologies (eICT). Employing a ‘qualitative evidence synthesis’ (QES), this paper examines current ethics mechanisms and identifies eleven gaps in them. It then proposes a ‘prescriptive conceptual model’ of an ‘anticipatory general ethics library’ (AnGEL) to enable resolution of these gaps. AnGEL is conceptually modelled as an implementable library of norms and rules for ALAPs, AAI systems, and eICT, agnostic of domains and use cases. After ‘verification’ through further discourse and subsequent ‘validation’ of a prototype, AnGEL may be hosted on a cyber-physical intermediary, rendered accessible through ‘Ethics-as-a-Service’, and provide ‘ethics interoperability’.
Enabling ethics mechanisms in the governance of algorithmic artificial persons (ALAP). Journal of Responsible Technology, 25, Article 100143. DOI: 10.1016/j.jrt.2025.100143
Pub Date: 2026-03-01 | Epub Date: 2026-01-08 | DOI: 10.1016/j.jrt.2026.100148
Maria Isabel Betancur Franco
In the context of grief, chatbots are being used for therapy and as “griefbots” that impersonate the dead. Users risk data breaches and psychological attachments to monetised services. To determine human-centred guidelines for the responsible implementation of grief AI, a qualitative study was conducted, consisting of therapist interviews (N = 4), a survey of individuals who experienced grief (N = 49), and interviews with a subset of respondents (N = 4).
Participants expressed low trust in the efficacy and data security of therapeutic chatbots and strongly rejected griefbots, citing ethical and psychological concerns. Therapists also highlighted associated risks, but identified opportunities for their use as complementary therapy tools, emphasizing a need for professional supervision and usage restrictions.
Guidelines include personalisation, data privacy and security, professional oversight, and restrictions by age and psychological characteristics. The novel RUDA framework (Responsible Usage, Development, and Administration) is proposed, mitigating associated risks by outlining accountability by actor for the responsible implementation of grief AI.
Design guidelines for the therapeutic use of AI in grief. Journal of Responsible Technology, 25, Article 100148. DOI: 10.1016/j.jrt.2026.100148
Pub Date: 2026-03-01 | Epub Date: 2025-12-14 | DOI: 10.1016/j.jrt.2025.100144
Iztok Kosem, Mojca Stritar Kučuk, Špela Arhar Holdt
This paper introduces Corplus, a newly developed specialised concordancer designed to work with corpora containing annotated language corrections. Unlike traditional concordancers, which focus on retrieving linguistic patterns and frequency data from various types of corpora, this tool emphasises the retrieval and comparison of both erroneous and corrected forms within texts. It allows researchers and educators to track, analyse, and compare errors alongside their corrections, providing empirical data that can be applied to first- or second-language acquisition research, as well as applied linguistics, such as the development of language learning materials. The intuitive interface of the Corplus concordancer makes it suitable for classroom use, supporting data-driven learning and other approaches based on authentic language data. The Corplus concordancer has already been implemented to provide access to two Slovene corpora: the developmental corpus Šolar and the learner corpus KOST. The tool is designed to support a variety of languages and corpus types.
Corplus: A new concordancer for exploring authentic texts with language corrections. Journal of Responsible Technology, 25, Article 100144. DOI: 10.1016/j.jrt.2025.100144
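Corplus's internal data model and query syntax are not described in the abstract, so the following is only a hypothetical sketch of the core idea it names: a correction-aware concordance search that can retrieve a hit by either the erroneous form or the corrected form, displaying both side by side with context.

```python
# Hypothetical sketch: not Corplus's actual API or data model. It only
# illustrates a concordance search over (error, correction) annotations.
from dataclasses import dataclass

@dataclass
class Token:
    form: str                # the form as written by the learner
    correction: str = None   # annotated correction, if the form is erroneous

def concordance(sentences, query, window=2):
    """Return KWIC-style lines for every token whose written form OR
    annotated correction matches the query, showing both variants."""
    hits = []
    for sent in sentences:
        for i, tok in enumerate(sent):
            if query in (tok.form, tok.correction):
                left = " ".join(t.form for t in sent[max(0, i - window):i])
                right = " ".join(t.form for t in sent[i + 1:i + 1 + window])
                shown = tok.form if tok.correction is None else f"{tok.form} -> {tok.correction}"
                hits.append(f"{left} [{shown}] {right}")
    return hits
```

For example, in a sentence annotated as "I goed -> went home", querying either "goed" or "went" would return the same concordance line, which is the error/correction pairing the paper highlights as the tool's distinguishing feature.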