
Journal of Responsible Technology — Latest Publications

The age of AI in healthcare research: An analysis of projects submitted between 2020 and 2024 to the Estonian committee on Bioethics and Human Research
Pub Date : 2025-02-07 DOI: 10.1016/j.jrt.2025.100113
Aive Pevkur , Kadi Lubi
The ethical evaluation of healthcare research projects ensures the protection of study participants’ rights. Concurrently, the use of big health data and AI analysis is rising. A critical question is whether existing measures, including ethics committees, can competently evaluate AI-involved health projects and foresee risks. Our research aimed to identify and describe the types of research projects submitted between January 2020 and April 2024 to the Estonian Council for Bioethics and Human Research (EBIN) and to analyse AI use cases in recent years. Notably, the committee was established before the significant rise in AI usage in health research. We conducted a quantitative and qualitative content analysis of submission documents using deductive and inductive approaches to gather information on the types of studies using AI and to draw some preliminary conclusions on readiness to evaluate such projects. Results indicate that most applications come from universities, that the research draws on diverse data sources, and that the use of AI is rather uniform, with applications showing little diversity in how AI capabilities are utilised.
Citations: 0
Exploring research practices with non-native English speakers: A reflective case study
Pub Date : 2025-02-05 DOI: 10.1016/j.jrt.2025.100109
Marilys Galindo, Teresa Solorzano, Julie Neisler
Our lived experiences of learning and working are personal and connected to our racial, ethnic, and cultural identities and needs. This is especially important for non-native English-speaking research participants, as English is the dominant language for learning, working, and the design of the technologies that support them in the United States. A reflective approach was used to critique the research practices the authors were involved in while co-designing with English-first and Spanish-first learners and workers. This case study explored designing learning and employment innovations to best support non-native English-speaking learners and workers during transitions along their career pathways. Three themes were generated from the data: participants reported feeling a willingness to help, autonomy of expression, and inclusiveness in the co-design process. From this critique, a structure was developed to guide researchers' decision-making and to inform ways of being more equitable and inclusive of non-native English-speaking participants in their practices.
Citations: 0
Process industry disrupted: AI and the need for human orchestration
Pub Date : 2025-01-29 DOI: 10.1016/j.jrt.2025.100105
M.W. Vegter , V. Blok , R. Wesselink
According to EU policy makers, the introduction of AI within the process industry will help big manufacturing companies become more sustainable. At the same time, concerns arise about future work in these industries. As the EU also wants to actively pursue human-centered AI, this raises the question of how to implement AI within the process industry in a way that is sustainable and takes the views and interests of workers in this sector into account. To provide an answer, we conducted ‘ethics parallel research’, which involves empirical research. We conducted an ethnographic study of AI development within the process industry and specifically looked into the innovation process in two manufacturing plants. We found subtle but important differences that come with the respective job-related duties: while engineers continuously alter the plant as a technical system, operators hold a rather symbiotic relationship with the production process on site. Building on the framework of different mechanisms of techno-moral change, we highlight three ways in which workers might be morally impacted by AI. 1. Decisional - alongside the development of data analytic tools, respective roles and duties are being decided; 2. Relational - data analytic tools might exacerbate a power imbalance whereby engineers may re-script the work of operators; 3. Perceptual - data analytic technologies mediate perceptions, thus changing the relationship operators have to the production process. While in Industry 4.0 the problem is framed in terms of ‘suboptimal use’, in Industry 5.0 the problem should be thought of as ‘suboptimal development’.
Citations: 0
Human centred explainable AI decision-making in healthcare
Pub Date : 2025-01-10 DOI: 10.1016/j.jrt.2025.100108
Catharina M. van Leersum , Clara Maathuis
Human-centred AI (HCAI) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans. A further focus on explainable AI (XAI) allows one to gather insight into the data, reasoning, and decisions made by AI systems, facilitating human understanding and trust and contributing to identifying issues such as errors and bias. While current XAI approaches mainly have a technical focus, a transdisciplinary perspective and a socio-technical approach are necessary to understand the context and human dynamics. This is critical in the healthcare domain, as various risks could have serious consequences for both the safety of human life and medical devices.
A reflective ethical and socio-technical perspective, in which technical advancements and human factors co-evolve, is called human-centred explainable AI (HCXAI). This perspective sets humans at the centre of AI design with a holistic understanding of values, interpersonal dynamics, and the socially situated nature of AI systems. In the healthcare domain, to the best of our knowledge, limited knowledge exists on applying HCXAI, the ethical risks are unknown, and it is unclear which explainability elements are needed in decision-making to closely mimic human decision-making. Moreover, different stakeholders have different explanation needs, so HCXAI could be a way to focus on humane ethical decision-making instead of purely technical choices.
To tackle this knowledge gap, this article aims to design an actionable HCXAI ethical framework adopting a transdisciplinary approach that merges academic and practitioner knowledge and expertise from the AI, XAI, HCXAI, design science, and healthcare domains. To demonstrate the applicability of the proposed actionable framework in real scenarios and settings while reflecting on human decision-making, two use cases are considered. The first one is on AI-based interpretation of MRI scans and the second one on the application of smart flooring.
Citations: 0
Decentralized governance in action: A governance framework of digital responsibility in startups
Pub Date : 2025-01-10 DOI: 10.1016/j.jrt.2025.100107
Yangyang Zhao , Jiajun Qiu
The rise of digital technologies has fueled the emergence of decentralized governance among startups. However, this trend imposes new challenges in digitally responsible governance, such as technology usage, business accountability, and many other issues, particularly in the absence of clear guidelines. This paper explores two types of digital startups with decentralized governance: digitally transformed (e.g., DAO) and IT-enabled decentralized startups. We adapt the previously described Corporate Digital Responsibility model into a streamlined seven-cluster governance framework that is more directly applicable to these novel organizations. Through a case study, we illustrate the practical value of the conceptual framework and find key points vital for digitally responsible governance by decentralized startups. Our findings lay a conceptual and empirical groundwork for in-depth and cross-disciplinary future inquiries into digital responsibility issues in decentralized settings.
Citations: 0
Exploring expert and public perceptions of answerability and trustworthy autonomous systems
Pub Date : 2025-01-09 DOI: 10.1016/j.jrt.2025.100106
Louise Hatherall, Nayha Sethi
The emerging regulatory landscape addressing autonomous systems (AS) is underpinned by the notion that such systems be trustworthy. What individuals and groups need to determine a system as worthy of trust has consequently attracted research from a range of disciplines, although important questions remain. These include how to ensure trustworthiness in a way that is sensitive to individual histories and contexts, as well as if, and how, emerging regulatory frameworks can adequately secure the trustworthiness of AS. This article reports the socio-legal analysis of four focus groups with publics and professionals exploring whether answerability can help develop trustworthy AS in health, finance, and the public sector. It finds that answerability is beneficial in some contexts, and that to find AS trustworthy, individuals often need answers about future actions and how organisational values are embedded within a system. It also reveals pressing issues demanding attention for meaningful regulation of such systems, including dissonances between what publics and professionals identify as ‘harm’ where AS are deployed, and a significant lack of clarity about the expectations of regulatory bodies in the UK. The article discusses the implications of these findings for the developing but rapidly setting regulatory landscape in the UK and EU.
Citations: 0
Exploring ethical frontiers of artificial intelligence in marketing
Pub Date : 2024-12-18 DOI: 10.1016/j.jrt.2024.100103
Harinder Hari , Arun Sharma , Sanjeev Verma , Rijul Chaturvedi
The pervasiveness of artificial intelligence (AI) in consumers' lives is proliferating. For firms, AI offers the potential to connect, serve, and satisfy consumers with posthuman abilities. However, the adoption and usage of this technology face barriers, with ethical concerns emerging as one of the most significant. Yet much remains unknown about these ethical concerns. Accordingly, to fill the gap, the current study undertakes a comprehensive and systematic review of 445 publications on AI and marketing ethics, utilizing the Scientific Procedures and Rationales for Systematic Literature Review protocol to conduct performance analysis (quantitative and qualitative) and science mapping (conceptual and intellectual structures) for the literature review and the identification of future research directions. Furthermore, the study conducts thematic and content analysis to uncover the themes, clusters, and theories operating in the field, leading to a conceptual framework that lists antecedents, mediators, moderators, and outcomes of ethics in AI in marketing. The findings present future research directions, providing guidance for practitioners and scholars in the area of ethics in AI in marketing.
Citations: 0
The heuristics gap in AI ethics: Impact on green AI policies and beyond
Pub Date : 2024-12-16 DOI: 10.1016/j.jrt.2024.100104
Guglielmo Tamburrini
This article analyses the negative impact of heuristic biases on the main goals of AI ethics. These biases are found to hinder the identification of ethical issues in AI, the development of related ethical policies, and their application. This pervasive impact has been mostly neglected, giving rise to what is called here the heuristics gap in AI ethics. This heuristics gap is illustrated using the AI carbon footprint problem as an exemplary case. Psychological work on biases hampering climate warming mitigation actions is specialized to this problem, and novel extensions are proposed by considering heuristic mentalization strategies that one uses to design and interact with AI systems. To mitigate the effects of this heuristics gap, interventions on the design of ethical policies and suitable incentives for AI stakeholders are suggested. Finally, a checklist of questions helping one to investigate systematically this heuristics gap throughout the AI ethics pipeline is provided.
Citations: 0
Exploring ethical research issues related to extended reality technologies used with autistic populations
Pub Date : 2024-12-14 DOI: 10.1016/j.jrt.2024.100102
Nigel Newbutt, Ryan Bradley
This article explores the ethical considerations and challenges surrounding the use of extended reality (XR) technologies with autistic populations. While XR-based research offers promising avenues for supporting autistic individuals, we highlight various ethical concerns inherent in XR research and application with this group. Despite its potential, we outline areas of concern related to privacy, security, content regulation, psychological well-being, informed consent, realism, sensory overload, and accessibility. We conclude with the need for tailored ethical frameworks to guide XR research with autistic populations, emphasizing collaboration, accessibility, and safeguarding as key principles, and underscore the importance of balancing technological innovation with ethical responsibility so that XR research with autistic populations is conducted with sensitivity, inclusivity, and respect for individual rights and well-being.
Citations: 0
Towards a critical recovery of liberatory PAR for food system transformations: Struggles and strategies in collaborating with radical and progressive food movements in EU-funded R&I projects
Pub Date : 2024-11-22 DOI: 10.1016/j.jrt.2024.100100
Tobia S. Jones, Anne M.C. Loeber
From sustainability and justice perspectives, food systems and R&I systems need transformation. Participatory action research (PAR) presents a suitable approach as it enables collaboration between those affected by a social issue and researchers based in universities to co-create knowledge and interventionist actions. However, PAR is often misconstrued even within projects calling for civil society actors to act as full partners in research. To avoid reproducing the very structures and practices in need of transformation, this paper argues for university researchers to team up with members of food movements to engage in ‘liberatory’ forms of PAR. The question is how liberatory PAR's guiding concepts of reciprocal participation, critical recovery and systemic devolution can be enacted in projects that did not start out as PAR projects. Two EU-funded projects on food system transformation serve as a basis to answer this question, generating concrete recommendations for establishing co-creative, mutually liberating, and transdisciplinary research collectives.
{"title":"Towards a critical recovery of liberatory PAR for food system transformations: Struggles and strategies in collaborating with radical and progressive food movements in EU-funded R&I projects","authors":"Tobia S. Jones,&nbsp;Anne M.C. Loeber","doi":"10.1016/j.jrt.2024.100100","DOIUrl":"10.1016/j.jrt.2024.100100","url":null,"abstract":"<div><div>From sustainability and justice perspectives, food systems and R&amp;I systems need transformation. Participatory action research (PAR) presents a suitable approach as it enables collaboration between those affected by a social issue and researchers based in universities to co-create knowledge and interventionist actions. However, PAR is often misconstrued even within projects calling for civil society actors to act as full partners in research. To avoid reproducing the very structures and practices in need of transformation, this paper argues for university researchers to team up with members of food movements to engage in ‘liberatory’ forms of PAR. The question is how liberatory PAR's guiding concepts of reciprocal participation, critical recovery and systemic devolution can be enacted in projects that did not start out as PAR projects. Two EU-funded projects on food system transformation serve as a basis to answer this question, generating concrete recommendations for establishing co-creative, mutually liberating, and transdisciplinary research collectives.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"20 ","pages":"Article 100100"},"PeriodicalIF":0.0,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
Journal of responsible technology