
Journal of Responsible Technology: latest publications

Data Hazards: An open-source vocabulary of ethical hazards for data-intensive projects
Pub Date : 2025-02-12 DOI: 10.1016/j.jrt.2025.100110
Natalie Zelenka , Nina H. Di Cara , Euan Bennet , Phil Clatworthy , Huw Day , Ismael Kherroubi Garcia , Susana Roman Garcia , Vanessa Aisyahsari Hanschke , Emma Siân Kuwertz
Understanding the potential for downstream harms from data-intensive technologies requires strong collaboration across disciplines and with the public. Shared vocabularies of concerns reduce the communication barriers inherent in this work. The Data Hazards project (datahazards.com) contains an open-source, controlled vocabulary of 11 hazards associated with data science work, presented as 'labels'. Each label has (i) an icon, (ii) a description, (iii) examples, and, crucially, (iv) suggested safety precautions. A reflective discussion format and supporting resources have also been developed. These have been created over three years with feedback from interdisciplinary contributors, and their use has been evaluated by participants (N = 47). The labels include concerns often out of scope for ethics committees, such as environmental impact. The resources can be used as a structure for interdisciplinary harms-discovery work, for communicating hazards, for collecting public input, or in educational settings. Future versions of the project will develop through feedback from open-source contributions, methodological research, and outreach.
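The four parts of a label (icon, description, examples, safety precautions) form a simple record structure. As a minimal sketch of how such a label might be represented programmatically — the field names and example values here are illustrative assumptions, not taken from the Data Hazards project's own materials:

```python
from dataclasses import dataclass, field

@dataclass
class HazardLabel:
    """Hypothetical representation of one Data Hazards label."""
    name: str                  # short label name
    icon: str                  # icon file or identifier
    description: str           # what the hazard means
    examples: list[str] = field(default_factory=list)
    safety_precautions: list[str] = field(default_factory=list)

# Illustrative instance (wording is ours, not quoted from the project)
environmental = HazardLabel(
    name="High environmental cost",
    icon="environment.svg",
    description="The work consumes substantial energy or other resources.",
    examples=["Training a large machine-learning model from scratch"],
    safety_precautions=["Estimate and report the project's energy use"],
)
```

A structure like this makes the "crucially, suggested safety precautions" point concrete: precautions travel with the hazard rather than being left to a separate process.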
Citations: 0
The age of AI in healthcare research: An analysis of projects submitted between 2020 and 2024 to the Estonian committee on Bioethics and Human Research
Pub Date : 2025-02-07 DOI: 10.1016/j.jrt.2025.100113
Aive Pevkur , Kadi Lubi
The ethical evaluation of healthcare research projects ensures the protection of study participants' rights. Concurrently, the use of big health data and AI analysis is rising. A critical question is whether existing measures, including ethics committees, can competently evaluate AI-involved health projects and foresee their risks. Our research aimed to identify and describe the types of research projects submitted between January 2020 and April 2024 to the Estonian Council for Bioethics and Human Research (EBIN) and to analyse AI use cases in recent years. Notably, the committee was established before the significant rise in AI usage in health research. We conducted a quantitative and qualitative content analysis of submission documents, using deductive and inductive approaches, to gather information on the types of studies using AI and to draw some preliminary conclusions on readiness to evaluate such projects. Results indicate that most applications come from universities and use diverse data sources, that the use of AI is rather uniform, and that the applications do not exhibit diversity in their utilisation of AI capabilities.
Citations: 0
Research ethics committees as knowledge gatekeepers: The impact of emerging technologies on social science research
Pub Date : 2025-02-07 DOI: 10.1016/j.jrt.2025.100112
Anu Masso , Jevgenia Gerassimenko , Tayfun Kasapoglu , Mai Beilmann
This article investigates the evolution of research ethics within the social sciences, emphasising the shift from procedural norms borrowed from medical and natural sciences to social scientific discipline-specific and method-based principles. This transformation acknowledges the unique challenges and opportunities in social science research, particularly in the context of emerging data technologies such as digital data, algorithms, and artificial intelligence. Our empirical analysis, based on a survey conducted among international social scientists (N = 214), highlights the precariousness researchers face regarding these technological shifts. Traditional methods remain prevalent, despite the recognition of new digital methodologies that necessitate new ethical principles. We discuss the role of ethics committees as influential gatekeepers, examining power dynamics and access to knowledge within the research landscape. The findings underscore the need for tailored ethical guidelines that accommodate diverse methodological approaches, advocate for interdisciplinary dialogue, and address inequalities in knowledge production. This article contributes to the broader understanding of evolving research ethics in an increasingly data-driven world.
Citations: 0
Toward an anthropology of screens. Showing and hiding, exposing and protecting. Mauro Carbone and Graziano Lingua. Translated by Sarah De Sanctis. 2023. Cham: Palgrave Macmillan
Pub Date : 2025-02-06 DOI: 10.1016/j.jrt.2025.100111
Paul Trauttmansdorff
Toward an Anthropology of Screens by Mauro Carbone and Graziano Lingua is an insightful book about the cultural and philosophical significance of screens, which highlights their role in mediating human interactions, reshaping relationships with people and artefacts, and raising ethical questions about their pervasive influence in contemporary life.
Citations: 0
Exploring research practices with non-native English speakers: A reflective case study
Pub Date : 2025-02-05 DOI: 10.1016/j.jrt.2025.100109
Marilys Galindo, Teresa Solorzano, Julie Neisler
Our lived experiences of learning and working are personal and connected to our racial, ethnic, and cultural identities and needs. This is especially important for non-native English-speaking research participants, as English is the dominant language for learning, working, and the design of the technologies that support them in the United States. A reflective approach was used to critique the research practices in which the authors were involved while co-designing with English-first and Spanish-first learners and workers. This case study explored designing learning and employment innovations to best support non-native English-speaking learners and workers during transitions along their career pathways. Three themes were generated from the data: participants reported feeling a willingness to help, autonomy of expression, and inclusiveness in the co-design process. From this critique, a structure was developed to guide researchers' decision-making and to inform ways of being more equitable and inclusive of non-native English-speaking participants in their practices.
Citations: 0
Process industry disrupted: AI and the need for human orchestration
Pub Date : 2025-01-29 DOI: 10.1016/j.jrt.2025.100105
M.W. Vegter , V. Blok , R. Wesselink
According to EU policy makers, the introduction of AI within the process industry will help big manufacturing companies become more sustainable. At the same time, concerns arise about the future of work in these industries. As the EU also wants to actively pursue human-centered AI, this raises the question of how to implement AI within the process industry in a way that is sustainable and takes the views and interests of workers in this sector into account. To provide an answer, we conducted 'ethics parallel research', which involves empirical research. We carried out an ethnographic study of AI development within the process industry, looking specifically into the innovation process at two manufacturing plants. We found subtle but important differences that come with the respective job-related duties: while engineers continuously alter the plant as a technical system, operators hold a rather symbiotic relationship with the production process on site. Building on the framework of different mechanisms of techno-moral change, we highlight three ways in which workers might be morally impacted by AI. 1. Decisional: alongside the development of data analytic tools, respective roles and duties are being decided. 2. Relational: data analytic tools might exacerbate a power imbalance in which engineers may re-script the work of operators. 3. Perceptual: data analytic technologies mediate perceptions, thus changing the relationship operators have to the production process. While in Industry 4.0 the problem is framed in terms of 'suboptimal use', in Industry 5.0 the problem should be thought of as 'suboptimal development'.
Citations: 0
Human centred explainable AI decision-making in healthcare
Pub Date : 2025-01-10 DOI: 10.1016/j.jrt.2025.100108
Catharina M. van Leersum , Clara Maathuis
Human-centred AI (HCAI) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans. A further focus on explainable AI (XAI) makes it possible to gather insight into the data, reasoning, and decisions made by AI systems, facilitating human understanding and trust and contributing to the identification of issues such as errors and bias. While current XAI approaches have a mainly technical focus, a transdisciplinary perspective and a socio-technical approach are necessary to understand the context and human dynamics. This is critical in the healthcare domain, where various risks could have serious consequences for both the safety of human life and medical devices.
A reflective ethical and socio-technical perspective, in which technical advancements and human factors co-evolve, is called human-centred explainable AI (HCXAI). This perspective sets humans at the centre of AI design, with a holistic understanding of values, interpersonal dynamics, and the socially situated nature of AI systems. In the healthcare domain, to the best of our knowledge, limited knowledge exists on applying HCXAI, the ethical risks are unknown, and it is unclear which explainability elements are needed in decision-making to closely mimic human decision-making. Moreover, different stakeholders have different explanation needs, so HCXAI could be a way to focus on humane ethical decision-making instead of purely technical choices.
To tackle this knowledge gap, this article aims to design an actionable HCXAI ethical framework, adopting a transdisciplinary approach that merges academic and practitioner knowledge and expertise from the AI, XAI, HCXAI, design science, and healthcare domains. To demonstrate the applicability of the proposed framework in real scenarios and settings while reflecting on human decision-making, two use cases are considered: the first on AI-based interpretation of MRI scans, and the second on the application of smart flooring.
Citations: 0
Decentralized governance in action: A governance framework of digital responsibility in startups
Pub Date : 2025-01-10 DOI: 10.1016/j.jrt.2025.100107
Yangyang Zhao , Jiajun Qiu
The rise of digital technologies has fueled the emergence of decentralized governance among startups. However, this trend imposes new challenges for digitally responsible governance, such as technology usage, business accountability, and many other issues, particularly in the absence of clear guidelines. This paper explores two types of digital startups with decentralized governance: digitally transformed startups (e.g., DAOs) and IT-enabled decentralized startups. We adapt the previously described Corporate Digital Responsibility model into a streamlined seven-cluster governance framework that is more directly applicable to these novel organizations. Through a case study, we illustrate the practical value of the conceptual framework and identify key points vital for digitally responsible governance in decentralized startups. Our findings lay a conceptual and empirical groundwork for in-depth, cross-disciplinary future inquiries into digital responsibility issues in decentralized settings.
Citations: 0
Exploring expert and public perceptions of answerability and trustworthy autonomous systems
Pub Date : 2025-01-09 DOI: 10.1016/j.jrt.2025.100106
Louise Hatherall, Nayha Sethi
The emerging regulatory landscape addressing autonomous systems (AS) is underpinned by the notion that such systems be trustworthy. What individuals and groups need in order to deem a system worthy of trust has consequently attracted research from a range of disciplines, although important questions remain. These include how to ensure trustworthiness in a way that is sensitive to individual histories and contexts, as well as whether, and how, emerging regulatory frameworks can adequately secure the trustworthiness of AS. This article reports a socio-legal analysis of four focus groups with publics and professionals exploring whether answerability can help develop trustworthy AS in health, finance, and the public sector. It finds that answerability is beneficial in some contexts, and that to find AS trustworthy, individuals often need answers about future actions and about how organisational values are embedded within a system. It also reveals pressing issues demanding attention for meaningful regulation of such systems, including dissonances between what publics and professionals identify as 'harm' where AS are deployed, and a significant lack of clarity about the expectations of regulatory bodies in the UK. The article discusses the implications of these findings for the developing but rapidly setting regulatory landscape in the UK and EU.
Citations: 0
Exploring ethical frontiers of artificial intelligence in marketing
Pub Date : 2024-12-18 DOI: 10.1016/j.jrt.2024.100103
Harinder Hari , Arun Sharma , Sanjeev Verma , Rijul Chaturvedi
The pervasiveness of artificial intelligence (AI) in consumers' lives is proliferating. For firms, AI offers the potential to connect, serve, and satisfy consumers with posthuman abilities. However, the adoption and usage of this technology face barriers, with ethical concerns emerging as one of the most significant. Yet, much remains unknown about the ethical concerns. Accordingly, to fill the gap, the current study undertakes a comprehensive and systematic review of 445 publications on AI and marketing ethics, utilizing Scientific Procedures and Rationales for Systematic Literature review protocol to conduct performance analysis (quantitative and qualitative) and science mapping (conceptual and intellectual structures) for literature review and the identification of future research directions. Furthermore, the study conducts thematic and content analysis to uncover the themes, clusters, and theories operating in the field, leading to a conceptual framework that lists antecedents, mediators, moderators, and outcomes of ethics in AI in marketing. The findings of the study present future research directions, providing guidance for practitioners and scholars in the area of ethics in AI in marketing.
{"title":"Exploring ethical frontiers of artificial intelligence in marketing","authors":"Harinder Hari ,&nbsp;Arun Sharma ,&nbsp;Sanjeev Verma ,&nbsp;Rijul Chaturvedi","doi":"10.1016/j.jrt.2024.100103","DOIUrl":"10.1016/j.jrt.2024.100103","url":null,"abstract":"<div><div>The pervasiveness of artificial intelligence (AI) in consumers' lives is proliferating. For firms, AI offers the potential to connect, serve, and satisfy consumers with posthuman abilities. However, the adoption and usage of this technology face barriers, with ethical concerns emerging as one of the most significant. Yet, much remains unknown about the ethical concerns. Accordingly, to fill the gap, the current study undertakes a comprehensive and systematic review of 445 publications on AI and marketing ethics, utilizing Scientific Procedures and Rationales for Systematic Literature review protocol to conduct performance analysis (quantitative and qualitative) and science mapping (conceptual and intellectual structures) for literature review and the identification of future research directions. Furthermore, the study conducts thematic and content analysis to uncover the themes, clusters, and theories operating in the field, leading to a conceptual framework that lists antecedents, mediators, moderators, and outcomes of ethics in AI in marketing. 
The findings of the study present future research directions, providing guidance for practitioners and scholars in the area of ethics in AI in marketing.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100103"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0