
Journal of responsible technology: Latest Publications

Digital sovereignty and smart wearables: four moral calculi for the distribution of legitimate control over the digital
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100053
N. Conradie, S. Nagel
Citations: 2
¿Human-like Computers? Velden, Manfred (2022). Human-like Computers: A Lesson in Absurdity. Berlin: Schwabe Verlag.
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100037
Carlos Andrés Salazar Martínez
Citations: 0
Should the colonisation of space be based on reproduction? Critical considerations on the choice of having a child in space
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100040
Maurizio Balistreri , Steven Umbrello

This paper aims to argue for the thesis that it is not a priori morally justified that the first phase of space colonisation be based on sexual reproduction. We ground this position on the argument that, at least in the first colonisation settlements, those born in space may not have a good chance of having a good life. This problem does not depend on the fact that life on another planet would have to deal with issues such as solar radiation or with the decrease or entire absence of the force of gravity. These issues could plausibly be addressed, given that the planets or settlements we will feasibly colonise could be completely transformed through geoengineering processes. Likewise, the ability of humans to live in space could be enhanced through genetic modification interventions. Even if the problems concerning survival in space were solved, however, we think that, at least in the first period of colonisation of space or other planets, giving birth to children in space could be a morally irresponsible choice because, we argue, the life we could give them might not be good enough. We contend that this is the case whenever we decide to have a baby: it is not morally right to be content that our children have a minimally sufficient life worth living. Before we give birth to children in space, we should make sure we can give them a reasonable chance of having a good life. This principle applies both on Earth - at least where one can choose - and for space travel.

Citations: 5
Erratum regarding missing Declaration of Competing Interest statements in previously published article.
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100033
Citations: 0
Nigeria’s Digital Identification (ID) Management Program: Ethical, Legal and Socio-Cultural concerns
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100039
Damian Eke , Ridwan Oloyede , Paschal Ochang , Favour Borokini , Mercy Adeyeye , Lebura Sorbarikor , Bamidele Wale-Oshinowo , Simisola Akintoye

National digital identity management systems have gained traction as a critical tool for including citizens in increasingly digitised public services. With the help of the World Bank, countries around the world are committing to building and promoting digital identification systems to improve development outcomes as part of the Identification for Development (ID4D) initiative. One of those countries is Nigeria, which is building a national ID management database for its over 100 million residents. However, there are privacy, security, human rights, ethics and socio-cultural implications associated with the design and scaling of such a system at a national level. Through a mixed-methods approach, this paper identifies some of these concerns and categorises which ones Nigerians are most worried about. It provides an empirically grounded perspective on a centralised national electronic identity (eID) management system, public trust and responsible data governance, and offers recommendations on enhancing the privacy, security and trustworthiness of the digital infrastructure for identity management in Nigeria.

Citations: 3
Responsible innovation; responsible data. A case study in autonomous driving
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100038
C. Ten Holter , L. Kunze , J-A. Pattinson , P. Salvini , M. Jirotka

Autonomous Vehicles (AVs) collect a vast amount of data during their operation (MBs/sec). What data is recorded, who has access to it, and how it is analysed and used can have major technical, ethical, social, and legal implications. By embedding Responsible Innovation (RI) methods within the AV lifecycle, negative consequences resulting from inadequate data logging can be foreseen and prevented. An RI approach demands that questions of societal benefit, anticipatory governance, and stakeholder inclusion are placed at the forefront of research considerations. Considered as foundational principles, these concepts create a contextual mindset for research that will by definition have an RI underpinning as well as application. Such an RI mindset both inspired and governed the genesis and operation of a research project on autonomous vehicles. The impact this had on research outlines and workplans, and the challenges encountered along the way, are detailed, with conclusions and recommendations for RI in practice.
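To make the abstract's questions of what is recorded, who may access it, and for what purpose more concrete, the sketch below shows one possible shape of an AV log record that carries its own access and purpose metadata. This is an illustration only, not the project's actual logging design; all field names and values are hypothetical.

```python
# Illustrative sketch only: a log record that couples the recorded data
# with explicit access and purpose metadata. Field names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AVLogEntry:
    timestamp: float                  # seconds since epoch
    sensor: str                       # e.g. "camera_front", "lidar_roof"
    payload_ref: str                  # pointer to the stored raw data
    retention_days: int               # how long the record may be kept
    access_roles: List[str] = field(default_factory=list)        # who may read it
    permitted_purposes: List[str] = field(default_factory=list)  # why it may be used

entry = AVLogEntry(
    timestamp=1664617200.0,
    sensor="camera_front",
    payload_ref="s3://av-logs/run-042/frame-000123",
    retention_days=90,
    access_roles=["safety_investigator", "development_engineer"],
    permitted_purposes=["incident reconstruction", "model validation"],
)
print(entry)
```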

Citations: 1
Responsible Artificial Intelligence in Human Resources Technology: An innovative inclusive and fair by design matching algorithm for job recruitment purposes
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100041
Sebastien Delecraz , Loukman Eltarr , Martin Becuwe , Henri Bouxin , Nicolas Boutin , Olivier Oullier

In this article, we address the broad issue of a responsible use of Artificial Intelligence in Human Resources Management through the lens of a fair-by-design approach to algorithm development illustrated by the introduction of a new machine learning-based approach to job matching. The goal of our algorithmic solution is to improve and automate the recruitment of temporary workers to find the best match with existing job offers. We discuss how fairness should be a key focus of human resources management and highlight the main challenges and flaws in the research that arise when developing algorithmic solutions to match candidates with job offers. After an in-depth analysis of the distribution and biases of our proprietary data set, we describe the methodology used to evaluate the effectiveness and fairness of our machine learning model as well as solutions to correct some biases. The model we introduce constitutes the first step in our effort to control for unfairness in the outcomes of machine learning algorithms in job recruitment, and more broadly a responsible use of artificial intelligence in Human Resources Management thanks to “safeguard algorithms” tasked to control for biases and prevent discriminatory outcomes.
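The abstract describes evaluating the fairness of a matching model and correcting biases, without spelling out a metric. As a rough, hypothetical illustration (not the authors' method; the function names, `match_scores`, `groups`, and the 0.5 threshold are invented for the example), a demographic-parity check of this kind is one common way such an evaluation can be run:

```python
# Hypothetical example: compare the rate at which candidates from each
# group clear the match-score threshold; a gap of 0 means parity.
from collections import defaultdict
from typing import Dict, Iterable

def selection_rates(match_scores: Iterable[float],
                    groups: Iterable[str],
                    threshold: float = 0.5) -> Dict[str, float]:
    """Share of candidates in each group whose match score clears the threshold."""
    selected, totals = defaultdict(int), defaultdict(int)
    for score, group in zip(match_scores, groups):
        totals[group] += 1
        if score >= threshold:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_gap(match_scores, groups, threshold: float = 0.5) -> float:
    """Largest difference in selection rates across groups."""
    rates = selection_rates(match_scores, groups, threshold)
    return max(rates.values()) - min(rates.values())

# Toy data: group "B" is selected less often than group "A", so the gap is 0.5.
print(demographic_parity_gap([0.9, 0.7, 0.8, 0.3], ["A", "A", "B", "B"]))
```

A large gap between groups would flag the matching model for the kind of bias-correction and safeguard steps the abstract mentions.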

Citations: 6
AI Documentation: A path to accountability
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100043
Florian Königstorfer, Stefan Thalmann

Artificial Intelligence (AI) promises huge potential for businesses but, due to its black-box character, also has substantial drawbacks. This is a particular challenge in regulated use cases, where software needs to be certified or validated before deployment. Traditional software documentation is not sufficient to provide the required evidence to auditors, and AI-specific guidelines are not available yet. Thus, AI faces significant adoption barriers in regulated use cases, since the accountability of AI cannot be ensured to a sufficient extent. This interview study aims to determine the current state of documenting AI in regulated use cases. We found that the risk level of AI use cases has an impact on AI adoption and on the scope of AI documentation. Further, we discuss how AI is currently documented and which challenges practitioners face when documenting AI.
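For readers unfamiliar with what such documentation can look like in practice, the following is a minimal, hypothetical sketch of a machine-readable documentation record whose scope grows with the risk level of the use case. It is loosely inspired by model-card-style records and is not a template proposed by the study; all field names and values are invented.

```python
# Hypothetical AI documentation record; field names are illustrative only.
from dataclasses import dataclass, field, asdict
from typing import Dict, List
import json

@dataclass
class AIDocumentationRecord:
    system_name: str
    intended_use: str
    risk_level: str                                   # e.g. "low", "limited", "high"
    training_data_sources: List[str] = field(default_factory=list)
    evaluation_metrics: Dict[str, float] = field(default_factory=dict)
    known_limitations: List[str] = field(default_factory=list)
    audit_evidence: List[str] = field(default_factory=list)  # test reports, sign-offs

record = AIDocumentationRecord(
    system_name="loan-prescreening-model-v3",
    intended_use="Pre-screening of consumer loan applications",
    risk_level="high",
    training_data_sources=["internal loan book 2015-2021"],
    evaluation_metrics={"auc": 0.81, "demographic_parity_gap": 0.04},
    known_limitations=["Not validated for self-employed applicants"],
    audit_evidence=["validation-report-2022-03.pdf"],
)
print(json.dumps(asdict(record), indent=2))
```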

Citations: 9
A method for ethical AI in defence: A case study on developing trustworthy autonomous systems
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100036
Tara Roberson , Stephen Bornstein , Rain Liivoja , Simon Ng , Jason Scholz , Kate Devitt

What does it mean to be responsible and responsive when developing and deploying trusted autonomous systems in Defence? In this short reflective article, we describe a case study of building a trusted autonomous system – Athena AI – within an industry-led, government-funded project with diverse collaborators and stakeholders. Using this case study, we draw out lessons on the value and impact of embedding responsible research and innovation-aligned, ethics-by-design approaches and principles throughout the development of technology at high translation readiness levels.

Citations: 13
Involving psychological therapy stakeholders in responsible research to develop an automated feedback tool: Learnings from the ExTRAPPOLATE project
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100044
Jacob A Andrews , Mat Rawsthorne , Cosmin Manolescu , Matthew Burton McFaul , Blandine French , Elizabeth Rye , Rebecca McNaughton , Michael Baliousis , Sharron Smith , Sanchia Biswas , Erin Baker , Dean Repper , Yunfei Long , Tahseen Jilani , Jeremie Clos , Fred Higton , Nima Moghaddam , Sam Malins

Understanding stakeholders’ views on novel autonomous systems in healthcare is essential to ensure these are not abandoned after substantial investment has been made. The ExTRAPPOLATE project applied the principles of Responsible Research and Innovation (RRI) in the development of an automated feedback system for psychological therapists, ‘AutoCICS’. A Patient and Practitioner Reference Group (PPRG) was convened over three online workshops to inform the system's development. Iterative workshops allowed proposed changes to the system (based on stakeholder comments) to be scrutinised. The PPRG provided valuable insights, differentiated by role, including concerns and suggestions related to the applicability and acceptability of the system to different patients, as well as ethical considerations. The RRI approach enabled the anticipation of barriers to use, reflection on stakeholders’ views, effective engagement with stakeholders, and action to revise the design and proposed use of the system prior to testing in future planned feasibility and effectiveness studies. Many best practices and learnings can be taken from the application of RRI in the development of the AutoCICS system.

Citations: 1