
Journal of responsible technology: Latest Publications

Using LEGO® SERIOUS® Play with stakeholders for RRI
Pub Date: 2022-12-01 DOI: 10.1016/j.jrt.2022.100055
Stevienna de Saille, Alice Greenwood, James Law, Mark Ball, Mark Levine, Elvira Perez Vallejos, Cath Ritchie, David Cameron

This paper discusses Responsible (Research and) Innovation (RRI) within a UKRI project funded through the Trustworthy Autonomous Systems Hub, Imagining Robotic Care: Identifying conflict and confluence in stakeholder imaginaries of autonomous care systems. We used LEGO® Serious Play® (LSP) as an RRI methodology for focus group workshops exploring the sociotechnical imaginaries held by care system stakeholders, users and general publics about how robots should (or should not) be incorporated into the existing UK health-social care system. We outline the workshops’ protocol and some emerging insights from early data collection, including the ways that LSP aids the surfacing of tacit knowledge, allowing participants to develop their own scenarios and definitions of ‘robot’ and ‘care’. We further discuss the implications of LSP as a method for upstream stakeholder engagement in general, and how this may contribute to embedding RRI in robotics research on a larger scale.

{"title":"Using LEGO® SERIOUS® Play with stakeholders for RRI","authors":"Stevienna de Saille ,&nbsp;Alice Greenwood ,&nbsp;James Law ,&nbsp;Mark Ball ,&nbsp;Mark Levine ,&nbsp;Elvira Perez Vallejos ,&nbsp;Cath Ritchie ,&nbsp;David Cameron","doi":"10.1016/j.jrt.2022.100055","DOIUrl":"10.1016/j.jrt.2022.100055","url":null,"abstract":"<div><p>This paper discusses Responsible (Research and) Innovation (RRI) within a UKRI project funded through the Trustworthy Autonomous Systems Hub, <strong>Imagining Robotic Care: Identifying conflict and confluence in stakeholder imaginaries of autonomous care systems</strong>. We used LEGO<strong>®</strong> Serious Play<strong>®</strong> as an RRI methodology for focus group workshops exploring sociotechnical imaginaries about how robots should (or should not) be incorporated into the existing UK health-social care system held by care system stakeholders, users and general publics. We outline the workshops’ protocol and some emerging insights from early data collection, including the ways that LSP aids in the surfacing of tacit knowledge, allowing participants to develop their own scenarios and definitions of ‘robot’ and ‘care’. We further discuss the implications of LSP as a method for upstream stakeholder engagement in general and how this may contribute to embedding RRI in robotics research on a larger scale.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"12 ","pages":"Article 100055"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659622000324/pdfft?md5=b4270c73b59f748a6c296e68ec0c17d1&pid=1-s2.0-S2666659622000324-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48297801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
From episteme to techne: Crafting responsible innovation in trustworthy autonomous systems research practice
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100035
Pauline Leonard, Chira Tochia

This paper makes connections between the EPSRC AREA Framework for Responsible Research Innovation (RRI) and sociological, feminist and post-positivist methodological contributions to consider how the interpretive frames central to these traditions can bring valuable insights to practices of RRI. We argue that taking this interdisciplinary approach enables understanding the research process as a form of proficient craftwork or techne. Techne allows the richness of research methods debates to inform ways in which epistemic protocols can be strategically adjusted and reconfigured to more fully embed RRI principles in every stage of the research process. This enhances researchers’ capacity to minimise some of the undesirable and potentially harmful side effects of research practice and strive towards social good. We draw on fieldwork notes produced as part of our research on industrial cleaning robotics to illustrate how our craftwork approach to RRI is conducted in practice.

{"title":"From episteme to techne: Crafting responsible innovation in trustworthy autonomous systems research practice","authors":"Pauline Leonard,&nbsp;Chira Tochia","doi":"10.1016/j.jrt.2022.100035","DOIUrl":"10.1016/j.jrt.2022.100035","url":null,"abstract":"<div><p>This paper makes connections between the EPSRC AREA Framework for Responsible Research Innovation (RRI) and sociological, feminist and post-positivist methodological contributions to consider how the interpretive frames central to these traditions can bring valuable insights to practices of RRI. We argue that taking this interdisciplinary approach enables understanding the research process as a form of proficient craftwork or techne. Techne allows the richness of research methods debates to inform ways in which epistemic protocols can be strategically adjusted and reconfigured to more fully embed RRI principles in every stage of the research process. This enhances researchers’ capacity to minimise some of the undesirable and potentially harmful side effects of research practice and strive towards social good. We draw on fieldwork notes produced as part of our research on industrial cleaning robotics to illustrate how our craftwork approach to RRI is conducted in practice.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"11 ","pages":"Article 100035"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659622000129/pdfft?md5=37ae3ac7c87e9f2e939a48a0dffb9470&pid=1-s2.0-S2666659622000129-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49355150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital sovereignty and smart wearables: four moral calculi for the distribution of legitimate control over the digital
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100053
N. Conradie, S. Nagel
{"title":"Digital sovereignty and smart wearables: four moral calculi for the distribution of legitimate control over the digital","authors":"N. Conradie, S. Nagel","doi":"10.1016/j.jrt.2022.100053","DOIUrl":"https://doi.org/10.1016/j.jrt.2022.100053","url":null,"abstract":"","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44418719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
¿Human-like Computers? Velden, Manfred (2022). Human-like Computers: A Lesson in Absurdity. Berlin: Schwabe Verlag.
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100037
Carlos Andrés Salazar Martínez
{"title":"¿Human-like Computers? Velden, Manfred (2022). Human-like Computers: A Lesson in Absurdity. Berlin: Schwabe Verlag.","authors":"Carlos Andrés Salazar Martínez","doi":"10.1016/j.jrt.2022.100037","DOIUrl":"10.1016/j.jrt.2022.100037","url":null,"abstract":"","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"11 ","pages":"Article 100037"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659622000142/pdfft?md5=0f467b3c25ff4be3ac3bf4e00407bcf3&pid=1-s2.0-S2666659622000142-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46535421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Should the colonisation of space be based on reproduction? Critical considerations on the choice of having a child in space
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100040
Maurizio Balistreri, Steven Umbrello

This paper argues for the thesis that it is not a priori morally justified to base the first phase of space colonisation on sexual reproduction. We ground this position on the argument that, at least in the first colonisation settlements, those born in space may not have a good chance of having a good life. This problem does not depend on the fact that life on another planet would have to deal with issues such as solar radiation or the reduction or complete absence of gravity. These issues could plausibly be addressed, given that the planets or settlements we will feasibly colonise could be completely transformed through geoengineering processes. Likewise, the ability of humans to live in space could be enhanced through genetic modification interventions. Even if the problems concerning survival in space were solved, however, we think that, at least in the first period of colonisation of space or other planets, giving birth to children in space could be a morally irresponsible choice because, we argue, the life we could give them might not be good enough. We contend that this holds from the moment we decide to have a baby. We argue that it is not morally right to be content that our children have a minimally sufficient life worth living; before we give birth to children in space, we should make sure we can give them a reasonable chance of having a good life. This principle applies both on Earth - at least where one can choose - and for space travel.

{"title":"Should the colonisation of space be based on reproduction? Critical considerations on the choice of having a child in space","authors":"Maurizio Balistreri ,&nbsp;Steven Umbrello","doi":"10.1016/j.jrt.2022.100040","DOIUrl":"10.1016/j.jrt.2022.100040","url":null,"abstract":"<div><p>This paper aims to argue for the thesis that it is not <em>a priori</em> morally justified that the first phase of space colonisation is based on sexual reproduction. We ground this position on the argument that, at least in the first colonisation settlements, those born in space may not have a good chance of having a good life. This problem does not depend on the fact that life on another planet would have to deal with issues such as solar radiation or with the decrease or entire absence of the force of gravity. These issues could plausibly be addressed given that the planets or settlements we will feasibly colonise could be completely transformed through geoengineering processes. Likewise, the ability of humans to live in space could be enhanced through genetic modification interventions. Even if, however, the problems concerning survival in space were solved, we think that, at least in the first period of colonisation of space or other planets, giving birth to children in space could be a morally irresponsible choice since we argue, the life we ​​could give them might not be good enough. We contend that this is the case since when we decide to have a baby. We argue that it is not morally right to be content that our children have a minimally sufficient life worth living; before we give birth to children in space, we should make sure we can give them a reasonable chance of having a good life. This principle applies both on Earth - at least where you can choose - and for space travel.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"11 ","pages":"Article 100040"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659622000178/pdfft?md5=e944bf978b1233e58ceb542e40645d21&pid=1-s2.0-S2666659622000178-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44667584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Erratum regarding missing Declaration of Competing Interest statements in previously published article.
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100033
{"title":"Erratum regarding missing Declaration of Competing Interest statements in previously published article.","authors":"","doi":"10.1016/j.jrt.2022.100033","DOIUrl":"10.1016/j.jrt.2022.100033","url":null,"abstract":"","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"11 ","pages":"Article 100033"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9421412/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9888780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Nigeria’s Digital Identification (ID) Management Program: Ethical, Legal and Socio-Cultural concerns
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100039
Damian Eke, Ridwan Oloyede, Paschal Ochang, Favour Borokini, Mercy Adeyeye, Lebura Sorbarikor, Bamidele Wale-Oshinowo, Simisola Akintoye

National digital identity management systems have gained traction as a critical tool for including citizens in increasingly digitised public services. With the help of the World Bank, countries around the world are committing to building and promoting digital identification systems to improve development outcomes as part of the Identity for Development initiative (ID4D). One of those countries is Nigeria, which is building a national ID management database for its over 100 million residents. However, there are privacy, security, human rights, ethics and socio-cultural implications associated with the design and scaling of such a system at a national level. Through a mixed-methods approach, this paper identifies some of these concerns and categorises which ones Nigerians are most worried about. It provides an empirically grounded perspective on the centralised national electronic identity (eID) management system, public trust and responsible data governance, and offers recommendations on enhancing the privacy, security and trustworthiness of the digital infrastructure for identity management in Nigeria.

{"title":"Nigeria’s Digital Identification (ID) Management Program: Ethical, Legal and Socio-Cultural concerns","authors":"Damian Eke ,&nbsp;Ridwan Oloyede ,&nbsp;Paschal Ochang ,&nbsp;Favour Borokini ,&nbsp;Mercy Adeyeye ,&nbsp;Lebura Sorbarikor ,&nbsp;Bamidele Wale-Oshinowo ,&nbsp;Simisola Akintoye","doi":"10.1016/j.jrt.2022.100039","DOIUrl":"10.1016/j.jrt.2022.100039","url":null,"abstract":"<div><p>National digital identity management systems have gained traction as a critical tool for inclusion of citizens in the increasingly digitised public services. With the help of the World Bank, countries around the world are committing to building and promoting digital identification systems to improve development outcomes as part of the Identity for development initiative (ID4D). One of those countries is Nigeria, which is building a national ID management database for its over 100 million residents. However, there are privacy, security, human rights, ethics and socio-cultural implications associated with the design and scaling of such a system at a national level. Through a mixed method approach, this paper identifies some of these concerns and categorises which ones Nigerians are most worried about. It provides an empirically sound perspective around centralised national electronic identity (eID) management system, public trust and responsible data governance, and offers recommendations on enhancing privacy, security and trustworthiness of the digital infrastructure for identity management in Nigeria.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"11 ","pages":"Article 100039"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659622000166/pdfft?md5=14d456c9bcd0a32b20f209e06035c96b&pid=1-s2.0-S2666659622000166-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49198422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Responsible innovation; responsible data. A case study in autonomous driving
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100038
C. Ten Holter, L. Kunze, J-A. Pattinson, P. Salvini, M. Jirotka

Autonomous Vehicles (AVs) collect a vast amount of data during their operation (MBs/sec). What data is recorded, who has access to it, and how it is analysed and used can have major technical, ethical, social, and legal implications. By embedding Responsible Innovation (RI) methods within the AV lifecycle, negative consequences resulting from inadequate data logging can be foreseen and prevented. An RI approach demands that questions of societal benefit, anticipatory governance, and stakeholder inclusion, are placed at the forefront of research considerations. Considered as foundational principles, these concepts create a contextual mindset for research that will by definition have an RI underpinning as well as application. Such an RI mindset both inspired and governed the genesis and operation of a research project on autonomous vehicles. The impact this had on research outlines and workplans, and the challenges encountered along the way are detailed, with conclusions and recommendations for RI in practice.
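
The paper itself contains no code; as a hedged illustration of the data-logging questions it raises (what is recorded, who may access it, and for how long), the sketch below shows one hypothetical way an AV log record could carry its own access and retention policy. All field names, roles, paths and values are assumptions made for illustration, not the authors' design.

# Hypothetical sketch: an AV data-log record that makes retention and
# access decisions explicit. Field names and roles are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AccessPolicy:
    allowed_roles: set        # e.g. {"safety_engineer", "regulator"}
    retention: timedelta      # how long the record may be kept
    purpose: str              # documented reason for collection

@dataclass
class AVLogRecord:
    timestamp: datetime
    sensor: str               # e.g. "lidar_front", "cabin_camera"
    payload_ref: str          # pointer to raw data, not the data itself
    contains_personal_data: bool
    policy: AccessPolicy

    def may_access(self, role: str, now: datetime) -> bool:
        # Permit access only for an allowed role and only while the record
        # is still within its declared retention period.
        within_retention = now - self.timestamp <= self.policy.retention
        return role in self.policy.allowed_roles and within_retention

# Example: cabin-camera footage readable only by safety engineers for 30 days.
record = AVLogRecord(
    timestamp=datetime(2022, 10, 1, 12, 0),
    sensor="cabin_camera",
    payload_ref="s3://logs/run-042/cam-0001",   # placeholder path
    contains_personal_data=True,
    policy=AccessPolicy({"safety_engineer"}, timedelta(days=30), "incident review"),
)
print(record.may_access("safety_engineer", datetime(2022, 10, 15)))  # True
print(record.may_access("marketing", datetime(2022, 10, 15)))        # False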

{"title":"Responsible innovation; responsible data. A case study in autonomous driving","authors":"C. Ten Holter ,&nbsp;L. Kunze ,&nbsp;J-A. Pattinson ,&nbsp;P. Salvini ,&nbsp;M. Jirotka","doi":"10.1016/j.jrt.2022.100038","DOIUrl":"10.1016/j.jrt.2022.100038","url":null,"abstract":"<div><p>Autonomous Vehicles (AVs) collect a vast amount of data during their operation (MBs/sec). What data is recorded, who has access to it, and how it is analysed and used can have major technical, ethical, social, and legal implications. By embedding Responsible Innovation (RI) methods within the AV lifecycle, negative consequences resulting from inadequate data logging can be foreseen and prevented. An RI approach demands that questions of societal benefit, anticipatory governance, and stakeholder inclusion, are placed at the forefront of research considerations. Considered as foundational principles, these concepts create a contextual mindset for research that will by definition have an RI underpinning as well as application. Such an RI mindset both inspired and governed the genesis and operation of a research project on autonomous vehicles. The impact this had on research outlines and workplans, and the challenges encountered along the way are detailed, with conclusions and recommendations for RI in practice.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"11 ","pages":"Article 100038"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659622000154/pdfft?md5=83cd9d06b2115ee4c793d9b4e7219e99&pid=1-s2.0-S2666659622000154-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48931334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Responsible Artificial Intelligence in Human Resources Technology: An innovative inclusive and fair by design matching algorithm for job recruitment purposes
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100041
Sebastien Delecraz, Loukman Eltarr, Martin Becuwe, Henri Bouxin, Nicolas Boutin, Olivier Oullier

In this article, we address the broad issue of the responsible use of Artificial Intelligence in Human Resources Management through the lens of a fair-by-design approach to algorithm development, illustrated by the introduction of a new machine learning-based approach to job matching. The goal of our algorithmic solution is to improve and automate the recruitment of temporary workers in order to find the best match with existing job offers. We discuss how fairness should be a key focus of human resources management and highlight the main challenges and flaws in the research that arise when developing algorithmic solutions to match candidates with job offers. After an in-depth analysis of the distribution and biases of our proprietary data set, we describe the methodology used to evaluate the effectiveness and fairness of our machine learning model, as well as solutions to correct some biases. The model we introduce constitutes the first step in our effort to control for unfairness in the outcomes of machine learning algorithms in job recruitment and, more broadly, towards a responsible use of artificial intelligence in Human Resources Management, supported by “safeguard algorithms” tasked with controlling for biases and preventing discriminatory outcomes.
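
The abstract describes, but does not specify, how the fairness of the matching model is evaluated. As a hedged sketch of one common kind of check, the code below computes per-group selection rates and a disparate-impact ratio over hypothetical match decisions; the group labels, the data and the 0.8 ("four-fifths rule") threshold are illustrative assumptions, not the authors' methodology.

# Illustrative fairness check on matching decisions, grouped by a protected
# attribute. Data, group labels and the 0.8 threshold are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, matched) pairs, matched being a bool."""
    totals, matched = defaultdict(int), defaultdict(int)
    for group, was_matched in decisions:
        totals[group] += 1
        matched[group] += int(was_matched)
    return {g: matched[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical matching outcomes: (group, was the candidate matched to a job?)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print(rates)            # {'A': 0.75, 'B': 0.25}
print(f"{ratio:.2f}")   # 0.33
if ratio < 0.8:         # four-fifths rule used here only as a rough screening heuristic
    print("Potential adverse impact: investigate before deployment.")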

{"title":"Responsible Artificial Intelligence in Human Resources Technology: An innovative inclusive and fair by design matching algorithm for job recruitment purposes","authors":"Sebastien Delecraz ,&nbsp;Loukman Eltarr ,&nbsp;Martin Becuwe ,&nbsp;Henri Bouxin ,&nbsp;Nicolas Boutin ,&nbsp;Olivier Oullier","doi":"10.1016/j.jrt.2022.100041","DOIUrl":"10.1016/j.jrt.2022.100041","url":null,"abstract":"<div><p>In this article, we address the broad issue of a responsible use of Artificial Intelligence in Human Resources Management through the lens of a fair-by-design approach to algorithm development illustrated by the introduction of a new machine learning-based approach to job matching. The goal of our algorithmic solution is to improve and automate the recruitment of temporary workers to find the best match with existing job offers. We discuss how fairness should be a key focus of human resources management and highlight the main challenges and flaws in the research that arise when developing algorithmic solutions to match candidates with job offers. After an in-depth analysis of the distribution and biases of our proprietary data set, we describe the methodology used to evaluate the effectiveness and fairness of our machine learning model as well as solutions to correct some biases. The model we introduce constitutes the first step in our effort to control for unfairness in the outcomes of machine learning algorithms in job recruitment, and more broadly a responsible use of artificial intelligence in Human Resources Management thanks to “safeguard algorithms” tasked to control for biases and prevent discriminatory outcomes.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"11 ","pages":"Article 100041"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665962200018X/pdfft?md5=1067842485c764fe87523992da73aaec&pid=1-s2.0-S266665962200018X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46258156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
AI Documentation: A path to accountability
Pub Date: 2022-10-01 DOI: 10.1016/j.jrt.2022.100043
Florian Königstorfer, Stefan Thalmann

Artificial Intelligence (AI) promises huge potential for businesses, but due to its black-box character it also has substantial drawbacks. This is a particular challenge in regulated use cases, where software needs to be certified or validated before deployment. Traditional software documentation is not sufficient to provide the required evidence to auditors, and AI-specific guidelines are not yet available. Thus, AI faces significant adoption barriers in regulated use cases, since the accountability of AI cannot be ensured to a sufficient extent. This interview study aims to determine the current state of documenting AI in regulated use cases. We found that the risk level of an AI use case has an impact on AI adoption and on the scope of AI documentation. Further, we discuss how AI is currently documented and which challenges practitioners face when documenting AI.
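
The study reports interview findings and prescribes no documentation format. As a hedged sketch of what structured AI documentation for a regulated use case might capture, the record below is loosely modelled on model-card-style reporting; every field name, the risk-level scale and the example values are assumptions introduced for illustration.

# Hypothetical documentation record for an AI component in a regulated setting.
# Field names and the risk-level scale are illustrative assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIDocumentationRecord:
    model_name: str
    version: str
    intended_use: str
    risk_level: str                  # e.g. "low" / "medium" / "high"
    training_data_description: str
    evaluation_metrics: dict         # metric name -> value on the validation set
    known_limitations: list
    responsible_contact: str

    def to_audit_json(self) -> str:
        # Serialise the record so it can be attached to certification evidence.
        return json.dumps(asdict(self), indent=2)

doc = AIDocumentationRecord(
    model_name="loan-default-classifier",          # placeholder name
    version="1.3.0",
    intended_use="Pre-screening support; final decisions remain with a human.",
    risk_level="high",
    training_data_description="Internal loan records 2015-2020, anonymised.",
    evaluation_metrics={"auc": 0.87, "false_positive_rate": 0.06},
    known_limitations=["Not validated for applicants under 21."],
    responsible_contact="ml-governance@example.org",
)
print(doc.to_audit_json())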

{"title":"AI Documentation: A path to accountability","authors":"Florian Königstorfer,&nbsp;Stefan Thalmann","doi":"10.1016/j.jrt.2022.100043","DOIUrl":"10.1016/j.jrt.2022.100043","url":null,"abstract":"<div><p>Artificial Intelligence (AI) promises huge potential for businesses but due to its black-box character has also substantial drawbacks. This is a particular challenge in regulated use cases, where software needs to be certified or validated before deployment. Traditional software documentation is not sufficient to provide the required evidence to auditors and AI-specific guidelines are not available yet. Thus, AI faces significant adoption barriers in regulated use cases, since accountability of AI cannot be ensured to a sufficient extent. This interview study aims to determine the current state of documenting AI in regulated use cases. We found that the risk level of AI use cases has an impact on the AI adoption and the scope of AI documentation. Further, we discuss how AI is currently documented and which challenges practitioners face when documenting AI.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"11 ","pages":"Article 100043"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659622000208/pdfft?md5=bb63316f230d774001f337edc4c0fa62&pid=1-s2.0-S2666659622000208-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49588355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9