Using LEGO® SERIOUS PLAY® with stakeholders for RRI
Stevienna de Saille, Alice Greenwood, James Law, Mark Ball, Mark Levine, Elvira Perez Vallejos, Cath Ritchie, David Cameron
Pub Date: 2022-12-01 | DOI: 10.1016/j.jrt.2022.100055

This paper discusses Responsible (Research and) Innovation (RRI) within a UKRI project funded through the Trustworthy Autonomous Systems Hub, Imagining Robotic Care: Identifying conflict and confluence in stakeholder imaginaries of autonomous care systems. We used LEGO® SERIOUS PLAY® (LSP) as an RRI methodology for focus group workshops exploring the sociotechnical imaginaries held by care system stakeholders, users and general publics about how robots should (or should not) be incorporated into the existing UK health-social care system. We outline the workshops’ protocol and some emerging insights from early data collection, including the ways LSP aids in surfacing tacit knowledge, allowing participants to develop their own scenarios and definitions of ‘robot’ and ‘care’. We further discuss the implications of LSP as a method for upstream stakeholder engagement in general, and how this may contribute to embedding RRI in robotics research on a larger scale.
From episteme to techne: Crafting responsible innovation in trustworthy autonomous systems research practice
Pauline Leonard, Chira Tochia
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100035

This paper makes connections between the EPSRC AREA Framework for Responsible Research and Innovation (RRI) and sociological, feminist and post-positivist methodological contributions, considering how the interpretive frames central to these traditions can bring valuable insights to the practice of RRI. We argue that this interdisciplinary approach enables understanding the research process as a form of proficient craftwork, or techne. Techne allows the richness of research-methods debates to inform the ways in which epistemic protocols can be strategically adjusted and reconfigured to embed RRI principles more fully at every stage of the research process. This enhances researchers’ capacity to minimise some of the undesirable and potentially harmful side effects of research practice and to strive towards social good. We draw on fieldwork notes produced as part of our research on industrial cleaning robotics to illustrate how our craftwork approach to RRI works in practice.
Digital sovereignty and smart wearables: four moral calculi for the distribution of legitimate control over the digital
N. Conradie, S. Nagel
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100053
Should the colonisation of space be based on reproduction? Critical considerations on the choice of having a child in space
Maurizio Balistreri, Steven Umbrello
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100040

This paper argues for the thesis that basing the first phase of space colonisation on sexual reproduction is not a priori morally justified. We ground this position in the argument that, at least in the first colonisation settlements, those born in space may not have a good chance of a good life. This problem does not stem from the fact that life on another planet would have to contend with issues such as solar radiation or reduced or absent gravity: these issues could plausibly be addressed, given that the planets or settlements we will feasibly colonise could be transformed through geoengineering, and the human capacity to live in space could be enhanced through genetic modification. Even if the problems of survival in space were solved, however, we think that, at least in the first period of colonising space or other planets, giving birth to children in space could be a morally irresponsible choice, since the life we could give them might not be good enough. We contend that the same holds whenever we decide to have a baby: it is not morally right to be content that our children have a minimally sufficient life worth living; before we give birth to children in space, we should make sure we can give them a reasonable chance of a good life. This principle applies both on Earth, at least where one can choose, and for space travel.
Erratum regarding missing Declaration of Competing Interest statements in previously published article
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100033
Nigeria’s Digital Identification (ID) Management Program: Ethical, Legal and Socio-Cultural concerns
Damian Eke, Ridwan Oloyede, Paschal Ochang, Favour Borokini, Mercy Adeyeye, Lebura Sorbarikor, Bamidele Wale-Oshinowo, Simisola Akintoye
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100039

National digital identity management systems have gained traction as a critical tool for including citizens in increasingly digitised public services. With the help of the World Bank, countries around the world are committing to building and promoting digital identification systems to improve development outcomes as part of the Identification for Development (ID4D) initiative. One of those countries is Nigeria, which is building a national ID management database for its more than 100 million residents. However, there are privacy, security, human rights, ethical and socio-cultural implications associated with designing and scaling such a system at a national level. Through a mixed-methods approach, this paper identifies some of these concerns and categorises which ones Nigerians are most worried about. It provides an empirically grounded perspective on a centralised national electronic identity (eID) management system, public trust and responsible data governance, and offers recommendations for enhancing the privacy, security and trustworthiness of Nigeria’s digital identity infrastructure.
Responsible innovation; responsible data. A case study in autonomous driving
C. Ten Holter, L. Kunze, J-A. Pattinson, P. Salvini, M. Jirotka
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100038

Autonomous Vehicles (AVs) collect vast amounts of data during their operation (megabytes per second). What data is recorded, who has access to it, and how it is analysed and used can have major technical, ethical, social and legal implications. By embedding Responsible Innovation (RI) methods within the AV lifecycle, negative consequences resulting from inadequate data logging can be foreseen and prevented. An RI approach demands that questions of societal benefit, anticipatory governance and stakeholder inclusion are placed at the forefront of research considerations. Treated as foundational principles, these concepts create a contextual mindset for research that will, by definition, have an RI underpinning as well as an RI application. Such an RI mindset both inspired and governed the genesis and operation of a research project on autonomous vehicles. We detail the impact this had on research outlines and workplans, and the challenges encountered along the way, closing with conclusions and recommendations for RI in practice.
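As a minimal illustration of the data-logging question the abstract raises (what is recorded, and who may access it), the sketch below builds a hypothetical AV log record that carries provenance and access-control metadata alongside the sensor payload. All field names are assumptions for the example, not the project's actual schema.

```python
import json
import time

# Field names below are illustrative assumptions, not a published AV schema.
def make_log_entry(sensor, payload, retention_days, authorised_roles):
    """Build one AV data-log record with provenance and access metadata."""
    return {
        "timestamp": time.time(),              # when the data was captured
        "sensor": sensor,                      # which subsystem produced it
        "payload": payload,                    # the recorded data itself
        "retention_days": retention_days,      # how long it may be kept
        "authorised_roles": authorised_roles,  # who is allowed to read it
    }

entry = make_log_entry("lidar_front", {"points": 4096}, 30, ["safety_auditor"])
print(json.dumps(entry, indent=2))
```

Recording retention and access rules with the data itself is one simple way to make "who has access to it" answerable after the fact.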
Responsible Artificial Intelligence in Human Resources Technology: An innovative inclusive and fair by design matching algorithm for job recruitment purposes
Sebastien Delecraz, Loukman Eltarr, Martin Becuwe, Henri Bouxin, Nicolas Boutin, Olivier Oullier
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100041

In this article, we address the broad issue of the responsible use of Artificial Intelligence in Human Resources Management through the lens of a fair-by-design approach to algorithm development, illustrated by a new machine-learning-based approach to job matching. The goal of our algorithmic solution is to improve and automate the recruitment of temporary workers by finding the best match with existing job offers. We discuss why fairness should be a key focus of human resources management, and highlight the main challenges and pitfalls that arise when developing algorithmic solutions to match candidates with job offers. After an in-depth analysis of the distribution and biases of our proprietary data set, we describe the methodology used to evaluate the effectiveness and fairness of our machine learning model, as well as solutions for correcting some biases. The model we introduce is a first step in our effort to control for unfairness in the outcomes of machine learning algorithms in job recruitment and, more broadly, towards a responsible use of artificial intelligence in Human Resources Management through “safeguard algorithms” tasked with controlling for biases and preventing discriminatory outcomes.
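To make the kind of fairness evaluation the abstract describes concrete, the sketch below computes the demographic parity difference, one common group-fairness measure, over a hypothetical set of match decisions. It is an illustrative example only, not the authors' actual safeguard algorithm.

```python
# Illustrative sketch: demographic parity difference for a job-matching model.
# A prediction of 1 means "candidate matched to a job offer".
def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-prediction rates across groups."""
    counts = {}  # group -> (positives, total)
    for pred, grp in zip(predictions, groups):
        pos, tot = counts.get(grp, (0, 0))
        counts[grp] = (pos + pred, tot + 1)
    rates = [pos / tot for pos, tot in counts.values()]
    return max(rates) - min(rates)

# Hypothetical decisions for two candidate groups "a" and "b":
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5 (3/4 vs 1/4 matched)
```

A safeguard of this kind would flag a model whose gap exceeds some agreed threshold before its decisions reach candidates.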
AI Documentation: A path to accountability
Florian Königstorfer, Stefan Thalmann
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100043

Artificial Intelligence (AI) promises huge potential for businesses but, due to its black-box character, also has substantial drawbacks. This is a particular challenge in regulated use cases, where software needs to be certified or validated before deployment. Traditional software documentation is not sufficient to provide the required evidence to auditors, and AI-specific guidelines are not yet available. Thus, AI faces significant adoption barriers in regulated use cases, since the accountability of AI cannot be ensured to a sufficient extent. This interview study aims to determine the current state of documenting AI in regulated use cases. We found that the risk level of an AI use case affects both AI adoption and the scope of AI documentation. Further, we discuss how AI is currently documented and which challenges practitioners face when documenting AI.
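To make the documentation gap concrete, here is a hypothetical machine-readable documentation record of the kind an auditor might request, loosely inspired by model-card practice. Every field name and value is an assumption for illustration, not a standard taken from the paper.

```python
# Hypothetical documentation record; all fields are illustrative assumptions.
MODEL_DOC = {
    "model_name": "claims_triage_v2",
    "intended_use": "prioritise insurance claims for human review",
    "out_of_scope": ["fully automated claim rejection"],
    "training_data": "internal claims data, 2018-2021",
    "risk_level": "high",  # a higher risk level widens the documentation scope
    "evaluation": {"auc": 0.87, "subgroup_gap": 0.04},
    "human_oversight": "all high-impact outputs reviewed by a case worker",
}

def missing_fields(doc, required=("intended_use", "risk_level", "evaluation")):
    """Return the required documentation fields that are absent or empty."""
    return [field for field in required if not doc.get(field)]

print(missing_fields(MODEL_DOC))  # [] -- nothing missing
```

A simple completeness check like this lets the required fields grow with the risk level of the use case, echoing the study's finding that risk level shapes documentation scope.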