Digital sovereignty and smart wearables: four moral calculi for the distribution of legitimate control over the digital
N. Conradie, S. Nagel
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100053 | Journal of Responsible Technology
Should the colonisation of space be based on reproduction? Critical considerations on the choice of having a child in space
Maurizio Balistreri, Steven Umbrello
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100040 | Journal of Responsible Technology 11, Article 100040

This paper argues that it is not a priori morally justified to base the first phase of space colonisation on sexual reproduction. We ground this position on the argument that, at least in the first colonisation settlements, those born in space may not have a good chance of having a good life. This problem does not depend on the fact that life on another planet would have to deal with issues such as solar radiation or the decrease or entire absence of gravity: these issues could plausibly be addressed, given that the planets or settlements we will feasibly colonise could be completely transformed through geoengineering, and the ability of humans to live in space could be enhanced through genetic modification. Even if the problems of survival in space were solved, however, we think that, at least in the first period of colonisation of space or other planets, giving birth to children in space could be a morally irresponsible choice because, we argue, the life we could give them might not be good enough. We contend that this holds whenever we decide to have a baby: it is not morally right to be content that our children have a merely minimally sufficient life worth living; before we give birth to children in space, we should make sure we can give them a reasonable chance of having a good life. This principle applies both on Earth - at least where one can choose - and in space travel.
Erratum regarding missing Declaration of Competing Interest statements in previously published article
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100033 | Journal of Responsible Technology 11, Article 100033
Nigeria’s Digital Identification (ID) Management Program: Ethical, Legal and Socio-Cultural concerns
Damian Eke, Ridwan Oloyede, Paschal Ochang, Favour Borokini, Mercy Adeyeye, Lebura Sorbarikor, Bamidele Wale-Oshinowo, Simisola Akintoye
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100039 | Journal of Responsible Technology 11, Article 100039

National digital identity management systems have gained traction as a critical tool for including citizens in increasingly digitised public services. With the help of the World Bank, countries around the world are committing to building and promoting digital identification systems to improve development outcomes as part of the Identity for Development (ID4D) initiative. One of those countries is Nigeria, which is building a national ID management database for its over 100 million residents. However, there are privacy, security, human rights, ethical and socio-cultural implications associated with designing and scaling such a system at the national level. Through a mixed-methods approach, this paper identifies some of these concerns and categorises which ones Nigerians are most worried about. It provides an empirically grounded perspective on a centralised national electronic identity (eID) management system, public trust and responsible data governance, and offers recommendations for enhancing the privacy, security and trustworthiness of Nigeria's digital infrastructure for identity management.
Responsible innovation; responsible data. A case study in autonomous driving
C. Ten Holter, L. Kunze, J-A. Pattinson, P. Salvini, M. Jirotka
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100038 | Journal of Responsible Technology 11, Article 100038

Autonomous Vehicles (AVs) collect a vast amount of data during their operation (MBs/sec). What data is recorded, who has access to it, and how it is analysed and used can have major technical, ethical, social, and legal implications. By embedding Responsible Innovation (RI) methods within the AV lifecycle, negative consequences resulting from inadequate data logging can be foreseen and prevented. An RI approach demands that questions of societal benefit, anticipatory governance, and stakeholder inclusion are placed at the forefront of research considerations. Treated as foundational principles, these concepts create a contextual mindset for research that will by definition have an RI underpinning as well as application. Such an RI mindset both inspired and governed the genesis and operation of a research project on autonomous vehicles. The paper details the impact this had on research outlines and workplans, along with the challenges encountered along the way, and closes with conclusions and recommendations for RI in practice.
Responsible Artificial Intelligence in Human Resources Technology: An innovative inclusive and fair by design matching algorithm for job recruitment purposes
Sebastien Delecraz, Loukman Eltarr, Martin Becuwe, Henri Bouxin, Nicolas Boutin, Olivier Oullier
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100041 | Journal of Responsible Technology 11, Article 100041

In this article, we address the broad issue of the responsible use of Artificial Intelligence in Human Resources Management through the lens of a fair-by-design approach to algorithm development, illustrated by the introduction of a new machine learning-based approach to job matching. The goal of our algorithmic solution is to improve and automate the recruitment of temporary workers to find the best match with existing job offers. We discuss how fairness should be a key focus of human resources management and highlight the main challenges and flaws that arise when developing algorithmic solutions to match candidates with job offers. After an in-depth analysis of the distribution and biases of our proprietary data set, we describe the methodology used to evaluate the effectiveness and fairness of our machine learning model, as well as solutions to correct some biases. The model we introduce constitutes the first step in our effort to control for unfairness in the outcomes of machine learning algorithms in job recruitment and, more broadly, towards a responsible use of artificial intelligence in Human Resources Management, thanks to “safeguard algorithms” tasked with controlling for biases and preventing discriminatory outcomes.
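The abstract above mentions evaluating the fairness of a matching model but does not specify the metric. As a purely illustrative sketch, not the authors' methodology, one common first screening compares per-group selection rates via the "four-fifths" disparate-impact ratio; the function names and the (group, selected) data layout below are assumptions for the example:

```python
def selection_rates(decisions):
    """Per-group positive-outcome rates.

    decisions: iterable of (group, selected) pairs, where
    `selected` is True if the candidate was matched/hired.
    """
    totals, positives = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(selected)
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.

    Values below 0.8 fail the common 'four-fifths' screening rule
    used as a rough indicator of adverse impact.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())
```

A "safeguard algorithm" in the abstract's sense would presumably run checks like this on model outputs before decisions are released, though the paper's actual mechanism is not described here.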
AI Documentation: A path to accountability
Florian Königstorfer, Stefan Thalmann
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100043 | Journal of Responsible Technology 11, Article 100043

Artificial Intelligence (AI) promises huge potential for businesses but, due to its black-box character, also has substantial drawbacks. This is a particular challenge in regulated use cases, where software needs to be certified or validated before deployment. Traditional software documentation is not sufficient to provide the required evidence to auditors, and AI-specific guidelines are not yet available. Thus, AI faces significant adoption barriers in regulated use cases, since the accountability of AI cannot be ensured to a sufficient extent. This interview study aims to determine the current state of documenting AI in regulated use cases. We found that the risk level of an AI use case affects both AI adoption and the scope of AI documentation. Further, we discuss how AI is currently documented and which challenges practitioners face when documenting AI.
A method for ethical AI in defence: A case study on developing trustworthy autonomous systems
Tara Roberson, Stephen Bornstein, Rain Liivoja, Simon Ng, Jason Scholz, Kate Devitt
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100036 | Journal of Responsible Technology 11, Article 100036

What does it mean to be responsible and responsive when developing and deploying trusted autonomous systems in Defence? In this short reflective article, we describe a case study of building a trusted autonomous system – Athena AI – within an industry-led, government-funded project with diverse collaborators and stakeholders. Using this case study, we draw out lessons on the value and impact of embedding responsible research and innovation-aligned, ethics-by-design approaches and principles throughout the development of technology at high translation readiness levels.
Involving psychological therapy stakeholders in responsible research to develop an automated feedback tool: Learnings from the ExTRAPPOLATE project
Jacob A Andrews, Mat Rawsthorne, Cosmin Manolescu, Matthew Burton McFaul, Blandine French, Elizabeth Rye, Rebecca McNaughton, Michael Baliousis, Sharron Smith, Sanchia Biswas, Erin Baker, Dean Repper, Yunfei Long, Tahseen Jilani, Jeremie Clos, Fred Higton, Nima Moghaddam, Sam Malins
Pub Date: 2022-10-01 | DOI: 10.1016/j.jrt.2022.100044 | Journal of Responsible Technology 11, Article 100044

Understanding stakeholders’ views on novel autonomous systems in healthcare is essential to ensure such systems are not abandoned after substantial investment has been made. The ExTRAPPOLATE project applied the principles of Responsible Research and Innovation (RRI) in developing ‘AutoCICS’, an automated feedback system for psychological therapists. A Patient and Practitioner Reference Group (PPRG) was convened over three online workshops to inform the system's development. Iterative workshops allowed proposed changes to the system, based on stakeholder comments, to be scrutinised. The PPRG provided valuable insights, differentiated by role, including concerns and suggestions about the system's applicability and acceptability to different patients, as well as ethical considerations. The RRI approach enabled the anticipation of barriers to use, reflection on stakeholders’ views, effective engagement with stakeholders, and action to revise the design and proposed use of the system prior to testing in planned feasibility and effectiveness studies. Many best practices and learnings can be taken from the application of RRI in the development of the AutoCICS system.