The agency of the forum: Mechanisms for algorithmic accountability through the lens of agency
Florian Cech
Pub Date: 2021-10-01 · DOI: 10.1016/j.jrt.2021.100015 · Journal of Responsible Technology, vol. 7, Article 100015
The wicked challenge of designing measures aimed at improving algorithmic accountability demands human-centered approaches. Based on one of the most common definitions of accountability, as the relationship between an actor and a forum, this article presents an analytic lens in the form of actor and forum agency, through which the accountability process can be analysed. Two case studies, the Austrian Public Employment Service’s AMAS system and the EnerCoach energy accounting system, serve as examples for an analysis of accountability based on the agency of the stakeholders. Developed through a comparison of the two systems, the Algorithmic Accountability Agency Framework (A³ framework), aimed at supporting the analysis and improvement of agency throughout the four steps of the accountability process, is presented and discussed.
Causality-based accountability mechanisms for socio-technical systems
Amjad Ibrahim, Stavros Kyriakopoulos, Alexander Pretschner
Pub Date: 2021-10-01 · DOI: 10.1016/j.jrt.2021.100016 · Journal of Responsible Technology, vol. 7, Article 100016
With the rapid deployment of socio-technical systems into all aspects of daily life, we need to be prepared for their failures. It is inherently impractical to specify all the lawful interactions of these systems; in turn, the possibility of invalid interactions cannot be excluded at design time. As modern systems might harm people or compromise assets when they fail, they ought to be accountable. Accountability is an interdisciplinary concept that cannot easily be described as a holistic technical property of a system. Thus, in this paper, we propose a bottom-up approach to enabling accountability using goal-specific accountability mechanisms. Each mechanism provides forensic capabilities that help us to identify the root cause of a specific type of event, both to eliminate the underlying (technical) problem and to assign blame. This paper presents the different ingredients required to design and build an accountability mechanism, and focuses on the technical and practical utilization of causality theories as a cornerstone for achieving this goal. To the best of our knowledge, the literature lacks a systematic methodology to envision, design, and implement capabilities that promote accountability in systems. With a case study from the area of microservice-based systems, which we deem representative of modern complex systems, we demonstrate the effectiveness of the approach as a whole and show that it is generic enough to accommodate different accountability goals and mechanisms.
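The paper's core technical move, using causality theories for root-cause identification, can be illustrated with a minimal but-for test over a structural model of a system. This is only a sketch in the spirit of actual-causality frameworks such as Halpern-Pearl; the model, variable names, and failure scenario below are hypothetical, not taken from the paper's case study:

```python
# Minimal but-for causality check over a boolean structural model.
# 'model' maps each endogenous variable to a function of the current
# assignment; 'effect' is the failure event under investigation.

def evaluate(model, assignment):
    """Propagate values through the structural equations until stable
    (len(model) passes suffice for an acyclic model)."""
    values = dict(assignment)
    for _ in range(len(model)):
        for var, fn in model.items():
            values[var] = fn(values)
    return values

def is_but_for_cause(model, assignment, candidate, effect):
    """The candidate's actual value is a but-for cause of the effect
    if the effect occurred and flipping the candidate removes it."""
    if not evaluate(model, assignment)[effect]:
        return False
    counterfactual = dict(assignment)
    counterfactual[candidate] = not counterfactual[candidate]
    return not evaluate(model, counterfactual)[effect]

# Hypothetical microservice scenario: a crash occurs when the auth
# service is misconfigured AND the gateway retries the request.
model = {"crash": lambda v: v["auth_misconfig"] and v["gateway_retry"]}
context = {"auth_misconfig": True, "gateway_retry": True}
print(is_but_for_cause(model, context, "auth_misconfig", "crash"))  # True
```

In this toy context both factors are but-for causes; a full accountability mechanism would additionally rank or filter causes, which the but-for test alone does not do.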
The Role of Engineers in Harmonising Human Values for AI Systems Design
Steven Umbrello
Pub Date: 2021-09-13 · DOI: 10.21203/rs.3.rs-709596/v1
Most engineers work within social structures governing, and governed by, a set of values that primarily emphasise economic concerns, and the majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those of the societies they affect. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation yet diffused globally, with impacts that unfold over time. This paper argues that in order to design for this broad stakeholder group, engineers must adopt a systems thinking approach that allows them to understand the sociotechnicity of artificial intelligence systems across sociocultural domains. It claims that value sensitive design, and envisioning cards in particular, provides a solid first step towards helping designers harmonise human values, understood across spatiotemporal boundaries, with economic values, rather than the former coming at the opportunity cost of the latter.
‘Toward a Global Social Contract for Trade’ - a Rawlsian approach to Blockchain Systems Design and Responsible Trade Facilitation in the New Bretton Woods era
Arnold Lim, Enrong Pan
Pub Date: 2021-08-22 · DOI: 10.1016/j.jrt.2021.100011 · Journal of Responsible Technology, vol. 6, Article 100011
Imminent changes to the international monetary system, alongside a shift toward more egalitarian principles of justice in commercial contracts for trade, are now taking place. Such changes, however, do not sufficiently account for circumstances of hardship, or for black-swan events such as COVID-19, in which the relative losers of trading arrangements should continue to receive outcomes that are not only efficient but also fair and resilient. We argue that the ‘Society-in-the-Loop’ (SITL) social contract paradigm, in conjunction with Strategic Responsible Innovation Management (StRIM), can provide a solution for improving distributive justice in trade. Through collaboration with a locally based trade facilitation company, we describe the innovation-planning phase of a blockchain smart contract solution based on Derek Leben's idea of a ‘Rawlsian Algorithm’ (2017). We demonstrate how this can be used to strengthen the algorithmic fairness of commercial contract implementation in accordance with existing ISO 20022 standards. Since no formal design framework currently exists for modeling blockchain-oriented software (BOS), an agile development approach is adopted that takes account of the substantial differences between traditional software development and smart contracts. This method involves the construction of UML Use Case, Sequence, and Class diagrams, with a view to blockchain specificities. Evaluation and feedback from the company are also considered.
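At its core, a ‘Rawlsian Algorithm’ applies a maximin rule: among feasible contract outcomes, choose the one that maximises the payoff of the worst-off party. A minimal sketch of that selection step (the option names, parties, and payoff figures are hypothetical, not drawn from the paper's case study):

```python
# Maximin (Rawlsian) selection: pick the allocation whose
# worst-off party fares best.

def rawlsian_choice(allocations):
    """allocations: dict mapping option name -> {party: payoff}.
    Returns the option with the highest minimum payoff."""
    return max(allocations, key=lambda opt: min(allocations[opt].values()))

# Hypothetical payoffs for three sets of contract terms between an
# exporter, an importer, and a freight forwarder.
options = {
    "terms_a": {"exporter": 9, "importer": 8, "forwarder": 1},
    "terms_b": {"exporter": 6, "importer": 5, "forwarder": 4},
    "terms_c": {"exporter": 7, "importer": 2, "forwarder": 7},
}
print(rawlsian_choice(options))  # terms_b: its worst payoff (4) is the highest minimum
```

Note how this differs from a utilitarian rule, which would sum payoffs and pick terms_a; the maximin rule protects the relative loser of the arrangement, matching the distributive-justice aim described above.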
Responsible research and innovation in practice: an exploratory assessment of Key Performance Indicators (KPIs) in a Nanomedicine Project
Zenlin Kwee, Emad Yaghmaei, Steven Flipse
Pub Date: 2021-05-01 · DOI: 10.1016/j.jrt.2021.100008 · Journal of Responsible Technology, vol. 5, Article 100008
While originally intended to transform research and innovation practice, the concept of responsible research and innovation (RRI) has largely remained a theoretical, policy-oriented construct, engendering a perception that RRI indicators are very different from organizational or business indicators. As there is currently limited experience with RRI in businesses, this paper seeks insight into RRI in practice through an exploratory assessment of key performance indicators (KPIs) in a nanomedicine project. Based on correspondence analysis, we visually demonstrate associations between the KPIs of RRI dimensions and those of ongoing organizational R&D dimensions, implying that these two sets of indicators are not entirely different from each other and may even be potentially aligned. This finding may encourage the uptake of RRI in practice.
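Correspondence analysis, the method used above, projects the rows and columns of a contingency table into a shared low-dimensional space so that associated categories plot near each other. A minimal sketch of the standard SVD-based computation (the KPI co-occurrence counts are hypothetical, not the project's data):

```python
import numpy as np

# Correspondence analysis: embed rows and columns of a contingency
# table (e.g. RRI KPIs x ongoing-R&D KPIs co-occurrence counts)
# into shared principal coordinates.

def correspondence_analysis(N, dims=2):
    P = N / N.sum()                        # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)    # row and column masses
    # Standardised residuals from the independence model r c^T.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * s) / np.sqrt(r)[:, None]     # principal row coordinates
    col_coords = (Vt.T * s) / np.sqrt(c)[:, None]  # principal column coordinates
    return row_coords[:, :dims], col_coords[:, :dims]

# Hypothetical counts of how often each of three RRI KPIs co-occurs
# with each of three R&D KPIs in project reports.
N = np.array([[12.0, 3.0, 1.0],
              [2.0, 10.0, 4.0],
              [1.0, 2.0, 9.0]])
rows, cols = correspondence_analysis(N)
```

Plotting `rows` and `cols` on the same axes gives the kind of joint map the authors use to argue that RRI and R&D indicators cluster together rather than apart.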
Is RRI a new R&I logic? A reflection from an integrated RRI project
Ellen-Marie Forsberg, Erik Thorstensen, Flávia Dias Casagrande, Torhild Holthe, Liv Halvorsrud, Anne Lund, Evi Zouganeli
Pub Date: 2021-05-01 · DOI: 10.1016/j.jrt.2020.100007 · Journal of Responsible Technology, vol. 5, Article 100007
This article presents an analysis of a project in the field of assisted living technologies (ALT) for older adults in which Responsible Research and Innovation (RRI) is used as the overall approach to the research and technology development work. Taking the project's three literature reviews, conducted in the fields of health science oriented towards occupational therapy, ICT research and development, and RRI, as starting points, it applies perspectives from institutional logics to analyse the tension between RRI as an overall research and innovation (R&I) logic and RRI as a disciplinary logic. This tension complicates the implementation of RRI, and we argue for giving this question more visibility. The article concludes that the project was intended, by both the funder and the project leader, to be an example of research and technology development carried out within a new RRI R&I logic, but that it was in large part conducted as a multidisciplinary project with RRI as a quasi-disciplinary logic, partly in parallel with and partly in conflict with the project's other logics.
Facebook's Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons
Sally A. Applin, Catherine Flick
Pub Date: 2021-05-01 · DOI: 10.1016/j.jrt.2021.100010 · Journal of Responsible Technology, vol. 5, Article 100010
Nearly every week, a technology company introduces a new surveillance technology, from applying facial recognition to observing and cataloguing the public's behaviour in the Commons and in private spaces, to listening to and recording what we say, or mapping what we do, where we go, and who we are with, or as much of these facets of our lives as can be accessed. As such, the general public writ large has had to wrestle with the colonization of publicly funded space, and with the consequences for our personal lives of the massive harvesting and storing of our data and of the machine learning and processing that may be applied to that data. Facebook, once content to harvest our data through its website, cookies, and apps on mobile phones and computers, now plans to follow us more deeply into the Commons by developing new mapping technology combined with smart-camera-equipped Augmented Reality (AR) eyeglasses that will track, render, and record the Commons, and us with it. The resulting data will privately benefit Facebook's continued goal of expanding its worldwide reach and growth. In this paper, we examine the ethical implications of Facebook's Project Aria research pilot through the perspective of Responsible Innovation, comparing existing understandings of Responsible Research and Innovation with Facebook's own Responsible Innovation Principles; we contextualise Project Aria within the Commons by applying current social multi-dimensional communications theory to understand its extensive socio-technological implications for society and culture; and we address the potentially serious consequences of the Project Aria experiment inspiring countless other companies to shift their focus to compete with it, or to beat it to the consumer marketplace.
Legal and human rights issues of AI: Gaps, challenges and vulnerabilities
Rowena Rodrigues
Pub Date: 2020-12-01 · DOI: 10.1016/j.jrt.2020.100005 · Journal of Responsible Technology, vol. 4, Article 100005
This article focusses on the legal and human rights issues of artificial intelligence (AI) currently being discussed and debated: how they are being addressed, the gaps and challenges that remain, and the human rights principles affected. Such issues include algorithmic transparency, cybersecurity vulnerabilities, unfairness, bias and discrimination, lack of contestability, legal personhood issues, intellectual property issues, adverse effects on workers, privacy and data protection issues, liability for damage, and lack of accountability. The article uses the frame of ‘vulnerability’ to consolidate the understanding of critical areas of concern and to guide risk and impact mitigation efforts that protect human well-being. While recognising the good work carried out in the AI law space, and acknowledging that this area needs constant evaluation and agility of approach, this article advances the discussion, which is important given the gravity of the impacts of AI technologies, particularly on vulnerable individuals and groups and their human rights.
Socio-ethical implications of using AI in accelerating SDG3 in Least Developed Countries
Kutoma Wakunuma, Tilimbe Jiya, Suleiman Aliyu
Pub Date: 2020-12-01 · DOI: 10.1016/j.jrt.2020.100006 · Journal of Responsible Technology, vol. 4, Article 100006
Artificial Intelligence (AI) is playing a crucial role in advancing efforts towards sustainable development across the globe. AI has the potential to help address some of the biggest challenges that society faces, including health and well-being. In particular, AI can help address health and well-being related challenges by accelerating the attainment of the UN's Sustainable Development Goal 3 (SDG3), Good Health and Well-being. This paper draws on the Organisation for Economic Co-operation and Development (OECD) Development Assistance Committee (DAC) list of Official Development Assistance (ODA) recipients and the PricewaterhouseCoopers (PwC) SDG selector to identify the SDG that is prioritised in Least Developed Countries (LDCs). Among the 32 least developed African countries on the list, SDG3 was the most common SDG, suggesting that health and well-being is a priority for these countries. To understand the opportunities and challenges that might arise in applying AI to the acceleration of SDG3, the paper uses a SWOT analysis to highlight some socio-ethical implications of using AI in advancing SDGs in the identified LDCs on the DAC list.
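The selection step described above, finding the SDG most often prioritised across the listed countries, is a simple frequency count. A sketch of that step (the country labels and priority assignments below are placeholders, not the DAC/PwC data):

```python
from collections import Counter

# Frequency-count sketch of the paper's selection step: identify the
# SDG most often prioritised across a list of least developed
# countries. The country-to-SDG mapping here is hypothetical.
priority_sdg = {
    "Country A": "SDG3", "Country B": "SDG3", "Country C": "SDG2",
    "Country D": "SDG3", "Country E": "SDG4", "Country F": "SDG3",
}
counts = Counter(priority_sdg.values())
most_common_sdg, n = counts.most_common(1)[0]
print(most_common_sdg, n)  # SDG3 4
```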