Pub Date: 2023-01-02 | DOI: 10.1080/17579961.2023.2184137
On the voluntariness of public health apps: a European case study on digital contact tracing
B. Kamphorst, M. Verweij, Josephine A. W. van Zeben
Law, Innovation and Technology 15(1), 107–123

ABSTRACT As evidenced during the COVID-19 pandemic, there is a growing reliance on smartphone apps such as digital contact tracing apps and vaccination passports to respond to and mitigate public health threats. In light of the European Commission's guidance, Member States typically offer such apps on a voluntary, ‘opt-in’ basis. In this paper, we question the extent to which the individual choice to use these apps – and similar future technologies – is indeed a voluntary one. By explicating the ethical and legal considerations governing the choice situations surrounding the use of smartphone apps, specifically those related to the negative consequences that declining to use these apps may have (e.g. loss of opportunities, social exclusion, stigma), we argue that the projected downsides of refusal may in effect limit the liberty to decline for certain subpopulations. To mitigate these concerns, we recommend three categories of approaches that governments may employ to safeguard voluntariness.
Pub Date: 2023-01-02 | DOI: 10.1080/17579961.2023.2184138
Can AI infringe moral rights of authors and should we do anything about it? An Australian perspective
R. Matulionyte
Law, Innovation and Technology 15(1), 124–147

ABSTRACT While artificial intelligence (AI) technologies, such as machine learning (ML), hold significant potential for the economy and social wellbeing, it is unclear to what extent copyright laws stimulate or impede the development of these promising technologies. The unauthorised use of copyright-protected works in the ML process and its possible implications for the economic rights of authors have been previously explored; however, the implications of such use for the moral rights of authors – the rights of attribution and integrity – have not been examined. This paper, focusing on Australia as a case study, explores whether the use of works as training data in the ML process could amount to an infringement of the moral rights of authors and, if so, whether law reform in the area is needed.
Pub Date: 2023-01-02 | DOI: 10.1080/17579961.2023.2184136
The digital services act: an analysis of its ethical, legal, and social implications
Aina Turillazzi, M. Taddeo, Luciano Floridi, F. Casolari
Law, Innovation and Technology 15(1), 83–106

ABSTRACT In December 2020, the European Commission issued the Digital Services Act (DSA), a legislative proposal for a single market of digital services, focusing on fundamental rights, data privacy, and the protection of stakeholders. The DSA seeks to promote European digital sovereignty, among other goals. This article reviews the literature and related documents on the DSA to map and evaluate its ethical, legal, and social implications. It examines four macro-areas of interest regarding the digital services offered by online platforms. The analysis concludes that, so far, the DSA has led to contrasting interpretations, ranging from stakeholders expecting it to be more challenging for gatekeepers, to others objecting that the proposed obligations are unjustified. The article contributes to this debate by arguing that a more robust framework for the benefit of all stakeholders should be defined.
Pub Date: 2023-01-02 | DOI: 10.1080/17579961.2023.2184131
From childish things: the evolving sandbox approach in the EU’s regulation of financial technology
Jonathan McCarthy
Law, Innovation and Technology 15(1), 1–24

ABSTRACT The background to this article is the apparent revival of a sandbox approach to FinTech regulation within the EU, as epitomised by legislative initiatives on AI and DLT. The article contributes to the existing literature on the topic by highlighting how the current design of sandboxes is not informed by sufficiently comprehensive empirical evidence from international examples. As well as examining relevant provisions of the European Commission’s proposals on AI sandboxes and the introduction of a DLT pilot regime, the article refers to UK and Australian examples to demonstrate the varying features of sandboxes. Even if there is continued ambiguity as to the characteristics of sandboxes, broader regulatory supports, such as innovation hubs, can be vital. However, this should not diminish the need for improved clarity and transparency on sandboxes’ operations. The ability to make necessary refinements will determine how the EU’s regulatory approach to FinTech generally will evolve.
Pub Date: 2023-01-02 | DOI: 10.1080/17579961.2023.2184141
Initial designs of artificial humans: intellectual property and ethical aspects
R. Budnik, A. Tedeev
Law, Innovation and Technology 15(1), 222–240

ABSTRACT Startups and tech giants, working on the computer replication of human personality, are showing remarkable progress. A substantial part of this activity concentrates on the engineering of artificial life-enabling instruments that fall within the scope of intellectual property law. However, the rendering of virtual people based on a pre-designed technological basis raises new social dilemmas. This article covers five aspects of the activity. First, the initial experiments and main approaches to the computer emulation of humans are observed. Secondly, the interrelated ethical, legal, and technological challenges of the artificial person phenomenon are examined. Thirdly, licensing provisions on using the backbone platform of replicated individuals are considered. Fourthly, the allocation of virtual humans under the legal regime of the public domain is discussed. Finally, amendments to upgrade the relevant standpoints of law, fuelled by the progress of mind-uploading engineering, are elaborated. Overall, the study adds value to the development of legal and ethical principles for the science and technology of artificial life.
Pub Date: 2023-01-02 | DOI: 10.1080/17579961.2023.2184132
Human dignity and AI: mapping the contours and utility of human dignity in addressing challenges presented by AI
S. Teo
Law, Innovation and Technology 15(1), 241–279

ABSTRACT The ubiquitous use of artificial intelligence (AI) can bring about both positive and negative consequences for individuals and societies. On the negative side, there is a concern not only about the impact of AI on first-order discrete individual rights (such as the right to privacy, non-discrimination and freedom of opinion and expression) but also about whether the human rights framework is fit for purpose relative to second-order challenges that cannot be effectively addressed by discrete legal rights focused upon the individual. The purpose of this article is to map the contours and utility of the concept of human dignity in addressing the second-order challenges presented by AI. Four key interpretations of human dignity are identified, namely: non-instrumentalization of the human person; the protection of certain vulnerable classes of persons; the recognition and exercise of inherent self-worth (including the exercise of individual autonomy); and a wider notion of the protection of humanity. Applying these interpretations to AI affordances, the paper argues that human dignity should foreground three second-order challenges, namely: the disembodiment of empiric self-representation and contextual sense-making; the choice architectures for the exercise of cognitive autonomy; and the experiential context of lived experiences using the normative framework of human vulnerability.
Pub Date: 2022-07-03 | DOI: 10.1080/17579961.2022.2113674
Sandboxes in the desert: is a cross-border ‘gulf box’ feasible?
J. Truby, A. Dahdal, I. Ibrahim
Law, Innovation and Technology 14(1), 447–473

ABSTRACT Many financial hubs around the world have used regulatory sandboxes as a tool to support and promote innovation in financial technology. Sandboxes can manage and control risks without stifling innovation. In recent years, the Arab Gulf States (Kuwait, Saudi Arabia, Qatar, Oman, Bahrain and the United Arab Emirates) have introduced (or are introducing) regulatory sandboxes to, among other things, support economic diversification efforts through the promotion of financial technology. These jurisdiction-specific sandboxes are emerging at a time when other international hubs and global networks are already establishing regional and cross-border sandbox platforms. Given the cultural and economic similarities between the Gulf States, a regional sandbox may seem the logical progression. Beyond this vision, however, policymakers must first address and overcome several serious obstacles. Geopolitical realities and the sensitivities around data sharing (particularly in a financial context) are among the most critical challenges militating against a single sandbox policy. Rightly calibrated, a regional sandbox may encourage both a flourishing of the fintech sector in the region and a mending of ties between Gulf States.
Pub Date: 2022-07-03 | DOI: 10.1080/17579961.2022.2113669
Solving moral dilemmas with AI: how it helps us address the social implications of the Covid-19 crisis and enhance human responsibility to tackle meta-dilemmas
H. Etienne
Law, Innovation and Technology 14(1), 305–324

ABSTRACT When combined with an appropriate level of human judgement, machine learning applications were crucial resources in supporting decision-making in the context of the Covid-19 crisis, resulting in more efficient and better-informed responses to ethical issues. This paper focusses on four social dimensions (bioethical, political, psychological, and economic) from which the decisions taken in the context of the Covid-19 crisis derived major ethical implications. On the one hand, I argue against the possibility of addressing these issues from a purely algorithmic approach, elaborating on two types of limitations for automated systems to address ethical issues. This leads me to discuss how different ethical situations call for different performance metrics with regard to the ‘contextual explicability and performance issue’, as well as to enunciate a gold principle: ‘legitimacy trumps accuracy’. On the other hand, I present practical examples of machine learning applications which enhance, instead of dilute, human moral agency in better addressing these issues. I also suggest a ‘moral perimeter’ framework to ensure the responsibility of algorithm-assisted decision-makers for critical decisions. The unique potential of AI to ‘solve’ moral dilemmas by intervening on their conditions of possibility then prompts me to discuss a new type of moral situation: AI-generated meta-dilemmas.
Pub Date: 2022-07-03 | DOI: 10.1080/17579961.2022.2113672
Fashion, filter bubbles and echo chambers: questions of privacy, identity, and governance
Daria Onitiu
Law, Innovation and Technology 14(1), 395–420

ABSTRACT The discourse on filter bubbles and echo chambers applies to the use of social media analytics and consumer profiling for behavioural advertising in the fashion industry, which is relevant to an individual’s autonomy and control of personal information. However, we need to expand the concept of filter bubbles and echo chambers to define the contours of self-exposure within the algorithmic context applied to the social and personal aspects of fashion. This paper claims that filter bubbles and echo chambers in fashion have an impact on the parameters and conditions of the right to privacy, influencing an individual’s perception and self-relationality. An analysis of the ECtHR’s interpretation of Article 8 of the ECHR reveals that we need to shape notions of personal development and autonomy to include an individual’s plurality of needs, desires, and beliefs, as well as unconscious associations with fashion identity.
Pub Date: 2022-07-03 | DOI: 10.1080/17579961.2022.2113666
Building a trust ecosystem for remote inspection technologies in ship hull inspections
A. Pastra, Nathalie Schauffel, T. Ellwart, Tafsir Johansson
Law, Innovation and Technology 14(1), 474–497

ABSTRACT The article contributes to the discussion concerning the role of trust in robotic and autonomous systems (RAS), with a sharp focus on remote inspection technologies (RITs) for vessel inspection, survey and maintenance. To this end, the article provides first-hand insight into one of the major findings from BUGWRIGHT2, a collaborative project co-funded by the European Union’s Horizon 2020 Research and Innovation programme that aims to change the European vessel-structure maintenance landscape. In doing so, this article explores trust from a psychological perspective, reflecting on its characteristics and predictors, followed by a discussion of the AI-trust ecosystem as envisaged by the European Commission. Structured interviews with thirty-three subject matter experts guide the main analysis, revealing that trust is an essential precondition for integrating RITs into the current manual-driven inspection system. A synoptic overview of the vital trust elements is provided before carving out the ways forward for developing a trustworthy environment governed by human-robot interaction.