
Digital society: ethics, socio-legal and governance of digital technology - Latest Publications

Driving into the Loop: Mapping Automation Bias and Liability Issues for Advanced Driver Assistance Systems
Pub Date: 2023-10-07 | DOI: 10.1007/s44206-023-00066-y
Katie Szilagyi, Jason Millar, AJung Moon, Shalaleh Rismani
Advanced driver assistance systems (ADAS) are transforming the modern driving experience. Today’s vehicles seem better equipped than ever to augment safety by automating routine driving activities. The assumption appears straightforward: automation will necessarily improve road safety because automation replaces the human driver, thereby reducing human driving errors. But is this truly a straightforward assumption? In our contention, this assumption has potentially dangerous limits. This paper explores how well-understood and well-researched psychological and cognitive phenomena pertaining to human interaction with automation should not properly be labelled as misuse. Framing the problem through an automation bias lens, we argue that such so-called instances of misuse can instead be seen as predictable by-products of specific engineering design choices. We engage empirical data to problematize the assumption that automating driving functions directly leads to increased safety. Our conclusion calls for more transparent testing and safety data on the part of manufacturers, for updated notions of misuse in legal contexts, and for updated driver training regimes.
{"title":"Driving into the Loop: Mapping Automation Bias and Liability Issues for Advanced Driver Assistance Systems","authors":"Katie Szilagyi, Jason Millar, AJung Moon, Shalaleh Rismani","doi":"10.1007/s44206-023-00066-y","DOIUrl":"https://doi.org/10.1007/s44206-023-00066-y","url":null,"abstract":"Advanced driver assistance systems (ADAS) are transforming the modern driving experience. Today’s vehicles seem better equipped than ever to augment safety by automating routine driving activities. The assumption appears straightforward: automation will necessarily improve road safety because automation replaces the human driver, thereby reducing human driving errors. But is this truly a straightforward assumption? In our contention, this assumption has potentially dangerous limits. This paper explores how well-understood and well-researched psychological and cognitive phenomena pertaining to human interaction with automation should not be properly labelled as misuse. Framing the problem through an automation bias lens, we argue that such so-called instances of misuse can instead be seen as predictable by-products of specific engineering design choices. We engage empirical data to problematize the assumption that automating driving functions directly leads to increased safety. Our conclusion calls for more transparent testing and safety data on the part of manufacturers, for updated notions of misuse in legal contexts, and for updated driver training regimes.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135254976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Are Concerns Related to Artificial Intelligence Development and Use Really Necessary: A Philosophical Discussion
Pub Date: 2023-09-30 | DOI: 10.1007/s44206-023-00070-2
Levent Uzun
This article explores the philosophical considerations, concerns, and recommendations surrounding the development and use of artificial intelligence and large language models like ChatGPT. It addresses the concerns raised by educators and academics regarding academic integrity and the potential negative effects of LLMs. The article discusses the challenges posed by LLMs, such as plagiarism, and the opportunities they present, such as assisting students in the writing process and improving the quality of their work. It examines different philosophical approaches, including utilitarianism, deontological ethics, and virtue ethics, and their implications for the development and use of AI. The article also delves into key concerns related to privacy, bias, discrimination, and the impact on employment. It provides suggestions for a responsible and ethical approach, including prioritizing ethics and transparency in AI development, establishing clear regulations, and fostering responsible use by users. The importance of ongoing philosophical reflection, ethical considerations, and collaboration among stakeholders is emphasized. The article concludes by highlighting the need for future research to address these concerns and ensure that AI is developed and used in a manner consistent with ethical principles, values, and societal well-being.
{"title":"Are Concerns Related to Artificial Intelligence Development and Use Really Necessary: A Philosophical Discussion","authors":"Levent Uzun","doi":"10.1007/s44206-023-00070-2","DOIUrl":"https://doi.org/10.1007/s44206-023-00070-2","url":null,"abstract":"This article explores the philosophical considerations, concerns, and recommendations surrounding the development and use of artificial intelligence and large language models like ChatGPT. It addresses the concerns raised by educators and academics regarding academic integrity and the potential negative effects of LLMs. The article discusses the challenges posed by LLMs, such as plagiarism, and the opportunities they present, such as assisting students in the writing process and improving the quality of their work. It examines different philosophical approaches, including utilitarianism, deontological ethics, and virtue ethics, and their implications for the development and use of AI. The article also delves into key concerns related to privacy, bias, discrimination, and the impact on employment. It provides suggestions for a responsible and ethical approach, including prioritizing ethics and transparency in AI development, establishing clear regulations, and fostering responsible use by users. The importance of ongoing philosophical reflection, ethical considerations, and collaboration among stakeholders is emphasized. The article concludes by highlighting the need for future research to address these concerns and ensure that AI is developed and used in a manner consistent with ethical principles, values, and societal well-being.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136278627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Commercial mHealth Apps and the Providers’ Responsibility for Hope
Pub Date: 2023-09-27 | DOI: 10.1007/s44206-023-00071-1
Leon Rossmaier, Yashar Saghai, Philip Brey
In this paper, we ask whether the providers of commercial mHealth apps for self-tracking create inflated or false hopes for vulnerable user groups and whether they should be held responsible for this. This question is relevant because hopes created by the providers determine the modalities of the apps’ use. Due to the created hopes, users who may be vulnerable to certain design features of the app can experience bad outcomes in various dimensions of their well-being. This adds to structural injustices sustaining or exacerbating the vulnerable position of such user groups. We define structural injustices as systemic disadvantages for certain social groups that may be sustained or exacerbated by unfair power relations. Inflated hopes can also exclude digitally disadvantaged users. Thus, the hopes created by the providers of commercial mHealth apps for self-tracking press the question of whether the deployment and use of mHealth apps meet the requirements for qualifying as a just public health endeavor.
{"title":"Commercial mHealth Apps and the Providers’ Responsibility for Hope","authors":"Leon Rossmaier, Yashar Saghai, Philip Brey","doi":"10.1007/s44206-023-00071-1","DOIUrl":"https://doi.org/10.1007/s44206-023-00071-1","url":null,"abstract":"Abstract In this paper, we ask whether the providers of commercial mHealth apps for self-tracking create inflated or false hopes for vulnerable user groups and whether they should be held responsible for this. This question is relevant because hopes created by the providers determine the modalities of the apps’ use. Due to the created hopes, users who may be vulnerable to certain design features of the app can experience bad outcomes in various dimensions of their well-being. This adds to structural injustices sustaining or exacerbating the vulnerable position of such user groups. We define structural injustices as systemic disadvantages for certain social groups that may be sustained or exacerbated by unfair power relations. Inflated hopes can also exclude digitally disadvantaged users. Thus, the hopes created by the providers of commercial mHealth apps for self-tracking press the question of whether the deployment and use of mHealth apps meet the requirements for qualifying as a just public health endeavor.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135579132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards a Citizen- and Citizenry-Centric Digitalization of the Urban Environment: Urban Digital Twinning as Commoning
Pub Date: 2023-09-19 | DOI: 10.1007/s44206-023-00064-0
Stefano Calzati, Bastiaan van Loenen
In this paper, we make a case for (1) a sociotechnical understanding and (2) a commoning approach to the governance of digital twin technologies applied to the urban environment. The European Union has repeatedly reiterated its willingness to pursue a citizen-centric approach to digital transformation. However, recent studies show the limits of an approach based solely on human rights, in that it overlooks the consequences of data-driven technologies at the societal level. The need to synthesize an individual-based and collective-based approach within an ecosystemic vision is key, especially when it comes to cities, which are complex systems affected by problems whose solutions require forms of self-organization. Tackling the limitations of current tech-centered and practice-first city digital twin (CDT) projects in Europe, in this article, we conceptualize the idea of urban digital twinning (UDT) as a process that is contextual, iterative, and participatory. Unpacking the normative understanding of data-as-resource, we claim that a commoning approach to data allows enacting a fair ecosystemic vision of the digitalization of the urban environment which is ultimately both citizen- and citizenry-centric.
{"title":"Towards a Citizen- and Citizenry-Centric Digitalization of the Urban Environment: Urban Digital Twinning as Commoning","authors":"Stefano Calzati, Bastiaan van Loenen","doi":"10.1007/s44206-023-00064-0","DOIUrl":"https://doi.org/10.1007/s44206-023-00064-0","url":null,"abstract":"Abstract In this paper, we make a case for (1) a sociotechnical understanding and (2) a commoning approach to the governance of digital twin technologies applied to the urban environment. The European Union has reinstated many times over the willingness to pursue a citizen-centric approach to digital transformation. However, recent studies show the limits of a human right-based only approach in that this overlooks consequences of data-driven technologies at societal level. The need to synthesize an individual-based and collective-based approach within an ecosystemic vision is key, especially when it comes to cities, which are complex systems affected by problems whose solutions require forms of self-organization. Tackling the limitations of current tech-centered and practice-first city digital twin (CDT) projects in Europe, in this article, we conceptualize the idea of urban digital twinning (UDT) as a process that is contextual, iterative, and participatory. Unpacking the normative understanding of data-as-resource, we claim that a commoning approach to data allows enacting a fair ecosystemic vision of the digitalization of the urban environment which is ultimately both citizen- and citizenry-centric.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135011518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A New Study of AI Artists for Changing the Movie Industries
Pub Date: 2023-09-14 | DOI: 10.1007/s44206-023-00065-z
Araya Sookhom, Piyachat Klinthai, Pimpakarn A-masiri, Chutisant Kerdvibulvech
Due to the rise of artificial intelligence (AI) in the arts, this paper aims to explore the use of AI for reducing film production costs through the creation of realistic images. Additionally, we investigate whether AI can recreate the same character at the same age without needing to replace the original actor. Qualitative data collection tools were employed to study three distinct population groups within the film industry: film industry professionals, moviegoers, and technologists. Our research reveals that AI, or AI artists in film production, still face limitations in significantly reducing production costs. Furthermore, it is crucial to engage a text expert in the image production process for films who possesses a comprehensive understanding of film principles in order to achieve images that align with the project’s requirements. Moreover, the introduction of the AI artist technique allows for the recreation of a character at the same age portrayed by the same actor, even if that actor may have passed away. Consequently, obtaining consent from the relatives of the actor or actress becomes a necessary step. Furthermore, the aspect of audience acceptance does not hold significant interest, as it demands a greater level of realism in both the image and the actors, surpassing what AI can provide. Therefore, this paper underscores the increasing influence of AI in the arts, particularly within film production, and examines its potential to reduce costs and recreate characters.
{"title":"A New Study of AI Artists for Changing the Movie Industries","authors":"Araya Sookhom, Piyachat Klinthai, Pimpakarn A-masiri, Chutisant Kerdvibulvech","doi":"10.1007/s44206-023-00065-z","DOIUrl":"https://doi.org/10.1007/s44206-023-00065-z","url":null,"abstract":"Due to the rise of artificial intelligence (AI) in the arts, this paper aims to explore the use of AI for reducing film production costs through the creation of realistic images. Additionally, we investigate whether AI can recreate the same character at the same age. Without needing to replace the original actor, qualitative data collection tools were employed to study three distinct population groups within the film industry: film industry professionals, moviegoers, and technologists. Our research reveals that AI, or AI artists in film production, still face limitations in significantly reducing production costs. Furthermore, it is crucial to engage a text expert in the image production process for films who possesses a comprehensive understanding of film principles in order to achieve images that align with the project’s requirements. Moreover, the introduction of the AI artist technique allows for the recreation of a character at the same age portrayed by the same actor, even if that actor may have passed away. Consequently, obtaining consent from the relatives of the actor or actress becomes a necessary step. Furthermore, the aspect of audience acceptance does not hold significant interest, as it demands a greater level of realism in both the image and the actors, surpassing what AI can provide. Therefore, this paper underscores the increasing influence of AI in the arts, particularly within film production, and examines its potential to reduce costs and recreate characters.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134912235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Proposal for a Definition of General Purpose Artificial Intelligence Systems
Pub Date: 2023-09-12 | DOI: 10.1007/s44206-023-00068-w
Carlos I. Gutierrez, Anthony Aguirre, Risto Uuk, Claire C. Boine, Matija Franklin
The European Union (EU) is in the middle of comprehensively regulating artificial intelligence (AI) through an effort known as the AI Act. Within the vast spectrum of issues under the Act’s aegis, the treatment of technologies classified as general purpose AI systems (GPAIS) merits special consideration. Particularly, existing proposals to define GPAIS do not provide sufficient guidance to distinguish these systems from those designed to perform specific tasks, denominated as fixed-purpose. Thus, our working paper has three objectives: first, to highlight the variance and ambiguity in the interpretation of GPAIS in the literature; second, to examine the dimensions of the generality of purpose available to define GPAIS; lastly, to propose a functional definition of the term that facilitates its governance within the EU. Our intention with this piece is to offer policymakers an alternative perspective on GPAIS that improves the hard and soft law efforts to mitigate these systems’ risks and protect the well-being and future of constituencies in the EU and globally.
{"title":"A Proposal for a Definition of General Purpose Artificial Intelligence Systems","authors":"Carlos I. Gutierrez, Anthony Aguirre, Risto Uuk, Claire C. Boine, Matija Franklin","doi":"10.1007/s44206-023-00068-w","DOIUrl":"https://doi.org/10.1007/s44206-023-00068-w","url":null,"abstract":"Abstract The European Union (EU) is in the middle of comprehensively regulating artificial intelligence (AI) through an effort known as the AI Act. Within the vast spectrum of issues under the Act’s aegis, the treatment of technologies classified as general purpose AI systems (GPAIS) merits special consideration. Particularly, existing proposals to define GPAIS do not provide sufficient guidance to distinguish these systems from those designed to perform specific tasks, denominated as fixed-purpose. Thus, our working paper has three objectives: first, to highlight the variance and ambiguity in the interpretation of GPAIS in the literature; second, to examine the dimensions of the generality of purpose available to define GPAIS; lastly, to propose a functional definition of the term that facilitates its governance within the EU. Our intention with this piece is to offer policymakers an alternative perspective on GPAIS that improves the hard and soft law efforts to mitigate these systems’ risks and protect the well-being and future of constituencies in the EU and globally.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135825574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Lessons Learned from Assessing Trustworthy AI in Practice
Pub Date: 2023-09-09 | DOI: 10.1007/s44206-023-00063-1
Dennis Vetter, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Georgios Kararigas, P. Kringen, V. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, R. Zicari
{"title":"Lessons Learned from Assessing Trustworthy AI in Practice","authors":"Dennis Vetter, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Georgios Kararigas, P. Kringen, V. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, R. Zicari","doi":"10.1007/s44206-023-00063-1","DOIUrl":"https://doi.org/10.1007/s44206-023-00063-1","url":null,"abstract":"","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"163 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80273006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Debiasing Strategies for Conversational AI: Improving Privacy and Security Decision-Making
Pub Date: 2023-09-09 | DOI: 10.1007/s44206-023-00062-2
Anna Leschanowsky, Birgit Popp, Nils Peters
{"title":"Debiasing Strategies for Conversational AI: Improving Privacy and Security Decision-Making","authors":"Anna Leschanowsky, Birgit Popp, Nils Peters","doi":"10.1007/s44206-023-00062-2","DOIUrl":"https://doi.org/10.1007/s44206-023-00062-2","url":null,"abstract":"","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75606463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Thermal Imaging in Robotics as a Privacy-Enhancing or Privacy-Invasive Measure? Misconceptions of Privacy when Using Thermal Cameras in Robots
Pub Date: 2023-09-06 | DOI: 10.1007/s44206-023-00060-4
Naomi Lintvedt
{"title":"Thermal Imaging in Robotics as a Privacy-Enhancing or Privacy-Invasive Measure? Misconceptions of Privacy when Using Thermal Cameras in Robots","authors":"Naomi Lintvedt","doi":"10.1007/s44206-023-00060-4","DOIUrl":"https://doi.org/10.1007/s44206-023-00060-4","url":null,"abstract":"","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74034808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Innovation Commons for the Data Economy
Pub Date: 2023-08-01 | DOI: 10.1007/s44206-023-00059-x
Sara Guidi
{"title":"Innovation Commons for the Data Economy","authors":"Sara Guidi","doi":"10.1007/s44206-023-00059-x","DOIUrl":"https://doi.org/10.1007/s44206-023-00059-x","url":null,"abstract":"","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74698037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0