Latest publications in AI and ethics

ECS: an interactive tool for data quality assurance
Pub Date : 2024-01-08 DOI: 10.1007/s43681-023-00393-3
Christian Sieberichs, Simon Geerkens, Alexander Braun, Thomas Waschulzik

With the increasing capabilities of machine learning systems and their potential use in safety-critical systems, ensuring high-quality data is becoming increasingly important. In this paper, we present a novel approach for the assurance of data quality. For this purpose, the mathematical basics are first discussed and the approach is presented using multiple examples. The result is the detection of data points whose properties make them potentially harmful for use in safety-critical systems.
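The abstract leaves the detection mechanism unspecified. As a purely illustrative sketch of the general idea (flagging training points whose label conflicts with their local neighborhood, a common proxy for potentially harmful samples), something like the following could be used; the nearest-neighbor approach and the `k` and `threshold` parameters are assumptions for illustration, not the ECS method itself.

```python
# Hypothetical sketch: flag data points whose label disagrees with most of
# their k nearest neighbors. This is NOT the ECS algorithm from the paper,
# only an illustration of one way to surface potentially harmful samples.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_suspicious_points(X, y, k=10, threshold=0.7):
    """Return indices of points whose neighborhood mostly carries another label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)           # idx[:, 0] is each point itself
    neighbor_labels = y[idx[:, 1:]]     # labels of the k true neighbors
    disagreement = (neighbor_labels != y[:, None]).mean(axis=1)
    return np.where(disagreement >= threshold)[0]
```

Points flagged this way would be candidates for manual review before entering a safety-critical training pipeline.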

Citations: 0
Should we develop AGI? Artificial suffering and the moral development of humans
Pub Date : 2024-01-08 DOI: 10.1007/s43681-023-00411-4
Oliver Li

Recent research and real-world tests suggest that machines may in the future develop some form of possibly rudimentary inner life. Philosophers have warned that the possibility of artificial suffering, or of machines as moral patients, should not be ruled out. In this paper, I reflect on the consequences that striving for AGI has for moral development. In the introduction, I present examples that point toward the future possibility of artificial suffering and highlight the increasing similarity between, for example, machine–human and human–human interaction. Next, I present and discuss responses to the possibility of artificial suffering that support a cautious attitude for the sake of the machines. From a virtue-ethical perspective and with regard to the development of human virtues, I then argue that humans should not pursue the path of developing and creating AGI, not merely for the sake of possible suffering in machines, but also because machine–human interaction is becoming more akin to human–human interaction, and for the sake of humans' own moral development. Thus, for several reasons, humanity as a whole should be extremely cautious about pursuing the path of developing AGI (Artificial General Intelligence).

{"title":"Should we develop AGI? Artificial suffering and the moral development of humans","authors":"Oliver Li","doi":"10.1007/s43681-023-00411-4","DOIUrl":"10.1007/s43681-023-00411-4","url":null,"abstract":"<div><p>Recent research papers and tests in real life point in the direction that machines in the future may develop some form of possibly rudimentary inner life. Philosophers have warned and emphasized that the possibility of artificial suffering or the possibility of machines as moral patients should not be ruled out. In this paper, I reflect on the consequences for moral development of striving for AGI. In the introduction, I present examples which point into the direction of the future possibility of artificial suffering and highlight the increasing similarity between, for example, machine–human and human–human interaction. Next, I present and discuss responses to the possibility of artificial suffering supporting a cautious attitude for the sake of the machines. From a virtue ethical perspective and the development of human virtues, I subsequently argue that humans should not pursue the path of developing and creating AGI, not merely for the sake of possible suffering in machines, but also due to machine–human interaction becoming more alike to human–human interaction and for the sake of the human’s own moral development. Thus, for several reasons, humanity, as a whole, should be extremely cautious about pursuing the path of developing AGI—Artificial General Intelligence.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"641 - 651"},"PeriodicalIF":0.0,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00411-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139444857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward a safe MLOps process for the continuous development and safety assurance of ML-based systems in the railway domain
Pub Date : 2024-01-03 DOI: 10.1007/s43681-023-00392-4
Marc Zeller, Thomas Waschulzik, Reiner Schmid, Claus Bahlmann

Traditional automation technologies alone are not sufficient to enable driverless operation of trains (called Grade of Automation (GoA) 4) on non-restricted infrastructure. The required perception tasks are nowadays realized using Machine Learning (ML) and thus need to be developed and deployed reliably and efficiently. One important aspect of achieving this is to use an MLOps process, which improves reproducibility, traceability, and collaboration, and enables continuous adaptation of driverless operation to changing conditions. MLOps mixes ML application development and operation (Ops), enabling high-frequency software releases and continuous innovation based on feedback from operations. In this paper, we outline a safe MLOps process for the continuous development and safety assurance of ML-based systems in the railway domain. It integrates system engineering, safety assurance, and the ML life-cycle in a comprehensive workflow. We present the individual stages of the process and their interactions, and describe relevant challenges in automating the different stages of the safe MLOps process.
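The paper presents the process stages conceptually. As a rough sketch, under the assumption that such a workflow can be expressed as a gated loop, the stub pipeline below wires training, evaluation, a safety-assurance gate, and release together; all stage names and the threshold are invented for illustration and are not the workflow defined in the paper.

```python
# Hypothetical sketch of one safe-MLOps iteration: a safety-assurance gate
# must pass before a model may be released, and operational feedback would
# drive the next iteration. All stages are stubs; names and the acceptance
# threshold are illustrative assumptions, not the paper's process.
from dataclasses import dataclass

@dataclass
class SafetyEvidence:
    accuracy: float
    required_accuracy: float = 0.99   # assumed acceptance threshold

def train(dataset):
    return {"mean": sum(dataset) / len(dataset)}   # stand-in for ML training

def evaluate(model, dataset):
    return SafetyEvidence(accuracy=0.995)          # stand-in for validation

def safety_gate(evidence):
    return evidence.accuracy >= evidence.required_accuracy

def run_iteration(dataset):
    model = train(dataset)
    evidence = evaluate(model, dataset)
    if not safety_gate(evidence):                  # block unsafe releases
        raise RuntimeError("Safety case incomplete: deployment blocked")
    return model   # a real pipeline would deploy, then monitor for feedback

run_iteration([0.1, 0.2, 0.3])
```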

Citations: 0
To be forgotten or to be fair: unveiling fairness implications of machine unlearning methods
Pub Date : 2024-01-03 DOI: 10.1007/s43681-023-00398-y
Dawen Zhang, Shidong Pan, Thong Hoang, Zhenchang Xing, Mark Staples, Xiwei Xu, Lina Yao, Qinghua Lu, Liming Zhu

The right to be forgotten (RTBF) allows individuals to request the removal of personal information from online platforms. Researchers have proposed machine unlearning algorithms as a solution for erasing specific data from trained models to support the RTBF. However, these methods modify how data are fed into the model and how training is done, which may subsequently compromise AI ethics from the fairness perspective. To help AI practitioners make responsible decisions when adopting these unlearning methods, we present the first study on machine unlearning methods to reveal their fairness implications. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML), along with a retraining method (ORTR) as a baseline, using three fairness datasets under three different deletion strategies. Results show that non-uniform data deletion with a variant of SISA leads to better fairness compared to ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods. This research can help practitioners make informed decisions when implementing RTBF solutions, taking into account the potential trade-offs on fairness.
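SISA (sharded, isolated, sliced, aggregated training) retrains only the shard that contained a deleted point rather than the full model. A minimal sketch of that core idea, with the shard count and the scikit-learn classifier as illustrative assumptions rather than the paper's experimental setup, might look like this:

```python
# Minimal sketch of SISA-style unlearning: split the data into shards,
# train one constituent model per shard, retrain only the affected shard
# on a deletion request, and aggregate predictions by majority vote.
# Shard count and classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

class SisaEnsemble:
    def __init__(self, X, y, n_shards=5):
        self.X, self.y = X, y
        self.shards = np.array_split(np.arange(len(X)), n_shards)
        self.models = [self._fit(idx) for idx in self.shards]

    def _fit(self, idx):
        return LogisticRegression(max_iter=1000).fit(self.X[idx], self.y[idx])

    def unlearn(self, point_id):
        # Only the shard containing the deleted point is retrained.
        for s, idx in enumerate(self.shards):
            if point_id in idx:
                self.shards[s] = idx[idx != point_id]
                self.models[s] = self._fit(self.shards[s])
                return

    def predict(self, X_new):
        votes = np.stack([m.predict(X_new) for m in self.models]).astype(int)
        return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```

One intuition for why the deletion strategy matters, as the paper's results indicate: non-uniform deletions can shrink some shards more than others, changing how groups are represented across the constituent models that vote on each prediction.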

Citations: 0
The more they think, the less they want: studying people’s attitudes about autonomous vehicles could also contribute to shaping them
Pub Date : 2024-01-03 DOI: 10.1007/s43681-023-00385-3
Hubert Etienne, Florian Cova

In the past years, many studies have surveyed people’s intuitions about moral dilemmas involving autonomous vehicles (AVs). One widespread rationale for this line of research has been that understanding people’s attitudes about such dilemmas might help increase the pace of the adoption of autonomous vehicles, a goal that certain researchers consider a pressing moral imperative. However, surveying people is not a neutral process independent of respondents’ opinions and responses: in fact, respondents’ opinions can be influenced merely by taking part in a survey. In this paper, we present the results of three studies suggesting that participating in such surveys affects participants’ willingness to acquire AVs; specifically, reflecting on AV dilemmas negatively impacted this willingness. Based on these results, we argue that prompting the general population to focus on AV dilemmas might highlight aspects of AVs that discourage their adoption. This creates a tension between the main rationale for empirical research on AV dilemmas and the impact of this research on the public at large.

{"title":"The more they think, the less they want: studying people’s attitudes about autonomous vehicles could also contribute to shaping them","authors":"Hubert Etienne,&nbsp;Florian Cova","doi":"10.1007/s43681-023-00385-3","DOIUrl":"10.1007/s43681-023-00385-3","url":null,"abstract":"<div><p>In the past years, many studies have surveyed people’s intuitions about moral dilemmas involving autonomous vehicles (AVs). One widespread rationale for this line of research has been that understanding people’s attitudes about such dilemmas might help increase the pace of the adoption of autonomous vehicles—a goal that certain researchers consider a pressing moral imperative. However, surveying people is not a neutral process that is independent of respondents’ opinions and responses: in fact, respondents’ opinions can be influenced merely by taking part in a survey. In this paper, we present the results of three studies that suggest that participating in such surveys impacts participants’ willingness to acquire AVs. In our studies, we find that reflecting on AV dilemmas negatively impacted participants' willingness. Based on these results, we argue that prompting the general population to focus on AV dilemmas might highlight aspects of AVs that discourage their adoption. This results in a tension between the main rationale for empirical research on AV dilemmas and the impact of this research on the public at large.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"633 - 640"},"PeriodicalIF":0.0,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139387444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ethical considerations and policy interventions concerning the impact of generative AI tools in the economy and in society
Pub Date : 2024-01-03 DOI: 10.1007/s43681-023-00405-2
Mirko Farina, Xiao Yu, A. Lavazza
{"title":"Ethical considerations and policy interventions concerning the impact of generative AI tools in the economy and in society","authors":"Mirko Farina, Xiao Yu, A. Lavazza","doi":"10.1007/s43681-023-00405-2","DOIUrl":"https://doi.org/10.1007/s43681-023-00405-2","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"22 9","pages":"1-9"},"PeriodicalIF":0.0,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139388884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AITA: AI trustworthiness assessment
Pub Date : 2024-01-03 DOI: 10.1007/s43681-023-00397-z
Bertrand Braunschweig, Stefan Buijsman, Faïcel Chamroukhi, Fredrik Heintz, Foutse Khomh, Juliette Mattioli, Maximilian Poretschkin
{"title":"AITA: AI trustworthiness assessment","authors":"Bertrand Braunschweig,&nbsp;Stefan Buijsman,&nbsp;Faïcel Chamroukhi,&nbsp;Fredrik Heintz,&nbsp;Foutse Khomh,&nbsp;Juliette Mattioli,&nbsp;Maximilian Poretschkin","doi":"10.1007/s43681-023-00397-z","DOIUrl":"10.1007/s43681-023-00397-z","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"1 - 3"},"PeriodicalIF":0.0,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139389563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AI risk assessment using ethical dimensions
Pub Date : 2024-01-03 DOI: 10.1007/s43681-023-00401-6
Alessio Tartaro, Enrico Panai, Mariangela Zoe Cocchiaro

In the design, development, and use of artificial intelligence systems, it is important to ensure that they are safe and trustworthy. This requires a systematic approach to identifying, analyzing, evaluating, mitigating, and monitoring risks throughout the entire lifecycle of an AI system. While standardized risk management processes are being developed, organizations may struggle to implement AI risk management effectively and efficiently due to various implementation gaps. This paper discusses the main gaps in AI risk management and describes a tool that can be used to support organizations in AI risk assessment. The tool consists of a structured process for identifying, analyzing, and evaluating risks in the context of specific AI applications and environments. It accounts for the multidimensionality and context-sensitivity of AI risks, provides a visualization and quantification of those risks, and can inform strategies to mitigate and minimize them.
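The abstract characterizes the tool only at a high level. As an illustration of what a multidimensional, context-sensitive risk quantification could look like, the sketch below scores an application along a few ethical dimensions with context weights; the dimensions, weights, and 1-5 scale are invented for illustration and are not the authors' instrument.

```python
# Hypothetical sketch of a multidimensional AI risk score: each ethical
# dimension gets a severity and likelihood rating (1-5), weighted by a
# context-dependent factor. Dimensions, weights, and scale are invented
# for illustration; they are not the tool described in the paper.
RISK_DIMENSIONS = {          # dimension: context weight (assumed)
    "fairness": 1.0,
    "privacy": 0.8,
    "safety": 1.2,
    "transparency": 0.6,
}

def risk_score(ratings):
    """ratings: {dimension: (severity 1-5, likelihood 1-5)}"""
    total, per_dim = 0.0, {}
    for dim, weight in RISK_DIMENSIONS.items():
        severity, likelihood = ratings[dim]
        score = weight * severity * likelihood   # classic risk-matrix product
        per_dim[dim] = score
        total += score
    return total, per_dim

# Example: a hypothetical hiring-support system with high fairness risk
total, breakdown = risk_score({
    "fairness": (5, 4), "privacy": (3, 3),
    "safety": (1, 2), "transparency": (4, 3),
})
print(f"total={total:.1f}", breakdown)
```

A per-dimension breakdown like `per_dim` is what would feed a visualization and point mitigation efforts at the highest-scoring dimensions.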

Citations: 0
Conformity assessment under the EU AI act general approach
Pub Date : 2024-01-03 DOI: 10.1007/s43681-023-00402-5
Eva Thelisson, Himanshu Verma

The European Commission proposed harmonised rules on artificial intelligence (AI) on the 21st of April 2021 (namely the EU AI Act). Following a consultative process with the European Council and many amendments, a General Approach of the EU AI Act was published on the 25th of November 2022. The EU Parliament approved the initial draft in May 2023. Trilogue meetings took place in June, July, September and October 2023, with the aim for the European Parliament, the Council of the European Union and the European Commission to adopt a final version in early 2024. This is the first attempt to build a legally binding instrument on Artificial Intelligence in the European Union (EU). Like the General Data Protection Regulation (GDPR), the EU AI Act has extraterritorial effect. It therefore has the potential to become a global gold standard for AI regulation. It may also contribute to a global consensus on AI trustworthiness, because AI providers must conduct conformity assessments for high-risk AI systems prior to entry into the EU market. As the AI Act contains limited guidance on how to conduct conformity assessments and ex-post monitoring in practice, there is a need for consensus building on this topic. This paper studies the governance structure proposed by the EU AI Act, as approved by the European Council in November 2022, and proposes tools for conducting conformity assessments of AI systems.

Citations: 0
Assessing deep learning: a work program for the humanities in the age of artificial intelligence
Pub Date : 2023-12-21 DOI: 10.1007/s43681-023-00408-z
Jan Segessenmann, Thilo Stadelmann, Andrew Davison, Oliver Dürr

Following the success of deep learning (DL) in research, we are now witnessing the fast and widespread adoption of artificial intelligence (AI) in daily life, influencing the way we act, think, and organize our lives. However, much remains a mystery when it comes to how these systems achieve such high performance and why they produce the outputs they do. This presents us with an unusual combination: technical mastery on the one hand, and a striking degree of mystery on the other. This conjunction is not only fascinating, but also poses considerable risks, which urgently require our attention. Awareness of the need to analyze ethical implications, such as fairness, equality, and sustainability, is growing. However, other dimensions of inquiry receive less attention, including the subtle but pervasive ways in which our dealings with AI shape our way of living and thinking, transforming our culture and human self-understanding. If we want to deploy AI positively in the long term, a broader and more holistic assessment of the technology is vital, involving not only scientific and technical perspectives but also those from the humanities. To this end, we present the outlines of a work program for the humanities that aims to contribute to assessing and guiding the potential, opportunities, and risks of further developing and deploying DL systems. This paper contains a thematic introduction (Sect. 1), an introduction to the workings of DL for non-technical readers (Sect. 2), and a main part containing the outlines of a work program for the humanities (Sect. 3). Readers familiar with DL may want to skip Sect. 2 and read Sect. 3 directly after Sect. 1.

{"title":"Assessing deep learning: a work program for the humanities in the age of artificial intelligence","authors":"Jan Segessenmann,&nbsp;Thilo Stadelmann,&nbsp;Andrew Davison,&nbsp;Oliver Dürr","doi":"10.1007/s43681-023-00408-z","DOIUrl":"10.1007/s43681-023-00408-z","url":null,"abstract":"<div><p>Following the success of deep learning (DL) in research, we are now witnessing the fast and widespread adoption of artificial intelligence (AI) in daily life, influencing the way we act, think, and organize our lives. However, much still remains a mystery when it comes to how these systems achieve such high performance and why they reach the outputs they do. This presents us with an unusual combination: of technical mastery on the one hand, and a striking degree of mystery on the other. This conjunction is not only fascinating, but it also poses considerable risks, which urgently require our attention. Awareness of the need to analyze ethical implications, such as fairness, equality, and sustainability, is growing. However, other dimensions of inquiry receive less attention, including the subtle but pervasive ways in which our dealings with AI shape our way of living and thinking, transforming our culture and human self-understanding. If we want to deploy AI positively in the long term, a broader and more holistic assessment of the technology is vital, involving not only scientific and technical perspectives, but also those from the humanities. To this end, we present outlines of a <i>work program</i> for the humanities that aim to contribute to assessing and guiding the potential, opportunities, and risks of further developing and deploying DL systems. This paper contains a thematic introduction (Sect. 1), an introduction to the workings of DL for non-technical readers (Sect. 2), and a main part, containing the outlines of a work program for the humanities (Sect. 3). Readers familiar with DL might want to ignore 2 and instead directly read 3 after 1.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"1 - 32"},"PeriodicalIF":0.0,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00408-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0