Protecting ownership rights of ML models using watermarking in the light of adversarial attacks
Katarzyna Kapusta, Lucas Mattioli, Boussad Addad, Mohammed Lansari
AI and Ethics, vol. 4, no. 1, pp. 95–103. Pub Date: 2024-02-23. DOI: 10.1007/s43681-023-00412-3
Abstract: In this paper, we present and analyze two novel and seemingly distant research trends in machine learning: ML watermarking and adversarial patches. First, we show how ML watermarking uses specially crafted inputs to provide proof of model ownership. Second, we demonstrate how an attacker can craft adversarial samples that trigger abnormal behavior in a model and thereby mount an ambiguity attack on ML watermarking. Finally, we describe three countermeasures that could be applied to prevent such ambiguity attacks. We illustrate our work using the example of a binary classification model for welding inspection.
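The trigger-set idea behind such watermarking can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the trigger inputs, their labels, and the 0.9 match threshold are all assumptions.

```python
def verify_watermark(model, trigger_set, threshold=0.9):
    """Ownership check: does `model` reproduce the secret labels the
    owner assigned to the specially crafted trigger inputs?"""
    matches = sum(1 for x, y in trigger_set if model(x) == y)
    return matches / len(trigger_set) >= threshold

# Hypothetical trigger set: crafted inputs with owner-chosen labels.
trigger_set = [((i, i + 1), i % 2) for i in range(20)]
lookup = dict(trigger_set)

# A model that memorized the trigger labels (the watermarked one)
# passes the check; a constant model agrees on only half the labels.
print(verify_watermark(lambda x: lookup[x], trigger_set))  # True
print(verify_watermark(lambda x: 0, trigger_set))          # False
```

In this framing, an ambiguity attack would consist of an adversary crafting their own adversarial inputs that the model also mislabels consistently and presenting them as a competing trigger set, which is what the paper's countermeasures aim to prevent.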
On inscription and bias: data, actor network theory, and the social problems of text-to-image AI models
Jorge Luis Morton
AI and Ethics. Pub Date: 2024-02-21. DOI: 10.1007/s43681-024-00431-8
Prospectives and drawbacks of ChatGPT in healthcare and clinical medicine
Khadija Alam, Akhil Kumar, F. Samiullah
AI and Ethics. Pub Date: 2024-02-20. DOI: 10.1007/s43681-024-00434-5
How can we design autonomous weapon systems?
Iskender Volkan Sancar
AI and Ethics. Pub Date: 2024-02-20. DOI: 10.1007/s43681-024-00428-3
Unveiling the ethical positions of conversational AIs: a study on OpenAI’s ChatGPT and Google’s Bard
Quintin P. McGrath
AI and Ethics. Pub Date: 2024-02-19. DOI: 10.1007/s43681-024-00433-6
The rise of artificial intelligence in libraries: the ethical and equitable methodologies, and prospects for empowering library users
James Oluwaseyi Hodonu-Wusu
AI and Ethics. Pub Date: 2024-02-19. DOI: 10.1007/s43681-024-00432-7
Formalizing ethical principles within AI systems: experts’ opinions on why (not) and how to do it
Franziska Poszler, Edy Portmann, Christoph Lütge
AI and Ethics. Pub Date: 2024-02-19. DOI: 10.1007/s43681-024-00425-6
E-coaching systems and social justice: ethical concerns about inequality, coercion, and stigmatization
B. A. Kamphorst, J. H. Anderson
AI and Ethics. Pub Date: 2024-02-19. DOI: 10.1007/s43681-024-00424-7
Evaluating trustworthiness of decision tree learning algorithms based on equivalence checking
Omer Nguena Timo, Tianqi Xiao, Florent Avellaneda, Yasir Malik, Stefan Bruda
AI and Ethics, vol. 4, no. 1, pp. 37–46. Pub Date: 2024-02-05. DOI: 10.1007/s43681-023-00415-0
Abstract: Learning algorithms and their implementations are used as black boxes to produce decision trees, e.g., for critical classification tasks. Low confidence in (the learning ability of) the algorithms increases mistrust of the produced decision trees, leading to costly test and validation activities and to wasted learning time when the trees are likely to be faulty because of an inability to learn. Methods for evaluating the trustworthiness of the algorithms are needed, especially when testing the learned decision trees is itself challenging. We propose a novel oracle-centered approach to this evaluation: we generate deterministic, noise-free datasets from reference trees that play the role of oracles, produce learned trees with existing (implementations of) learning algorithms, and determine the degree of equivalence (DOE) of the learned trees by comparing them with the oracles. We evaluate six implementations of five decision tree learning algorithms using the proposed approach.
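The oracle-centered pipeline described in the abstract can be sketched as follows. The tiny oracle tree, the learned tree, and the two-binary-feature domain below are invented for illustration and are not taken from the paper; the DOE here is computed by exhaustive comparison over the input domain, one simple way to realize equivalence checking.

```python
from itertools import product

def make_dataset(oracle, domain):
    """Deterministic, noise-free dataset labeled by the oracle tree."""
    return [(x, oracle(x)) for x in product(*domain)]

def doe(oracle, learned, domain):
    """Degree of equivalence: fraction of the input domain on which
    the learned tree agrees with the oracle tree."""
    inputs = list(product(*domain))
    agree = sum(1 for x in inputs if oracle(x) == learned(x))
    return agree / len(inputs)

# Hypothetical oracle over two binary features: class 1 iff x == (1, 0).
oracle = lambda x: 1 if x == (1, 0) else 0
# A learned tree that ignores the second feature.
learned = lambda x: 1 if x[0] == 1 else 0

domain = [(0, 1), (0, 1)]
print(len(make_dataset(oracle, domain)))  # 4
print(doe(oracle, learned, domain))       # 0.75
```

A DOE of 1.0 means the learner recovered the oracle exactly from its noise-free dataset; lower values quantify the algorithm's (or implementation's) inability to learn, without requiring separate test data for the learned tree.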