Fraud Detection Under Siege: Practical Poisoning Attacks and Defense Strategies
Tommaso Paladini, Francesco Monti, Mario Polino, Michele Carminati, S. Zanero
ACM Transactions on Privacy and Security, published 2023-08-08. DOI: 10.1145/3613244
Citations: 0
Abstract
Machine learning (ML) models are vulnerable to adversarial machine learning (AML) attacks. Unlike other domains, fraud detection poses inherent challenges that make conventional AML approaches hardly applicable. In this paper, we extend the application of AML techniques to the fraud detection task by studying poisoning attacks and their possible countermeasures. First, we present a novel approach for performing poisoning attacks that overcomes the domain-specific constraints of fraud detection. It generates fraudulent candidate transactions and tests them against a machine-learning-based Oracle that simulates the target fraud detection system, with the goal of evading it. Fraudulent candidates that the Oracle misclassifies are then injected into the target detection system's training set, poisoning its model and shifting its decision boundary. Second, we propose a novel approach that extends adversarial training to mitigate AML attacks: during the training phase of the detection system, we generate artificial frauds by modifying randomly selected legitimate transactions, and we include them in the training set with the correct (fraudulent) label. By doing so, we teach the model to recognize evasive transactions before an attack occurs. Using two real bank datasets, we evaluate the security of several state-of-the-art fraud detection systems by deploying our poisoning attack under different degrees of attacker knowledge and different attack strategies. The experimental results show that our attack works even when the attacker has minimal knowledge of the target system. We then demonstrate that the proposed countermeasure can mitigate adversarial attacks, reducing the amount of money stolen by up to 100%.
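The abstract describes two procedures algorithmically. The first is the Oracle-guided poisoning loop: generate candidate frauds, query a surrogate of the target detector, and keep the candidates it misclassifies. Below is a minimal, hypothetical sketch of that loop; the scikit-learn Oracle surrogate, the Gaussian-perturbation candidate generator, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Oracle-guided poisoning loop (assumed details:
# a random-forest Oracle surrogate, Gaussian perturbation of a fraud template,
# and labels 0 = legitimate, 1 = fraud).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def train_oracle(X, y):
    """Surrogate of the target detector, trained on whatever data the attacker holds."""
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def generate_candidates(fraud_template, n, noise_scale=0.1):
    """Perturb a known fraudulent transaction to produce n candidate frauds."""
    noise = rng.normal(0.0, noise_scale, size=(n, fraud_template.shape[0]))
    return fraud_template + noise

def poison(oracle, fraud_template, n_candidates=1000):
    """Keep only candidates the Oracle labels legitimate (0): these evasive frauds
    would enter the victim's training set mislabeled, shifting its boundary."""
    candidates = generate_candidates(fraud_template, n_candidates)
    return candidates[oracle.predict(candidates) == 0]
```

The countermeasure inverts this idea: perturb randomly selected legitimate transactions into artificial frauds and add them to the training set with the fraudulent label, so the detector sees near-legitimate frauds before an attacker produces them. Again, this is a hypothetical sketch under the same label convention.

```python
# Hypothetical sketch of the adversarial-training countermeasure.
import numpy as np

rng = np.random.default_rng(1)

def augment_with_artificial_frauds(X, y, n_artificial=500, noise_scale=0.1):
    """Perturb randomly chosen legitimate rows and append them labeled as fraud,
    so the model learns to flag evasive, near-legitimate transactions."""
    legit_idx = np.flatnonzero(y == 0)
    chosen = rng.choice(legit_idx, size=n_artificial, replace=True)
    artificial = X[chosen] + rng.normal(0.0, noise_scale, size=(n_artificial, X.shape[1]))
    X_aug = np.vstack([X, artificial])
    y_aug = np.concatenate([y, np.ones(n_artificial, dtype=y.dtype)])
    return X_aug, y_aug
```

In both sketches, making the perturbation respect domain constraints (valid amounts, timestamps, card limits) is the hard part the paper addresses; the Gaussian noise here is only a placeholder.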
About the Journal
ACM Transactions on Privacy and Security (TOPS) (formerly known as TISSEC) publishes high-quality research results in the fields of information and system security and privacy. Studies addressing all aspects of these fields are welcomed, ranging from technologies, to systems and applications, to the crafting of policies.