IEEE Embedded Systems Letters, vol. 16, no. 3, pp. 251–254, published 2023-12-04.
Authors: Saleh Mulhem; Felix Muuss; Christian Ewert; Rainer Buchty; Mladen Berekovic
DOI: 10.1109/LES.2023.3338543
https://ieeexplore.ieee.org/document/10341539/
ML-Based Trojan Classification: Repercussions of Toxic Boundary Nets
Machine learning (ML) algorithms have recently been adapted for testing integrated circuits and detecting potential design backdoors. Such testing mechanisms rely mainly on the available training dataset and the features extracted from the Trojan circuit. In this letter, we demonstrate that this method is attackable by exploiting a structural problem of classifiers for hardware Trojan (HT) detection in gate-level netlists, called the boundary net (BN) problem: an adversary modifies the labels of the BNs that connect the original logic to the Trojan circuit. We show that the proposed adversarial label-flipping attacks (ALFAs) are potentially highly toxic to the accuracy of supervised ML-based Trojan detection approaches. The experimental results indicate that an adversary needs to flip only 0.09% of all labels to cause an accuracy drop of over 9%, making this one of the most efficient ALFAs in the HT detection research domain.
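The mechanism the abstract describes can be illustrated with a toy sketch. The snippet below is not the letter's classifier or feature set: it uses a 1-nearest-neighbor classifier on synthetic 2-D points, where two well-separated clusters stand in for benign and Trojan nets and a small group between them stands in for the boundary nets wiring the original logic to the Trojan circuit. Flipping only the boundary-net labels in the training set (identified here by coordinates, a hypothetical stand-in for the structural BN criterion) poisons exactly the region where the classifier must discriminate, so accuracy degrades even though most labels are untouched. The flip fraction in this toy is far larger than the paper's 0.09% figure and is not comparable to it.

```python
import random

random.seed(7)

def jitter(cx, cy, spread):
    """Random point near (cx, cy) -- stands in for extracted net features."""
    return (cx + random.uniform(-spread, spread),
            cy + random.uniform(-spread, spread))

# Synthetic two-class data: label 0 = benign net, label 1 = Trojan net.
data  = [(jitter(0.0, 0.0, 1.0), 0) for _ in range(70)]   # original logic
data += [(jitter(4.0, 4.0, 1.0), 1) for _ in range(70)]   # Trojan circuit
# "Boundary nets": benign-labelled nets connecting the original logic to
# the Trojan circuit, so their features sit between the two clusters.
data += [(jitter(2.0, 2.0, 0.5), 0) for _ in range(20)]

random.shuffle(data)
train, test = data[:110], data[110:]

def predict_1nn(train_set, p):
    """Classify p by the label of its nearest training sample (1-NN)."""
    px, py = p
    _, label = min(train_set,
                   key=lambda s: (s[0][0] - px) ** 2 + (s[0][1] - py) ** 2)
    return label

def accuracy(train_set, test_set):
    hits = sum(predict_1nn(train_set, p) == l for p, l in test_set)
    return hits / len(test_set)

clean_acc = accuracy(train, test)

# ALFA-style poisoning: flip only the labels of the boundary nets.
# The coordinate box below is a toy proxy for the BN criterion; it covers
# the boundary group and neither cluster.
def is_boundary(p):
    return 1.4 < p[0] < 2.6 and 1.4 < p[1] < 2.6

poisoned_train = [(p, 1 - l) if is_boundary(p) else (p, l) for p, l in train]
poisoned_acc = accuracy(poisoned_train, test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Because 1-NN predicts from the single closest training sample, a boundary test point whose nearest neighbor was flipped is misclassified outright, while both dense clusters remain unaffected; this mirrors why the attack in the letter can be so efficient relative to the number of labels touched.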
Journal Introduction:
The IEEE Embedded Systems Letters (ESL) provides a forum for rapid dissemination of the latest technical advances in embedded systems and related areas of embedded software. The emphasis is on models, methods, and tools that ensure secure, correct, efficient, and robust design of embedded systems and their applications.