{"title":"IdDecoder:一种人脸嵌入反转工具及其在人脸识别系统中的隐私和安全含义","authors":"Minh-Ha Le, Niklas Carlsson","doi":"10.1145/3577923.3583645","DOIUrl":null,"url":null,"abstract":"Most state-of-the-art facial recognition systems (FRS:s) use face embeddings. In this paper, we present the IdDecoder framework, capable of effectively synthesizing realistic-neutralized face images from face embeddings, and two effective attacks on state-of-the-art facial recognition models using embeddings. The first attack is a black-box version of a model inversion attack that allows the attacker to reconstruct a realistic face image that is both visually and numerically (as determined by the FRS:s) recognized as the same identity as the original face used to create a given face embedding. This attack raises significant privacy concerns regarding the membership of the gallery dataset of these systems and highlights the importance of both the people designing and deploying FRS:s paying greater attention to the protection of the face embeddings than currently done. The second attack is a novel attack that performs the model inversion, so to instead create the face of an alternative identity that is visually different from the original identity but has close identity distance (ensuring that it is recognized as being of the same identity). This attack increases the attacked system's false acceptance rate and raises significant security concerns. Finally, we use IdDecoder to visualize, evaluate, and provide insights into differences between three state-of-the-art facial embedding models.","PeriodicalId":387479,"journal":{"name":"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy","volume":"300 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"IdDecoder: A Face Embedding Inversion Tool and its Privacy and Security Implications on Facial Recognition Systems\",\"authors\":\"Minh-Ha Le, Niklas Carlsson\",\"doi\":\"10.1145/3577923.3583645\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most state-of-the-art facial recognition systems (FRS:s) use face embeddings. In this paper, we present the IdDecoder framework, capable of effectively synthesizing realistic-neutralized face images from face embeddings, and two effective attacks on state-of-the-art facial recognition models using embeddings. The first attack is a black-box version of a model inversion attack that allows the attacker to reconstruct a realistic face image that is both visually and numerically (as determined by the FRS:s) recognized as the same identity as the original face used to create a given face embedding. This attack raises significant privacy concerns regarding the membership of the gallery dataset of these systems and highlights the importance of both the people designing and deploying FRS:s paying greater attention to the protection of the face embeddings than currently done. The second attack is a novel attack that performs the model inversion, so to instead create the face of an alternative identity that is visually different from the original identity but has close identity distance (ensuring that it is recognized as being of the same identity). This attack increases the attacked system's false acceptance rate and raises significant security concerns. 
Finally, we use IdDecoder to visualize, evaluate, and provide insights into differences between three state-of-the-art facial embedding models.\",\"PeriodicalId\":387479,\"journal\":{\"name\":\"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy\",\"volume\":\"300 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3577923.3583645\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3577923.3583645","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Most state-of-the-art facial recognition systems (FRSs) use face embeddings. In this paper, we present the IdDecoder framework, which can effectively synthesize realistic, neutralized face images from face embeddings, together with two effective attacks on state-of-the-art facial recognition models that use such embeddings. The first attack is a black-box model inversion attack that lets the attacker reconstruct a realistic face image that is recognized, both visually and numerically (as determined by the FRSs), as the same identity as the original face used to create a given face embedding. This attack raises significant privacy concerns regarding membership in these systems' gallery datasets and highlights that both the designers and the operators of FRSs should pay greater attention to protecting face embeddings than is currently the case. The second attack is a novel attack that performs the model inversion so as to instead create the face of an alternative identity: one that is visually different from the original identity but lies within a close identity distance, ensuring that it is recognized as the same identity. This attack increases the attacked system's false acceptance rate and raises significant security concerns. Finally, we use IdDecoder to visualize, evaluate, and provide insights into the differences between three state-of-the-art facial embedding models.
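The identity-matching criterion that both attacks target can be pictured as a simple threshold test on embedding distance. The following is a minimal illustrative sketch, not the paper's implementation: it assumes a black-box `embed` function exposed by some FRS that returns an embedding vector, and a hypothetical cosine-distance acceptance threshold chosen by the deployed system.

```python
# Minimal sketch of the identity-distance test that both attacks exploit.
# Assumptions (not from the paper): `embed` is a black-box FRS call that maps
# a face image to an embedding vector, and THRESHOLD is a hypothetical
# cosine-distance acceptance threshold used by the deployed system.
import numpy as np

THRESHOLD = 0.4  # hypothetical acceptance threshold


def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))


def accepted_as_same_identity(target_embedding: np.ndarray,
                              reconstructed_face: np.ndarray,
                              embed) -> bool:
    """True if the FRS would treat the reconstructed face as the target identity.

    The first attack seeks a face that is also visually the same person;
    the second seeks a visually different face that still passes this test,
    which is what inflates the system's false acceptance rate.
    """
    return cosine_distance(embed(reconstructed_face), target_embedding) < THRESHOLD
```

Any embedding leaked from a system's gallery dataset could serve as the target in such a test, which is why the abstract stresses treating face embeddings themselves as sensitive data.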