Differentially Private Generative Adversarial Networks with Model Inversion
Pub Date : 2021-12-01  DOI: 10.1109/WIFS53200.2021.9648378
Dongjie Chen, S. Cheung, C. Chuah, S. Ozonoff
To protect sensitive data when training a Generative Adversarial Network (GAN), the standard approach is the differentially private stochastic gradient descent (DP-SGD) method, in which controlled noise is added to the gradients. In the presence of this noise, the quality of the output synthetic samples can be adversely affected, and training may not even converge. We propose the Differentially Private Model Inversion (DPMI) method, in which the private data is first mapped to the latent space via a public generator, followed by a lower-dimensional DP-GAN with better convergence properties. Experimental results on the standard datasets CIFAR10 and SVHN, as well as on a facial landmark dataset for autism screening, show that our approach outperforms the standard DP-GAN method on Inception Score, Fréchet Inception Distance, and classification accuracy under the same privacy guarantee.
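For context, below is a minimal numpy sketch of the standard DP-SGD update the abstract refers to (clip each per-example gradient, average, add calibrated Gaussian noise); the function name, clipping norm, and noise multiplier are illustrative choices, not values from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.01, rng=None):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    `clip_norm`, average, and add Gaussian noise scaled to the clip norm."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_sample_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Toy usage: 8 per-example gradients for a 3-parameter model.
rng = np.random.default_rng(0)
grads = rng.normal(0, 1, (8, 3))
theta = dp_sgd_step(np.zeros(3), grads, rng=rng)
```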
Differential Anomaly Detection for Facial Images
Pub Date : 2021-10-07  DOI: 10.1109/WIFS53200.2021.9648392
Mathias Ibsen, Lázaro J. González Soler, C. Rathgeb, P. Drozdowski, M. Gomez-Barrero, C. Busch
Due to their convenience and high accuracy, face recognition systems are widely employed in governmental and personal security applications to automatically recognise individuals. Despite recent advances, face recognition systems have been shown to be particularly vulnerable to identity attacks (i.e., digital manipulations and attack presentations). Identity attacks pose a serious security threat as they can be used to gain unauthorised access and spread misinformation. In this context, most algorithms for detecting identity attacks generalise poorly to attack types that are unknown at training time. To tackle this problem, we introduce a differential anomaly detection framework in which deep face embeddings are first extracted from pairs of images (i.e., reference and probe) and then combined for identity attack detection. The experimental evaluation conducted over several databases shows a high generalisation capability of the proposed method for detecting unknown attacks in both the digital and physical domains.
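As a rough illustration of the differential idea, the sketch below combines a reference and a probe embedding into a differential feature and scores it with an off-the-shelf anomaly detector trained on bona fide pairs only; the random vectors stand in for real face embeddings, and IsolationForest is our stand-in for the paper's detector, not its actual model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def differential_features(ref_emb, probe_emb):
    # Combine the reference-probe pair into one differential feature vector.
    return np.concatenate([ref_emb - probe_emb, np.abs(ref_emb - probe_emb)])

# Toy bona fide pairs: probe = reference + small intra-subject variation.
rng = np.random.default_rng(0)
refs = rng.normal(0, 1, (500, 128))  # stand-in for deep face embeddings
pairs = np.array([differential_features(r, r + rng.normal(0, 0.05, 128))
                  for r in refs])

# Train on bona fide pairs only; unknown identity attacks surface as anomalies.
detector = IsolationForest(random_state=0).fit(pairs)
attack = differential_features(refs[0], rng.normal(0, 1, 128))
print(detector.score_samples(attack[None, :]))  # lower = more anomalous
```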
Machine learning attack on copy detection patterns: are $1\times 1$ patterns cloneable?
Pub Date : 2021-10-05  DOI: 10.1109/WIFS53200.2021.9648387
Roman Chaban, O. Taran, Joakim Tutt, T. Holotyak, Slavi Bonev, S. Voloshynovskiy
The modern economy critically requires reliable yet inexpensive protection against product counterfeiting for the mass market. Copy detection patterns (CDP) are considered such a solution in several applications. It is assumed that, when printed at the maximum achievable resolution of an industrial printer with the smallest symbol size of $1\times 1$ pixel, a CDP cannot be copied with sufficient accuracy and is thus unclonable. In this paper, we challenge this hypothesis and consider a machine-learning-based copy attack against CDP. Experimental results based on samples produced on two industrial printers demonstrate that simple detection metrics used in CDP authentication cannot reliably distinguish original CDP from their fakes under certain printing conditions. The paper therefore calls for a careful reconsideration of CDP clonability and a search for new authentication techniques and CDP optimizations that withstand this attack.
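To make the authentication setting concrete, here is a small sketch of one simple detection metric of the kind the abstract says can be fooled: Pearson correlation between the digital template and the scanned printed code. The threshold, noise model, and function names are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def pearson_score(template, scan):
    """Correlation between the binary digital template and the grayscale
    scan of a printed CDP; a typical simple detection metric."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    s = (scan - scan.mean()) / (scan.std() + 1e-12)
    return float(np.mean(t * s))

def authenticate(template, scan, threshold=0.6):
    # Accept as original only if the correlation clears a calibrated
    # threshold; the attack succeeds when an ML-regenerated fake clears it.
    return pearson_score(template, scan) >= threshold

rng = np.random.default_rng(0)
template = rng.integers(0, 2, (64, 64)).astype(float)
scan = template + rng.normal(0, 0.3, template.shape)  # print-scan noise
print(pearson_score(template, scan), authenticate(template, scan))
```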
Scalable Fact-checking with Human-in-the-Loop
Pub Date : 2021-09-22  DOI: 10.1109/WIFS53200.2021.9648388
Jing Yang, D. Vega-Oliveros, Taís Seibt, Anderson Rocha
Researchers have been investigating automated solutions for fact-checking on various fronts. However, current approaches often overlook the fact that the volume of information released every day keeps escalating and that much of it overlaps. To accelerate fact-checking, we bridge this gap by proposing a new pipeline that groups similar messages and summarizes them into aggregated claims. Specifically, we first clean a set of social media posts (e.g., tweets) and build a graph of all posts based on their semantics; we then apply two clustering methods to group the messages for further claim summarization. We evaluate the summaries both quantitatively with ROUGE scores and qualitatively with human evaluation. We also generate a graph of summaries to verify that there is no significant overlap among them. Our pipeline reduced 28,818 original messages to 700 summary claims, showing the potential to speed up the fact-checking process by organizing and selecting representative claims from massive, disorganized, and redundant messages.
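A toy sketch of the grouping step, assuming TF-IDF vectors as a stand-in for the paper's semantic representations and a similarity-threshold graph clustered by connected components (union-find); the 0.5 threshold is an illustrative choice.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = ["vaccine X causes illness Y, doctors say",
         "doctors say that vaccine X causes illness Y",
         "city Z bans plastic bags from next month"]

# TF-IDF as a stand-in for the semantic representations used in the paper.
sim = cosine_similarity(TfidfVectorizer().fit_transform(posts))

# Link posts whose similarity clears a threshold; take connected
# components as claim groups (a simple form of graph clustering).
parent = list(range(len(posts)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if sim[i, j] > 0.5:
            parent[find(i)] = find(j)

groups = {}
for i, post in enumerate(posts):
    groups.setdefault(find(i), []).append(post)
print(list(groups.values()))  # each group is then summarized into one claim
```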
Structural Watermarking to Deep Neural Networks via Network Channel Pruning
Pub Date : 2021-07-19  DOI: 10.1109/WIFS53200.2021.9648376
Xiangyu Zhao, Yinzhe Yao, Hanzhou Wu, Xinpeng Zhang
To protect the intellectual property (IP) of deep neural networks (DNNs), many existing watermarking techniques either embed watermarks directly into the DNN parameters or insert backdoor watermarks by fine-tuning those parameters; such techniques, however, cannot resist attacks that remove watermarks by altering the DNN parameters. In this paper, we bypass such attacks by introducing a structural watermarking scheme that uses channel pruning to embed the watermark into the host DNN architecture rather than into its parameters. Specifically, during watermark embedding we prune the internal channels of the host DNN with channel pruning rates controlled by the watermark; during watermark extraction, the watermark is retrieved by identifying the channel pruning rates from the architecture of the target DNN model. Because pruning preserves model quality, the performance of the DNN on its original task is retained during watermark embedding. Experimental results show that the proposed scheme allows the embedded watermark to be reliably recovered and provides a sufficient payload without sacrificing the usability of the DNN model. It is also demonstrated that the scheme is robust against common transforms and attacks designed for conventional watermarking approaches.
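A schematic sketch of how pruning rates could carry a watermark, under an assumed codebook that maps watermark symbols to per-layer pruning rates; the codebook, symbol alphabet, and channel counts are our illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical encoding: each watermark symbol w in {0,...,3} selects the
# pruning rate of one convolutional layer from a public codebook.
CODEBOOK = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.40}

def embed(channel_counts, watermark):
    """Return pruned channel counts whose pruning rates encode the mark."""
    return [int(round(c * (1 - CODEBOOK[w])))
            for c, w in zip(channel_counts, watermark)]

def extract(original_counts, pruned_counts):
    """Recover the watermark by matching observed per-layer pruning rates."""
    symbols = []
    for c0, c1 in zip(original_counts, pruned_counts):
        rate = 1 - c1 / c0
        symbols.append(min(CODEBOOK, key=lambda k: abs(CODEBOOK[k] - rate)))
    return symbols

counts = [64, 128, 256, 512]   # channels per layer before pruning
mark = [2, 0, 3, 1]
assert extract(counts, embed(counts, mark)) == mark
```

Rounding the pruned channel counts perturbs the observed rates slightly, which is why extraction decodes to the nearest codebook rate rather than an exact match.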
Quality of Service Guarantees for Physical Unclonable Functions
Pub Date : 2021-07-12  DOI: 10.1109/WIFS53200.2021.9648377
O. Günlü, R. Schaefer, H. Poor
We consider a secret key agreement problem in which noisy physical unclonable function (PUF) outputs facilitate reliable, secure, and private key agreement with the help of public, noiseless, and authenticated storage. PUF outputs are highly correlated, so transform coding methods have been combined with scalar quantizers to extract uncorrelated bit sequences with reliability guarantees. For PUF circuits with continuous-valued outputs, the models for the transformed outputs are made more realistic by replacing the fitted distributions with corresponding truncated ones. State-of-the-art PUF methods that provide reliability guarantees for each extracted bit are shown to be inadequate to guarantee the same reliability level for all PUF outputs. We therefore introduce a quality-of-service parameter to control the percentage of PUF outputs for which a target reliability level can be guaranteed. Using a public ring oscillator (RO) output dataset, we illustrate that a truncated Gaussian distribution can be fitted to the transformed RO outputs that feed uniform scalar quantizers, such that reliability guarantees can be provided for every bit extracted from any PUF device under additive Gaussian noise by eliminating a small subset of PUF outputs. Conversely, we show that it is not possible to provide such reliability guarantees without eliminating any PUF outputs if no extra secrecy and privacy leakage is allowed.
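A small numeric sketch of the quality-of-service idea, under stated assumptions: transformed outputs modeled as standard Gaussian, equiprobable uniform-quantizer bins, and decoding error approximated by the chance that additive Gaussian noise pushes a sample across its nearest quantizer boundary. All parameter values and names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def equiprobable_edges(num_bins, mu=0.0, sigma=1.0):
    """Quantizer boundaries at Gaussian quantiles so each bin is equally
    likely, a common choice for extracting near-uniform bits from PUFs."""
    probs = np.linspace(0, 1, num_bins + 1)[1:-1]
    return norm.ppf(probs, loc=mu, scale=sigma)

def eliminated_fraction(samples, edges, noise_sigma, target_error=1e-6):
    """Fraction of PUF outputs that must be eliminated so every kept
    output decodes to its bin with error probability <= target_error."""
    # Distance from each sample to its nearest quantizer boundary.
    d = np.min(np.abs(samples[:, None] - edges[None, :]), axis=1)
    # P(noise crosses the nearest boundary) ~ Q(d / noise_sigma).
    p_err = norm.sf(d / noise_sigma)
    return float(np.mean(p_err > target_error))

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 10000)             # stand-in for transformed RO outputs
edges = equiprobable_edges(num_bins=4)  # 2 bits per output
print(eliminated_fraction(x, edges, noise_sigma=0.05))
```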