Optimality of beamforming for secrecy capacity of MIMO wiretap channels
Jiangyuan Li, A. Petropulu
2012 IEEE International Workshop on Information Forensics and Security (WIFS)
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412662
A Gaussian multiple-input multiple-output (MIMO) wiretap channel model is considered, in which a transmitter, a legitimate receiver and an eavesdropper are each equipped with multiple antennas. The optimality of beamforming for the secrecy capacity under a sum power constraint is studied, and two sufficient conditions for beamforming to be globally optimal are given. The first states that beamforming is globally optimal when the difference between the Gram matrices of the legitimate and eavesdropper channel matrices has exactly one positive eigenvalue. An alternative sufficient condition, which involves convex optimization, is also provided. When beamforming is globally optimal, the secrecy capacity is obtained; otherwise, an upper bound on the difference between the secrecy capacity and the beamforming secrecy rate is given.
Comparison of different attack classes in arbitrarily varying wiretap channels
H. Boche, R. F. Wyrembelski
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412661
For communication over arbitrarily varying channels (AVCs), common randomness is an important resource for establishing reliable communication, especially if the AVC is symmetrizable. In this paper the arbitrarily varying wiretap channel (AVWC) with an active wiretapper is studied. The wiretapper is active in the sense that it can exploit its knowledge of the common randomness to control the channel conditions of the legitimate users. The common randomness assisted secrecy capacity of the AVWC with an active wiretapper is analyzed and related to the corresponding secrecy capacity of the AVWC with a passive wiretapper. If the active secrecy capacity is positive, it equals the corresponding passive secrecy capacity. The case of zero active secrecy capacity is also studied.
Modeling and analysis of Electric Network Frequency signal for timestamp verification
Ravi Garg, Avinash L. Varna, Min Wu
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412627
Forensic analysis based on Electric Network Frequency (ENF) fluctuations has recently been proposed for time-of-recording estimation, timestamp verification, and detection of clip insertion/deletion forgeries in multimedia recordings. Due to the load control mechanism of the electric grid, ENF fluctuations exhibit pseudo-periodic behavior and generally require a long recording duration for forensic analysis. In this paper, a statistical study of the ENF signal is conducted and the signal is modeled as an autoregressive process. The proposed model is used to understand the effect of ENF signal duration and signal-to-noise ratio on the detection performance of a timestamp verification system within a hypothesis testing framework. Based on the model, a decorrelation-based approach to matching ENF signals for timestamp verification is studied; it requires a shorter ENF signal to achieve the same detection performance as matching without decorrelation. Experiments on audio data demonstrate the improvement in detection performance of the proposed approach.
Biometric template protection using turbo codes and modulation constellations
E. Maiorana, D. Blasi, P. Campisi
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412620
In this paper we propose a general biometric cryptosystem framework inspired by the code-offset sketch. Specifically, the properties of digital modulation and turbo codes with soft decoding are exploited to design a template protection system able to guarantee high performance in terms of both verification rates and security, even when dealing with biometrics characterized by high intra-class variability. The effectiveness of the presented approach is evaluated through its application to on-line signature recognition as a case study.
ML estimation of the resampling factor
David Vázquez-Padín, Pedro Comesaña Alfaro
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412650
In this work, the problem of resampling factor estimation for tampering detection is addressed under the maximum likelihood criterion. By exploiting the rounding operation applied after resampling, an approximation of the likelihood function of the quantized resampled signal is obtained. From the underlying statistical model, the maximum likelihood estimate is derived for one-dimensional signals and piecewise linear interpolation. The performance of the resulting estimator is evaluated, showing that it outperforms state-of-the-art methods.
General function evaluation in a STPC setting via piecewise linear approximation
Tommaso Pignata, R. Lazzeretti, M. Barni
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412625
While in theory any computable function can be evaluated in a Secure Two-Party Computation (STPC) framework, practical applications are often limited by complexity and by the kinds of operations the available cryptographic tools permit. In this paper we propose an algorithm that, given a function f() and an interval in its domain, produces a piecewise linear approximation of f() that can be easily implemented in an STPC setting. Two implementations are proposed: the first relies entirely on Garbled Circuit (GC) theory, while the second exploits a hybrid construction in which GC and Homomorphic Encryption (HE) are used together. We show that, from a communication complexity perspective, the full-GC implementation is preferable when the input and output variables are represented with a small number of bits; otherwise the hybrid solution is preferable.
How to find relevant training data: A paired bootstrapping approach to blind steganalysis
Pham Hai Dang Le, M. Franz
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412654
Today, support vector machines (SVMs) appear to be the classifier of choice in blind steganalysis. This approach involves two steps: first, a training phase determines a separating hyperplane that distinguishes between cover and stego images; second, in a test phase the class membership of an unknown input image is determined using this hyperplane. As in all statistical classifiers, the number of training images is a critical factor: the more images used in the training phase, the better the steganalysis performance in the test phase, albeit at the price of a greatly increased training time for the SVM algorithm. Interestingly, only a few of the training data, the support vectors, determine the separating hyperplane of the SVM. In this paper, we introduce a paired bootstrapping approach, developed specifically for the steganalysis scenario, that selects likely candidates for support vectors. The resulting training set is considerably smaller, without a significant loss of steganalysis performance.
IPTV streaming source classification
Miguel Masciopinto, Pedro Comesaña Alfaro
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412642
In recent years, video streaming over IP networks has changed the entertainment habits of society. Video-on-demand and multicast services have proliferated, bringing new challenges. Although the majority of new service providers comply with copyright protection policies, some services are streamed without the proper rights, leading to a new kind of content piracy. Content right owners are therefore interested in tracing the distribution channel followed by these unauthorized contents, in order to identify illegal distributor sources. We present a first approach to this problem, classifying IPTV streamed contents as originating from satellite (DVB-S) or terrestrial (DVB-T) sources, for both live and delayed streaming. Our proposal analyzes the time distribution of IP packet dispatches, extracts high-order statistics, and performs source classification using an SVM. The reported results show the effectiveness of the proposed approach.
Decoding fingerprints using the Markov Chain Monte Carlo method
T. Furon, A. Guyader, F. Cérou
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412647
This paper proposes a new fingerprinting decoder based on the Markov Chain Monte Carlo (MCMC) method. A Gibbs sampler generates groups of users according to the posterior probability that these users could have forged the sequence extracted from the pirated content. The marginal probability that a given user belongs to the collusion is then estimated by a Monte Carlo method, and the users with the largest empirical marginal probabilities are accused. This MCMC method can decode any type of fingerprinting code. The paper follows the `Learn and Match' decoding strategy: it assumes that the collusion attack belongs to a family of models, and an Expectation-Maximization algorithm estimates the parameters of the collusion model from the extracted sequence. This part of the algorithm is described for the binary Tardos code, exploiting the soft outputs of the watermarking decoder. The experiments consider some extreme setups in which the fingerprinting code lengths are very small. They reveal that the weak link of our approach is the estimation part, a clear warning for the `Learn and Match' decoding strategy.
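The Gibbs-sampling idea can be demonstrated on a toy collusion model (ours, much simpler than the paper's Tardos/EM machinery): the pirated sequence is a noisy majority vote of the colluders' binary fingerprints, and a Gibbs sampler sweeps over the collusion indicator vector, flipping each user in or out according to the conditional posterior; marginal membership probabilities are then estimated by averaging the samples:

```python
import numpy as np

rng = np.random.default_rng(6)
n_users, m = 20, 3000
delta, lam, prior = 0.05, 0.5, 0.15      # flip noise, shrinkage, colluder prior
X = rng.integers(0, 2, size=(n_users, m))            # users' fingerprints
true_set = [0, 1, 2]
maj = (X[true_set].mean(axis=0) > 0.5).astype(int)   # majority-vote collusion
y = np.where(rng.random(m) < delta, 1 - maj, maj)    # pirated sequence

def loglik(z):
    """Log-likelihood of y given collusion set z under a shrunk vote model."""
    if z.sum() == 0:
        theta = np.full(m, 0.5)
    else:
        frac = X[z == 1].mean(axis=0)                # fraction voting '1'
        theta = 0.5 + lam * (frac - 0.5)
    return np.sum(np.where(y == 1, np.log(theta), np.log(1.0 - theta)))

z = np.zeros(n_users, dtype=int)
marg = np.zeros(n_users)
n_sweeps, burn = 150, 50
for s in range(n_sweeps):
    for u in range(n_users):                         # one Gibbs sweep
        z[u] = 1; l1 = loglik(z)
        z[u] = 0; l0 = loglik(z)
        d = np.clip(l0 - l1 + np.log((1 - prior) / prior), -500, 500)
        z[u] = int(rng.random() < 1.0 / (1.0 + np.exp(d)))
    if s >= burn:
        marg += z                                    # Monte Carlo marginals
marg /= n_sweeps - burn
print(np.sort(np.argsort(marg)[-3:]))  # accuse the largest marginals
```

In this toy the collusion model is known exactly; the paper's warning is precisely that in practice it must be learned (via EM) from the pirated sequence, and errors in that estimate propagate into the accusations.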
Designing steganographic distortion using directional filters
Vojtech Holub, J. Fridrich
Pub Date: 2012-12-01 | DOI: 10.1109/WIFS.2012.6412655
This paper presents a new approach to defining additive steganographic distortion in the spatial domain. The change in the output of directional high-pass filters after changing one pixel is weighted and then aggregated using the reciprocal Hölder norm to define the individual pixel costs. In contrast to other adaptive embedding schemes, the aggregation rule is designed to force the embedding changes to highly textured or noisy regions and to avoid clean edges. Consequently, the new embedding scheme appears markedly more resistant to steganalysis using rich models. The actual embedding algorithm is realized using syndrome-trellis codes to minimize the expected distortion for a given payload.