Bidimensional empirical mode decomposition-based unlighting for face recognition
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084297
Miguel A. Ochoa-Villegas, J. Nolazco-Flores, Olivia Barron-Cano, I. Kakadiaris
A face recognition system must be capable of handling facial data with head pose variations or different illumination conditions. However, because these conditions are uncontrolled, better algorithms have become essential. We propose a Bidimensional Empirical Mode Decomposition-based Unlighting (BEMDU) method that preprocesses the luminance and reflectance parts of an image. First, three luminance components are estimated using Bidimensional Intrinsic Mode Function residuals. Second, a shadow removal procedure using recursive Retinex is applied. Third, the reflectance part is denoised using mean-Gaussian filters. Then, a new image is created by multiplying each shadow-free luminance by the reflectance. The final output is obtained by taking the geometric mean of the newly acquired images. The algorithm has been tested on two 3D-2D face recognition databases: UHDB11 and FRGC v2.0. BEMDU demonstrates an improvement of up to 15.42% compared with the AELM, LBEMD, PittPatt, baseline, and EA algorithms.
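A minimal sketch of the final recombination stage described above, assuming the shadow-free luminance estimates and the denoised reflectance have already been computed upstream; all names and the toy inputs are illustrative, not from the paper:

```python
import numpy as np

def recombine(luminances, reflectance, eps=1e-6):
    """Multiply each shadow-free luminance estimate by the reflectance,
    then fuse the relit images via their geometric mean (a sketch of
    the paper's final stage; upstream steps are assumed done)."""
    relit = [np.clip(L * reflectance, eps, None) for L in luminances]
    # Geometric mean of the relit images, computed in the log domain
    # for numerical stability.
    log_stack = np.log(np.stack(relit, axis=0))
    return np.exp(log_stack.mean(axis=0))

# Toy usage with random stand-ins for the three BIMF-residual
# luminance estimates and the mean-Gaussian-denoised reflectance.
rng = np.random.default_rng(0)
lums = [rng.uniform(0.1, 1.0, (64, 64)) for _ in range(3)]
refl = rng.uniform(0.1, 1.0, (64, 64))
out = recombine(lums, refl)
print(out.shape, out.min() > 0)
```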
{"title":"Bidimensional empirical mode decomposition-based unlighting for face recognition","authors":"Miguel A. Ochoa-Villegas, J. Nolazco-Flores, Olivia Barron-Cano, I. Kakadiaris","doi":"10.1109/WIFS.2014.7084297","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084297","url":null,"abstract":"A face recognition system must be capable of handling facial data with head pose variations or different illumination conditions. However, as these conditions are uncontrolled the requirement of better algorithms has become essential. We propose a Bidimensional Empirical Mode Decomposition-based unlighting method that preprocesses the luminance and the reflectance parts of an image. First, three luminance components are estimated using Bidimensional Intrinsic Mode Functions residuals. Second, a shadow removal procedure using recursive Retinex is applied. Third, the reflectance part is denoised using mean-Gaussian filters. After that, a new image is created multiplying each shadow-free luminance by the reflectance. The final output is obtained using the geometric mean on the newly acquired images. This algorithm has been tested in two 3D- 2D face recognition databases: UHDB11 and FRGCv2.0. The performance of BEMDU demonstrates an improvement of up to 15.42% when compared with the AELM, LBEMD, PittPatt, the baseline, and EA algorithms.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122136726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FiberID: molecular-level secret for identification of things
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084308
Zhen Chen, Yongbo Zeng, Gerald Hefferman, Y. Sun
This paper describes a new physical unclonable function for identification, FiberID, which uses the molecular-level Rayleigh backscatter pattern within a small section of telecommunication-grade optical fiber as a means of verification and identification. The verification process via FiberID is studied experimentally, and an equal error rate (EER) of 0.06% is achieved. FiberID is evaluated systematically in terms of physical length and ambient temperature. Due to its inherent irreproducibility, FiberID holds promise to significantly enhance current identification, security, and anti-counterfeiting technologies.
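For intuition, a toy sketch of how backscatter-based verification might look, using zero-mean normalized cross-correlation as the match score; the trace model and the threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two backscatter
    traces; 1.0 means identical fingerprints."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / len(a))

def verify(enrolled, probe, threshold=0.8):
    # Accept the probe fiber if its backscatter pattern matches the
    # enrolled pattern above the threshold (threshold is illustrative).
    return ncc(enrolled, probe) >= threshold

rng = np.random.default_rng(1)
fingerprint = rng.normal(size=2048)                   # enrolled fiber ID
genuine = fingerprint + 0.2 * rng.normal(size=2048)   # re-measurement noise
impostor = rng.normal(size=2048)                      # different fiber
print(verify(fingerprint, genuine), verify(fingerprint, impostor))
```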
{"title":"FiberID: molecular-level secret for identification of things","authors":"Zhen Chen, Yongbo Zeng, Gerald Hefferman, Y. Sun","doi":"10.1109/WIFS.2014.7084308","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084308","url":null,"abstract":"This paper describes a new physical unclonable function for identification, FiberID, which uses the molecular level Rayleigh backscatter pattern within a small section of telecommunication-grade optical fiber as a means of verification and identification. The verification process via FiberID is experimentally studied, and an equal error rate (EER) of 0.06% is achieved. Systematic evaluation of FiberID is conducted in term of physical length and ambient temperature. Due to its inherent irreproducibility, FiberID holds the promise to significantly enhance current identification, security, and anti-counterfeiting technologies.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"8 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127983499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding the effects of real-world behavior in statistical disclosure attacks
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084306
Simon Oya, C. Troncoso, F. Pérez-González
High-latency anonymous communication systems prevent passive eavesdroppers from inferring communicating partners with certainty. However, disclosure attacks allow an adversary to recover users' behavioral profiles when communications are persistent. Understanding how the system parameters affect users' privacy against such attacks is crucial. Earlier work in the area analyzes the performance of disclosure attacks in controlled scenarios, where a certain model of user behavior is assumed. In this paper, we analyze the profiling accuracy of one of the most efficient disclosure attacks, the least squares disclosure attack, in realistic scenarios. We generate real traffic observations from datasets of different natures and find that the models considered in previous work do not fit this realistic behavior. We relax previous hypotheses on user behavior and extend previous performance analyses, validating our results with real data and providing new insights into the parameters that affect the protection of users in the real world.
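A minimal sketch of the least squares disclosure attack under an idealized traffic model: given per-round sender activity X and receiver counts Y, the sending profiles are estimated as the least squares solution of XP ≈ Y. The synthetic traffic below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
rounds, n_senders, n_receivers = 5000, 10, 20

# Ground-truth sending profiles: row i is sender i's probability
# distribution over receivers.
P = rng.dirichlet(np.ones(n_receivers), size=n_senders)

# X[r, i] = messages sent by sender i in round r;
# Y[r, j] = messages received by receiver j in round r.
X = rng.poisson(2.0, size=(rounds, n_senders))
Y = np.stack([
    sum(rng.multinomial(X[r, i], P[i]) for i in range(n_senders))
    for r in range(rounds)
])

# Least squares disclosure attack: estimate the profiles from (X, Y).
P_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.abs(P_hat - P).mean())  # small -> profiles recovered
```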
{"title":"Understanding the effects of real-world behavior in statistical disclosure attacks","authors":"Simon Oya, C. Troncoso, F. Pérez-González","doi":"10.1109/WIFS.2014.7084306","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084306","url":null,"abstract":"High-latency anonymous communication systems prevent passive eavesdroppers from inferring communicating partners with certainty. However, disclosure attacks allow an adversary to recover users' behavioral profiles when communications are persistent. Understanding how the system parameters affect the privacy of the users against such attacks is crucial. Earlier work in the area analyzes the performance of disclosure attacks in controlled scenarios, where a certain model about the users' behavior is assumed. In this paper, we analyze the profiling accuracy of one of the most efficient disclosure attack, the least squares disclosure attack, in realistic scenarios. We generate real traffic observations from datasets of different nature and find that the models considered in previous work do not fit this realistic behavior. We relax previous hypotheses on the behavior of the users and extend previous performance analyses, validating our results with real data and providing new insights into the parameters that affect the protection of the users in the real world.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121437162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A feature-based approach for image tampering detection and localization
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084319
L. Verdoliva, D. Cozzolino, G. Poggi
We propose a new camera-based technique for tampering localization. A large number of blocks are extracted off-line from training images and characterized through features based on a dense local descriptor. A multidimensional Gaussian model is then fit to the training features. In the testing phase, the image is analyzed in a sliding-window fashion: for each block, the log-likelihood of the associated feature is computed, reprojected into the image domain, and aggregated, so as to form a smooth decision map. Finally, the tampering is localized by simple thresholding. Experiments carried out in a number of situations of interest show promising results.
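A compact sketch of the testing-phase idea: fit a multivariate Gaussian to training-block features, score test blocks by log-likelihood, and threshold. Descriptor extraction and the reprojection/aggregation step are omitted, and all dimensions are toy values:

```python
import numpy as np
from scipy.stats import multivariate_normal

def loglik_map(features, mean, cov):
    """Per-block log-likelihood under the pristine-camera Gaussian
    model; low values flag potentially tampered regions."""
    return multivariate_normal(mean, cov).logpdf(features)

rng = np.random.default_rng(3)
d = 8                                  # descriptor dimensionality (toy)
train = rng.normal(size=(10000, d))    # features from pristine blocks
mean, cov = train.mean(axis=0), np.cov(train, rowvar=False)

# Sliding-window features from a test image, flattened to (n_blocks, d);
# the last 50 'blocks' are shifted to simulate a tampered region.
test = rng.normal(size=(500, d))
test[-50:] += 3.0
scores = loglik_map(test, mean, cov)
tampered = scores < np.percentile(scores, 10)   # simple thresholding
print(tampered[-50:].mean(), tampered[:-50].mean())
```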
{"title":"A feature-based approach for image tampering detection and localization","authors":"L. Verdoliva, D. Cozzolino, G. Poggi","doi":"10.1109/WIFS.2014.7084319","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084319","url":null,"abstract":"We propose a new camera-based technique for tampering localization. A large number of blocks are extracted off-line from training images and characterized through features based on a dense local descriptor. A multidimensional Gaussian model is then fit to the training features. In the testing phase, the image is analyzed in sliding-window modality: for each block, the log-likelihood of the associated feature is computed, reprojected in the image domain, and aggregated, so as to form a smooth decision map. Eventually, the tampering is localized by simple thresholding. Experiments carried out in a number of situation of interest show promising results.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132328522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition via adaptive sparse representations of random patches
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084296
D. Mery, K. Bowyer
Unconstrained face recognition is still an open problem, as state-of-the-art algorithms have not yet reached high recognition performance in real-world environments (e.g., crowd scenes at the Boston Marathon). This paper addresses this problem by proposing a new approach called Adaptive Sparse Representation of Random Patches (ASR+). In the learning stage, for each enrolled subject, a number of random patches are extracted from the subject's gallery images in order to construct representative dictionaries. In the testing stage, random test patches of the query image are extracted, and for each test patch a dictionary is built concatenating the `best' representative dictionary of each subject. Using this adapted dictionary, each test patch is classified following the Sparse Representation Classification (SRC) methodology. Finally, the query image is classified by patch voting. Thus, our approach is able to deal with a larger degree of variability in ambient lighting, pose, expression, occlusion, face size and distance from the camera. Experiments were carried out on five widely-used face databases. Results show that ASR+ deals well with unconstrained conditions, outperforming various representative methods in the literature in many complex scenarios.
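A reduced sketch of the SRC-plus-voting stage, with sklearn's orthogonal matching pursuit as the sparse coder; the per-subject dictionary construction from gallery patches is replaced by a synthetic dictionary, so the details are illustrative rather than the paper's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_vote(patches, dictionary, labels, n_nonzero=5):
    """Classify each test patch by sparse-coding it over the stacked
    per-subject dictionaries, then vote across patches."""
    votes = np.zeros(labels.max() + 1)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    for y in patches:
        omp.fit(dictionary, y)
        x = omp.coef_
        # Residual of reconstructing y using only subject s's atoms.
        residuals = [
            np.linalg.norm(y - dictionary[:, labels == s] @ x[labels == s])
            for s in range(labels.max() + 1)
        ]
        votes[int(np.argmin(residuals))] += 1
    return int(np.argmax(votes))

rng = np.random.default_rng(4)
dim, atoms_per_subject, subjects = 64, 30, 5
D = rng.normal(size=(dim, atoms_per_subject * subjects))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
labels = np.repeat(np.arange(subjects), atoms_per_subject)
# Test patches synthesized from subject 2's atoms plus noise.
coeffs = rng.normal(size=(10, atoms_per_subject))
patches = coeffs @ D[:, labels == 2].T + 0.05 * rng.normal(size=(10, dim))
print(src_vote(patches, D, labels))            # expected: 2
```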
{"title":"Face recognition via adaptive sparse representations of random patches","authors":"D. Mery, K. Bowyer","doi":"10.1109/WIFS.2014.7084296","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084296","url":null,"abstract":"Unconstrained face recognition is still an open problem, as state-of-the-art algorithms have not yet reached high recognition performance in real-world environments (e.g., crowd scenes at the Boston Marathon). This paper addresses this problem by proposing a new approach called Adaptive Sparse Representation of Random Patches (ASR+). In the learning stage, for each enrolled subject, a number of random patches are extracted from the subject's gallery images in order to construct representative dictionaries. In the testing stage, random test patches of the query image are extracted, and for each test patch a dictionary is built concatenating the `best' representative dictionary of each subject. Using this adapted dictionary, each test patch is classified following the Sparse Representation Classification (SRC) methodology. Finally, the query image is classified by patch voting. Thus, our approach is able to deal with a larger degree of variability in ambient lighting, pose, expression, occlusion, face size and distance from the camera. Experiments were carried out on five widely-used face databases. Results show that ASR+ deals well with unconstrained conditions, outperforming various representative methods in the literature in many complex scenarios.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134070250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Malware detection using HTTP user-agent discrepancy identification
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084331
Martin Grill, M. Rehák
Botnet detection systems that use the Network Behavioral Analysis (NBA) principle struggle with performance and privacy issues on large-scale networks. Because of this, many researchers focus on fast and simple bot detection methods that use as little information as possible in order to avoid privacy violations. Moreover, deep inspection, reverse engineering, clustering, and other time-consuming approaches are typically infeasible in large-scale networks. In this paper we present a novel technique that uses the User-Agent field contained in the HTTP header, which can easily be obtained from web proxy logs, to identify malware that uses User-Agents discrepant with those actually used by the infected user. We use statistical information about each user's User-Agent usage together with the usage of particular User-Agents across the whole analyzed network and the typically visited domains. Using these statistics we can identify anomalies, which we show to be caused by malware-infected hosts in the network. Because our approach is simple and computationally inexpensive, we can inspect data from extremely large networks with minimal computational cost.
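A toy illustration of the core statistic: a User-Agent that is rare across the whole network is suspicious wherever it appears. The scoring rule and threshold below are illustrative simplifications of the paper's combination of per-user, network-wide, and per-domain statistics:

```python
from collections import Counter, defaultdict
import math

# Toy proxy-log records: (host, user_agent).
logs = [
    ("10.0.0.1", "Mozilla/5.0"), ("10.0.0.1", "Mozilla/5.0"),
    ("10.0.0.2", "Mozilla/5.0"), ("10.0.0.2", "Chrome/38.0"),
    ("10.0.0.2", "Chrome/38.0"), ("10.0.0.3", "Mozilla/5.0"),
    ("10.0.0.3", "EvilBot/1.0"),
]

network = Counter(ua for _, ua in logs)          # network-wide UA usage
per_host = defaultdict(Counter)
for host, ua in logs:
    per_host[host][ua] += 1

total = sum(network.values())
for host, uas in per_host.items():
    for ua, n in uas.items():
        # Surprise of seeing this UA: the rarer it is across the whole
        # network, the more discrepant its appearance on a host.
        score = -math.log(network[ua] / total)
        if score > 1.5:                          # illustrative threshold
            print(f"{host}: suspicious User-Agent {ua!r} (score {score:.2f})")
```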
{"title":"Malware detection using HTTP user-agent discrepancy identification","authors":"Martin Grill, M. Rehák","doi":"10.1109/WIFS.2014.7084331","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084331","url":null,"abstract":"Botnet detection systems that use Network Behavioral Analysis (NBA) principle struggle with performance and privacy issues on large-scale networks. Because of that many researchers focus on fast and simple bot detection methods that at the same time use as little information as possible to avoid privacy violations. Next, deep inspections, reverse engineering, clustering and other time consuming approaches are typically unfeasible in large-scale networks. In this paper we present a novel technique that uses User- Agent field contained in the HTTP header, that can be easily obtained from the web proxy logs, to identify malware that uses User-Agents discrepant with the ones actually used by the infected user. We are using statistical information about the usage of the User-Agent of each user together with the usage of particular User-Agent across the whole analyzed network and typically visited domains. Using those statistics we can identify anomalies, which we proved to be caused by malware-infected hosts in the network. Because of our simple and computationally inexpensive approach we can inspect data from extremely large networks with minimal computational costs.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123945067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised feature learning for bootleg detection using deep learning architectures
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084316
Michele Buccoli, Paolo Bestagini, M. Zanoni, A. Sarti, S. Tubaro
The widespread diffusion of portable devices capable of capturing high-quality multimedia data, together with the rapid proliferation of media sharing platforms, has led to incredible growth in the user-generated content available online. Since it is hard to strictly regulate this trend, illegal diffusion of copyrighted material is likely to occur. This is the case with audio bootlegs, i.e., concerts illegally recorded and redistributed by fans. In this paper, we propose a bootleg detector that disambiguates between: i) unofficially recorded bootlegs; ii) officially published live concerts; iii) studio recordings from officially released albums. The proposed method is based on audio feature analysis and machine learning techniques. We exploit a deep learning paradigm to extract highly characterizing features from audio excerpts, and a supervised classifier for detection. The method is validated on a dataset of nearly 500 songs, and the results are compared to a state-of-the-art detector. The experiments confirm the capability of deep learning techniques to outperform classic feature extraction approaches.
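A minimal stand-in for the overall pipeline shape: unsupervised feature learning followed by a supervised classifier over the three classes. PCA substitutes here for the paper's deep architecture, and the feature vectors are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Stand-in feature vectors for audio excerpts from the three classes:
# 0 = bootleg, 1 = official live, 2 = studio recording.
X = np.vstack([rng.normal(loc=mu, size=(150, 40)) for mu in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], 150)

# Unsupervised feature learning, then supervised classification.
model = make_pipeline(PCA(n_components=10), SVC())
print(cross_val_score(model, X, y, cv=5).mean())
```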
{"title":"Unsupervised feature learning for bootleg detection using deep learning architectures","authors":"Michele Buccoli, Paolo Bestagini, M. Zanoni, A. Sarti, S. Tubaro","doi":"10.1109/WIFS.2014.7084316","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084316","url":null,"abstract":"The widespread diffusion of portable devices capable of capturing high-quality multimedia data, together with the rapid proliferation of media sharing platforms, has determined an incredible growth of user-generated content available online. Since it is hard to strictly regulate this trend, illegal diffusion of copyrighted material is often likely to occur. This is the case of audio bootlegs, i.e., concerts illegally recorded and redistributed by fans. In this paper, we propose a bootleg detector, with the aim of disambiguating between: i) bootlegs unofficially recorded; ii) live concerts officially published; iii) studio recordings from officially released albums. The proposed method is based on audio feature analysis and machine learning techniques. We exploit a deep learning paradigm to extract highly characterizing features from audio excerpts, and a supervised classifier for detection. The method is validated against a dataset of nearly 500 songs, and results are compared to a state-of-the-art detector. The conducted experiments confirm the capability of deep learning techniques to outperform classic feature extraction approaches.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124096498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The optimal attack to histogram-based forensic detectors is simple(x)
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084317
Pedro Comesaña Alfaro, F. Pérez-González
In recent years a number of counterforensic tools have been proposed. Although most of them are heuristic and designed ad hoc, a formal approach to this problem, rooted in transportation theory, has lately been pursued. This paper follows that path by designing optimal attacks against histogram-based detectors whose detection region is non-convex. The usefulness of our strategy is demonstrated by providing, for the first time, the optimal solution to the design of attacks against Benford's Law-based detectors, a problem that has attracted considerable practical interest from the forensic community. The performance of the proposed scheme is compared with that of the best existing counterforensic method against Benford-based detectors, showing the goodness (indeed, the optimality) of our approach.
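The transportation-theoretic flavor of such attacks can be sketched as a linear program: find the minimum-cost plan that moves mass between first-digit histogram bins until the detector's constraint is met. The convex toy below forces the histogram exactly onto the Benford distribution, whereas the paper handles non-convex acceptance regions:

```python
import numpy as np
from scipy.optimize import linprog

# First-digit Benford probabilities p(d) = log10(1 + 1/d), d = 1..9.
d = np.arange(1, 10)
benford = np.log10(1 + 1 / d)

rng = np.random.default_rng(6)
h = rng.dirichlet(np.ones(9))           # attacked image's digit histogram

# Transportation problem: move mass T[i, j] from bin i to bin j so the
# resulting histogram equals the Benford target, at minimum total cost.
cost = np.abs(d[:, None] - d[None, :]).ravel().astype(float)

# Equality constraints: row sums of T equal h, column sums equal benford.
A_eq = np.zeros((18, 81))
for i in range(9):
    A_eq[i, i * 9:(i + 1) * 9] = 1      # row-sum constraints
    A_eq[9 + i, i::9] = 1               # column-sum constraints
b_eq = np.concatenate([h, benford])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.status, res.fun)              # status 0 -> optimal plan found
```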
{"title":"The optimal attack to histogram-based forensic detectors is simple(x)","authors":"Pedro Comesaña Alfaro, F. Pérez-González","doi":"10.1109/WIFS.2014.7084317","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084317","url":null,"abstract":"In the last years a number of counterforensics tools have been proposed. Although most of them are heuristic and designed ad hoc, lately a formal approach to this problem, rooted in transportation theory, has been pursued. This paper follows this path by designing optimal attacks against histogrambased detectors where the detection region is non-convex. The usefulness of our strategy is demonstrated by providing for the first time the optimal solution to the design of attacks against Benford's Law-based detectors, a problem that has deserved large practical interest by the forensic community. The performance of the proposed scheme is compared with that of the best existing counterforensic method against Benford-based detectors, showing the goodness (indeed, the optimality) of our approach.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125974383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minutiae set to bit-string conversion using multi-scale bag-of-words paradigm
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084294
W. Wong, M. D. Wong, Y. Kho, A. Teoh
Minutiae-based matching is commonly used in fingerprint recognition systems due to its proven performance. However, such a matching procedure usually involves unordered, variable-size templates, which does not favour emerging bio-cryptography applications or most classifiers. This paper proposes a solution that converts the original minutiae set into a bit-string through the amalgamation of bag-of-words modelling, multi-scale construction, and dynamic quantization. Experimental results show that the proposed method has high potential for bio-cryptography applications due to its outstanding EER of < 0.51% and entropy of 723 bits. Further security and privacy concerns are also analyzed.
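A simplified sketch of the conversion: cluster minutiae into vocabularies at several scales, histogram each probe over the vocabularies, and binarize. Median thresholding stands in for the paper's dynamic quantization, and the minutiae model is toy data:

```python
import numpy as np
from sklearn.cluster import KMeans

def to_bitstring(minutiae, vocabularies):
    """Map a variable-size minutiae set to a fixed-length bit-string:
    build a bag-of-words histogram at each vocabulary scale, then
    binarize each histogram at its median."""
    bits = []
    for km in vocabularies:
        words = km.predict(minutiae)
        hist = np.bincount(words, minlength=km.n_clusters)
        bits.append((hist > np.median(hist)).astype(int))
    return np.concatenate(bits)

rng = np.random.default_rng(7)
# Toy minutiae: (x, y, orientation) triplets; set sizes vary per capture.
train = rng.uniform(size=(2000, 3))
vocabularies = [KMeans(n_clusters=k, n_init=3, random_state=0).fit(train)
                for k in (8, 16, 32)]            # multi-scale vocabularies

probe = rng.uniform(size=(37, 3))                # 37 minutiae this capture
print(to_bitstring(probe, vocabularies))         # fixed length: 8+16+32 bits
```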
{"title":"Minutiae set to bit-string conversion using multi-scale bag-of-words paradigm","authors":"W. Wong, M. D. Wong, Y. Kho, A. Teoh","doi":"10.1109/WIFS.2014.7084294","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084294","url":null,"abstract":"Minutiae-based matching is commonly used in fingerprint recognition systems due to its proven performance. However, such matching procedure usually involves unordered and variable size templates and it does not favour emerging bio-cryptography applications and most classifiers. This paper proposes a solution by converting the original minutiae set into a bit-string through the amalgamation of bag-of-words modelling, multi-scale construction and dynamic quantization. Experimental results show that the proposed method has high potential in biocryptography applications due to its outstanding EER of <; 0:51% and entropy of 723 bits. Further security and privacy concerns are also analyzed.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132431608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selection-channel-aware rich model for steganalysis of digital images
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084302
Tomáš Denemark, V. Sedighi, Vojtech Holub, R. Cogranne, J. Fridrich
From the perspective of signal detection theory, it seems obvious that knowing the probabilities with which the individual cover elements are modified during message embedding (the so-called probabilistic selection channel) should improve steganalysis. It is, however, not clear how to incorporate this information into steganalysis features when the detector is built as a classifier. In this paper, we propose a variant of the popular spatial rich model (SRM) that makes use of the selection channel. We demonstrate on three state-of-the-art content-adaptive steganographic schemes that even an imprecise knowledge of the embedding probabilities can substantially increase the detection accuracy in comparison with feature sets that do not consider the selection channel. Overly adaptive embedding schemes seem to be more vulnerable than schemes that spread the embedding changes more evenly throughout the cover.
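A toy version of the selection-channel-aware idea: when accumulating a residual co-occurrence, add the largest embedding-change probability among the pixels involved instead of a constant 1. The residual, truncation, and co-occurrence order below are heavily simplified relative to the full SRM:

```python
import numpy as np

def sca_cooc(residual, beta, T=2, q=1):
    """One selection-channel-aware co-occurrence: for each horizontal
    residual pair, accumulate the maximum embedding-change probability
    over the pixels involved rather than a unit count."""
    r = np.clip(np.round(residual / q), -T, T).astype(int) + T
    C = np.zeros((2 * T + 1, 2 * T + 1))
    for i in range(r.shape[0]):
        for j in range(r.shape[1] - 1):
            w = max(beta[i, j], beta[i, j + 1])   # selection channel
            C[r[i, j], r[i, j + 1]] += w
    return C / max(C.sum(), 1e-12)

rng = np.random.default_rng(8)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
residual = img[:, 1:] - img[:, :-1]              # first-order residual
beta = rng.uniform(0, 0.4, size=residual.shape)  # embedding probabilities
print(sca_cooc(residual, beta).round(3))
```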
{"title":"Selection-channel-aware rich model for Steganalysis of digital images","authors":"Tomáš Denemark, V. Sedighi, Vojtech Holub, R. Cogranne, J. Fridrich","doi":"10.1109/WIFS.2014.7084302","DOIUrl":"https://doi.org/10.1109/WIFS.2014.7084302","url":null,"abstract":"From the perspective of signal detection theory, it seems obvious that knowing the probabilities with which the individual cover elements are modified during message embedding (the so-called probabilistic selection channel) should improve steganalysis. It is, however, not clear how to incorporate this information into steganalysis features when the detector is built as a classifier. In this paper, we propose a variant of the popular spatial rich model (SRM) that makes use of the selection channel. We demonstrate on three state-of-the-art content-adaptive steganographic schemes that even an imprecise knowledge of the embedding probabilities can substantially increase the detection accuracy in comparison with feature sets that do not consider the selection channel. Overly adaptive embedding schemes seem to be more vulnerable than schemes that spread the embedding changes more evenly throughout the cover.","PeriodicalId":220523,"journal":{"name":"2014 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121733488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}