Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084318
Splicing forgeries localization through the use of first digit features
Irene Amerini, Rudy Becarelli, R. Caldelli, A. D. Mastio
One of the principal problems in image forensics is determining whether a particular image is authentic and, if it has been manipulated, localizing which parts have been altered. Localization is fundamental to the image examination process because it links the modified zone to the corresponding image area and, above all, to its meaning. Forensic tools dealing with copy-move manipulation almost always provide a localization map; by contrast, only a few tools devised to detect a splicing operation are able to provide localization information as well. In this paper, a method is proposed to distinguish and then localize single and double JPEG compression in portions of an image, using the first-digit features of the DCT coefficients and a Support Vector Machine (SVM) classifier. Experimental results and a comparison with a state-of-the-art technique demonstrate the performance of the proposed method in terms of forgery localization.
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084311
Botnet identification via universal anomaly detection
Shachar Siboni, A. Cohen
The problem of identifying and detecting Botnet Command and Control (C&C) channels is considered. A Botnet is a logical network of compromised machines (Bots) which are remotely controlled by an attacker (the Botmaster) using a C&C infrastructure in order to perform malicious activities. Accordingly, a key objective is to identify and block the C&C channel before any real harm is caused. We propose an anomaly detection algorithm and apply it to timing data, which can be collected without deep packet inspection, from open as well as encrypted flows. The suggested algorithm utilizes the Lempel-Ziv universal compression algorithm to derive an optimal probability assignment for normal traffic (during learning), and then estimates the likelihood of new sequences (during operation) and classifies them accordingly. Furthermore, the algorithm is generic and can be applied to any sequence of events, not necessarily traffic-related. We evaluate the detection algorithm on real-world network traces, showing how a universal, low-complexity C&C identification system can be built with high detection rates for a given false-alarm probability.
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084321
Video forensics based on expression dynamics
Duc-Tien Dang-Nguyen, V. Conotter, G. Boato, F. D. Natale
Digital graphics tools are nowadays capable of rendering highly photorealistic imagery, which can easily confound our perception of reality. This poses serious ethical and legal issues, which in turn create the need for technologies able to ensure the trustworthiness of digital media as a true representation of reality, especially when depicting humans. In this work, we propose a novel forensic technique to tackle the problem of distinguishing computer-generated (CG) humans from real humans in videos. It exploits the temporal information inherent in a video sequence by analyzing the spatio-temporal appearance of facial expressions in both CG and real humans. Even though facial-expression rendering has become remarkably realistic, the appearance of a CG face over time still exhibits underlying mechanical properties that differ greatly from the natural muscle movements of real humans. We build an efficient classifier on a set of features describing facial dynamics and spatio-temporal changes during smiling to distinguish CG from human faces. Experimental results demonstrate the effectiveness of the proposed approach.
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084314
Adaptive matching for copy-move forgery detection
Mohsen Zandi, Ahmad Mahmoudi Aznaveh, Azadeh Mansouri
The objective of copy-move forgery detection methods is to find copied regions within the same image. There are two main approaches to detecting copy-move forgery: keypoint-based and block-based methods. Although the former are superior in terms of computational complexity, they neglect smooth regions since they confine their search to salient points. On the other hand, while block-based methods do consider smooth areas, they introduce a huge number of false matches. In this paper, an adaptive threshold in the matching phase is proposed to overcome this problem. The experimental results demonstrate that the proposed method can greatly reduce the number of false matches, which improves both detection performance and computational cost.
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084307
Asymptotic MIMO artificial-noise secrecy rates with eigenmode partitioning
A. D. Harper, R. Baxley
In a multiple-input multiple-output (MIMO) wiretap channel system, it has been shown that artificial noise can be transmitted in the null space of the main channel to guarantee secrecy at the intended receiver. Previous formulas for asymptotic MIMO capacity assume that all channel eigenmodes will be utilized. However, optimizing over possible antenna configurations requires partitioning the available eigenmodes. With only some eigenmodes used for signal transmission, finding an exact closed-form asymptotic solution is, in general, intractable. We present a large-scale MIMO approximation with eigenmode partitioning that is accurate for realistic numbers of antennas and has greatly reduced computational complexity.
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084299
Security analysis of radial-based 3D watermarking systems
Xavier Rolland-Névière, G. Doërr, P. Alliez
To be relevant in copyright protection scenarios, watermarking systems need to provide appropriate levels of security. This paper investigates the security of a popular 3D watermarking method that alters the histogram of distances between the vertices of a surface mesh and its center of mass using a quadratic programming formulation. We study two conventional security mechanisms, namely (i) obfuscating the support of the content used for watermarking and (ii) relying on random projections to obfuscate the watermarking subspace. The different attacks surveyed throughout the paper clearly highlight the limitations of this family of 3D watermarking systems with respect to security.
Pub Date: 2014-12-01 | DOI: 10.1109/WIFS.2014.7084312
Bootstrap-based proxy reencryption for private multi-user computing
J. Troncoso-Pastoriza, Serena Caputo
The increasingly popular paradigm of Cloud computing brings many benefits to both clients and providers, but it also introduces privacy risks associated with outsourcing data and processes to an untrustworthy environment. In particular, the multi-user computing scenario is especially difficult to tackle from a privacy-preserving point of view, as it seeks to protect the data of different users while allowing for flexible Cloud applications. This work leverages Gentry's cryptographic bootstrapping operation as a means to endow a fully homomorphic cryptosystem with proxy reencryption functionality, targeted at the private multi-user and multi-key computing scenario. We provide an example implementation based on the Gentry-Halevi cryptosystem, and a secure protocol that employs this primitive to solve the private multi-user computing scenario with non-colluding parties.
Pub Date: 2014-05-12 | DOI: 10.1109/WIFS.2014.7084330
Can leakage models be more efficient? Non-linear models in side channel attacks
Q. Tian, Máire O’Neill, Neil Hanley
In the last decade, many side-channel attacks have been published in the academic literature, detailing how to efficiently extract secret keys from cryptosystems by mounting attacks such as differential or correlation power analysis. Among the most efficient and widely utilized leakage models involved in these attacks are the Hamming weight and distance models, which give a simple yet effective approximation of the power consumption of many real-world systems. These leakage models reflect the number of bits switching, which is assumed to be proportional to the power consumption. However, the actual variation of power consumption in the circuit is unlikely to take exactly that form. We therefore propose a non-linear leakage model, obtained by mapping the existing leakage model through a transform function, so that the power consumption is modelled more precisely and the attack efficiency can be improved considerably. This has the advantage of utilising a non-linear power model while retaining the simplicity of the Hamming weight or distance models. A modified attack architecture is then suggested to recover the correct key efficiently in practice. Finally, an empirical comparison of the attack results is presented.
DOI: 10.1109/WIFS.2014.7084329
Video anomaly detection based on wake motion descriptors and perspective grids
Roberto Leyva, Victor Sanchez, Chang-Tsun Li
This paper proposes a video anomaly detection method based on wake motion descriptors. The method analyses the motion characteristics of the video data, on a video-volume-by-video-volume basis, by computing the wake left behind by moving objects in the scene. It then probabilistically identifies motion patterns that have never been seen before in order to detect anomalies. The method also considers the perspective of the scene to compensate for the relative change in an object's size introduced by the camera's view angle. To this end, a perspective grid is proposed to define the size of the video volumes used for anomaly detection. Evaluation results against several state-of-the-art methods show that the proposed method attains high detection accuracy and competitive computational time.