Jun-Liu Zhong , Yan-Fen Gan , Ji-Xiang Yang , Yu-Huan Chen , Ying-Qi Zhao , Zhi-Sheng Lv
Journal of Visual Communication and Image Representation, Volume 104, Article 104267. DOI: 10.1016/j.jvcir.2024.104267. Published 2024-08-27 (Journal Article; JCR Q2, Computer Science, Information Systems; Impact Factor 2.6).
Exposing video surveillance object forgery by combining TSF features and attention-based deep neural networks
Forensics has recently encountered a new challenge: video surveillance object forgery. This type of forgery combines the characteristics of the popular video copy-move and splicing forgeries, defeating most existing video forgery detection schemes. In response, this paper proposes a Video Surveillance Object Forgery Detection (VSOFD) method with three components. (i) A combined extraction technique incorporates Temporal-Spatial-Frequency (TSF) perspectives for feature extraction; TSF features represent video information effectively and benefit from feature dimension reduction, improving computational efficiency. (ii) A universal, extensible attention-based Convolutional Neural Network (CNN) baseline processes the features. This architecture is compatible with various series and parallel feed-forward CNN structures, treating them as processing backbones, so it can draw on state-of-the-art structures to handle each TSF feature independently. (iii) An encoder-attention-decoder RNN framework classifies the features. By incorporating temporal characteristics, the framework identifies correlations between adjacent frames to classify forged frames more accurately. Finally, experimental results show that the proposed network achieves the best F1 score of 94.69 %, at least 5–12 percentage points above existing State-Of-The-Art (SOTA) VSOFD schemes and other video forensics methods.
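The abstract gives no implementation details, so as a generic illustration of the frame-level attention that encoder-attention-decoder frameworks of this kind typically rely on, here is a minimal NumPy sketch of scaled dot-product self-attention over per-frame feature vectors. All names, shapes, and values are hypothetical and are not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def frame_attention(features):
    """Scaled dot-product self-attention over T per-frame feature vectors.

    features: (T, d) array of per-frame features.
    Returns a (T, d) array where each frame's feature is a weighted mix
    of all frames, weighted by frame-to-frame similarity.
    """
    d = features.shape[1]
    scores = features @ features.T / np.sqrt(d)  # (T, T) frame-to-frame affinities
    weights = softmax(scores, axis=-1)           # each row sums to 1
    return weights @ features                    # context-enriched frame features

# Toy example: 8 frames, 16-dimensional features per frame.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
out = frame_attention(feats)
print(out.shape)  # (8, 16)
```

In a full classifier, such attention outputs would feed a per-frame decision head, letting temporal correlations between adjacent frames inform whether a given frame is forged.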
Journal overview:
The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.