Kai Zeng, Xiangyu Yu, Beibei Liu, Yu Guan, Yongjian Hu
{"title":"检测深度伪造在替代颜色空间,以抵御看不见的腐败","authors":"Kai Zeng, Xiangyu Yu, Beibei Liu, Yu Guan, Yongjian Hu","doi":"10.1109/IWBF57495.2023.10157416","DOIUrl":null,"url":null,"abstract":"The adverse impact of deepfakes has recently raised world-wide concerns. Many ways of deepfake detection are published in the literature. The reported results of existing methods are generally good under known settings. However, the robustness challenge in deepfake detection is not well addressed. Most detectors fail to identify deepfakes that have undergone post-processing. Observing that the conventionally adopted RGB space does not guarantee the best performance, we propose other color spaces that prove to be more effective in detecting corrupted deepfake videos. We design a robust detection approach that leverages an adaptive manipulation trace extraction network to reveal artifacts from two color spaces. To mimic practical scenarios, we conduct experiments to detect images with post-processings that are not seen in the training stage. The results demonstrate that our approach outperforms state-of-the-art methods, boosting the average detection accuracy by 7% ~ 17%.","PeriodicalId":273412,"journal":{"name":"2023 11th International Workshop on Biometrics and Forensics (IWBF)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Detecting Deepfakes in Alternative Color Spaces to Withstand Unseen Corruptions\",\"authors\":\"Kai Zeng, Xiangyu Yu, Beibei Liu, Yu Guan, Yongjian Hu\",\"doi\":\"10.1109/IWBF57495.2023.10157416\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The adverse impact of deepfakes has recently raised world-wide concerns. Many ways of deepfake detection are published in the literature. The reported results of existing methods are generally good under known settings. 
However, the robustness challenge in deepfake detection is not well addressed. Most detectors fail to identify deepfakes that have undergone post-processing. Observing that the conventionally adopted RGB space does not guarantee the best performance, we propose other color spaces that prove to be more effective in detecting corrupted deepfake videos. We design a robust detection approach that leverages an adaptive manipulation trace extraction network to reveal artifacts from two color spaces. To mimic practical scenarios, we conduct experiments to detect images with post-processings that are not seen in the training stage. The results demonstrate that our approach outperforms state-of-the-art methods, boosting the average detection accuracy by 7% ~ 17%.\",\"PeriodicalId\":273412,\"journal\":{\"name\":\"2023 11th International Workshop on Biometrics and Forensics (IWBF)\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 11th International Workshop on Biometrics and Forensics (IWBF)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IWBF57495.2023.10157416\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 11th International Workshop on Biometrics and Forensics (IWBF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IWBF57495.2023.10157416","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Detecting Deepfakes in Alternative Color Spaces to Withstand Unseen Corruptions
The adverse impact of deepfakes has recently raised worldwide concern. Many deepfake detection methods have been published in the literature, and their reported results are generally good under known settings. However, the robustness challenge in deepfake detection is not well addressed: most detectors fail to identify deepfakes that have undergone post-processing. Observing that the conventionally adopted RGB space does not guarantee the best performance, we propose alternative color spaces that prove more effective for detecting corrupted deepfake videos. We design a robust detection approach that leverages an adaptive manipulation trace extraction network to reveal artifacts in two color spaces. To mimic practical scenarios, we conduct experiments on images with post-processing operations that are not seen in the training stage. The results demonstrate that our approach outperforms state-of-the-art methods, boosting the average detection accuracy by 7% to 17%.
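The abstract does not name which two color spaces the detector uses, so as an illustrative sketch only: a common alternative to RGB in forensic pipelines is a luminance/chrominance space such as YCbCr, where compression and blending artifacts often separate from scene content. The standard ITU-R BT.601 full-range conversion for a single pixel can be written as follows (the choice of YCbCr here is an assumption for illustration, not the paper's stated configuration):

```python
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert one RGB pixel (0-255 per channel) to full-range YCbCr
    using the ITU-R BT.601 coefficients.

    Y  carries luminance; Cb and Cr carry chrominance, centered at 128.
    A detector's preprocessing stage could apply this per pixel before
    passing the channels to a trace-extraction network.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr


# Sanity check: pure white maps to maximum luminance and neutral chroma.
y, cb, cr = rgb_to_ycbcr(255, 255, 255)
```

In practice the conversion would be applied to whole frames (e.g. via an image library) rather than per pixel; the point of the sketch is only that the transform is a fixed linear map, so routing a second color-space view into the network adds no learned parameters to the preprocessing itself.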