Malicious users generate fake videos and images to spread misinformation and to harass and blackmail victims. A wide variety of techniques, including the combining, merging, replacement, and superimposition of photos and video recordings, are used to construct deepfakes. Spoofed audio and voice calls can likewise be generated with deepfake techniques and require specially trained models. Machine learning and deep learning are improving rapidly, and a variety of techniques and tools are employed for deepfake detection and anti-spoofing. Reliable anti-spoofing and deepfake detection, however, requires resolving existing issues such as limited generalizability, overfitting, and model complexity. To overcome these challenges, this paper introduces a knowledge distillation model. The process begins with pre-processing using a weighted median filter (WmF), which smooths local intensity variations by replacing each pixel with the weighted median of its neighboring pixels. Feature extraction is then carried out by a Dual attention-based dilated ResNeXt with Residual autoencoder (DAD-DRAE), which yields lower-dimensional feature representations. In the classification phase, an Optimized Multi-task Transformer-induced Relational knowledge distillation model (OMT-RKD) is deployed to categorize the distinct classes for anti-spoofing and deepfake detection. The hyperparameters of the classification model are tuned by the Tent chaotic Hippo optimization algorithm (TCHOA), whose chaotic map accelerates convergence and reduces parameter-tuning complexity. In the evaluation, the proposed model is trained on three datasets and achieves accuracies of 98.68%, 98.22%, and 98.44% on the Deepfake Detection Challenge (DFDC), ASVspoof, and FaceForensics++ datasets, respectively.
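As a concrete illustration of two of the named components, the sketch below shows a standard weighted median filter (the WmF pre-processing step) and a tent chaotic map of the kind typically used to diversify chaotic metaheuristics such as TCHOA. This is a minimal sketch under stated assumptions, not the paper's implementation: the 3x3 center-weighted kernel, the edge padding, and the tent-map parameter beta = 0.7 are illustrative choices not specified in the abstract, and the DAD-DRAE/OMT-RKD pipeline itself is not reproduced here.

```python
import numpy as np

def weighted_median_filter(image, weights):
    """Weighted median filter: each output pixel is the weighted median of its
    neighborhood, where the integer weight matrix gives how many times each
    neighbor's intensity is counted before the median is taken."""
    weights = np.asarray(weights, dtype=int)
    kh, kw = weights.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + kh, j:j + kw]
            # Repeat each neighbor according to its weight, then take the median.
            samples = np.repeat(window.ravel(), weights.ravel())
            out[i, j] = np.median(samples)
    return out

def tent_map(x, beta=0.7):
    """One step of the tent chaotic map (beta = 0.7 is an assumed setting),
    often used to seed or perturb candidate solutions in chaotic optimizers."""
    return x / beta if x < beta else (1.0 - x) / (1.0 - beta)

if __name__ == "__main__":
    # Smooth a noisy 8-bit grayscale frame with a center-weighted 3x3 kernel.
    frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]])
    smoothed = weighted_median_filter(frame, kernel)

    # Generate a short tent-chaotic sequence, e.g. to initialize optimizer candidates.
    x, sequence = 0.31, []
    for _ in range(5):
        x = tent_map(x)
        sequence.append(x)
    print(smoothed.shape, sequence)
```

Unlike simple averaging, the weighted median preserves edges while suppressing outlier pixel values, which is why median-type filters are commonly favored for cleaning frames before feature extraction.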
