The right to be forgotten is an essential requirement for machine learning systems under privacy regulations such as the GDPR and CCPA. We introduce Adaptive Gradient Unlearning (AGU), a novel influence-based algorithm designed to efficiently remove the contribution of specified training data while preserving overall model utility. Unlike retraining-based methods, AGU computes parameter-level gradient sensitivity scores over the forget set to identify which weights are most influenced by the data targeted for deletion. These scores are then normalized and used to adaptively scale gradient updates, selectively erasing data influence without disrupting unrelated knowledge. Convergence is managed via dual stopping criteria based on changes in model parameters and on empirical privacy leakage, measured as prediction divergence before and after unlearning. AGU achieves strong empirical results on six benchmark datasets: MNIST, CIFAR-10, CIFAR-100, IMDB, UCI Adult, and Tiny-ImageNet-200. Compared with state-of-the-art methods such as SISA, SCRUB, AmnesiacML, SALUN, and Boundary Unlearning, with full retraining (ORTR) as a benchmark, AGU yields the best accuracy retention, the shortest unlearning time, the lowest memory overhead, and the least privacy leakage. For example, AGU retains an average accuracy of 98.3% on MNIST while unlearning four times faster and using one third of the memory of ORTR. These results position AGU as a practical, scalable approach to data deletion with privacy guarantees in the era of deep learning, and one readily extendable to federated and decentralized systems.
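The core recipe described above (per-parameter sensitivity scores over the forget set, normalization, and adaptively scaled updates with a parameter-change stopping criterion) can be sketched on a toy logistic-regression model. This is a minimal illustrative sketch only: the function names, the gradient-ascent update rule, and all hyperparameters are assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of adaptive gradient unlearning on a toy logistic
# regression. The exact update rule is an assumption; the abstract only
# specifies: (1) per-parameter sensitivity over the forget set,
# (2) normalization, (3) adaptively scaled updates, (4) a stopping
# criterion based on parameter change.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, X, y):
    """Mean gradient of the logistic loss over a batch."""
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def agu_unlearn(w, X_forget, y_forget, lr=0.5, steps=50, tol=1e-6):
    """Adaptively scaled gradient ascent on the forget-set loss.

    Sensitivity = |mean forget-set gradient| per parameter, normalized
    to [0, 1]; parameters most influenced by the forget data receive
    the largest updates, leaving other weights nearly untouched.
    """
    w = w.copy()
    for _ in range(steps):
        g = grad_logloss(w, X_forget, y_forget)
        sens = np.abs(g)
        sens = sens / (sens.max() + 1e-12)   # normalized sensitivity scores
        w_new = w + lr * sens * g            # ascend the forget-set loss
        if np.linalg.norm(w_new - w) < tol:  # parameter-change stopping rule
            break
        w = w_new
    return w
```

In practice the loop would also monitor the second stopping criterion (prediction divergence as a privacy-leakage proxy); it is omitted here to keep the sketch minimal.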
