Weakly supervised video anomaly detection (WSVAD) aims to localize frame-level anomalies using only video-level labels, offering scalability for large-scale surveillance systems. However, existing methods often struggle to adapt to previously unseen and continuously evolving anomaly patterns, limiting their practical applicability. This challenge necessitates continual learning (CL) frameworks that support incremental adaptation while preserving previously acquired knowledge. To this end, we propose a novel CL-based framework, dubbed COMMAND, for WSVAD that enables robust and adaptive anomaly detection in dynamic environments. COMMAND incorporates TempMamba, a temporal modeling unit built on Mamba blocks, which captures both short-range and long-range temporal dependencies essential for distinguishing normal from abnormal behavior. In addition, MemDualNet provides a dual-memory mechanism that retains both short-term variations and long-term contextual information, yielding more expressive temporal representations. The framework further adopts a continual learning strategy that integrates memory replay with a composite loss function comprising contrastive, focal, and multiple-instance objectives to alleviate catastrophic forgetting. Experimental results on benchmark datasets such as UCF-Crime and ShanghaiTech validate the effectiveness of the proposed approach, demonstrating superior adaptability, generalization, and anomaly localization compared to existing state-of-the-art methods.
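To make the composite objective concrete, the sketch below shows one plausible PyTorch-style combination of contrastive, focal, and multiple-instance (MIL) ranking terms as named in the abstract. It is a minimal illustration under assumed standard formulations; all function names, tensor shapes, margins, and loss weights are our own assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a composite loss combining
# contrastive, focal, and MIL ranking objectives. Hyperparameters
# (alpha, gamma, margins, k, lambda weights) are illustrative assumptions.
import torch
import torch.nn.functional as F


def focal_loss(scores, labels, alpha=0.25, gamma=2.0):
    """Binary focal loss on per-video anomaly scores in [0, 1]; labels are float 0/1."""
    bce = F.binary_cross_entropy(scores, labels, reduction="none")
    p_t = labels * scores + (1 - labels) * (1 - scores)
    alpha_t = labels * alpha + (1 - labels) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()


def mil_ranking_loss(normal_scores, abnormal_scores, k=3, margin=1.0):
    """Top-k MIL ranking: snippets of abnormal bags should outscore normal bags."""
    top_abn = abnormal_scores.topk(k, dim=-1).values.mean(dim=-1)
    top_nrm = normal_scores.topk(k, dim=-1).values.mean(dim=-1)
    return F.relu(margin - top_abn + top_nrm).mean()


def contrastive_loss(feat_a, feat_b, same_class, margin=0.5):
    """Pull same-class feature pairs together, push different-class pairs apart."""
    dist = F.pairwise_distance(feat_a, feat_b)
    pos = same_class * dist.pow(2)
    neg = (1 - same_class) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()


def composite_loss(batch, lambdas=(1.0, 1.0, 1.0)):
    """Weighted sum of the three objectives; `batch` is a dict of tensors."""
    l_con = contrastive_loss(batch["feat_a"], batch["feat_b"], batch["same_class"])
    l_foc = focal_loss(batch["video_scores"], batch["video_labels"])
    l_mil = mil_ranking_loss(batch["normal_snippet_scores"],
                             batch["abnormal_snippet_scores"])
    return lambdas[0] * l_con + lambdas[1] * l_foc + lambdas[2] * l_mil
```

In a replay-based CL setting, the same objective would typically be applied to a mixture of current-task batches and samples drawn from the memory buffer; the exact replay scheduling is not specified in the abstract.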
