The primary challenge in continual few-shot relation extraction is mitigating catastrophic forgetting. Prevailing strategies store a set of samples in memory and replay them, but such methods raise privacy and data-security concerns. To address this, we propose a novel rehearsal-free approach called Contrastive Weighted Prompt (CWP). This approach categorizes learnable prompts into task-generic and task-specific prompts. Task-generic prompts are shared across all tasks and are injected into the higher layers of the BERT encoder to capture general task knowledge. Task-specific prompts are generated by weighting all the prompts in a task-specific prompt pool according to their relevance to individual samples, and are injected into the lower layers of BERT to extract task-specific knowledge. Task-generic prompts retain knowledge from prior tasks, while task-specific prompts reduce mutual interference among tasks and strengthen the alignment between prompts and individual samples. To further enhance the discriminability of prompt embeddings for samples belonging to different relations, we introduce a relation-aware contrastive learning strategy. Experimental results on two standard datasets show that the proposed method outperforms baseline methods and is particularly effective at mitigating catastrophic forgetting.
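The relevance-weighting step for task-specific prompts can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of cosine similarity against per-prompt keys, and the tensor shapes are all assumptions made for the example.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_task_specific_prompt(sample_emb, prompt_keys, prompt_pool):
    """Combine every prompt in the pool, weighted by relevance to one sample.

    sample_emb  : (d,)       pooled embedding of the input sample (assumed)
    prompt_keys : (P, d)     one learnable key per prompt in the pool (assumed)
    prompt_pool : (P, L, d)  P prompts, each L tokens of dimension d
    returns     : (L, d)     the sample-conditioned task-specific prompt
    """
    # cosine similarity between the sample and each prompt key
    sims = prompt_keys @ sample_emb / (
        np.linalg.norm(prompt_keys, axis=1) * np.linalg.norm(sample_emb) + 1e-8
    )
    weights = softmax(sims)                       # (P,) relevance weights
    # relevance-weighted sum over the pool: (P,) x (P, L, d) -> (L, d)
    return np.einsum("p,pld->ld", weights, prompt_pool)

# toy usage with made-up sizes
d, P, L = 8, 4, 2
rng = np.random.default_rng(0)
prompt = weighted_task_specific_prompt(
    rng.standard_normal(d),
    rng.standard_normal((P, d)),
    rng.standard_normal((P, L, d)),
)
print(prompt.shape)
```

In an actual model, `prompt_keys` and `prompt_pool` would be learnable parameters trained jointly with the encoder, and the resulting prompt would be prepended to the token embeddings fed into the lower BERT layers.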