The rapid evolution of malware, particularly the emergence of novel families, poses a significant challenge for conventional detection systems, which struggle to classify threats from a limited number of labeled samples. This data scarcity often leads to feature distribution mismatch and overfitting. To address this, MFDPN, a Multimodal Feature Dynamic Prototype Network, is proposed. The core of MFDPN is its use of Affinity Propagation (AP) clustering to generate adaptive dynamic prototypes, overcoming the limitations of conventional methods such as K-Means. This approach reduces prototype misalignment, enabling the model to capture discriminative features adapted to each query. The model integrates a hierarchical fusion of multimodal features, including grayscale textures, Dynamic-Link Library (DLL) information, and Application Programming Interface (API) call sequences. By leveraging AP within a contrastive learning framework, MFDPN achieves strong feature alignment and enhances inter-class separation. On the MOTIF dataset, MFDPN achieves 94.78% accuracy in the 5-way 10-shot setting, outperforming state-of-the-art methods by 4.75 percentage points. This improvement stems from better handling of the intra-class variance of polymorphic malware via dynamic prototypes, while a 25.7-fold inference speedup over K-Means-based methods highlights the efficiency gains of adaptive clustering. Cross-dataset validation on MFD-WPE (accuracies of 65.09% for 5-way 1-shot, 80.67% for 5-way 5-shot, and 82.85% for 5-way 10-shot) shows strong generalization to unseen categories, demonstrating the model's ability to transfer learned feature alignments across distributions.
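The abstract does not give implementation details of MFDPN's prototype generation. As a rough illustration of how AP clustering can yield a data-driven number of exemplars (unlike K-Means, which requires a fixed K), the following is a minimal sketch of the standard Affinity Propagation message-passing updates in Python with numpy; all names, the toy data, and the preference choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Minimal Affinity Propagation via responsibility/availability messages.

    S is an (n, n) similarity matrix whose diagonal holds each point's
    'preference' to become an exemplar.  Returns the indices of the
    exemplars, which can serve as adaptive class prototypes.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities  a(i, k)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = np.argmax(AS, axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf       # mask best to find runner-up
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * R_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))      # keep r(k,k) in the column sum
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = np.diag(A_new).copy()          # a(k,k) = sum_{i'!=k} max(0, r(i',k))
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1 - damping) * A_new
    # Point k is an exemplar when r(k,k) + a(k,k) > 0.
    return np.where(np.diag(R) + np.diag(A) > 0)[0]

# Toy "support set": two tight clusters of 3 points each.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # negative squared distances
np.fill_diagonal(S, np.median(S))                    # shared preference
exemplars = affinity_propagation(S)
```

Because the number of exemplars emerges from the data and the preference value rather than a preset K, the same routine produces more prototypes for high-variance (e.g. polymorphic) classes and fewer for compact ones, which is the property the abstract attributes to AP-based dynamic prototypes.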
