Unsupervised 3D model retrieval and classification have attracted considerable attention due to their widespread applications. However, existing methods focus only on global representations while ignoring local saliency learning, which causes redundant distractions and insufficient complementarity across views. In addition, they neglect intra- and inter-class contextual relevance during representation learning, leading to inaccurate embedding-space partitioning and missing representative prototypes. To address these challenges, we present Instance and Prototype Contrastive Learning (IPCL), an unsupervised dual-network framework that simultaneously captures view-level local features and model-level semantic information. Specifically, we treat each view as an instance and employ inter-instance contrastive learning to extract discriminative local salient features, mitigating redundancy and enhancing cross-view complementarity. For global semantic modeling, we establish class prototypes for 3D models and propagate their semantic information to global features via a prototype-aware contrastive loss, strengthening class-level discriminability. Furthermore, we introduce a bottom-up adaptive clustering algorithm, termed voting clustering, which mines deeper semantic correlations to refine prototype selection and the embedding-space structure. Comprehensive evaluations demonstrate the superiority of IPCL: it outperforms most unsupervised methods, improving classification accuracy by 0.1% to 18.1% on ModelNet40 and by 0.1% to 16.7% on ShapeNet55, and it achieves average retrieval gains of 15.2% in NN on ModelNet40 and 16.7% in mAP under the micro setting on ShapeNet55.
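The abstract does not specify the exact form of the contrastive objectives. As a rough illustration only, instance-level contrastive learning of the kind described is commonly instantiated with an InfoNCE-style loss, where each view embedding is pulled toward its designated positive and pushed away from all other embeddings in the batch. The sketch below is a minimal NumPy version under that assumption; the function name, temperature value, and pairing scheme are hypothetical, not taken from the paper.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Hypothetical InfoNCE-style instance contrastive loss.

    anchors, positives: (N, D) embeddings; row i of `positives` is the
    positive pair for row i of `anchors`, and every other row in the
    batch serves as a negative.
    """
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; minimize their negative log-probability
    return -np.mean(np.diag(log_prob))
```

A prototype-aware variant would follow the same template with class prototypes (e.g., cluster centroids from voting clustering) replacing the per-instance positives, so each global feature is attracted to its prototype and repelled from the others.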
