While RGB-based methods have been extensively studied in Industrial Anomaly Detection (IAD), effectively incorporating point cloud data remains challenging. Alongside the prevalent memory-bank-based approaches, recent research has explored cross-modal feature mapping for multimodal IAD, achieving strong performance with efficient inference. However, while cross-modal feature mapping is effective at detecting anomalies that violate feature correspondences, it struggles to identify anomalies confined to a single modality, owing to the inherent one-to-many mapping between 2D and 3D data. To overcome this limitation, we propose Cross-modal Prediction and Intra-modal Reconstruction (CPIR), a novel multimodal anomaly detection method. First, we introduce a Bidirectional Feature Mapping (BFM) framework that combines intra-modal reconstruction with cross-modal prediction, enhancing single-modality anomaly detection while preserving effective cross-modal consistency learning. Second, we propose a novel network architecture, the Latent Bridged Modal Mapping Module (LB3M), which introduces a shared latent intermediate state to decouple cross-modal feature mapping into mappings between each modality and this shared state. Although this design was originally motivated by completing the prediction and reconstruction tasks with minimal parameters, it also enables the network to learn more comprehensive feature patterns, substantially improving anomaly detection. Experiments on the MVTec 3D-AD dataset demonstrate that CPIR outperforms state-of-the-art methods on both anomaly detection and segmentation, and excels in few-shot scenarios.
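To make the shared-latent bridging idea concrete, the following is a minimal PyTorch sketch under our own illustrative assumptions: the class name LatentBridge, the feature dimensions, and the MSE objective are placeholders, not the paper's implementation. It shows how two encoders and two decoders around a shared latent state yield all four paths (two intra-modal reconstructions and two cross-modal predictions) that the BFM framework trains jointly.

```python
import torch
import torch.nn as nn

class LatentBridge(nn.Module):
    """Illustrative sketch of a shared-latent bridging module (LB3M-style).

    Rather than learning direct RGB->3D and 3D->RGB mappings, each modality
    is encoded into a shared latent state, and decoders map that state back
    to either modality. Two encoders plus two decoders thus cover four paths:
    intra-modal reconstruction (rgb->rgb, pc->pc) and cross-modal prediction
    (rgb->pc, pc->rgb).
    """

    def __init__(self, rgb_dim=768, pc_dim=1152, latent_dim=512):
        # Feature dimensions here are assumed, e.g. patch features from
        # pretrained 2D/3D backbones; the paper does not specify these values.
        super().__init__()
        self.enc_rgb = nn.Sequential(nn.Linear(rgb_dim, latent_dim), nn.GELU())
        self.enc_pc = nn.Sequential(nn.Linear(pc_dim, latent_dim), nn.GELU())
        self.dec_rgb = nn.Linear(latent_dim, rgb_dim)
        self.dec_pc = nn.Linear(latent_dim, pc_dim)

    def forward(self, f_rgb, f_pc):
        z_rgb, z_pc = self.enc_rgb(f_rgb), self.enc_pc(f_pc)
        return {
            "rgb_recon": self.dec_rgb(z_rgb),  # intra-modal reconstruction
            "pc_recon": self.dec_pc(z_pc),     # intra-modal reconstruction
            "pc_pred": self.dec_pc(z_rgb),     # cross-modal prediction
            "rgb_pred": self.dec_rgb(z_pc),    # cross-modal prediction
        }

# Illustrative training objective: sum of reconstruction and prediction errors
# on nominal data; at test time, large residuals on any path flag anomalies,
# including those visible in only one modality.
bridge = LatentBridge()
f_rgb, f_pc = torch.randn(4, 768), torch.randn(4, 1152)
out = bridge(f_rgb, f_pc)
loss = sum(nn.functional.mse_loss(out[k], t) for k, t in
           [("rgb_recon", f_rgb), ("rgb_pred", f_rgb),
            ("pc_recon", f_pc), ("pc_pred", f_pc)])
```

One consequence of this decoupled design is parameter efficiency: the four mapping paths share two encoders and two decoders through the latent state, rather than requiring a separate network per source-target pair.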