Interpreting the predictions of deep learning models on 3D point cloud data is an important challenge for safety-critical domains such as autonomous driving, robotics, and geospatial analysis. Existing counterfactual explainability methods often struggle with the sparsity and unordered nature of 3D point clouds. To address this, we introduce a generative framework for counterfactual explanations in 3D semantic segmentation models. Our approach leverages autoencoder-based latent representations, combined with UMAP embeddings and Delaunay triangulation, to construct a graph that enables geodesic path search between semantic classes. Candidate counterfactuals are generated by interpolating latent vectors along these paths and decoding them into plausible point clouds, while semantic plausibility is guided by the predictions of a 3D semantic segmentation model. We evaluate the framework on ShapeNet objects, demonstrating that semantically related classes yield realistic counterfactuals with minimal geometric change, whereas unrelated classes expose sharp decision boundaries and reduced plausibility. Quantitative results confirm that the method balances the interpretability metrics we define, producing counterfactuals that are both interpretable and geometrically consistent. Overall, our work demonstrates that generative counterfactuals in latent space provide a promising alternative to input-level perturbations.
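As a rough illustration of the pipeline outlined above, the following sketch builds the latent graph and searches for a geodesic interpolation path; it assumes a set of latent vectors from a pretrained autoencoder and uses umap-learn, SciPy's Delaunay triangulation, and NetworkX. All function names, parameters, and hyperparameters here are illustrative, not the exact implementation.

```python
# Minimal sketch of the latent-graph counterfactual search, assuming latent
# vectors `latents` (shape [n, d]) come from a trained autoencoder. Decoding
# and segmentation-model scoring of the candidates are omitted here.
import numpy as np
import networkx as nx
import umap                      # umap-learn
from scipy.spatial import Delaunay


def build_latent_graph(latents):
    """Embed latents with UMAP, triangulate the embedding, and weight edges by latent distance."""
    emb = umap.UMAP(n_components=2).fit_transform(latents)
    tri = Delaunay(emb)
    G = nx.Graph()
    for simplex in tri.simplices:                 # each triangle contributes three edges
        for i, j in [(0, 1), (1, 2), (0, 2)]:
            a, b = int(simplex[i]), int(simplex[j])
            # Latent-space edge weights make shortest paths approximate geodesics.
            G.add_edge(a, b, weight=float(np.linalg.norm(latents[a] - latents[b])))
    return G


def counterfactual_candidates(latents, G, src_idx, tgt_idx, steps=5):
    """Interpolate latent vectors along the geodesic path between two instances."""
    path = nx.shortest_path(G, source=src_idx, target=tgt_idx, weight="weight")
    candidates = []
    for a, b in zip(path[:-1], path[1:]):
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            candidates.append((1.0 - t) * latents[a] + t * latents[b])
    candidates.append(latents[path[-1]])
    # Each candidate would then be decoded into a point cloud and scored
    # for semantic plausibility with the 3D segmentation model.
    return candidates
```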