Semi-supervised medical image segmentation is of significant research and practical value because it reduces dependence on labels and the cost of annotation. However, most existing algorithms lack diverse regularization strategies for effectively exploiting robust knowledge from unlabeled data, and the pseudo-label filtering they employ is often overly simplistic, which exacerbates the severe class imbalance inherent in medical images. In addition, these algorithms fail to provide robust semantic representations for contrastive learning in multi-scenario settings, making it difficult for the model to learn more discriminative semantic information. To address these issues, we propose a semi-supervised medical image segmentation algorithm that uses dual-branch mixup-decoupling confidence training to establish a dual-stream semantic link between labeled and unlabeled images, thereby alleviating semantic ambiguity. We further design a bidirectional confidence contrastive learning method that, in both directions across the feature embeddings of the two views, maximizes the consistency between similar pixels and the separation between dissimilar pixels, enabling the model to learn the key properties of intra-class similarity and inter-class separability. We conduct a series of experiments on both 2D and 3D datasets, and the results show that the proposed algorithm achieves notable segmentation performance, outperforming other recent state-of-the-art algorithms.
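To make the mixup-decoupling confidence idea concrete, the following is a minimal, illustrative sketch of one training step: a labeled and an unlabeled image are mixed, the teacher branch provides confidence-filtered pseudo-labels for the unlabeled stream, and the supervised and pseudo-supervised losses are decoupled and reweighted by the mixing coefficient. This is not the authors' released implementation; the names `student`, `teacher`, `lam`, and `tau`, and the exact loss weighting, are assumptions for illustration only.

```python
# Illustrative sketch of dual-branch mixup-decoupling confidence training.
# Assumptions: `student` and `teacher` are segmentation networks returning
# logits of shape (B, num_classes, H, W); `lam` is the mixup coefficient and
# `tau` the confidence threshold (both hypothetical placeholders).
import torch
import torch.nn.functional as F

def mixup_decoupled_step(student, teacher, x_l, y_l, x_u, lam=0.6, tau=0.9):
    """x_l: labeled images (B, C, H, W); y_l: integer masks (B, H, W);
    x_u: unlabeled images (B, C, H, W)."""
    with torch.no_grad():
        # Teacher predictions on the unlabeled stream serve as pseudo-labels.
        probs_u = torch.softmax(teacher(x_u), dim=1)
        conf_u, pseudo_u = probs_u.max(dim=1)        # per-pixel confidence and label
        mask_u = (conf_u >= tau).float()             # keep only confident pixels

    # Mix labeled and unlabeled images to form the dual-stream input.
    x_mix = lam * x_l + (1.0 - lam) * x_u
    logits_mix = student(x_mix)

    # Decoupled supervision: the ground-truth target and the filtered
    # pseudo-label target are applied separately and reweighted by lam.
    loss_l = F.cross_entropy(logits_mix, y_l)
    loss_u = (F.cross_entropy(logits_mix, pseudo_u, reduction="none") * mask_u).mean()
    return lam * loss_l + (1.0 - lam) * loss_u
```

A bidirectional confidence contrastive term would additionally pull together embeddings of same-class pixels and push apart embeddings of different-class pixels across the two views; its exact formulation follows the method described in the paper rather than this sketch.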