Semi-supervised learning (SSL) is becoming a mainstream paradigm in medical image segmentation, as it enables models to jointly leverage annotated and unannotated images. Despite recent progress, several challenges still undermine the reliability of SSL models: (1) the empirical distribution mismatch between labeled and unlabeled data causes reliable knowledge derived from the limited labeled data to be largely discarded; (2) the inherent cognitive biases of the model inevitably produce unreliable pseudo-labels, leading to confirmation bias. In this paper, we propose a reliable semi-supervised mutual learning framework (RSSML), which incorporates a reliable-knowledge utilization strategy into the mutual learning paradigm to address the above challenges. Specifically, we first devise a recombination-and-recovery data augmentation strategy that mutually intermixes labeled and unlabeled images. The recombined images are then fed into two subnets with entirely different network structures to promote the learning of the semantics they share. For labeled images, the prediction differences between the subnets help identify regions prone to missegmentation, and we devise a supervised discordance relearning (SDR) regularization to revisit these regions. For unlabeled images, we propose a reliability-aware cross pseudo supervision (RCPS) regularization that evaluates the reliability of the pseudo-labels from the two subnets and selects reliable ones for cross supervision. Extensive experiments on both publicly available and clinically obtained medical image datasets demonstrate the superiority of our method over existing SSL methods. The code is available at: https://github.com/1KB0/RSSML.
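The abstract does not specify how RCPS measures pseudo-label reliability; a minimal NumPy sketch, assuming per-pixel confidence thresholding as the reliability criterion (the function names, the threshold `tau`, and the masked cross-entropy loss are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def select_reliable_pseudo_labels(probs_a, probs_b, tau=0.9):
    """Toy sketch of reliability-aware cross pseudo supervision (RCPS).

    probs_a, probs_b: per-pixel class probabilities (H, W, C) from two subnets.
    tau: confidence threshold -- an assumed reliability criterion; the paper's
         actual measure is not given in the abstract.
    Returns each subnet's pseudo-labels and a boolean mask marking pixels
    deemed reliable enough to supervise the *other* subnet.
    """
    pseudo_a = probs_a.argmax(-1)           # hard pseudo-labels from subnet A
    pseudo_b = probs_b.argmax(-1)           # hard pseudo-labels from subnet B
    reliable_a = probs_a.max(-1) >= tau     # where subnet A is confident
    reliable_b = probs_b.max(-1) >= tau     # where subnet B is confident
    return pseudo_a, reliable_a, pseudo_b, reliable_b

def masked_cross_entropy(probs, targets, mask, eps=1e-8):
    """Cross-entropy averaged over reliable pixels only (one cross-supervision term)."""
    h, w, c = probs.shape
    onehot = np.eye(c)[targets]                       # (H, W, C) one-hot targets
    ce = -(onehot * np.log(probs + eps)).sum(-1)      # per-pixel cross-entropy
    return (ce * mask).sum() / max(mask.sum(), 1)     # mean over reliable pixels
```

In a full training loop each subnet would be optimized with this loss against the other subnet's reliable pseudo-labels, so that only confident predictions propagate between the two networks.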


