This study advances the use of semantic information in person re-identification (ReID) by leveraging pre-trained vision-language models, addressing limitations in how current ReID systems process semantics. While recent studies have integrated CLIP into ReID, their training schemes inadvertently discard semantic information by aligning image features with person IDs only indirectly, through text-encoder embeddings. Motivated by a comprehensive empirical analysis of the role semantic information plays in pedestrian ReID, we propose MoSCE-ReID, a mixed semantic clustering expert model. The framework comprises two key components designed for attribute group feature extraction: a learnable Attribute Group Weight Extractor (AGWE) and a Mixture of LoRA Experts (MoLE) module. The final ReID decision fuses attribute group features with global features. Extensive experiments on multiple public datasets demonstrate that, by effectively incorporating attribute group semantics, our approach achieves substantial performance improvements on ReID tasks and generalizes better than existing frameworks.
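To make the architecture concrete, the sketch below shows one plausible reading of a mixture-of-LoRA-experts layer gated by attribute group weights and fused with a global feature. It is a minimal illustration under our own assumptions, not the authors' implementation: the class names (`LoRAExpert`, `AttributeGroupGate`, `MoLE`), the residual formulation, the softmax gate, and all dimensions are hypothetical stand-ins for the paper's AGWE and MoLE modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One low-rank adapter: delta(x) = B(A(x)) with rank r << dim."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # A: dim -> r
        self.up = nn.Linear(rank, dim, bias=False)    # B: r -> dim
        nn.init.zeros_(self.up.weight)                # start as a no-op residual


    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class AttributeGroupGate(nn.Module):
    """Illustrative stand-in for the paper's AGWE: soft weights per attribute group."""
    def __init__(self, dim: int, num_groups: int):
        super().__init__()
        self.proj = nn.Linear(dim, num_groups)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return F.softmax(self.proj(feat), dim=-1)     # (B, E) group weights


class MoLE(nn.Module):
    """Mixture of LoRA experts, one per attribute group, mixed by the gate."""
    def __init__(self, dim: int, num_experts: int, rank: int = 8):
        super().__init__()
        self.experts = nn.ModuleList(
            LoRAExpert(dim, rank) for _ in range(num_experts)
        )

    def forward(self, feat: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # feat: (B, D) backbone feature; gate: (B, E) attribute-group weights
        expert_out = torch.stack([e(feat) for e in self.experts], dim=1)  # (B, E, D)
        mixed = (gate.unsqueeze(-1) * expert_out).sum(dim=1)             # (B, D)
        return feat + mixed                                              # residual update


if __name__ == "__main__":
    B, D, E = 4, 768, 6                 # batch, feature dim, attribute groups (assumed)
    feat = torch.randn(B, D)            # e.g. CLIP image features
    gate = AttributeGroupGate(D, E)(feat)
    attr_feat = MoLE(D, E)(feat, gate)
    fused = torch.cat([feat, attr_feat], dim=-1)  # global + attribute-group features
    print(fused.shape)                  # torch.Size([4, 1536])
```

The final concatenation mirrors the abstract's claim that ReID decisions combine attribute group features with global features; the actual fusion operator in the paper may differ.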