Traffic classification is essential for network management and optimization, improving user experience, network performance, and security. However, evolving technologies and increasingly complex network environments make the task challenging. Recently, researchers have turned to machine learning for traffic classification because it can automatically extract and distinguish traffic features, outperforming traditional methods on complex patterns and changing environments while maintaining high accuracy. Federated learning, a distributed learning paradigm, enables model training without exposing the original data, making it attractive for traffic classification as a way to safeguard user privacy and data security. However, applying federated learning to this task raises two challenges. First, common client devices such as routers and switches have limited computing resources, which can hinder efficient training and increase time costs. Second, real-world deployments often demand personalized models and tasks for individual clients, adding further complexity. To address these issues, we propose Split Federated Mutual Learning (SFML), an innovative federated learning architecture for traffic classification that combines split learning and mutual learning. In SFML, each client maintains two models: a privacy model for its local task and a public model for the global task; the two models learn from each other through knowledge distillation. Furthermore, by leveraging split learning, we offload most of the computation to the server, significantly reducing the burden on the client. Experimental results demonstrate that SFML outperforms typical training architectures in convergence speed, model performance, and privacy protection. SFML not only improves training efficiency but also satisfies clients' personalized needs while reducing their computational workload and communication overhead, providing users with a superior network experience.
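The mutual-learning step described above, in which each client's privacy model and public model learn from each other through knowledge distillation, can be sketched as follows. This is a minimal illustration under stated assumptions rather than the paper's exact formulation: the temperature `T`, mixing weight `alpha`, and all function names are hypothetical, and the distillation term is approximated by a KL divergence between softened output distributions, as in standard deep mutual learning.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T yields a smoother distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same classes."""
    return float(np.sum(p * np.log(p / q)))

def mutual_loss(ce_loss, own_logits, peer_logits, T=2.0, alpha=0.5):
    """One model's training objective in a mutual-learning pair (sketch).

    ce_loss     -- supervised cross-entropy already computed on labels
    own_logits  -- this model's raw outputs for the same batch
    peer_logits -- the other model's raw outputs (privacy vs. public model)
    alpha, T    -- hypothetical mixing weight and distillation temperature
    """
    p = softmax(own_logits, T)
    q = softmax(peer_logits, T)
    # Blend the task loss with a distillation term pulling the two
    # models' softened predictions toward each other; T**2 rescales
    # gradients as is conventional in distillation.
    return (1.0 - alpha) * ce_loss + alpha * (T ** 2) * kl_divergence(p, q)
```

When both models agree exactly, the distillation term vanishes and the loss reduces to the weighted cross-entropy alone; each client would apply this objective symmetrically to its privacy and public models, while split learning keeps most of each model's layers on the server.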