SFML: A personalized, efficient, and privacy-preserving collaborative traffic classification architecture based on split learning and mutual learning

Journal: Future Generation Computer Systems: The International Journal of eScience (Q1, COMPUTER SCIENCE, THEORY & METHODS; impact factor 6.2; Region 2, Computer Science)
DOI: 10.1016/j.future.2024.107487
Published: 2024-08-23 (Journal Article)
Available at: https://www.sciencedirect.com/science/article/pii/S0167739X24004436
Traffic classification is essential for network management and optimization, enhancing user experience, network performance, and security. However, evolving technologies and complex network environments pose challenges. Recently, researchers have turned to machine learning for traffic classification due to its ability to automatically extract and distinguish traffic features, outperforming traditional methods in handling complex patterns and environmental changes while maintaining high accuracy. Federated learning, a distributed learning approach, enables model training without revealing original data, making it appealing for traffic classification to safeguard user privacy and data security. However, applying it to this task poses two challenges. Firstly, common client devices like routers and switches have limited computing resources, which can hinder efficient training and increase time costs. Secondly, real-world applications often demand personalized models and tasks for clients, posing further complexities. To address these issues, we propose Split Federated Mutual Learning (SFML), an innovative federated learning architecture designed for traffic classification that combines split learning and mutual learning. In SFML, each client maintains two models: a privacy model for the local task and a public model for the global task. These two models learn from each other through knowledge distillation. Furthermore, by leveraging split learning, we offload most of the computational tasks to the server, significantly reducing the computational burden on the client. Experimental results demonstrate that SFML outperforms typical training architectures in terms of convergence speed, model performance, and privacy protection. Not only does SFML improve training efficiency, but it also satisfies the personalized needs of clients and reduces their computational workload and communication overhead, providing users with a superior network experience.
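The abstract describes the core SFML mechanism as two models per client — a privacy model for the local task and a public model for the global task — that teach each other through knowledge distillation. The paper's implementation is not reproduced here, but the mutual-distillation step can be sketched as bidirectional KL-divergence terms between the two models' softened predictions. All function names and the temperature parameter below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the last axis; higher T softens the
    # distribution, which is conventional in knowledge distillation.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch; eps guards against log(0).
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

def mutual_distillation_losses(logits_privacy, logits_public, T=2.0):
    """Distillation terms for one batch: each model treats the other's
    softened output as a teacher signal. In a full training loop these
    would be added to each model's ordinary cross-entropy loss."""
    p_privacy = softmax(logits_privacy, T)
    p_public = softmax(logits_public, T)
    loss_privacy = kl_divergence(p_public, p_privacy)  # public teaches privacy model
    loss_public = kl_divergence(p_privacy, p_public)   # privacy model teaches public
    return loss_privacy, loss_public
```

When the two models agree, both terms vanish; the further their predictions diverge, the stronger each pulls toward the other, which is how mutual learning lets the personalized and global models exchange knowledge without sharing raw traffic data.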
About the journal:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.