Diffusion Model Empowered Efficient Data Distillation Method for Cloud-Edge Collaboration

Ze Chai; Yijing Lin; Zhipeng Gao; Xinlei Yu; Zhiqiang Xie

IEEE Transactions on Cognitive Communications and Networking, vol. 11, no. 2, pp. 902-913, 9 January 2025. DOI: 10.1109/TCCN.2025.3527647. Available at: https://ieeexplore.ieee.org/document/10835133/
The application of AI-generated models demands substantial amounts of data, which not only increases training time and memory consumption but also strains computation and communication loads in cloud-edge collaborative environments. Especially when edge devices have limited resources and network bandwidth is constrained, efficiently handling large-scale data in cloud-edge collaboration becomes a critical issue. Data distillation compresses large datasets into smaller, synthetic datasets, reducing transmission and computation overhead while maintaining model performance. However, existing data distillation methods face two main challenges: 1) improving accuracy without compromising distillation efficiency, and 2) the risk that the distilled dataset retains backdoor attack triggers. To address these challenges, we propose an efficient data distillation method based on diffusion models, tailored for cloud-edge collaborative environments. We design a diffusion model training mechanism based on Kullback-Leibler (KL) divergence and contrastive loss to effectively distill synthetic datasets and enhance their accuracy, thereby reducing the data storage requirements on edge devices and alleviating the communication burden with the cloud. Additionally, we introduce a dynamic channel weighting method based on an adaptive attention mechanism to retain essential features during the distillation process, improving model adaptability in cloud-edge collaborative environments. For the first time, we propose an optimization method based on the diffusion model's classification loss to realize backdoor attacks on dataset distillation in edge environments. Experimental results demonstrate that our method improves the accuracy of distilled datasets by 2.8% to 11.7% compared to traditional DNN data distillation methods, and by 0.9% to 4.1% compared to existing diffusion-model-based data distillation methods. Additionally, our attack method achieves backdoor attacks while keeping the accuracy loss on the original model below 5% when the triggers are invisible.
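The abstract gives no implementation details, so the sketch below is only an illustrative reconstruction of the two training components it names: the KL-divergence-plus-contrastive-loss objective and the adaptive channel weighting. It is a minimal PyTorch sketch under assumed choices; the temperatures, the 0.5 loss weight, the InfoNCE form of the contrastive term, and the squeeze-and-excitation form of the channel attention are hypothetical stand-ins, not the authors' published method.

```python
# Illustrative sketch only: names and hyperparameters are assumptions,
# not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def kl_alignment_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence aligning predictions on synthetic data with a
    teacher trained on the full dataset (a common distillation proxy)."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)


def contrastive_loss(z_synth, z_real, temperature=0.1):
    """InfoNCE-style loss pulling each synthetic embedding toward its
    paired real embedding and away from the rest of the batch."""
    z_synth = F.normalize(z_synth, dim=-1)
    z_real = F.normalize(z_real, dim=-1)
    logits = z_synth @ z_real.t() / temperature        # (B, B) similarities
    targets = torch.arange(z_synth.size(0), device=z_synth.device)
    return F.cross_entropy(logits, targets)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style dynamic channel weighting: global
    pooling followed by a small MLP that rescales each feature channel."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                               # x: (B, C, H, W)
        weights = self.mlp(x.mean(dim=(2, 3)))          # (B, C) in [0, 1]
        return x * weights.unsqueeze(-1).unsqueeze(-1)


def distillation_objective(student_logits, teacher_logits, z_synth, z_real):
    # The abstract names a KL term plus a contrastive term; the 0.5
    # weighting here is an assumed hyperparameter.
    return (kl_alignment_loss(student_logits, teacher_logits)
            + 0.5 * contrastive_loss(z_synth, z_real))
```

In this sketch, the KL term aligns a model trained on the synthetic set with a teacher trained on the full dataset, the contrastive term keeps synthetic embeddings close to their paired real embeddings, and the attention module rescales feature channels so that the most informative ones dominate during distillation.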
Journal overview:
The IEEE Transactions on Cognitive Communications and Networking (TCCN) aims to publish high-quality manuscripts that push the boundaries of cognitive communications and networking research. Cognitive, in this context, refers to the application of perception, learning, reasoning, memory, and adaptive approaches in communication system design. The transactions welcome submissions that explore various aspects of cognitive communications and networks, focusing on innovative and holistic approaches to complex system design. Key topics covered include architecture, protocols, cross-layer design, and cognition cycle design for cognitive networks. Additionally, research on machine learning, artificial intelligence, end-to-end and distributed intelligence, software-defined networking, cognitive radios, spectrum sharing, and security and privacy issues in cognitive networks are of interest. The publication also encourages papers addressing novel services and applications enabled by these cognitive concepts.