Diffusion Model Empowered Efficient Data Distillation Method for Cloud-Edge Collaboration

IF 7.0 | CAS Region 1 (Computer Science) | JCR Q1 (Telecommunications) | IEEE Transactions on Cognitive Communications and Networking | Pub Date: 2025-01-09 | DOI: 10.1109/TCCN.2025.3527647
Ze Chai;Yijing Lin;Zhipeng Gao;Xinlei Yu;Zhiqiang Xie
{"title":"基于扩散模型的云边缘协作高效数据蒸馏方法","authors":"Ze Chai;Yijing Lin;Zhipeng Gao;Xinlei Yu;Zhiqiang Xie","doi":"10.1109/TCCN.2025.3527647","DOIUrl":null,"url":null,"abstract":"The application of AI-generated models demands substantial amounts of data, which not only increases training time and memory consumption but also poses challenges to computation and communication loads in cloud-edge collaborative environments. Especially when edge devices have limited resources and network bandwidth is constrained, efficiently handling large-scale data in cloud-edge collaboration becomes a critical issue. Data distillation compresses large datasets into smaller, synthetic datasets, reducing transmission and computation overhead while maintaining model performance. However, existing data distillation methods face two main challenges: 1) improving accuracy without compromising distillation efficiency, and 2) the distilled dataset may retain backdoor attack triggers. To address these challenges, we propose an efficient data distillation method based on diffusion models, tailored for cloud-edge collaborative environments. We design a diffusion model training mechanism based on Kullback-Leibler (KL) divergence and contrastive loss to effectively distill synthetic datasets and enhance their accuracy, thereby reducing the data storage requirements on edge devices and alleviating communication burden with the cloud. Additionally, we introduce a dynamic channel weighting method based on an adaptive attention mechanism to retain essential features during the distillation process, improving model adaptability in cloud-edge collaborative environments. For the first time, We propose an optimization method based on diffusion model classification loss to realize backdoor attack in data set distillation in edge environment. Experimental results demonstrate that our method improves the accuracy of distilled datasets by 2.8% to 11.7% compared to traditional DNN data distillation methods and by 0.9% to 4.1% compared to existing diffusion model-based data distillation methods. Additionally, our attack methodology achieves backdoor attacks while maintaining an accuracy loss of less than 5% on the original model when the triggers are not visible.","PeriodicalId":13069,"journal":{"name":"IEEE Transactions on Cognitive Communications and Networking","volume":"11 2","pages":"902-913"},"PeriodicalIF":7.0000,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Diffusion Model Empowered Efficient Data Distillation Method for Cloud-Edge Collaboration\",\"authors\":\"Ze Chai;Yijing Lin;Zhipeng Gao;Xinlei Yu;Zhiqiang Xie\",\"doi\":\"10.1109/TCCN.2025.3527647\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The application of AI-generated models demands substantial amounts of data, which not only increases training time and memory consumption but also poses challenges to computation and communication loads in cloud-edge collaborative environments. Especially when edge devices have limited resources and network bandwidth is constrained, efficiently handling large-scale data in cloud-edge collaboration becomes a critical issue. Data distillation compresses large datasets into smaller, synthetic datasets, reducing transmission and computation overhead while maintaining model performance. 
However, existing data distillation methods face two main challenges: 1) improving accuracy without compromising distillation efficiency, and 2) the distilled dataset may retain backdoor attack triggers. To address these challenges, we propose an efficient data distillation method based on diffusion models, tailored for cloud-edge collaborative environments. We design a diffusion model training mechanism based on Kullback-Leibler (KL) divergence and contrastive loss to effectively distill synthetic datasets and enhance their accuracy, thereby reducing the data storage requirements on edge devices and alleviating communication burden with the cloud. Additionally, we introduce a dynamic channel weighting method based on an adaptive attention mechanism to retain essential features during the distillation process, improving model adaptability in cloud-edge collaborative environments. For the first time, We propose an optimization method based on diffusion model classification loss to realize backdoor attack in data set distillation in edge environment. Experimental results demonstrate that our method improves the accuracy of distilled datasets by 2.8% to 11.7% compared to traditional DNN data distillation methods and by 0.9% to 4.1% compared to existing diffusion model-based data distillation methods. Additionally, our attack methodology achieves backdoor attacks while maintaining an accuracy loss of less than 5% on the original model when the triggers are not visible.\",\"PeriodicalId\":13069,\"journal\":{\"name\":\"IEEE Transactions on Cognitive Communications and Networking\",\"volume\":\"11 2\",\"pages\":\"902-913\"},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2025-01-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Cognitive Communications and Networking\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10835133/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cognitive Communications and Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10835133/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

AI-generated models demand substantial amounts of training data, which increases training time and memory consumption and strains computation and communication loads in cloud-edge collaborative environments. When edge devices have limited resources and network bandwidth is constrained, efficiently handling large-scale data in cloud-edge collaboration becomes a critical issue. Data distillation compresses large datasets into smaller synthetic datasets, reducing transmission and computation overhead while maintaining model performance. However, existing data distillation methods face two main challenges: 1) improving accuracy without compromising distillation efficiency, and 2) the distilled dataset may retain backdoor attack triggers. To address these challenges, we propose an efficient data distillation method based on diffusion models, tailored for cloud-edge collaborative environments. We design a diffusion model training mechanism based on Kullback-Leibler (KL) divergence and contrastive loss that distills synthetic datasets effectively and improves their accuracy, thereby reducing data storage requirements on edge devices and easing the communication burden with the cloud. We also introduce a dynamic channel weighting method based on an adaptive attention mechanism to retain essential features during distillation, improving model adaptability in cloud-edge collaborative environments. For the first time, we propose an optimization method based on the diffusion model's classification loss that realizes a backdoor attack on dataset distillation in edge environments. Experimental results demonstrate that our method improves the accuracy of distilled datasets by 2.8% to 11.7% over traditional DNN-based data distillation methods and by 0.9% to 4.1% over existing diffusion-model-based data distillation methods. Moreover, our attack achieves backdoor insertion with an accuracy loss of less than 5% on the original model when the triggers are not visible.
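The abstract names a training objective that combines KL divergence with a contrastive loss, but gives no implementation details. As a rough, non-authoritative illustration, the PyTorch sketch below shows one common way such a combined objective is written: a temperature-softened KL term aligning a student's predictions on synthetic samples with a teacher's, plus a supervised-contrastive term over sample embeddings. Every name and hyperparameter here (`tau`, `temp`, `lam`, the SupCon-style formulation) is an assumption for illustration, not the authors' code.

```python
# Illustrative sketch only -- not the paper's implementation.
import torch
import torch.nn.functional as F

def distill_objective(student_logits, teacher_logits, embeddings, labels,
                      tau=2.0, temp=0.1, lam=0.5):
    """KL distillation term plus a SupCon-style contrastive term (assumed form)."""
    # KL term: align temperature-softened student and teacher predictions
    # on the synthetic samples (standard knowledge-distillation form).
    kl = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits / tau, dim=1),
        reduction="batchmean",
    ) * tau * tau

    # Contrastive term: cosine similarities over L2-normalized embeddings;
    # same-label pairs are positives, self-pairs are masked out.
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temp
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, -1e9)  # exclude self-similarity
    pos = (labels[:, None] == labels[None, :]) & ~eye
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    per_anchor = (log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)
    contrastive = -per_anchor[pos.any(dim=1)].mean()

    return kl + lam * contrastive
```

In a diffusion-based pipeline, `embeddings` could be latents of the synthesized samples and the teacher a model trained on the full dataset; how the paper actually wires these pieces together is not stated in the abstract.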
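Likewise, the "dynamic channel weighting method based on an adaptive attention mechanism" is not specified further. A squeeze-and-excitation style gate is one plausible reading: pool global context per channel, then learn per-channel weights that re-scale the feature map so salient channels are preserved during distillation. The module below is a generic sketch of that idea, not the paper's design.

```python
# Illustrative sketch only -- a generic SE-style channel gate.
import torch
import torch.nn as nn

class ChannelWeighting(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        w = x.mean(dim=(2, 3))          # squeeze: global average pooling
        w = self.gate(w)                # excite: adaptive channel weights
        return x * w[:, :, None, None]  # re-weight feature channels
```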
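Finally, the backdoor attack is described only as an optimization against the diffusion model's classification loss. A generic trigger-optimization loop against a frozen classifier, sketched below, conveys the flavor: an additive perturbation is optimized so poisoned inputs are predicted as an attacker-chosen label while the perturbation stays small enough to be inconspicuous. The loop structure, hyperparameters, and the `eps` bound are all illustrative assumptions, not the paper's procedure.

```python
# Illustrative sketch only -- generic trigger optimization, not the paper's attack.
import torch
import torch.nn.functional as F

def optimize_trigger(classifier, images, target_label, steps=200,
                     lr=0.01, eps=8 / 255):
    # Learn one additive trigger shared across the batch.
    trigger = torch.zeros_like(images[:1], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    target = torch.full((images.size(0),), target_label,
                        dtype=torch.long, device=images.device)
    for _ in range(steps):
        poisoned = (images + trigger).clamp(0, 1)
        loss = F.cross_entropy(classifier(poisoned), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            trigger.clamp_(-eps, eps)  # keep the trigger low-magnitude
    return trigger.detach()
```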
Source journal
IEEE Transactions on Cognitive Communications and Networking
Category: Computer Science - Artificial Intelligence
CiteScore: 15.50
Self-citation rate: 7.00%
Articles per year: 108
Journal overview: The IEEE Transactions on Cognitive Communications and Networking (TCCN) aims to publish high-quality manuscripts that push the boundaries of cognitive communications and networking research. Cognitive, in this context, refers to the application of perception, learning, reasoning, memory, and adaptive approaches in communication system design. The transactions welcome submissions that explore various aspects of cognitive communications and networks, focusing on innovative and holistic approaches to complex system design. Key topics covered include architecture, protocols, cross-layer design, and cognition cycle design for cognitive networks. Additionally, research on machine learning, artificial intelligence, end-to-end and distributed intelligence, software-defined networking, cognitive radios, spectrum sharing, and security and privacy issues in cognitive networks is of interest. The publication also encourages papers addressing novel services and applications enabled by these cognitive concepts.
Latest articles in this journal
Topology-Cognitive Task Offloading and Resource Allocation: A GAT-Enhanced MADRL Approach
Inception-ResNet-Crop-Based Deep Learning for Multi-Cell Intelligent Beamforming Optimization
TAAformer: Transposed Angular Attention for Channel Estimation With Fluid Antennas
Antenna Coding Design for Multi-User Transmissions Using Pixel Antennas
An Efficient Cross-Agent Spatial-Temporal Collaboration Framework for Environmental Perception in IoV