Models Are Codes: Towards Measuring Malicious Code Poisoning Attacks on Pre-trained Model Hubs

Jian Zhao, Shenao Wang, Yanjie Zhao, Xinyi Hou, Kailong Wang, Peiming Gao, Yuanchao Zhang, Chen Wei, Haoyu Wang
{"title":"Models Are Codes: Towards Measuring Malicious Code Poisoning Attacks on Pre-trained Model Hubs","authors":"Jian Zhao, Shenao Wang, Yanjie Zhao, Xinyi Hou, Kailong Wang, Peiming Gao, Yuanchao Zhang, Chen Wei, Haoyu Wang","doi":"arxiv-2409.09368","DOIUrl":null,"url":null,"abstract":"The proliferation of pre-trained models (PTMs) and datasets has led to the\nemergence of centralized model hubs like Hugging Face, which facilitate\ncollaborative development and reuse. However, recent security reports have\nuncovered vulnerabilities and instances of malicious attacks within these\nplatforms, highlighting growing security concerns. This paper presents the\nfirst systematic study of malicious code poisoning attacks on pre-trained model\nhubs, focusing on the Hugging Face platform. We conduct a comprehensive threat\nanalysis, develop a taxonomy of model formats, and perform root cause analysis\nof vulnerable formats. While existing tools like Fickling and ModelScan offer\nsome protection, they face limitations in semantic-level analysis and\ncomprehensive threat detection. To address these challenges, we propose MalHug,\nan end-to-end pipeline tailored for Hugging Face that combines dataset loading\nscript extraction, model deserialization, in-depth taint analysis, and\nheuristic pattern matching to detect and classify malicious code poisoning\nattacks in datasets and models. In collaboration with Ant Group, a leading\nfinancial technology company, we have implemented and deployed MalHug on a\nmirrored Hugging Face instance within their infrastructure, where it has been\noperational for over three months. During this period, MalHug has monitored\nmore than 705K models and 176K datasets, uncovering 91 malicious models and 9\nmalicious dataset loading scripts. These findings reveal a range of security\nthreats, including reverse shell, browser credential theft, and system\nreconnaissance. This work not only bridges a critical gap in understanding the\nsecurity of the PTM supply chain but also provides a practical, industry-tested\nsolution for enhancing the security of pre-trained model hubs.","PeriodicalId":501278,"journal":{"name":"arXiv - CS - Software Engineering","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09368","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The proliferation of pre-trained models (PTMs) and datasets has led to the emergence of centralized model hubs like Hugging Face, which facilitate collaborative development and reuse. However, recent security reports have uncovered vulnerabilities and instances of malicious attacks within these platforms, highlighting growing security concerns. This paper presents the first systematic study of malicious code poisoning attacks on pre-trained model hubs, focusing on the Hugging Face platform. We conduct a comprehensive threat analysis, develop a taxonomy of model formats, and perform root cause analysis of vulnerable formats. While existing tools like Fickling and ModelScan offer some protection, they face limitations in semantic-level analysis and comprehensive threat detection. To address these challenges, we propose MalHug, an end-to-end pipeline tailored for Hugging Face that combines dataset loading script extraction, model deserialization, in-depth taint analysis, and heuristic pattern matching to detect and classify malicious code poisoning attacks in datasets and models. In collaboration with Ant Group, a leading financial technology company, we have implemented and deployed MalHug on a mirrored Hugging Face instance within their infrastructure, where it has been operational for over three months. During this period, MalHug has monitored more than 705K models and 176K datasets, uncovering 91 malicious models and 9 malicious dataset loading scripts. These findings reveal a range of security threats, including reverse shell, browser credential theft, and system reconnaissance. This work not only bridges a critical gap in understanding the security of the PTM supply chain but also provides a practical, industry-tested solution for enhancing the security of pre-trained model hubs.
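The core risk behind "models are codes" is that widely used serialization formats such as Python pickle (the basis of PyTorch .bin/.pt checkpoints) execute code during deserialization. As a rough illustration of the kind of heuristic pattern matching the abstract describes, the sketch below scans a pickle stream for imports of commonly abused callables. This is a minimal, hypothetical example, not the MalHug implementation: the function name, the deny-list of callables, and the checkpoint filename are assumptions made for illustration only.

```python
import pickletools
import zipfile

# Illustrative deny-list of callables frequently abused in poisoned checkpoints
# (reverse shells, command execution, credential exfiltration).
SUSPICIOUS_CALLABLES = {
    ("os", "system"),
    ("posix", "system"),
    ("subprocess", "Popen"),
    ("subprocess", "run"),
    ("builtins", "eval"),
    ("builtins", "exec"),
    ("socket", "socket"),
}

def scan_pickle_stream(data: bytes):
    """Flag GLOBAL/STACK_GLOBAL imports of suspicious callables in a pickle payload."""
    findings = []
    recent_strings = []  # rough approximation of the pickle stack for STACK_GLOBAL
    try:
        for opcode, arg, pos in pickletools.genops(data):
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                recent_strings.append(arg)
            elif opcode.name == "GLOBAL":
                # GLOBAL carries "module name" as a single space-separated argument.
                module, _, name = arg.partition(" ")
                if (module, name) in SUSPICIOUS_CALLABLES:
                    findings.append((pos, module, name))
            elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
                # STACK_GLOBAL takes module/name from the stack; this heuristic
                # uses the last two string constants seen in the stream.
                module, name = recent_strings[-2], recent_strings[-1]
                if (module, name) in SUSPICIOUS_CALLABLES:
                    findings.append((pos, module, name))
    except Exception:
        findings.append((None, "<parse-error>", "truncated or malformed pickle"))
    return findings

# Usage (illustrative): newer PyTorch checkpoints are zip archives whose embedded
# data.pkl holds the pickle stream; legacy checkpoints are raw pickle files.
with zipfile.ZipFile("pytorch_model.bin") as zf:
    pkl_name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
    for pos, module, name in scan_pickle_stream(zf.read(pkl_name)):
        print(f"suspicious import at offset {pos}: {module}.{name}")
```

Opcode-level scanning of this kind is what existing tools such as Fickling and ModelScan perform; the paper's point is that such syntactic checks alone miss semantic-level threats, which is why MalHug layers taint analysis and heuristic matching on top of deserialization.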