Direct Adversarial Latent Estimation to Evaluate Decision Boundary Complexity in Black Box Models

Ashley S. Dale;Lauren Christopher
DOI: 10.1109/TAI.2024.3455308
Journal: IEEE Transactions on Artificial Intelligence, vol. 5, no. 12, pp. 6043-6053
Published: 2024-09-06
URL: https://ieeexplore.ieee.org/document/10669090/
Citations: 0

Abstract

A trustworthy artificial intelligence (AI) model should be robust to perturbed data, where robustness correlates with the dimensionality and linearity of feature representations in the model latent space. Existing methods for evaluating feature representations in the latent space are restricted to white-box models. In this work, we introduce direct adversarial latent estimation (DALE) for evaluating the robustness of feature representations and decision boundaries of target black-box models. A surrogate latent space is created using a variational autoencoder (VAE) trained on a dataset disjoint from that of an object classification backbone; the VAE latent space is then traversed to create sets of adversarial images. An object classification model is trained using transfer learning on the VAE image reconstructions, and then classifies the instances in each adversarial image set. We propose that the number of times the classification changes within an image set indicates the complexity of the decision boundaries in the classifier latent space; more complex decision boundaries are found to be more robust. This is confirmed by comparing the DALE distributions to the degradation of the classifier F1 scores in the presence of adversarial attacks. This work enables the first comparisons of latent-space complexity between black-box models by relating model robustness to complex decision boundaries.
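The core measurement the abstract describes — traverse the surrogate VAE latent space between two points, classify the decoded image at each step, and count how often the predicted label flips — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the decoder and black-box classifier are replaced by a toy threshold function, and all names (`traverse_latent`, `count_label_changes`, `toy_classifier`) are hypothetical.

```python
import numpy as np

def count_label_changes(labels):
    """Count how many times the predicted label switches along a traversal.

    In DALE terms, more switches along the same path suggest a more
    complex (more folded) decision boundary in the latent space.
    """
    labels = np.asarray(labels)
    return int(np.sum(labels[1:] != labels[:-1]))

def traverse_latent(z_start, z_end, steps=50):
    """Linearly interpolate between two points in the surrogate latent space.

    In the paper's pipeline each interpolated point would be decoded by the
    VAE into an image before classification; here we classify z directly.
    """
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * z_start + t * z_end

# Toy stand-in for the black-box classifier: a single linear boundary
# (threshold on the first latent coordinate), so a straight-line traversal
# that crosses it should produce exactly one label change.
def toy_classifier(z):
    return (z[:, 0] > 0.0).astype(int)

z_a = np.array([-2.0, 0.5])   # latent point classified as class 0
z_b = np.array([3.0, -1.0])   # latent point classified as class 1
path = traverse_latent(z_a, z_b, steps=50)
labels = toy_classifier(path)
print(count_label_changes(labels))  # -> 1 (the single boundary is crossed once)
```

Repeating this over many latent-space paths yields a distribution of crossing counts per model, which is the quantity the paper compares against F1 degradation under adversarial attack.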