Inter-model interpretability: Self-supervised models as a case study

Array · IF 2.3 · Q2 (Computer Science, Theory & Methods) · Pub Date: 2024-05-18 · DOI: 10.1016/j.array.2024.100350
Ahmad Mustapha, Wael Khreich, Wes Masri
{"title":"模型间的可解释性:自我监督模型案例研究","authors":"Ahmad Mustapha ,&nbsp;Wael Khreich ,&nbsp;Wes Masri","doi":"10.1016/j.array.2024.100350","DOIUrl":null,"url":null,"abstract":"<div><p>Since early machine learning models, metrics such as accuracy and precision have been the de facto way to evaluate and compare trained models. However, a single metric number does not fully capture model similarities and differences, especially in the computer vision domain. A model with high accuracy on a certain dataset might provide a lower accuracy on another dataset without further insights. To address this problem, we build on a recent interpretability technique called Dissect to introduce <em>inter-model interpretability</em>, which determines how models relate or complement each other based on the visual concepts they have learned (such as objects and materials). Toward this goal, we project 13 top-performing self-supervised models into a Learned Concepts Embedding (LCE) space that reveals proximities among models from the perspective of learned concepts. We further crossed this information with the performance of these models on four computer vision tasks and 15 datasets. The experiment allowed us to categorize the models into three categories and revealed the type of visual concepts different tasks required for the first time. This is a step forward for designing cross-task learning algorithms.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"22 ","pages":"Article 100350"},"PeriodicalIF":2.3000,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S259000562400016X/pdfft?md5=33f9642cc8597d6783b926660acecf8c&pid=1-s2.0-S259000562400016X-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Inter-model interpretability: Self-supervised models as a case study\",\"authors\":\"Ahmad Mustapha ,&nbsp;Wael Khreich ,&nbsp;Wes Masri\",\"doi\":\"10.1016/j.array.2024.100350\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Since early machine learning models, metrics such as accuracy and precision have been the de facto way to evaluate and compare trained models. However, a single metric number does not fully capture model similarities and differences, especially in the computer vision domain. A model with high accuracy on a certain dataset might provide a lower accuracy on another dataset without further insights. To address this problem, we build on a recent interpretability technique called Dissect to introduce <em>inter-model interpretability</em>, which determines how models relate or complement each other based on the visual concepts they have learned (such as objects and materials). Toward this goal, we project 13 top-performing self-supervised models into a Learned Concepts Embedding (LCE) space that reveals proximities among models from the perspective of learned concepts. We further crossed this information with the performance of these models on four computer vision tasks and 15 datasets. The experiment allowed us to categorize the models into three categories and revealed the type of visual concepts different tasks required for the first time. 
This is a step forward for designing cross-task learning algorithms.</p></div>\",\"PeriodicalId\":8417,\"journal\":{\"name\":\"Array\",\"volume\":\"22 \",\"pages\":\"Article 100350\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-05-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S259000562400016X/pdfft?md5=33f9642cc8597d6783b926660acecf8c&pid=1-s2.0-S259000562400016X-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Array\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S259000562400016X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S259000562400016X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Since the earliest machine learning models, metrics such as accuracy and precision have been the de facto way to evaluate and compare trained models. However, a single metric does not fully capture model similarities and differences, especially in the computer vision domain. A model with high accuracy on one dataset might achieve lower accuracy on another, and the metric alone offers no insight into why. To address this problem, we build on a recent interpretability technique called Dissect to introduce inter-model interpretability, which determines how models relate to or complement each other based on the visual concepts they have learned (such as objects and materials). Toward this goal, we project 13 top-performing self-supervised models into a Learned Concepts Embedding (LCE) space that reveals proximities among models from the perspective of learned concepts. We then cross-reference this information with the performance of these models on four computer vision tasks and 15 datasets. The experiment allowed us to categorize the models into three groups and revealed, for the first time, the types of visual concepts that different tasks require. This is a step toward designing cross-task learning algorithms.
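To make the LCE idea concrete, the sketch below shows one way such an embedding could be built: each model is represented by a vector of concept counts obtained from a Dissect-style analysis, and the vectors are compared and projected into two dimensions. The model names, concept inventory, and counts are hypothetical placeholders, and the paper's actual LCE construction may differ; this is a minimal illustration under those assumptions, not the authors' implementation.

```python
# Minimal sketch of a Learned Concepts Embedding (LCE).
# Assumes each model has already been analyzed with a Dissect-style tool that
# counts how many units detect each visual concept; the names and counts below
# are illustrative placeholders, not the paper's measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical concept inventory (objects, materials, ...) and per-model counts.
concepts = ["dog", "car", "grass", "sky", "wood", "metal"]
concept_counts = {
    "simclr":  [12, 4, 9, 7, 3, 2],
    "byol":    [10, 6, 8, 8, 2, 3],
    "moco_v2": [3, 11, 2, 4, 9, 10],
}

names = list(concept_counts)
X = np.array([concept_counts[n] for n in names], dtype=float)

# Normalize counts so models with different numbers of interpretable units
# remain comparable.
X /= X.sum(axis=1, keepdims=True)

# Pairwise proximity between models in concept space.
similarity = cosine_similarity(X)

# Project models into a 2-D embedding to visualize their proximities.
embedding = PCA(n_components=2).fit_transform(X)

for name, (x, y) in zip(names, embedding):
    print(f"{name}: ({x:+.3f}, {y:+.3f})")
print("cosine similarity matrix:\n", np.round(similarity, 3))
```

With real dissection results in place of the placeholder counts, nearby points in such an embedding would correspond to models that learned similar visual concepts, which is the kind of proximity the LCE space is meant to expose.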

Source journal: Array (Computer Science, General Computer Science)
CiteScore: 4.40
Self-citation rate: 0.00%
Articles published: 93
Review time: 45 days
Latest articles in this journal
SAMU-Net: A dual-stage polyp segmentation network with a custom attention-based U-Net and segment anything model for enhanced mask prediction
Combining computational linguistics with sentence embedding to create a zero-shot NLIDB
Development of automatic CNC machine with versatile applications in art, design, and engineering
Dual-model approach for one-shot lithium-ion battery state of health sequence prediction
Maximizing influence via link prediction in evolving networks