Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments

Angie Boggust; Venkatesh Sivaraman; Yannick Assogba; Donghao Ren; Dominik Moritz; Fred Hohman

IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 1, pp. 809-819
DOI: 10.1109/TVCG.2024.3456371
Published: 2024-09-10
https://ieeexplore.ieee.org/document/10672545/
Abstract
To deploy machine learning models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called COMPRESS AND COMPARE. Within a single interface, COMPRESS AND COMPARE surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how COMPRESS AND COMPARE supports common compression analysis tasks through two case studies, debugging failed compression on generative language models and identifying compression artifacts in image classification models. We further evaluate COMPRESS AND COMPARE in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression's effect on model behavior. Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and COMPRESS AND COMPARE visualizations that may generalize to broader model comparison tasks.