D3: Differential Testing of Distributed Deep Learning With Model Generation

IF 5.6 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | IEEE Transactions on Software Engineering | Pub Date: 2024-09-16 | DOI: 10.1109/TSE.2024.3461657
Jiannan Wang, Hung Viet Pham, Qi Li, Lin Tan, Yu Guo, Adnan Aziz, Erik Meijer
Journal: IEEE Transactions on Software Engineering, vol. 51, no. 1, pp. 38-52
Publication type: Journal Article
URL: https://ieeexplore.ieee.org/document/10680992/
Citation count: 0

Abstract

Deep Learning (DL) techniques have been widely deployed in many application domains. The growth of DL models' size and complexity demands distributed training of DL models. Since DL training is complex, software implementing distributed DL training is error-prone. Thus, it is crucial to test distributed deep learning software to improve its reliability and quality. To address this issue, we propose a differential testing technique, D3, which leverages a distributed equivalence rule that we create to test distributed deep learning software. The rationale is that the same model trained with the same model input under different distributed settings should produce equivalent prediction output within certain thresholds. Divergent output indicates potential bugs in the distributed deep learning software. D3 automatically generates a diverse set of distributed settings, DL models, and model input to test distributed deep learning software. Our evaluation on two of the most popular DL libraries, i.e., PyTorch and TensorFlow, shows that D3 detects 21 bugs, including 12 previously unknown bugs.
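The distributed equivalence rule described in the abstract can be illustrated with a minimal, hypothetical sketch (not the paper's actual implementation). Here, data-parallel training is simulated by sharding a batch across workers and combining per-shard gradients so they equal the full-batch gradient; the "single-device" and "multi-worker" settings should then produce predictions that agree within a tolerance, and a larger gap would flag a potential bug. The function `train_linear` and the toy linear model are illustrative assumptions, not part of D3.

```python
import numpy as np

def train_linear(X, y, n_workers, lr=0.1, steps=50):
    """Gradient descent on MSE for a linear model, with the batch
    sharded across n_workers to simulate data-parallel training."""
    w = np.zeros(X.shape[1])
    shards = np.array_split(np.arange(len(X)), n_workers)
    for _ in range(steps):
        # Each worker computes a gradient on its shard; shards are weighted
        # by size so the combined gradient equals the full-batch gradient.
        grad = sum(
            (len(idx) / len(X)) * (2 / len(idx)) * X[idx].T @ (X[idx] @ w - y[idx])
            for idx in shards
        )
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))
y = X @ np.array([1.0, -2.0, 0.5])

w1 = train_linear(X, y, n_workers=1)   # "single-device" setting
w4 = train_linear(X, y, n_workers=4)   # "4-worker data-parallel" setting

# Equivalence oracle: predictions under different distributed settings
# must agree within a threshold; divergence indicates a potential bug.
assert np.allclose(X @ w1, X @ w4, atol=1e-6)
```

In this toy setting the two configurations differ only in floating-point summation order, so the oracle passes with a tight tolerance; real distributed DL training involves nondeterminism that motivates the paper's use of looser, carefully chosen thresholds.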
Source journal: IEEE Transactions on Software Engineering (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 9.70
Self-citation rate: 10.80%
Articles published per year: 724
Review time: 6 months
Journal description: IEEE Transactions on Software Engineering seeks contributions comprising well-defined theoretical results and empirical studies with potential impacts on software construction, analysis, or management. The scope of this Transactions extends from fundamental mechanisms to the development of principles and their application in specific environments. Specific topic areas include:
a) Development and maintenance methods and models: techniques and principles for specifying, designing, and implementing software systems, encompassing notations and process models.
b) Assessment methods: software tests, validation, reliability models, test and diagnosis procedures, software redundancy, design for error control, and measurements and evaluation of process and product aspects.
c) Software project management: productivity factors, cost models, schedule and organizational issues, and standards.
d) Tools and environments: specific tools, integrated tool environments, associated architectures, databases, and parallel and distributed processing issues.
e) System issues: hardware-software trade-offs.
f) State-of-the-art surveys: syntheses and comprehensive reviews of the historical development within specific areas of interest.