{"title":"Automated code transformation for distributed training of TensorFlow deep learning models","authors":"Yusung Sim , Wonho Shin , Sungho Lee","doi":"10.1016/j.scico.2024.103260","DOIUrl":null,"url":null,"abstract":"<div><div>Distributed training of deep learning models reduces training time by parallelizing training workloads across multiple GPUs. Distributed training frameworks, such as Horovod and DeepSpeed, provide APIs, and model engineers rewrite deep learning models using the APIs to parallelize their training. However, the rewriting is time-consuming and labor-intensive because it requires engineers to read and understand documents and examples of the frameworks as well as manual efforts to rewrite code.</div><div>In this paper, we propose an automated code transformation approach that transforms TensorFlow deep learning models designed for non-distributed training to models training on multiple GPUs with the Horovod framework. We closely inspect the Horovod document and code examples and identify four common training patterns of TensorFlow deep learning models. Then, we formalize code transformation rules for each training pattern. Using the rules, we implement an automated code transformation tool that takes a TensorFlow deep learning model written in Python and rewrites it with the Horovod APIs for distributed training. Through source-code level transformation, our approach enables developers to efficiently scale existing DL models to multiple GPUs. Our evaluation shows that the tool correctly transforms 15 out of 16 open-source TensorFlow deep learning models. To the best of our knowledge, our work is the first automatic transformation technique for distributing existing TensorFlow deep learning models at the source code level. We believe that our approach significantly reduces manual efforts to parallelize training of existing TensorFlow deep learning models.</div></div>","PeriodicalId":49561,"journal":{"name":"Science of Computer Programming","volume":"242 ","pages":"Article 103260"},"PeriodicalIF":1.5000,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Science of Computer Programming","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167642324001837","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Abstract
Distributed training of deep learning models reduces training time by parallelizing training workloads across multiple GPUs. Distributed training frameworks, such as Horovod and DeepSpeed, provide APIs, and model engineers rewrite deep learning models using these APIs to parallelize their training. However, the rewriting is time-consuming and labor-intensive: engineers must read and understand the frameworks' documentation and examples and then manually rewrite the code.
In this paper, we propose an automated code transformation approach that transforms TensorFlow deep learning models designed for non-distributed training into models that train on multiple GPUs with the Horovod framework. We closely inspect the Horovod documentation and code examples and identify four common training patterns of TensorFlow deep learning models. Then, we formalize code transformation rules for each training pattern. Using these rules, we implement an automated code transformation tool that takes a TensorFlow deep learning model written in Python and rewrites it with the Horovod APIs for distributed training. Through source-code-level transformation, our approach enables developers to efficiently scale existing DL models to multiple GPUs. Our evaluation shows that the tool correctly transforms 15 out of 16 open-source TensorFlow deep learning models. To the best of our knowledge, our work is the first automatic transformation technique for distributing existing TensorFlow deep learning models at the source code level. We believe that our approach significantly reduces the manual effort required to parallelize training of existing TensorFlow deep learning models.
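To make the kind of rewrite the paper automates concrete, the following is a minimal sketch of the common Keras model.fit training pattern transformed for Horovod, using only calls from Horovod's documented TensorFlow Keras API; the model, data, and hyperparameters are hypothetical placeholders rather than an example taken from the paper.

```python
# Minimal sketch of Horovod-distributed training for the Keras
# model.fit pattern. Model, data, and hyperparameters are
# hypothetical placeholders, not from the paper.
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# 1. Initialize Horovod and pin each worker process to one GPU.
hvd.init()
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Placeholder data and model, as in the original single-GPU script.
x_train = np.random.rand(1024, 784).astype('float32')
y_train = np.random.randint(0, 10, size=(1024,))
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# 2. Scale the learning rate by the worker count and wrap the
#    optimizer so gradients are averaged across all GPUs.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(optimizer=opt,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# 3. Broadcast initial variables from rank 0 so every worker starts
#    from the same state; checkpoint only on rank 0 to avoid clashes.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0),
             hvd.callbacks.MetricAverageCallback()]
if hvd.rank() == 0:
    callbacks.append(tf.keras.callbacks.ModelCheckpoint('checkpoint.h5'))

model.fit(x_train, y_train, batch_size=64, epochs=5,
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```

Such a script would then be launched with one process per GPU via Horovod's launcher, e.g. horovodrun -np 4 python train.py.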
Journal introduction:
Science of Computer Programming is dedicated to the distribution of research results in the areas of software systems development, use and maintenance, including the software aspects of hardware design.
The journal has a wide scope ranging from the many facets of methodological foundations to the details of technical issues and the aspects of industrial practice.
The subjects of interest to SCP cover the full spectrum of methods across the entire life cycle of software systems, including
• Requirements, specification, design, validation, verification, coding, testing, maintenance, metrics and renovation of software;
• Design, implementation and evaluation of programming languages;
• Programming environments, development tools, visualisation and animation;
• Management of the development process;
• Human factors in software, software for social interaction, software for social computing;
• Cyber-physical systems, and software for the interaction between the physical world and the machine;
• Software aspects of infrastructure services, system administration, and network management.