{"title":"Adaptive Offloading of Transformer Inference for Weak Edge Devices with Masked Autoencoders","authors":"Tao Liu, Peng Li, Yu Gu, Peng Liu, Hao Wang","doi":"10.1145/3639824","DOIUrl":null,"url":null,"abstract":"<p>Transformer is a popular machine learning model used by many intelligent applications in smart cities. However, it has high computational complexity and it would be hard to deploy it in weak-edge devices. This paper presents a novel two-round offloading scheme, called A-MOT, for efficient transformer inference. A-MOT only samples a small part of image data and sends it to edge servers, with negligible computational overhead at edge devices. The image is recovered by the server with the masked autoencoder (MAE) before the inference. In addition, an SLO-adaptive module is intended to achieve personalized transmission and effective bandwidth utilization. To avoid the large overhead on the repeat inference in the second round, A-MOT further contains a lightweight inference module to save inference time in the second round. Extensive experiments have been conducted to verify the effectiveness of the A-MOT.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"1 1","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2024-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Sensor Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3639824","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
The transformer is a popular machine learning model used by many intelligent applications in smart cities. However, its high computational complexity makes it hard to deploy on weak edge devices. This paper presents A-MOT, a novel two-round offloading scheme for efficient transformer inference. A-MOT samples only a small portion of the image data and sends it to edge servers, incurring negligible computational overhead on the edge devices. The server recovers the full image with a masked autoencoder (MAE) before inference. In addition, an SLO-adaptive module provides personalized transmission and effective bandwidth utilization. To avoid the large overhead of repeated inference, A-MOT further contains a lightweight inference module that saves inference time in the second round. Extensive experiments verify the effectiveness of A-MOT.
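As a rough illustration of the edge-side sampling step described in the abstract, the sketch below keeps only a small random subset of image patches; on the server, a masked autoencoder could reconstruct the full image from these patches before inference. The patch size, keep ratio, and function names here are hypothetical assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of MAE-style patch sampling on a weak edge device.
# Assumptions (not from the paper): 224x224 RGB input, 16x16 patches,
# and a 25% keep ratio; the actual A-MOT sampling policy may differ.
import numpy as np

PATCH = 16          # patch side length (assumed, as in typical MAE setups)
KEEP_RATIO = 0.25   # fraction of patches transmitted (hypothetical value)

def sample_patches(image: np.ndarray, rng: np.random.Generator):
    """Split the image into patches and keep a random subset.

    Returns the kept patches and their indices; a server-side MAE would
    use them to reconstruct the full image before running inference.
    """
    h, w, c = image.shape
    gh, gw = h // PATCH, w // PATCH
    # Rearrange the image into (num_patches, PATCH * PATCH * c).
    patches = (image[:gh * PATCH, :gw * PATCH]
               .reshape(gh, PATCH, gw, PATCH, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, -1))
    n_keep = max(1, int(KEEP_RATIO * patches.shape[0]))
    keep_idx = np.sort(rng.choice(patches.shape[0], size=n_keep, replace=False))
    return patches[keep_idx], keep_idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
    kept, idx = sample_patches(img, rng)
    # Only `kept` and `idx` would need to be transmitted to the edge server.
    print(kept.shape, idx[:5])
```

Only the kept patches and their indices cross the network, which is what keeps the edge-side overhead and the uplink traffic small in this kind of scheme.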
Journal Introduction:
ACM Transactions on Sensor Networks (TOSN) is a central ACM publication in the interdisciplinary area of sensor networks, spanning disciplines from signal processing, networking and protocols, embedded systems, and information management to distributed algorithms. It covers research contributions that introduce new concepts, techniques, analyses, or architectures, as well as applied contributions that report on the development of new tools and systems or on experiences and experiments with high-impact, innovative applications. The Transactions places special emphasis on systemic approaches to sensor networks as well as on fundamental contributions.