{"title":"为TVM运行时启用Android NNAPI流","authors":"Ming-Yi Lai, Chia-Yu Sung, Jenq-Kuen Lee, Ming-Yu Hung","doi":"10.1145/3409390.3409393","DOIUrl":null,"url":null,"abstract":"With machine learning on the rise, mobile platforms are striving to offer inference acceleration on edge devices so that related applications can achieve satisfiable performance. With this background, this work aims at interfacing inference on Android with TVM, an inference-focusing compiler for machine learning, and NNAPI, the official neural network API provided by Android. This work presents a flow to integrate NNAPI into TVM-generated inference model with a partition algorithm to determine which parts of the model should be computed on NNAPI and which should not. Conducted experiments show that properly partitioned models can achieve significant speedup using NNAPI when compared to pure TVM-generated CPU inference. In addition, our enable flow potentially benefits both frameworks by allowing them to leverage each other in AI model deployments.","PeriodicalId":350506,"journal":{"name":"Workshop Proceedings of the 49th International Conference on Parallel Processing","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Enabling Android NNAPI Flow for TVM Runtime\",\"authors\":\"Ming-Yi Lai, Chia-Yu Sung, Jenq-Kuen Lee, Ming-Yu Hung\",\"doi\":\"10.1145/3409390.3409393\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With machine learning on the rise, mobile platforms are striving to offer inference acceleration on edge devices so that related applications can achieve satisfiable performance. With this background, this work aims at interfacing inference on Android with TVM, an inference-focusing compiler for machine learning, and NNAPI, the official neural network API provided by Android. This work presents a flow to integrate NNAPI into TVM-generated inference model with a partition algorithm to determine which parts of the model should be computed on NNAPI and which should not. Conducted experiments show that properly partitioned models can achieve significant speedup using NNAPI when compared to pure TVM-generated CPU inference. 
In addition, our enable flow potentially benefits both frameworks by allowing them to leverage each other in AI model deployments.\",\"PeriodicalId\":350506,\"journal\":{\"name\":\"Workshop Proceedings of the 49th International Conference on Parallel Processing\",\"volume\":\"108 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Workshop Proceedings of the 49th International Conference on Parallel Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3409390.3409393\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Workshop Proceedings of the 49th International Conference on Parallel Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3409390.3409393","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
With machine learning on the rise, mobile platforms strive to offer inference acceleration on edge devices so that related applications can achieve satisfactory performance. Against this background, this work interfaces inference on Android with TVM, an inference-focused compiler for machine learning, and NNAPI, the official neural network API provided by Android. We present a flow that integrates NNAPI into TVM-generated inference models, together with a partition algorithm that determines which parts of a model should be computed with NNAPI and which should not. Experiments show that properly partitioned models achieve significant speedup with NNAPI compared to pure TVM-generated CPU inference. In addition, our enabling flow potentially benefits both frameworks by allowing them to leverage each other in AI model deployments.
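
To make the partitioning idea concrete, below is a minimal sketch of how a Relay model could be split between the host CPU and an external backend using TVM's Bring-Your-Own-Codegen (BYOC) passes. This is an illustration under assumptions, not the authors' implementation: the paper's own partition algorithm is not reproduced here, and the target name "android_nnapi" is a hypothetical placeholder for an NNAPI codegen registered with TVM.

# Minimal sketch (assumptions noted above): partition a Relay module so that
# operators claimed by an external backend are grouped into separate functions,
# while everything else stays on the TVM-generated CPU path.
import tvm
from tvm import relay

def partition_for_nnapi(mod, params, target_name="android_nnapi"):
    # Bind constant parameters so annotation sees a self-contained graph.
    if params:
        mod["main"] = relay.build_module.bind_params_by_name(mod["main"], params)
    seq = tvm.transform.Sequential([
        relay.transform.InferType(),
        # Mark operators the external backend claims support for.
        relay.transform.AnnotateTarget(target_name),
        # Merge adjacent supported operators to cut host/accelerator round trips.
        relay.transform.MergeCompilerRegions(),
        # Split annotated regions into external functions; the remainder is
        # compiled by TVM for the CPU as usual.
        relay.transform.PartitionGraph(),
    ])
    return seq(mod)

After such a pass sequence, the partitioned module can be handed to relay.build as usual, with unsupported operators falling back to the TVM CPU path, which is consistent with the partitioning idea described in the abstract.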