{"title":"演示:在异构边缘中发现、提供和编排机器学习推理服务","authors":"Roberto Morabito, M. Chiang","doi":"10.1109/ICDCS51616.2021.00115","DOIUrl":null,"url":null,"abstract":"In recent years, the research community started to extensively study how edge computing can enhance the provisioning of a seamless and performing Machine Learning (ML) experience. Boosting the performance of ML inference at the edge became a driving factor especially for enabling those use-cases in which proximity to the data sources, near real-time requirements, and need of a reduced network latency represent a determining factor. The growing demand of edge-based ML services has been also boosted by an increasing market release of small-form factor inference accelerators devices that feature, however, heterogeneous and not fully interoperable software and hardware characteristics. A key aspect that has not yet been fully investigated is how to discover and efficiently optimize the provision of ML inference services in distributed edge systems featuring heterogeneous edge inference accelerators - not neglecting also that the limited devices computation capabilities may imply the need of orchestrating the inference execution provisioning among the different system's devices. The main goal of this demo is to showcase how ML inference services can be agnostically discovered, provisioned, and orchestrated in a cluster of heterogeneous and distributed edge nodes.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Demo: Discover, Provision, and Orchestration of Machine Learning Inference Services in Heterogeneous Edge\",\"authors\":\"Roberto Morabito, M. Chiang\",\"doi\":\"10.1109/ICDCS51616.2021.00115\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, the research community started to extensively study how edge computing can enhance the provisioning of a seamless and performing Machine Learning (ML) experience. Boosting the performance of ML inference at the edge became a driving factor especially for enabling those use-cases in which proximity to the data sources, near real-time requirements, and need of a reduced network latency represent a determining factor. The growing demand of edge-based ML services has been also boosted by an increasing market release of small-form factor inference accelerators devices that feature, however, heterogeneous and not fully interoperable software and hardware characteristics. A key aspect that has not yet been fully investigated is how to discover and efficiently optimize the provision of ML inference services in distributed edge systems featuring heterogeneous edge inference accelerators - not neglecting also that the limited devices computation capabilities may imply the need of orchestrating the inference execution provisioning among the different system's devices. 
The main goal of this demo is to showcase how ML inference services can be agnostically discovered, provisioned, and orchestrated in a cluster of heterogeneous and distributed edge nodes.\",\"PeriodicalId\":222376,\"journal\":{\"name\":\"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDCS51616.2021.00115\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCS51616.2021.00115","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Demo: Discover, Provision, and Orchestration of Machine Learning Inference Services in Heterogeneous Edge
In recent years, the research community has extensively studied how edge computing can provide a seamless and performant Machine Learning (ML) experience. Improving the performance of ML inference at the edge has become a driving factor, especially for use cases in which proximity to data sources, near-real-time requirements, and reduced network latency are decisive. The growing demand for edge-based ML services has also been fueled by the increasing market availability of small-form-factor inference accelerator devices, which, however, exhibit heterogeneous and not fully interoperable software and hardware characteristics. A key aspect that has not yet been fully investigated is how to discover and efficiently optimize the provisioning of ML inference services in distributed edge systems equipped with such heterogeneous inference accelerators, bearing in mind that the devices' limited computation capabilities may require orchestrating inference execution across the system's different devices. The main goal of this demo is to showcase how ML inference services can be discovered, provisioned, and orchestrated in a hardware-agnostic way across a cluster of heterogeneous, distributed edge nodes.
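The abstract does not describe the system's internals, but the workflow it names (discover, select, provision across heterogeneous accelerators) can be illustrated with a minimal sketch. The Python below is not the authors' implementation: the node names, capability fields, and the load-based selection rule are assumptions made purely for illustration of one possible discover/select/provision loop.

```python
"""Illustrative sketch (assumed, not the paper's system): hardware-agnostic
discovery and selection of ML inference services in a heterogeneous edge
cluster."""

from dataclasses import dataclass, field


@dataclass
class EdgeNode:
    """An edge node advertising its inference capabilities (hypothetical schema)."""
    name: str
    accelerator: str                            # e.g. "gpu", "tpu", "vpu", "cpu-only"
    models: set = field(default_factory=set)    # models this node can serve
    load: float = 0.0                           # current utilization in [0, 1]


def discover(cluster, model):
    """Return nodes that advertise the requested model, regardless of which
    accelerator they use (hardware-agnostic discovery)."""
    return [n for n in cluster if model in n.models]


def select(candidates):
    """Pick the least-loaded candidate; a real orchestrator would also weigh
    latency, accelerator throughput, and energy constraints."""
    return min(candidates, key=lambda n: n.load, default=None)


def provision(node, model, payload):
    """Stand-in for dispatching the inference request to the chosen node
    (e.g. over HTTP/gRPC in an actual deployment)."""
    node.load = min(1.0, node.load + 0.1)       # account for the new request
    return f"{model} inference on {payload!r} scheduled on {node.name} ({node.accelerator})"


if __name__ == "__main__":
    # Hypothetical cluster of heterogeneous accelerator devices.
    cluster = [
        EdgeNode("edge-a", "gpu", {"mobilenet_v2", "yolo_v4"}, load=0.7),
        EdgeNode("edge-b", "tpu", {"mobilenet_v2"}, load=0.2),
        EdgeNode("edge-c", "cpu-only", {"mobilenet_v2"}, load=0.1),
    ]

    candidates = discover(cluster, "mobilenet_v2")
    node = select(candidates)
    if node is not None:
        print(provision(node, "mobilenet_v2", "camera_frame_001"))
```

In this toy version the orchestration decision reduces to a single load comparison; the point of the demo, by contrast, is that such discovery and placement must work across devices whose software stacks and accelerators are not mutually interoperable.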