{"title":"Selecting Resource-Efficient ML Models for Transport Mode Detection on Mobile Devices","authors":"Philipp Matthes, T. Springer","doi":"10.1109/IoTaIS56727.2022.9976004","DOIUrl":null,"url":null,"abstract":"Processing data closer to the source to minimize latency and the amount of data to be transmitted is a major driver for research on the Internet of Things (IoT). Since data processing in many IoT scenarios heavily depends on machine learning (ML), designing ML models for resource constraint devices at the edge of IoT infrastructures is one of the big challenges. Which ML model performs best highly depends on the problem domain but also on the availability of resources. Thus, to find an appropriate ML model in the broad search space of options, the trade-off between accuracy and resource consumption in terms of memory, CPU, and energy needs to be considered. However, there are ML problems where most current research focuses on accuracy, and the resource consumption of applicable models is not well investigated yet. We show that transport mode detection (TMD) is such a problem and present a case study for designing an ML model running on smartphones. To transform the search for the needle in the haystack into a structured design process, we propose an engineering workflow to systematically evolve ML model candidates, considering portability and resource consumption in addition to model accuracy. At the example of the Sussex-Huawei-Locomotion (SHL) dataset, we apply this process to multiple ML architectures and find a suitable model that convinces with high accuracy and low measured resource consumption for smartphone deployment. 
We discuss lessons learned, enabling engineers and researchers to use our workflow as a blueprint to identify solutions for their ML problems systematically.","PeriodicalId":138894,"journal":{"name":"2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IoTaIS56727.2022.9976004","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Processing data closer to the source to minimize latency and the amount of data to be transmitted is a major driver for research on the Internet of Things (IoT). Since data processing in many IoT scenarios depends heavily on machine learning (ML), designing ML models for resource-constrained devices at the edge of IoT infrastructures is one of the big challenges. Which ML model performs best depends strongly on the problem domain, but also on the availability of resources. Thus, to find an appropriate ML model in the broad search space of options, the trade-off between accuracy and resource consumption in terms of memory, CPU, and energy needs to be considered. However, there are ML problems where most current research focuses on accuracy, and the resource consumption of applicable models is not yet well investigated. We show that transport mode detection (TMD) is such a problem and present a case study on designing an ML model that runs on smartphones. To transform the search for the needle in the haystack into a structured design process, we propose an engineering workflow to systematically evolve ML model candidates, considering portability and resource consumption in addition to model accuracy. Using the Sussex-Huawei-Locomotion (SHL) dataset as an example, we apply this process to multiple ML architectures and find a suitable model that achieves high accuracy and low measured resource consumption for smartphone deployment. We discuss lessons learned, enabling engineers and researchers to use our workflow as a blueprint to systematically identify solutions for their own ML problems.
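The core trade-off the abstract describes — weighing model accuracy against memory and energy budgets when selecting a candidate — can be sketched as a simple scoring function. This is a hypothetical illustration, not the paper's actual workflow: the candidate names, metrics, weights, and budgets below are invented for the example.

```python
# Illustrative sketch of an accuracy-vs-resources trade-off for model selection.
# All candidate figures, weights, and budgets are hypothetical, not from the paper.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float    # validation accuracy in [0, 1]
    memory_mb: float   # peak memory footprint of the deployed model
    energy_mj: float   # measured energy per inference

def tradeoff_score(c, w_acc=1.0, w_mem=0.3, w_energy=0.3,
                   mem_budget_mb=50.0, energy_budget_mj=5.0):
    """Higher is better: reward accuracy, penalize normalized resource use."""
    return (w_acc * c.accuracy
            - w_mem * c.memory_mb / mem_budget_mb
            - w_energy * c.energy_mj / energy_budget_mj)

candidates = [
    Candidate("cnn_small", 0.91, 12.0, 1.8),
    Candidate("cnn_large", 0.94, 80.0, 7.5),
    Candidate("tree_ensemble", 0.88, 5.0, 0.4),
]

best = max(candidates, key=tradeoff_score)
print(best.name)  # the most accurate model need not win once resources count
```

Note how the largest (most accurate) network loses once its memory and energy costs exceed the device budgets — the kind of outcome a systematic accuracy/resource evaluation, as proposed in the paper, is meant to surface.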