P. Widhalm, Philipp Merz, L. Coconu, Norbert Brändle
{"title":"利用手机位置检测和最近邻平滑来解决SHL识别难题","authors":"P. Widhalm, Philipp Merz, L. Coconu, Norbert Brändle","doi":"10.1145/3410530.3414344","DOIUrl":null,"url":null,"abstract":"We present the solution of team MDCA to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2020. The task is to recognize the mode of transportation from 5-second frames of smartphone sensor data from two users, who wore the phone in a constant but unknown position. The training data were collected by a different user with four phones simultaneously worn at four different positions. Only a small labelled dataset from the two \"target\" users was provided. Our solution consists of three steps: 1) detecting the phone wearing position, 2) selecting training data to create a user and position-specific classification model, and 3) \"smoothing\" the predictions by identifying groups of similar data frames in the test set, which probably belong to the same class. We demonstrate the effectiveness of the processing pipeline by comparison to baseline models. Using 4-fold cross-validation our approach achieves an average F1 score of 75.3%.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"90 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Tackling the SHL recognition challenge with phone position detection and nearest neighbour smoothing\",\"authors\":\"P. Widhalm, Philipp Merz, L. Coconu, Norbert Brändle\",\"doi\":\"10.1145/3410530.3414344\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present the solution of team MDCA to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2020. The task is to recognize the mode of transportation from 5-second frames of smartphone sensor data from two users, who wore the phone in a constant but unknown position. The training data were collected by a different user with four phones simultaneously worn at four different positions. Only a small labelled dataset from the two \\\"target\\\" users was provided. Our solution consists of three steps: 1) detecting the phone wearing position, 2) selecting training data to create a user and position-specific classification model, and 3) \\\"smoothing\\\" the predictions by identifying groups of similar data frames in the test set, which probably belong to the same class. We demonstrate the effectiveness of the processing pipeline by comparison to baseline models. 
Using 4-fold cross-validation our approach achieves an average F1 score of 75.3%.\",\"PeriodicalId\":7183,\"journal\":{\"name\":\"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers\",\"volume\":\"90 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3410530.3414344\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3410530.3414344","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Tackling the SHL recognition challenge with phone position detection and nearest neighbour smoothing
We present team MDCA's solution to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2020. The task is to recognize the mode of transportation from 5-second frames of smartphone sensor data from two users, each of whom carried the phone in a constant but unknown position. The training data were collected by a different user with four phones worn simultaneously at four different positions. Only a small labelled dataset from the two "target" users was provided. Our solution consists of three steps: 1) detecting the phone wearing position, 2) selecting training data to create a user- and position-specific classification model, and 3) "smoothing" the predictions by identifying groups of similar data frames in the test set, which probably belong to the same class. We demonstrate the effectiveness of the processing pipeline by comparison to baseline models. Using 4-fold cross-validation, our approach achieves an average F1 score of 75.3%.
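The abstract does not spell out how the nearest neighbour smoothing (step 3) is implemented. The sketch below only illustrates the general idea under one plausible reading: average the classifier's class probabilities over each test frame's nearest neighbours in feature space, assuming that similar frames likely share a transportation mode. Function names, the choice of k, and the use of scikit-learn are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of nearest neighbour smoothing over test-set predictions.
# Assumes per-frame feature vectors and per-frame class probabilities from a
# previously trained classifier; names and parameters are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def smooth_predictions(features, class_probs, k=10):
    """Replace each frame's class probabilities with the mean over its
    k nearest neighbours (including itself) in the test set.

    features    : (n_frames, n_features) array of per-frame features
    class_probs : (n_frames, n_classes) array of classifier outputs
    Returns the smoothed class label per frame.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(features)
    _, idx = nn.kneighbors(features)             # (n_frames, k) neighbour indices
    smoothed = class_probs[idx].mean(axis=1)     # average probabilities per neighbourhood
    return smoothed.argmax(axis=1)               # final label per frame


# Example usage with a hypothetical trained model:
# labels = smooth_predictions(test_features, model.predict_proba(test_features))
```

Averaging probabilities rather than taking a hard majority vote keeps the classifier's confidence information; either variant would fit the paper's description of grouping similar test frames that probably belong to the same class.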