Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers: Latest Publications
DenseNetX and GRU for the Sussex-Huawei Locomotion-Transportation Recognition Challenge
Yida Zhu, Haiyong Luo, Runze Chen, Fang Zhao, Li Su
The Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge, organized at the HASCA Workshop of UbiComp 2020, presents a large and realistic dataset covering different locomotion and transportation activities. The goal of this human activity recognition challenge is to recognize eight modes of locomotion and transportation from 5-second frames of sensor data from a smartphone carried at an unknown position. In this paper, our team (We can fly) summarizes our submission to the competition. We propose a one-dimensional (1D) DenseNetX model, a deep learning method for transportation mode classification. We first convert sensor readings from the phone coordinate system to the navigation coordinate system. Then, we normalize each sensor using sensor-specific maxima and minima and construct a multi-channel input. Finally, a 1D DenseNetX with a Gated Recurrent Unit (GRU) model outputs the predictions. In our experiments, we used four internal datasets to train our model and achieved an averaged F1 score of 0.7848 across four validation datasets.
{"title":"DenseNetX and GRU for the sussex-huawei locomotion-transportation recognition challenge","authors":"Yida Zhu, Haiyong Luo, Runze Chen, Fang Zhao, Li Su","doi":"10.1145/3410530.3414349","DOIUrl":"https://doi.org/10.1145/3410530.3414349","url":null,"abstract":"The Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge organized at the HASCA Workshop of UbiComp 2020 presents a large and realistic dataset with different activities and transportation. The goal of this human activity recognition challenge is to recognize eight modes of locomotion and transportation from 5-second frames of sensor data of a smartphone carried in the unknown position. In this paper, our team (We can fly) summarize our submission to the competition. We proposed a one-dimensional (1D) DenseNetX model, a deep learning method for transportation mode classification. We first convert sensor readings from the phone coordinate system to the navigation coordinate system. Then, we normalized each sensor using different maximums and minimums and construct multi-channel sensor input. Finally, 1D DenseNetX with the Gated Recurrent Unit (GRU) model output the predictions. In the experiment, we utilized four internal datasets for training our model and achieved averaged F1 score of 0.7848 on four valid datasets.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77009143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multi-View Architecture for the SHL Challenge
Massinissa Hamidi, A. Osmani, Pegah Alizadeh
To recognize locomotion and transportation modes in a user-independent manner with an unknown target phone position, we (team Eagles) propose an approach based on two main steps: reducing the impact of the systematic effects that stem from each phone position, followed by recognizing the activity itself. The architecture is composed of three groups of neural networks organized in the following order: the first group recognizes the source (phone position), the second group normalizes the data to neutralize the impact of the source on the activity learning process, and the last group recognizes the activity itself. We perform extensive experiments, and the preliminary results encourage us to pursue this direction of learning the source and the activity separately in order to reduce phone-position biases.
{"title":"A multi-view architecture for the SHL challenge","authors":"Massinissa Hamidi, A. Osmani, Pegah Alizadeh","doi":"10.1145/3410530.3414351","DOIUrl":"https://doi.org/10.1145/3410530.3414351","url":null,"abstract":"To recognize locomotion and transportation modes in a user-independent manner with an unknown target phone position, we (team Eagles) propose an approach based on two main steps: reduction of the impact of regular effects that stem from each phone position, followed by the recognition of the appropriate activity. The general architecture is composed of three groups of neural networks organized in the following order. The first group allows the recognition of the source, the second group allows the normalization of data to neutralize the impact of the source on the activity learning process, and the last group allows the recognition of the activity itself. We perform extensive experiments and the preliminary results encourage us to follow this direction, including the source learning to reduce the phone position's biases and activity separately.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83393815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Respiratory Events Screening Using Consumer Smartwatches
Illia Fedorin, Kostyantyn Slyusarenko, Margaryta Nastenko
Respiratory-related events (RE) during nocturnal sleep disturb the natural physiological pattern of sleep. These events include all types of apnea and hypopnea, respiratory-event-related arousals, and snoring. Breath analysis has taken on particular importance with the COVID-19 pandemic. The proposed algorithm is a deep learning model with long short-term memory (LSTM) cells that detects RE in each 1-minute epoch of nocturnal sleep. Our approach provides the basis for smartwatch-based respiratory-related sleep pattern analysis (epoch-by-epoch classification accuracy is greater than 80%) and can be applied to screening for potential risk of respiratory-related diseases: the mean absolute error of AHI estimation is about 6.5 events/h on a test set that includes participants with all levels of apnea severity, and two-class screening accuracy (AHI threshold of 15 events/h) is greater than 90%.
{"title":"Respiratory events screening using consumer smartwatches","authors":"Illia Fedorin, Kostyantyn Slyusarenko, Margaryta Nastenko","doi":"10.1145/3410530.3414399","DOIUrl":"https://doi.org/10.1145/3410530.3414399","url":null,"abstract":"Respiratory related events (RE) during nocturnal sleep disturb the natural physiological pattern of sleep. This events may include all types of apnea and hypopnea, respiratory-event-related arousals and snoring. The particular importance of breath analysis is currently associated with the COVID-19 pandemic. The proposed algorithm is a deep learning model with long short-term memory cells for RE detection for each 1 minute epoch during nocturnal sleep. Our approach provides the basis for a smartwatch based respiratory-related sleep pattern analysis (accuracy of epoch-by-epoch classification is greater than 80 %), can be applied for a potential risk of respiratory-related diseases screening (mean absolute error of AHI estimation is about 6.5 events/h on the test set, which includes participants with all types of apnea severity; two class screening accuracy (AHI threshold is 15 events/h) is greater than 90 %).","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"79 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83769477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Activity Recognition Using a Multi-Input CNN Model with FFT Spectrograms
Keiichi Yaguchi, Kazukiyo Ikarigawa, R. Kawasaki, Wataru Miyazaki, Yuki Morikawa, Chihiro Ito, M. Shuzo, Eisaku Maeda
This paper describes the activity recognition method developed by Team DSML-TDU for the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge. Since the 2018 challenge, our team has been developing human activity recognition models based on a convolutional neural network (CNN) using Fast Fourier Transform (FFT) spectrograms from mobile sensors. For the 2020 challenge, we adapted our model to different users carrying sensors at specific positions. Nine modalities of FFT spectrograms, generated from the three axes each of the linear accelerometer, gyroscope, and magnetic sensor, were used as input to our model. First, we created a CNN model to estimate the four carrying positions (Bag, Hand, Hips, and Torso) from the training and validation data; the provided test data was expected to come from the Hips position. Next, we created another (pre-trained) CNN model to estimate the eight activities from the large amount of user 1 training data (Hips). This model was then fine-tuned to other users with the small amount of validation data from users 2 and 3 (Hips). Finally, an F-measure of 96.7% was obtained under 5-fold cross-validation.
{"title":"Human activity recognition using multi-input CNN model with FFT spectrograms","authors":"Keiichi Yaguchi, Kazukiyo Ikarigawa, R. Kawasaki, Wataru Miyazaki, Yuki Morikawa, Chihiro Ito, M. Shuzo, Eisaku Maeda","doi":"10.1145/3410530.3414342","DOIUrl":"https://doi.org/10.1145/3410530.3414342","url":null,"abstract":"An activity recognition method developed by Team DSML-TDU for the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge was descrived. Since the 2018 challenge, our team has been developing human activity recognition models based on a convolutional neural network (CNN) using Fast Fourier Transform (FFT) spectrograms from mobile sensors. In the 2020 challenge, we developed our model to fit various users equipped with sensors in specific positions. Nine modalities of FFT spectrograms generated from the three axes of the linear accelerometer, gyroscope, and magnetic sensor data were used as input data for our model. First, we created a CNN model to estimate four retention positions (Bag, Hand, Hips, and Torso) from the training data and validation data. The provided test data was expected to from Hips. Next, we created another (pre-trained) CNN model to estimate eight activities from a large amount of user 1 training data (Hips). Then, this model was fine-tuned for different users by using the small amount of validation data for users 2 and 3 (Hips). Finally, an F-measure of 96.7% was obtained as a result of 5-fold-cross validation.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"56 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77467663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Action Recognition Using a Spatially Distributed Radar Setup through Micro-Doppler Signatures
Smriti Rani, A. Chowdhury, Andrew Gigie, T. Chakravarty, A. Pal
Small-form-factor, off-the-shelf radar sensor nodes are being investigated for various privacy-preserving, non-contact sensing applications. This paper presents a novel method for real-time action recognition based on a system of spatially distributed radars (panel radar). The proposed method uses two spatially distributed single-channel continuous-wave (CW) radars to classify actions. For classification, a two-layered classifier is employed on novel features: Layer I performs coarse limb-level classification, followed by finer action detection in Layer II. To validate the proposed system, data for 7 target actions was collected from 20 people. An accuracy of 88.6% was obtained, with a precision of 0.90 and a recall of 0.89, demonstrating the efficacy of this approach.
{"title":"Action recognition using spatially distributed radar setup through microdoppler signature","authors":"Smriti Rani, A. Chowdhury, Andrew Gigie, T. Chakravarty, A. Pal","doi":"10.1145/3410530.3414362","DOIUrl":"https://doi.org/10.1145/3410530.3414362","url":null,"abstract":"Small form factor off-the shelf radar sensor nodes are being investigated for various privacy preserving non-contact sensing applications. This paper, presents a novel method, based on a system of spatially distributed radar setup(panel radar), for real time action recognition. Proposed method uses spatially distributed two single channel Continuous Wave (CW) radars to classify actions. For classification, a unique two layered classifier, is employed on novel features. Layer I performs coarse limb level classification followed by finer action detection in Layer II. For validation of the proposed system, 7 actions were targeted and data was collected for 20 people. Accuracy of 88.6 % was obtained, with a precision and recall of 0.9 and 0.89 respectively, hence proving the efficacy of this novel approach.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80011979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying Label Noise in Time-Series Datasets
G. Atkinson, V. Metsis
Reliably labeled datasets are crucial to the performance of supervised learning methods, and time-series data pose additional challenges. Data points lying on borders between classes can be mislabeled due to the perceptual limitations of human labelers, and sensor measurements may not be directly interpretable by humans, so label noise cannot simply be removed by hand. As a result, time-series datasets often contain a significant amount of label noise that can degrade the performance of machine learning models. This work focuses on label noise identification and removal by extending methods previously developed for static instances to the domain of time-series data. We use a combination of deep learning and visualization algorithms to facilitate automatic noise removal. We show that our approach can identify mislabeled instances, resulting in improved classification accuracy on four synthetic and two real, publicly available human activity datasets.
{"title":"Identifying label noise in time-series datasets","authors":"G. Atkinson, V. Metsis","doi":"10.1145/3410530.3414366","DOIUrl":"https://doi.org/10.1145/3410530.3414366","url":null,"abstract":"Reliably labeled datasets are crucial to the performance of supervised learning methods. Time-series data pose additional challenges. Data points lying on borders between classes can be mislabeled due to perception limitations of human labelers. Sensor measurements may not be directly interpretable by humans. Thus label noise cannot be manually removed. As a result, time-series datasets often contain a significant amount of label noise that can degrade the performance of machine learning models. This work focuses on label noise identification and removal by extending previous methods developed for static instances to the domain of time-series data. We use a combination of deep learning and visualization algorithms to facilitate automatic noise removal. We show that our approach can identify mislabeled instances, which results in improved classification accuracy on four synthetic and two real publicly available human activity datasets.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78862112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Chatbot User Interfaces for Mood Measurement: A Study of Validity and User Experience
Helma Torkamaan, J. Ziegler
With the growth of interactive text- or voice-enabled systems such as intelligent personal assistants and chatbots, it is now possible to measure a user's mood through conversation-based interaction instead of traditional questionnaires. However, it is still unclear whether such mood measurements are valid relative to traditional measures, and whether they are engaging for users. In this paper, we compare, on smartphones, two of the most popular traditional measures of mood: the International PANAS Short Form (I-PANAS-SF) and the Affect Grid. For each of these measures, we then investigate the validity of mood measurement with a modified, chatbot-based user interface design. Our preliminary results suggest that some mood measures may not be resilient to modification and that altering them can lead to invalid, if not meaningless, results. This exploratory paper then presents and discusses four voice-based mood tracker designs and summarizes users' perception of and satisfaction with these tools.
{"title":"Exploring chatbot user interfaces for mood measurement: a study of validity and user experience","authors":"Helma Torkamaan, J. Ziegler","doi":"10.1145/3410530.3414395","DOIUrl":"https://doi.org/10.1145/3410530.3414395","url":null,"abstract":"With the growth of interactive text or voice-enabled systems, such as intelligent personal assistants and chatbots, it is now possible to easily measure a user's mood using a conversation-based interaction instead of traditional questionnaires. However, it is still unclear if such mood measurements would be valid, akin to traditional measures, and user-engaging. Using smartphones, we compare in this paper two of the most popular traditional measures of mood: International PANAS-Short Form (I-PANAS-SF) and Affect Grid. For each of these measures, we then investigate the validity of mood measurement with a modified, chatbot-based user interface design. Our preliminary results suggest that some mood measures may not be resilient to modifications and that their alteration could lead to invalid, if not meaningless results. This exploratory paper then presents and discusses four voice-based mood tracker designs and summarizes user perception of and satisfaction with these tools.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"118 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79470998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a Wearable System for Assessing Couples' Dyadic Interactions in Daily Life
George Boateng
Researchers are interested in understanding the dyadic interactions of couples as they relate to relationship quality and chronic disease management. Currently, ambulatory assessment of couples' interactions entails collecting data at random times of day. No ubiquitous system exists that leverages the dyadic nature of couples' interactions (e.g., collecting data when partners are interacting) and also performs real-time inference relevant to relationship quality and chronic disease management. In this work, we seek to develop a smartwatch system that can collect data about couples' dyadic interactions and infer and track indicators of relationship quality and chronic disease management. We plan to collect data from couples in the field and use the data to develop methods for detecting these indicators. We then plan to implement these methods as a smartwatch system and evaluate its performance in real time in everyday life through another field study. Such a system could be used by social psychology researchers to understand the social dynamics of couples in everyday life and their impact on relationship quality, and by health psychology researchers to develop and deliver behavioral interventions for couples managing chronic diseases.
{"title":"Towards a wearable system for assessing couples' dyadic interactions in daily life","authors":"George Boateng","doi":"10.1145/3410530.3414331","DOIUrl":"https://doi.org/10.1145/3410530.3414331","url":null,"abstract":"Researchers are interested in understanding the dyadic interactions of couples as they relate to relationship quality and chronic disease management. Currently, ambulatory assessment of couples' interactions entail collecting data at random times in the day. There is no ubiquitous system that leverages the dyadic nature of couples' interactions (eg. collecting data when partners are interacting) and also performs real-time inference relevant for relationship quality and chronic disease management. In this work, we seek to develop a smartwatch system that can collect data about couples' dyadic interactions, and infer and track indicators of relationship quality and chronic disease management. We plan to collect data from couples in the field and use the data to develop methods to detect the indicators. Then, we plan to implement these methods as a smartwatch system and evaluate its performance in real-time and everyday life through another field study. Such a system can be used by social psychology researchers to understand the social dynamics of couples in everyday life and their impact on relationship quality, and also by health psychology researchers for developing and delivering behavioral interventions for couples who are managing chronic diseases.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80887541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WellComp 2020: Third International Workshop on Computing for Well-Being
T. Okoshi, J. Nakazawa, JeongGil Ko, F. Kawsar, S. Pirttikangas
With the advancement of ubiquitous computing, ubicomp technology has spread deeply into our daily lives, including office work, home and housekeeping, health management, transportation, and even urban living environments. Furthermore, beyond the initial metrics of computing, such as "efficiency" and "productivity", the well-being benefits that such ubiquitous technology brings to people (users) have received considerable attention in recent years. In our third "WellComp" (Computing for Well-being) workshop, we discuss in depth the contribution of ubiquitous computing to users' well-being, covering physical, mental, and social wellness (and their combinations) from the viewpoints of the different layers of computing. Following the success of the two previous workshops, WellComp 2018 and 2019, and with a strong international organizing team spanning various ubicomp research domains, WellComp 2020 will bring together researchers and practitioners from academia and industry to explore versatile topics related to well-being and ubiquitous computing.
{"title":"WellComp 2020: third international workshop on computing for well-being","authors":"T. Okoshi, J. Nakazawa, JeongGil Ko, F. Kawsar, S. Pirttikangas","doi":"10.1145/3410530.3414614","DOIUrl":"https://doi.org/10.1145/3410530.3414614","url":null,"abstract":"With the advancements in ubiquitous computing, ubicomp technology has deeply spread into our daily lives, including office work, home and house-keeping, health management, transportation, or even urban living environments. Furthermore, beyond the initial metric of computing, such as \"efficiency\" and \"productivity\", the benefits that people (users) benefit on a well-being perspective based on such ubiquitous technology has been greatly paid attention in the recent years. In our third \"WellComp\" (Computing for Well-being) workshop, we intensively discuss about the contribution of ubiquitous computing towards users' well-being that covers physical, mental, and social wellness (and their combinations), from the viewpoints of various different layers of computing. After big success of two previous workshops WellComp 2018 and 2019, with strong international organization members in various ubicomp research domains, WellComp 2020 will bring together researchers and practitioners from the academia and industry to explore versatile topics related to well-being and ubiquitous computing.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81577319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tackling the SHL Recognition Challenge with Phone Position Detection and Nearest Neighbour Smoothing
P. Widhalm, Philipp Merz, L. Coconu, Norbert Brändle
We present the solution of team MDCA to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2020. The task is to recognize the mode of transportation from 5-second frames of smartphone sensor data from two users who wore the phone in a constant but unknown position. The training data was collected by a different user with four phones worn simultaneously at four different positions; only a small labelled dataset from the two "target" users was provided. Our solution consists of three steps: 1) detecting the phone wearing position, 2) selecting training data to create a user- and position-specific classification model, and 3) "smoothing" the predictions by identifying groups of similar data frames in the test set, which probably belong to the same class. We demonstrate the effectiveness of the processing pipeline by comparison with baseline models. Using 4-fold cross-validation, our approach achieves an average F1 score of 75.3%.
{"title":"Tackling the SHL recognition challenge with phone position detection and nearest neighbour smoothing","authors":"P. Widhalm, Philipp Merz, L. Coconu, Norbert Brändle","doi":"10.1145/3410530.3414344","DOIUrl":"https://doi.org/10.1145/3410530.3414344","url":null,"abstract":"We present the solution of team MDCA to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2020. The task is to recognize the mode of transportation from 5-second frames of smartphone sensor data from two users, who wore the phone in a constant but unknown position. The training data were collected by a different user with four phones simultaneously worn at four different positions. Only a small labelled dataset from the two \"target\" users was provided. Our solution consists of three steps: 1) detecting the phone wearing position, 2) selecting training data to create a user and position-specific classification model, and 3) \"smoothing\" the predictions by identifying groups of similar data frames in the test set, which probably belong to the same class. We demonstrate the effectiveness of the processing pipeline by comparison to baseline models. Using 4-fold cross-validation our approach achieves an average F1 score of 75.3%.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"90 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80865936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}