Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers — Latest Publications
The broad availability of smartphones, and of Inertial Measurement Units in particular, has brought them into the focus of recent research. Inertial Measurement Unit data are used for a variety of tasks; one important task is the classification of the mode of transportation. In this paper, we present a deep-learning-based algorithm that combines long short-term memory (LSTM) layers and convolutional layers to classify eight different modes of transportation on the Sussex-Huawei Locomotion-Transportation (SHL) dataset. The inputs to our model are the accelerometer, gyroscope, linear acceleration, magnetometer, gravity, and pressure values, as well as the orientation information. We achieve an F1 score of 98.96% on our private test set. We participated as team 103114102106|8 in the SHL recognition challenge.
{"title":"Combining LSTM and CNN for mode of transportation classification from smartphone sensors","authors":"Björn Friedrich, Carolin Lübbe, A. Hein","doi":"10.1145/3410530.3414350","DOIUrl":"https://doi.org/10.1145/3410530.3414350","url":null,"abstract":"The broad availability of smartphones and Inertial Measurement Units in particular brings them into focus of recent research. Inertial Measurement Unit data is used for a variety of tasks. One important task is the classification of the mode of transportation. In this paper, we present a deep-learning-based algorithm, that combines long-short-term-memory (LSTM) layer and convolutional layer to classify eight different modes of transportation on the Sussex-Huawei Locomotion-Transportation (SHL) dataset. The inputs of our model are the accelerometer, gyroscope, linear acceleration, magnetometer, gravity and pressure values as well as the orientation information. We achieve a F1 score of 98.96 % on our private test set. We participated as team 103114102106|8 in the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87453245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
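A model like the CNN-LSTM above consumes fixed-length windows cut from continuous multichannel sensor streams. The sketch below shows this standard preprocessing step; the window length, stride, and channel count are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def make_windows(signal, window_size, stride):
    """Segment a (timesteps, channels) sensor stream into overlapping
    windows of shape (n_windows, window_size, channels)."""
    n = (signal.shape[0] - window_size) // stride + 1
    return np.stack([signal[i * stride : i * stride + window_size]
                     for i in range(n)])

# Example: 10 s of 7 fused sensor channels sampled at 100 Hz (hypothetical)
stream = np.random.randn(1000, 7)
windows = make_windows(stream, window_size=500, stride=250)  # 5 s windows, 50% overlap
print(windows.shape)  # (3, 500, 7)
```

Each resulting window is one training example; the convolutional layers then operate along the time axis of a window while the LSTM layers model the sequence of learned features.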
Nursing activity recognition adds a new dimension to healthcare automation systems. However, recognizing nursing activities is far more challenging than identifying simple human activities such as walking, cycling, or swimming, due to the intra-class variability of the activities. Moreover, the lack of proper datasets prevents researchers from developing generalized methods for nursing activity recognition or comparing baseline methods across datasets. The Nurse Care Activity Recognition Challenge 2020 provides a dataset of twelve nursing activities. In this paper, we describe our (Team Hex Code) approach, in which we emphasize developing a method that can cope with real-world data containing noise and uncertainty. In our method, we resample the data to deal with the dataset's variable sampling frequency, and we apply a feature selection method to the extracted features to obtain the best feature set for classification. We use a random forest classifier, a classical machine learning algorithm. With this methodology, we obtain 78% validation accuracy on the dataset. We trained our model on the lab dataset and validated it on the field dataset.
{"title":"A pragmatic signal processing approach for nurse care activity recognition using classical machine learning","authors":"Md. Ahasan Atick Faisal, Md. Sadman Siraj, Md. Tahmeed Abdullah, Omar Shahid, Farhan Fuad Abir, Md Atiqur Rahman Ahad","doi":"10.1145/3410530.3414337","DOIUrl":"https://doi.org/10.1145/3410530.3414337","url":null,"abstract":"Nursing activity recognition adds a new dimension to the healthcare automation system. But nursing activity recognition is very challenging than identifying simple human activities like walking, cycling, swimming, etc. due to intra-class variability between activities. Besides, the lack of proper dataset does not allow researchers to develop a generalized method for nursing activity or comparing baseline methods on different datasets. Nurse Care Activity Recognition Challenge 2020 provides a dataset of twelve nursing activities. In this paper, we have described our (Team Hex Code) approach where we have emphasized on developing method, which can cope up with real-world data with noise and uncertainty. In our method, we have resampled our data to deal with a variable sample frequency of dataset and we have also applied feature selection method on the extracted feature to have the best combination of feature set for classification. We have used random forest classifier which is a classical machine learning algorithm. Applying our methodology, we have got 78% validation accuracy on the dataset. We have trained our model on the lab dataset and validate them on the field dataset.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"32 4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87665612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
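The resampling step the authors describe — mapping a variable-rate sensor stream onto a uniform grid before feature extraction — can be sketched with simple linear interpolation. This is a minimal illustration under assumed rates, not the authors' exact procedure.

```python
import numpy as np

def resample_uniform(timestamps, values, target_hz):
    """Linearly interpolate an irregularly sampled signal onto a uniform
    time grid at target_hz, so window-based features see a fixed rate."""
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / target_hz)
    return t_uniform, np.interp(t_uniform, timestamps, values)

# Irregular timestamps (jittered sampling over 10 s) resampled to 25 Hz
t = np.sort(np.random.uniform(0.0, 10.0, 200))
x = np.sin(2 * np.pi * 0.5 * t)
t_new, x_new = resample_uniform(t, x, target_hz=25)
```

After this step every window contains the same number of samples regardless of how unevenly the device delivered readings, which is what makes the subsequent feature extraction and random forest classification well defined.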
Human activity recognition (HAR) based on wearable sensors has brought tremendous benefit to several industries ranging from healthcare to entertainment. However, to build reliable machine-learned models from wearables, labeled on-body sensor datasets obtained from real-world settings are needed. It is often prohibitively expensive to obtain large-scale, labeled on-body sensor datasets from real-world deployments. The lack of labeled datasets is a major obstacle in the wearable sensor-based activity recognition community. To overcome this problem, I aim to develop two deep generative cross-modal architectures to synthesize accelerometer data streams from video data streams. In the proposed approach, a conditional generative adversarial network (cGAN) is first used to generate sensor data conditioned on video data. Then, a conditional variational autoencoder (cVAE)-cGAN is proposed to further improve representation of the data. The effectiveness and efficacy of the proposed methods will be evaluated through two popular applications in HAR: eating recognition and physical activity recognition. Extensive experiments will be conducted on public sensor-based activity recognition datasets by building models with synthetic data and comparing the models against those trained from real sensor data. This work aims to expand labeled on-body sensor data, by generating synthetic on-body sensor data from video, which will equip the community with methods to transfer labels from video to on-body sensors.
{"title":"Deep generative cross-modal on-body accelerometer data synthesis from videos","authors":"Shibo Zhang, N. Alshurafa","doi":"10.1145/3410530.3414329","DOIUrl":"https://doi.org/10.1145/3410530.3414329","url":null,"abstract":"Human activity recognition (HAR) based on wearable sensors has brought tremendous benefit to several industries ranging from healthcare to entertainment. However, to build reliable machine-learned models from wearables, labeled on-body sensor datasets obtained from real-world settings are needed. It is often prohibitively expensive to obtain large-scale, labeled on-body sensor datasets from real-world deployments. The lack of labeled datasets is a major obstacle in the wearable sensor-based activity recognition community. To overcome this problem, I aim to develop two deep generative cross-modal architectures to synthesize accelerometer data streams from video data streams. In the proposed approach, a conditional generative adversarial network (cGAN) is first used to generate sensor data conditioned on video data. Then, a conditional variational autoencoder (cVAE)-cGAN is proposed to further improve representation of the data. The effectiveness and efficacy of the proposed methods will be evaluated through two popular applications in HAR: eating recognition and physical activity recognition. Extensive experiments will be conducted on public sensor-based activity recognition datasets by building models with synthetic data and comparing the models against those trained from real sensor data. This work aims to expand labeled on-body sensor data, by generating synthetic on-body sensor data from video, which will equip the community with methods to transfer labels from video to on-body sensors.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87781617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Varun Mishra, A. Sano, Saeed Abdullah, J. Bardram, S. Servia, Elizabeth L. Murnane, Tanzeem Choudhury, Mirco Musolesi, G. N. Vilaza, R. Nandakumar, Tauhidur Rahman
Mental health issues affect a significant portion of the world's population and can result in debilitating and life-threatening outcomes. To address this increasingly pressing healthcare challenge, there is a need to research novel approaches for early detection and prevention. Toward this, ubiquitous systems can play a central role in revealing and tracking clinically relevant behaviors, contexts, and symptoms. Further, such systems can passively detect relapse onset and enable the opportune delivery of effective intervention strategies. However, despite their clear potential, the uptake of ubiquitous technologies into clinical mental healthcare is slow, and a number of challenges still face the overall efficacy of such technology-based solutions. The goal of this workshop is to bring together researchers interested in identifying, articulating, and addressing such issues and opportunities. Following the success of this workshop for the last four years, we aim to continue facilitating the UbiComp community in developing a holistic approach for sensing and intervention in the context of mental health.
{"title":"5th international workshop on mental health and well-being: sensing and intervention","authors":"Varun Mishra, A. Sano, Saeed Abdullah, J. Bardram, S. Servia, Elizabeth L. Murnane, Tanzeem Choudhury, Mirco Musolesi, G. N. Vilaza, R. Nandakumar, Tauhidur Rahman","doi":"10.1145/3410530.3414615","DOIUrl":"https://doi.org/10.1145/3410530.3414615","url":null,"abstract":"Mental health issues affect a significant portion of the world's population and can result in debilitating and life-threatening outcomes. To address this increasingly pressing healthcare challenge, there is a need to research novel approaches for early detection and prevention. Toward this, ubiquitous systems can play a central role in revealing and tracking clinically relevant behaviors, contexts, and symptoms. Further, such systems can passively detect relapse onset and enable the opportune delivery of effective intervention strategies. However, despite their clear potential, the uptake of ubiquitous technologies into clinical mental healthcare is slow, and a number of challenges still face the overall efficacy of such technology-based solutions. The goal of this workshop is to bring together researchers interested in identifying, articulating, and addressing such issues and opportunities. Following the success of this workshop for the last four years, we aim to continue facilitating the UbiComp community in developing a holistic approach for sensing and intervention in the context of mental health.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"129 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80345507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wataru Sasaki, Ryouya Ozawa, T. Okoshi, J. Nakazawa, K. Yagasaki, H. Komatsu
Chemotherapy-induced peripheral neuropathy (CIPN) is a common side effect of anticancer drugs that causes muscle weakness in cancer patients and can lead to falls. Therefore, we constructed "FD-AWARE", a system for understanding users' fall contexts and CIPN symptoms as a first step toward preventing these falls. The system collects various sensor data from the iPhone and the Apple Watch, self-reported fall information, and self-reported user status data on CIPN symptoms and physical condition. We conducted a two-week in-the-wild experiment with 8 patients who were actually suffering from CIPN. We constructed machine learning models for estimating users' CIPN symptom status and achieved high accuracy for several of the estimation models.
{"title":"Estimating symptoms caused by CIPN using mobile and wearable devices","authors":"Wataru Sasaki, Ryouya Ozawa, T. Okoshi, J. Nakazawa, K. Yagasaki, H. Komatsu","doi":"10.1145/3410530.3414435","DOIUrl":"https://doi.org/10.1145/3410530.3414435","url":null,"abstract":"Chemotherapy-induced peripheral neuropathy (CIPN) is a common side effect of anticancer drugs that causes muscle weakness in the cancer patients, causing them to fall. Therefore, we constructed \"FD-AWARE\", a system to understand the users' fall context and users' CIPN symptoms as the first step in preventing these falls. This system can collect the various sensor data from the iPhone and the Apple Watch, self-reported fall information data, self-reported user status data of CIPN symptoms, and their physical condition. We conducted a 2-week in-the-wild experiment with 8 patients who were actually suffering from CIPN. We constructed the machine learning models for estimating the users' status of CIPN symptoms and successfully achieved high accuracy of performance for several estimating models.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84842991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Youhong Friendred Peng, Atau Tanaka, Jamie A. Ward
This work explores the potential of a combination of wearable sensors, a performative lighting installation, and a public museum space to inspire performative and collaborative social behavior among members of the public. Our installation, The Light, was first exhibited as part of the Late at Tate Britain event in 2019. In this paper we discuss the concept and technological implementation behind the work and present an initial qualitative study of observations of the people who interacted with it. The study provides a subjective evaluation based on people's facial expressions and body language as they improvise and coordinate their movements with one another and with the installation.
{"title":"The light: exploring socially improvised movements using wearable sensors in a performative installation","authors":"Youhong Friendred Peng, Atau Tanaka, Jamie A. Ward","doi":"10.1145/3410530.3414378","DOIUrl":"https://doi.org/10.1145/3410530.3414378","url":null,"abstract":"This work explores the potential of a set comprised of wearable sensors, a performative lighting installation, and a public museum space, to inspire performative and collaborative social behavior among members of the public. Our installation, The Light, was first exhibited as part of the Late at Tate Britain event in 2019. In this paper we discuss the concept and technological implementation behind the work, and present an initial qualitative study of observations made of the people who interacted with it. The study provides a subjective evaluation based on people's facial expressions and body language as they improvise and coordinate their movements with one another and with the installation.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82658601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anooshmita Das, Emil Stubbe Kolvig Raun, Fisayo Caleb Sangogboye, M. Kjærgaard
Sensing occupant presence and movement trajectories in buildings enables new types of analysis and building operation strategies. However, obtaining such information in a cost-efficient and non-intrusive manner is a challenge. This paper proposes Occu-track, a method for using inexpensive battery-powered sensors at scale to estimate occupant presence and movement trajectories. The technique combines graph analysis and advanced clustering to produce accurate estimates. We validate the efficiency of Occu-track in two different settings: a music room and a private office. The experimental results from the two room-level deployments demonstrate the benefits of the approach, obtaining an average Root Mean Squared Error for trajectory estimation of 1.19 meters in case 1 and 0.88 meters in case 2. The results can contribute to new dimensions of research on generating metadata from non-intrusive sensors to support informed decisions about efficient space utilization and floor plans, intelligent building operations, crowd management, comfortable indoor environments, and personnel management.
{"title":"Occu-track: occupant presence sensing and trajectory detection using non-intrusive sensors in buildings","authors":"Anooshmita Das, Emil Stubbe Kolvig Raun, Fisayo Caleb Sangogboye, M. Kjærgaard","doi":"10.1145/3410530.3414597","DOIUrl":"https://doi.org/10.1145/3410530.3414597","url":null,"abstract":"Sensing occupant presence and their trajectories of movement in buildings enable new types of analysis and building operation strategies. However, obtaining such information in a cost-efficient and non-intrusive manner is a challenge. This paper proposes the Occu-track method for how inexpensive battery-powered sensors can be used at scale to estimate occupant presence and movement trajectories. The technique combines graph analysis and advanced clustering to produce accurate estimates. This paper validates the efficiency of Occu-track in two different settings; a music room and a private office. The experimental results from two room-level deployments demonstrate the benefits of the approach obtaining an average Root Mean Squared Error of 1.19 meters for case 1 and 0.88 meters for case 2 for trajectory estimation. The results can contribute to new dimensions of research associated with the generation of metadata from non-intrusive sensors to make informed decisions about efficient space utilization and floor plans, intelligent building operations, crowd management, comfortable indoor environment, or managing personnel.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87895046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
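The Root Mean Squared Error metric reported for trajectory estimation can be computed as below; the sample coordinates are invented for illustration and are not from the paper's deployments.

```python
import numpy as np

def trajectory_rmse(est, true):
    """Root mean squared Euclidean error between estimated and
    ground-truth 2-D positions, in the same units as the input."""
    d = np.linalg.norm(est - true, axis=1)  # per-point position error
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical estimated vs. ground-truth positions (meters)
est = np.array([[1.0, 2.0], [3.0, 4.5], [5.5, 6.0]])
true = np.array([[1.5, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(trajectory_rmse(est, true))  # 0.5
```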
Yuma Tsurugasaki, Koichi Shimoda, Michael Hefenbrock, Akihito Taya, Sejun Song, Y. Tobe
Telemedicine using information technology (IT) and communication networks is becoming common. Often, the medical doctor and the patient discuss the problem by video teleconference and, if necessary, the patient's physiological data can be sent to the doctor. As part of this trend, we believe that brain waves can be used for telemedicine in the future. We expect that the diagnosis of remote patients will be realized by transferring electroencephalogram (EEG) data to a server or the cloud. However, if EEG data are sent as they are, the data size is significantly large. Thus, compressing EEG data is desirable. Furthermore, data compression should not affect the accuracy of diagnosis. In this study, we investigate the relationship between the selected EEG signal features and the resulting accuracy.
{"title":"Scalable selection of EEG features for compression","authors":"Yuma Tsurugasaki, Koichi Shimoda, Michael Hefenbrock, Akihito Taya, Sejun Song, Y. Tobe","doi":"10.1145/3410530.3414438","DOIUrl":"https://doi.org/10.1145/3410530.3414438","url":null,"abstract":"Telemedicine using information technology (IT) and communication networks is becoming common. Often, the medical doctor and the patient can discuss the problem by video teleconference and, if necessary, the patient's physiological data can be sent to the doctor. As part of this trend, we believe that brain waves can be used for telemedicine in the future. We expect that the diagnosis of remote patients will be realized by transferring electroencephalogram (EEG) data to a server or cloud. However, if EEG data are sent as they are, the data size will be significantly large. Thus, the compression of EEG data is desirable. Furthermore, should not affect the accuracy of diagnosis if data compression is performed. In this study, the relationship between the selected EEG signal features and the accuracy is investigated.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"62 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83433908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
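The feature-based compression idea — transmitting a few summary features per epoch instead of the raw samples — can be illustrated as follows. The specific features (variance and alpha-band power) and the sampling rate are illustrative assumptions, not necessarily the features the authors selected.

```python
import numpy as np

def eeg_features(epoch, fs):
    """Reduce one EEG epoch to a small feature vector
    (here: signal variance plus power in the 8-13 Hz alpha band)."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    alpha = psd[(freqs >= 8) & (freqs < 13)].sum()
    return np.array([epoch.var(), alpha])

fs = 256                                  # assumed sampling rate, Hz
epoch = np.random.randn(fs * 2)           # one 2 s epoch of one channel
feats = eeg_features(epoch, fs)
ratio = epoch.size / feats.size           # raw samples per transmitted value
print(feats.shape, ratio)                 # (2,) 256.0
```

Adding or dropping features trades this compression ratio against diagnostic accuracy, which is exactly the relationship the study investigates.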
Lixing He, C. Ruiz, Mostafa Mirshekari, Shijia Pan
Structural vibration sensing has been explored as a way to acquire indoor human information. This non-intrusive sensing modality enables various smart building applications such as long-term in-home elderly monitoring and ubiquitous gait analysis. However, for applications that utilize multiple sensors to collaboratively infer this information (e.g., localization, recognition of activities of daily living), the system configuration requires the locations of the anchor sensors, which are usually acquired manually. This labor-intensive manual configuration limits the scalability of such systems. In this paper, we propose SCSV2, a self-configuration scheme that computes these vibration sensor locations using shared context information acquired from complementary sensing modalities: the vibration sensors themselves and co-located cameras. SCSV2 combines 1) physics models of wave propagation, together with structural element effects, and 2) a data-driven model of the multimodal data to infer a vibration sensor's location. We conducted real-world experiments to verify the proposed method and achieved anchor sensor localization accuracy of up to 7 cm.
{"title":"SCSV2","authors":"Lixing He, C. Ruiz, Mostafa Mirshekari, Shijia Pan","doi":"10.1145/3410530.3414586","DOIUrl":"https://doi.org/10.1145/3410530.3414586","url":null,"abstract":"Structural vibration sensing has been explored to acquire indoor human information. This non-intrusive sensing modality enables various smart building applications such as long-term in-home elderly monitoring, ubiquitous gait analysis, etc. However, for applications that utilize multiple sensors to collaboratively infer this information (e.g., localization, activities of daily living recognition), the system configuration requires the location of the anchor sensor, which are usually acquired manually. This labor-intensive manual system configuration limited the scalability of the system. In this paper, we propose SCSV2, a self-configuration scheme to compute these vibration sensor locations utilizing shared context information acquired from complementary sensing modalities - vibration sensor itself and co-located cameras. SCSV2 combines 1) the physics models of wave propagation together with structural element effects and 2) the data-driven model from the multimodal data to infer the vibration sensor's location. We conducted real-world experiments to verify our proposed method and achieved an up to 7cm anchor sensor localization accuracy.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81715012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
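A simple instance of the wave-propagation physics model mentioned above is time-difference-of-arrival (TDoA) localization with a single constant propagation speed. The sketch below is a hedged toy version on a noise-free synthetic floor; the sensor layout, wave speed, and grid search are assumptions for illustration, and real structures add dispersion and structural element effects that SCSV2 models separately.

```python
import numpy as np

def localize_tdoa(sensors, tdoa, speed, grid):
    """Grid search for the source position that best explains measured
    time-differences-of-arrival relative to sensor 0, assuming one
    constant propagation speed (a strong simplification of real floors)."""
    best, best_err = None, np.inf
    for p in grid:
        d = np.linalg.norm(sensors - p, axis=1)
        pred = (d - d[0]) / speed            # predicted TDoA w.r.t. sensor 0
        err = np.sum((pred[1:] - tdoa) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
speed = 500.0                                # assumed wave speed, m/s
source = np.array([1.0, 2.0])
d = np.linalg.norm(sensors - source, axis=1)
tdoa = (d[1:] - d[0]) / speed                # noise-free synthetic measurements
xs = np.linspace(0, 4, 81)                   # 5 cm grid over a 4 m x 4 m room
grid = np.array([[x, y] for x in xs for y in xs])
est = localize_tdoa(sensors, tdoa, speed, grid)
print(est)  # close to [1.0, 2.0]
```

In SCSV2 the roles are reversed — the unknown is the anchor sensor's own location rather than the excitation source — but the same propagation geometry underlies the inference.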