TagSleep3D
Chen Liu, Zixuan Dong, Li Huang, Wenlong Yan, Xin Wang, Dingyi Fang, Xiaojiang Chen
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Published: 2024-03-06 · DOI: 10.1145/3643512
Citations: 0
Abstract
Sleep posture plays a crucial role in maintaining good sleep quality and overall health. As a result, long-term monitoring of 3D sleep postures is important for sleep analysis and chronic disease prevention. To recognize sleep postures, traditional methods either use cameras to record image data or require the user to wear devices or sleep on a pressure mattress. However, these methods raise privacy concerns or cause discomfort during sleep. Accordingly, RF (Radio Frequency) based methods have emerged as a promising alternative. Although most of these methods achieve high precision in classifying sleep postures, they struggle to retrieve 3D sleep postures because they have difficulty capturing the 3D positions of static body joints. In this work, we propose TagSleep3D to resolve all the above issues. Specifically, inspired by the concept of RFID tag sheets, we explore the possibility of recognizing 3D sleep postures by deploying an RFID tag array under the bedsheet. When a user sleeps in bed, the signals of some tags are blocked or reflected by the user's body, producing a body imprint. We then propose a novel deep learning model that combines an attention mechanism and a convolutional neural network, together with two data augmentation methods, to retrieve 3D sleep postures from these body imprints. We evaluate TagSleep3D with 43 users, collecting a total of 27,300 sleep posture samples. Our extensive experiments demonstrate that TagSleep3D can recognize each joint on the human skeleton with a median MPJPE (Mean Per Joint Position Error) of 4.76 cm for seen users and 7.58 cm for unseen users.
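The reported accuracy is expressed as MPJPE, a standard metric in 3D pose estimation: for one sample, the mean Euclidean distance between each predicted joint position and its ground-truth position; the abstract then reports the median of this per-sample value across test samples. A minimal sketch of the metric (the joint values below are hypothetical, purely for illustration):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error for one sample.

    pred, gt: arrays of shape (num_joints, 3) holding 3D joint
    positions in the same unit (e.g. centimeters).
    Returns the mean Euclidean distance over all joints.
    """
    return np.linalg.norm(pred - gt, axis=1).mean()

# Hypothetical 3-joint skeleton: ground truth at the origin,
# predictions offset by 3, 4, and 5 cm respectively.
gt = np.zeros((3, 3))
pred = np.array([[3.0, 0.0, 0.0],
                 [0.0, 4.0, 0.0],
                 [0.0, 0.0, 5.0]])
print(mpjpe(pred, gt))  # (3 + 4 + 5) / 3 = 4.0 cm
```

Computing this per test sample and taking the median across samples yields the 4.76 cm (seen users) and 7.58 cm (unseen users) figures quoted above.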