Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826781
Lee B. Hinkle, V. Metsis
The Activities of Daily Living (ADL) include activities such as brushing teeth, sweeping, and walking that are critical to ongoing health, especially in older adults. Activities may be recognized from recorded video using 2D-CNNs; however, video recordings present privacy and coverage challenges in personal spaces. Smartphones and newer wrist-worn devices that record motion data can also be used for activity recognition tasks. Ankle- or shoe-based devices such as the retired Nike+ sensor are less common, while ear-based devices that can record head movement are gaining popularity. In this work we use accelerometer data from a recently released dataset collected with devices placed on the ankle, hip, and wrist. First, we evaluate a simple 1D-CNN's ability to classify the 17 included activities in subject-dependent and subject-independent analyses. Then we process the accelerometer data from the three sensors individually to evaluate each location's ability to predict activities. Finally, we develop a functional model that independently executes a 1D-CNN on each sensor's data and combines the results using Global Average Pooling. The functional model achieves a subject-independent accuracy of 70.7%.
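The fusion described above — one 1D-CNN per sensor location, merged by Global Average Pooling — can be sketched as follows. The kernel width, random weights, and single-layer branches are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def conv1d_relu(x, w, b):
    """Valid 1D convolution followed by ReLU.
    x: (T, C_in) window, w: (K, C_in, C_out) kernel, b: (C_out,) bias."""
    K = w.shape[0]
    out = np.stack([np.einsum('kc,kco->o', x[t:t + K], w) + b
                    for t in range(x.shape[0] - K + 1)])
    return np.maximum(out, 0.0)

def sensor_branch(x, w, b):
    """One per-sensor branch: 1D-CNN feature maps, then Global Average
    Pooling over the time axis (one pooled value per output channel)."""
    return conv1d_relu(x, w, b).mean(axis=0)

def fused_predict(windows, weights, biases):
    """Run each sensor's branch independently and combine the pooled
    outputs into one softmax distribution over the 17 activities."""
    pooled = np.stack([sensor_branch(x, w, b)
                       for x, w, b in zip(windows, weights, biases)])
    logits = pooled.mean(axis=0)
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

With ankle, hip, and wrist windows of shape (T, 3) and per-branch kernels of shape (K, 3, 17), `fused_predict` returns a 17-way probability vector.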
Title: Individual Convolution of Ankle, Hip, and Wrist Data for Activities-of-Daily-Living Classification
Published in: 2022 18th International Conference on Intelligent Environments (IE)
Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826769
Mengyao Liu, J. Oostvogels, Sam Michiels, W. Joosen, D. Hughes
Intelligent Environments (IEs) enrich the physical world by connecting it to software applications in order to increase user comfort, safety, and efficiency. IEs are often supported by wireless networks of smart sensors and actuators, which offer multi-year battery life within small packages. However, existing radio mesh networks suffer from high latency, which precludes their use in many user-interface systems such as real-time speech, touch, or positioning. While recent advances in optical networks promise low end-to-end latency through symbol-synchronous transmission, current approaches are power-hungry and therefore cannot be battery powered. We tackle this problem by introducing BoboLink, a mesh network that delivers low-power, low-latency optical networking through a combination of symbol-synchronous transmission and a novel wake-up technology. BoboLink delivers mesh-wide wake-up in 1.13 ms with a quiescent power consumption of 237 µW. This enables building-wide human-computer interfaces to be seamlessly delivered over wireless mesh networks for the first time.
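A quiescent draw of 237 µW translates directly into battery life; a minimal back-of-the-envelope sketch, assuming an idealized battery (the 225 mAh, 3 V coin-cell figures below are hypothetical, not from the paper):

```python
def battery_life_days(capacity_mah, voltage_v, quiescent_w):
    """Idealized battery life at a constant quiescent draw (ignores
    self-discharge, wake-up bursts, and converter losses)."""
    energy_j = (capacity_mah / 1000.0) * 3600.0 * voltage_v  # Wh -> J
    return energy_j / quiescent_w / 86400.0                  # s -> days
```

At BoboLink's 237 µW quiescent draw, such a hypothetical 225 mAh, 3 V cell would last roughly 119 days on standby alone.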
Title: BoboLink: Low Latency and Low Power Communication for Intelligent Environments
Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826760
Adel Noureddine
Monitoring the power consumption of applications and source code is an important step in writing green software. In this paper, we propose PowerJoular and JoularJX, our software power monitoring tools. We aim to help software developers in understanding and analyzing the power consumption of their programs, and help system administrators and automated tools in monitoring the power consumption of large numbers of heterogeneous devices.
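Tools of this kind typically derive power from cumulative OS energy counters; a minimal sketch, assuming RAPL-style microjoule counters (on Linux these are exposed under `/sys/class/powercap/`, e.g. `energy_uj`), which wrap around at a platform-defined maximum:

```python
def average_power_watts(e_start_uj, e_end_uj, t_start_s, t_end_s,
                        max_range_uj=2**32):
    """Average power over an interval from two cumulative energy-counter
    readings (microjoules), tolerating one counter wraparound."""
    delta_uj = (e_end_uj - e_start_uj) % max_range_uj
    return (delta_uj / 1e6) / (t_end_s - t_start_s)
```

Sampling the counter once per second and applying this function yields the per-second power trace that a monitor can attribute to processes.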
Title: PowerJoular and JoularJX: Multi-Platform Software Power Monitoring Tools
Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826783
Tomoki Okuro, Yumiko Nakayama, Yoshitada Takeshima, Yusuke Kondo, Nobuya Tachimori, M. Yoshida, Hiromu Yoshihara, H. Suwa, K. Yasumoto
Road traffic censuses have long been carried out manually because machine-based measurement was not widely adopted, owing to the difficulty of installation. To address the installation difficulty, the size of the necessary equipment, and the privacy issues of existing traffic counters, we are researching and developing portable traffic counters that use a vibration sensor and machine learning. However, vehicle type classification was not realized in our previous work, so traffic volume could not be surveyed by vehicle type. In addition, to the best of our knowledge, no existing study can detect and classify vehicles from road vibrations with a single sensor. In this paper, we propose a vehicle type classification method capable of binary classification of small and large vehicles, combining a Support Vector Machine and a Random Forest applied to the vibrations of passing vehicles. We evaluated the proposed method by conducting measurements for up to 12 hours at two actual road locations. We tested over 5 hours of data and confirmed that small vehicles were classified with an F-measure of 0.96 and large vehicles with an F-measure of 0.83.
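One common way to combine an SVM and a Random Forest, as the abstract describes, is soft voting over their predicted class probabilities; a sketch under stated assumptions — the two features (peak vibration amplitude, event duration) and the synthetic data are illustrative, not the paper's feature set:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

def make_classifier():
    """Soft-voting ensemble: averages the SVM's and the forest's
    predicted class probabilities before taking the argmax."""
    return VotingClassifier(
        estimators=[('svm', SVC(probability=True)),
                    ('rf', RandomForestClassifier(n_estimators=100,
                                                  random_state=0))],
        voting='soft')

# Hypothetical features per passing vehicle: [peak amplitude, duration (s)].
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 0.8], 0.2, size=(40, 2)),   # small vehicles
               rng.normal([4.0, 2.5], 0.4, size=(40, 2))])  # large vehicles
y = np.array([0] * 40 + [1] * 40)                            # 0=small, 1=large
clf = make_classifier().fit(X, y)
```

In practice the features would be extracted from each detected vibration event before classification.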
Title: Vehicle Detection and Classification using Vibration Sensor and Machine Learning
Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826758
Yuma Okochi, Hamada Rizk, H. Yamaguchi
The technology of 3D recognition is evolving rapidly, enabling unprecedented growth of applications for human-centric intelligent environments. Among these applications, human segmentation is a key technology for analyzing and understanding human mobility in those environments. However, existing segmentation techniques rely on deep learning models, which are computationally intensive and data-hungry. This hinders their practical deployment on edge devices in realistic environments. In this paper, we introduce a novel micro-size LiDAR device for understanding human mobility in the surrounding environment. The device runs an on-device, lightweight human segmentation technique that applies density-based clustering to the captured 3D point-cloud data. The proposed technique significantly reduces the computational complexity of the clustering algorithm by leveraging the spatiotemporal relation between consecutive frames. We implemented and evaluated the proposed technique in a real-world environment. The results show that the proposed technique achieves a human segmentation accuracy of 99% while drastically reducing processing time by 66%.
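The spatiotemporal shortcut described above can be sketched as follows: points in the current frame that fall near a previous frame's cluster centroid inherit that cluster's label, so only the leftover points need a fresh density-based pass. This is a minimal sketch of the idea, not the paper's algorithm; the 0.5 m radius threshold is an illustrative assumption:

```python
import numpy as np

def seed_labels_from_previous(points, prev_centroids, radius=0.5):
    """Label current-frame points by the nearest previous-frame cluster
    centroid within `radius`; unmatched points get -1 and would be
    handed to a fresh density-based clustering pass."""
    labels = np.full(len(points), -1)
    if prev_centroids is not None and len(prev_centroids) > 0:
        # Pairwise distances: (n_points, n_clusters).
        dists = np.linalg.norm(points[:, None, :] - prev_centroids[None, :, :],
                               axis=2)
        nearest = dists.argmin(axis=1)
        close = dists.min(axis=1) <= radius
        labels[close] = nearest[close]
    return labels
```

Because consecutive LiDAR frames are captured milliseconds apart, most points inherit labels cheaply and the expensive clustering step runs only on the residue.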
Title: On-the-Fly Spatio-Temporal Human Segmentation of 3D Point Cloud Data By Micro-Size LiDAR
Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826761
Yixiao Wang, K. Green
Architecture has long been conceptualized as “a machine for living in” and, more recently, as “a robot for living in.” Human-Robot Interaction (HRI) has developed robots as social agents—our friends, companions, and partners. Could robotic environments be perceived and interacted with as socially intelligent agents? If so, how should we design a Socially Interactive, Robotic Environment (SIRE)? To address the first question, we offer empirical evidence and theoretical support for SIREs. We then address the second question by discussing the “Spatial Design” and “Interaction Design” of SIREs through an exploratory, pattern-based approach. For “Spatial Design,” we present a co-design study for a partner-like office, generating new spatial patterns that form pattern languages to convey sociality to individual users. For “Interaction Design,” we employed four “Design Patterns for Sociality in HRI.” Our results show that “Spatial Patterns” and “HRI Patterns” can be integrated as one pattern language for sociality and that such a pattern language can vary from person to person. Through the exploratory work of this paper, we hope to introduce SIREs to IE communities and cultivate a conversation about the design and application of SIREs in everyday life.
Title: Designing Socially Interactive, Robotic Environments through Pattern Languages
Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826759
Wei Fan, Kevin A. Kam, Haokai Zhao, P. Culligan, I. Kymissis
We developed an optical absorbance-based sensor designed to measure the concentration of the vital nitrogen (N), phosphorus (P), and potassium (K) nutrients in urban soil. The device was characterized and tested in nine diverse green spaces around New York City’s Morningside Heights neighborhood, including street-tree pits and park spaces. The results show that the sensor can detect, at minimum, a 1.4% change in nutrient concentration. Additionally, the sensor was shown to operate in various ambient light settings (indoor and outdoor) after calibration. A study of NYC’s green spaces shows that, on average, soil in street-tree pits that supports plant life has 54% more N, 34% more P, and 37% more K than soil in park spaces. This new sensor technology will enable more detailed monitoring of soil nutrient conditions and thus help promote healthy green spaces in large urban environments.
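Absorbance-based sensing of this kind typically rests on the Beer-Lambert law, A = ε·l·c; a minimal worked sketch (the molar absorptivity and path length used below are placeholders, not the paper's calibration constants):

```python
import math

def absorbance(transmitted, incident):
    """A = log10(I0 / I): absorbance from incident and transmitted
    light intensities."""
    return math.log10(incident / transmitted)

def concentration(a, molar_absorptivity, path_length_cm):
    """Invert Beer-Lambert (A = epsilon * l * c) to recover the
    concentration c."""
    return a / (molar_absorptivity * path_length_cm)
```

A calibration step against samples of known concentration fixes the effective molar absorptivity, after which field readings map directly to nutrient levels.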
Title: An Optical Soil Sensor for NPK Nutrient Detection in Smart Cities
Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826768
Yuusuke Kawakita, Kota Tamura, Yoshito Tobe, S. Yokogawa, H. Ichikawa
We developed a prototype of a virtual grid hub (VG-Hub), a device for controlling direct-current (DC) power flow by utilizing the characteristics of USB-PD. In a network of interconnected VG-Hubs, deciding the entire power distribution path every time a load changes disrupts the power flow during operation and incurs the high cost of recomputing over the entire network. Therefore, we propose Minimum-Hop Power-Path Routing (MHPPR), which determines a new power flow only at the hub where the load fluctuates. In MHPPR, the route that minimizes the total number of hops from the power supply source to the load is selected, so that power is drawn from the nearest source. In this study, we show that the power flow is determined more efficiently than by recalculating the entire network when the load changes.
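Minimum-hop selection of the nearest source amounts to a breadth-first search outward from the fluctuating load; a sketch over a hub adjacency list (the node names and topology are illustrative, not the paper's network):

```python
from collections import deque

def nearest_source_path(adjacency, load, sources):
    """BFS from the load hub; returns the minimum-hop path to the
    closest power source, or None if no source is reachable."""
    sources = set(sources)
    prev = {load: None}
    queue = deque([load])
    while queue:
        node = queue.popleft()
        if node in sources:              # first source reached = fewest hops
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbor in adjacency.get(node, ()):
            if neighbor not in prev:
                prev[neighbor] = node
                queue.append(neighbor)
    return None
```

Because the search starts only at the hub whose load changed, the rest of the network's power paths are left untouched.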
Title: Distributed Power-Delivery Decision for a USB-PD-based Network
Pub Date: 2022-06-20 | DOI: 10.1109/ie54923.2022.9826773
Asmaa Saeed, Ahmed Wasfey, Hamada Rizk, H. Yamaguchi
As the demand for location-based services increases, several research efforts have aimed at robust and accurate indoor localization, especially 3D localization. Due to the widespread availability of cellular networks and their support by commodity cellphones, cellular-based systems have recently been proposed as a means of achieving this. However, because of the inherent noise and instability of wireless signals, localization accuracy typically degrades and is not robust to the dynamic heterogeneity of mobile devices. In this paper, we present CellStory, a deep learning-based floor estimation system that achieves fine-grained and robust accuracy in the presence of noise. CellStory combines stacked denoising autoencoder learning models and a probabilistic framework to handle noise in the received signal and capture the complex relationship between the signals detected by the mobile phone and its location. Evaluation using different Android phones in a real testbed shows that CellStory can accurately estimate the user’s floor 98.7% of the time, and within one floor 100% of the time. This accuracy demonstrates CellStory’s superiority over state-of-the-art systems as well as its robustness to heterogeneous devices.
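The two reported figures — exact-floor accuracy (98.7%) and within-one-floor accuracy (100%) — are standard floor-estimation metrics and can be computed as follows (a small helper, not code from the paper):

```python
def floor_metrics(true_floors, predicted_floors):
    """Fraction of predictions that hit the exact floor, and the
    fraction that land within one floor of the truth."""
    n = len(true_floors)
    exact = sum(t == p for t, p in zip(true_floors, predicted_floors)) / n
    within_one = sum(abs(t - p) <= 1
                     for t, p in zip(true_floors, predicted_floors)) / n
    return exact, within_one
```

Within-one-floor accuracy is always at least the exact-floor accuracy, which is why the paper can report 98.7% and 100% for the same predictions.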
Title: CellStory: Extendable Cellular Signals-Based Floor Estimator Using Deep Learning