Hard-hitting guitar riffs accompanied by gradual, yet syncopated drumming. That is how Interpol has chosen to open their latest album, Marauder. Interpol is a band I have followed at least since one of their songs was featured in an episode of Fox's former hit teen soap, The O.C. Their sound back then was a newer take on alternative, and on Marauder it remains aggressively alternative, with some other influences mixed in.
{"title":"Marauder","authors":"M. Ramanujam, H. Madhyastha, R. Netravali","doi":"10.1145/3458864.3466866","DOIUrl":"https://doi.org/10.1145/3458864.3466866","url":null,"abstract":"Hard-hitting guitar riffs accompanied by gradual, yet syncopated drumming. That is how Interpol has chosen to open their latest album, Marauders. Interpol is a band that I have followed since at least one of their songs was featured during an episode of Fox’s former hit teen soap, The O.C. Their sound back then sounded like a newer take on alternative, and with their latest release Marauders their sound seems to have remained aggressively alternative with some other influences mixed-in.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120993064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented Reality (AR) enables smartphone users to interact with virtual content spatially overlaid on a continuously captured physical world. Under the current permission-enforcement model in popular operating systems, AR apps are granted Internet permission at installation time and request camera permission and external-storage write permission at runtime with the user's approval. With these permissions granted, any Internet-enabled AR app could silently collect camera frames, and visual information derived from them, with malicious intent and without the user's awareness. This raises serious concerns about the disclosure of private user data in people's living environments. To give users more control over how applications use their camera frames and the information derived from them, we introduce LensCap, a split-process app design framework in which the app is split into a camera-handling visual process and a connectivity-handling network process. At runtime, LensCap manages secure communication between the split processes, enacting fine-grained data-usage monitoring. LensCap also allows both processes to present interactive user interfaces. With LensCap, users can decide which forms of visual data may be transmitted to the network, while still allowing visual data to be used for AR purposes on the device. We prototype LensCap as an Android library and demonstrate its usability as a plugin for Unreal Engine. Performance evaluation results on five AR apps confirm that visual privacy can be preserved with a negligible latency penalty (< 1.3 ms) at 60 FPS.
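To make the split-process idea concrete, here is a minimal sketch in Python (illustrative only: LensCap is an Android library, and names like POLICY and the pipe protocol here are assumptions, not its API). The visual process owns the camera data, and only items the user's policy approves ever cross the boundary to the network process.

```python
# Illustrative sketch of LensCap-style split-process data gating
# (hypothetical API; the real system is an Android library).
from multiprocessing import Process, Pipe

# Policy chosen by the user: which derived visual data may leave the device.
POLICY = {"pose_matrix": True, "camera_frame": False}

def visual_process(conn):
    """Owns the camera. Sends only policy-approved items over the pipe."""
    derived = {
        "camera_frame": list(b"\x00" * 16),    # raw pixels: blocked by policy
        "pose_matrix": [1.0, 0.0, 0.0, 0.0],   # derived pose: allowed
    }
    for key, value in derived.items():
        if POLICY.get(key, False):             # fine-grained usage gate
            conn.send((key, value))
    conn.send(("done", None))
    conn.close()

def network_process(conn):
    """Has Internet permission but never sees raw camera frames."""
    while True:
        key, value = conn.recv()
        if key == "done":
            break
        print(f"uploading {key} ({len(value)} values)")

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=visual_process, args=(child,))
    p.start()
    network_process(parent)
    p.join()
```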
{"title":"LensCap","authors":"Jinhan Hu, Andrei Iosifescu, R. Likamwa","doi":"10.1145/3458864.3467676","DOIUrl":"https://doi.org/10.1145/3458864.3467676","url":null,"abstract":"Augmented Reality (AR) enables smartphone users to interact with virtual content spatially overlaid on a continuously captured physical world. Under the current permission enforcement model in popular operating systems, AR apps are given Internet permission at installation time, and request camera permission and external storage write permission at runtime through a user's approval. With these permissions granted, any Internet-enabled AR app could silently collect camera frames and derived visual information for malicious intent without a user's awareness. This raises serious concerns about the disclosure of private user data in their living environments. To give users more control over application usage of their camera frames and the information derived from them, we introduce LensCap, a split-process app design framework, in which the app is split into a camera-handling visual process and a connectivity-handling network process. At runtime, LensCap manages secured communications between split processes, enacting fine-grained data usage monitoring. LensCap also allows both processes to present interactive user interfaces. With LensCap, users can decide what forms of visual data can be transmitted to the network, while still allowing visual data to be used for AR purposes on device. We prototype LensCap as an Android library and demonstrate its usability as a plugin in Unreal Engine. Performance evaluation results on five AR apps confirm that visual privacy can be preserved with an insignificant latency penalty (< 1.3 ms) at 60 FPS.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128185224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. Ramprasad, Hongkai Chen, A. Veith, K. Truong, E. D. Lara
Tracking chronic pain and collecting data about it is an ongoing challenge for patients. Pain-O-Vision is a smartwatch-enabled pain management system that uses computer vision to capture the details of painful events from the user. A natural reaction to pain is to clench one's fist. The watch's embedded camera captures different types of fist clenching to represent different levels of pain. An initial prototype was built on an Android smartwatch and uses a cloud-based classification service to detect fist-clench gestures. Our results show that it is possible to map a fist clench to different levels of pain, which allows the patient to record the intensity of a painful event without carrying a specialized pain management device.
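A minimal sketch of the watch-side flow (the gesture labels, pain scale, and classifier interface below are assumptions for illustration; the prototype's cloud service and gesture taxonomy may differ):

```python
# Illustrative sketch of mapping clench gestures to pain levels
# (hypothetical labels and 0-10 scale; not the paper's actual taxonomy).
from typing import Callable

CLENCH_TO_PAIN = {"light_clench": 2, "firm_clench": 5, "tight_clench": 8}

def record_pain_event(frame: bytes,
                      classify: Callable[[bytes], str]) -> int:
    """Classify a camera frame (via the cloud service) and log pain level."""
    label = classify(frame)
    pain = CLENCH_TO_PAIN.get(label, 0)   # unknown gestures log as level 0
    print(f"logged pain event: {label} -> level {pain}")
    return pain

# Stand-in for the prototype's cloud classification call.
fake_classifier = lambda frame: "firm_clench"
record_pain_event(b"\xff\xd8...", fake_classifier)   # -> level 5
```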
{"title":"Pain-o-vision, effortless pain management","authors":"B. Ramprasad, Hongkai Chen, A. Veith, K. Truong, E. D. Lara","doi":"10.1145/3458864.3466907","DOIUrl":"https://doi.org/10.1145/3458864.3466907","url":null,"abstract":"Chronic pain is often an ongoing challenge for patients to track and collect data. Pain-O-Vision is a smartwatch enabled pain management system that uses computer vision to capture the details of painful events from the user. A natural reaction to pain is to clench ones fist. The embedded camera is used to capture different types of fist clenching, to represent different levels of pain. An initial prototype was built on an Android smartwatch that uses a cloud-based classification service to detect the fist clench gestures. Our results show that it is possible to map a fist clench to different levels of pain which allows the patient to record the intensity of a painful event without carrying a specialized pain management device.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132059929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. Pasandi, T. Nadeem
{"title":"LATTE","authors":"H. Pasandi, T. Nadeem","doi":"10.5040/9781350122741.1001332","DOIUrl":"https://doi.org/10.5040/9781350122741.1001332","url":null,"abstract":"","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"51 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129997908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xingyu Chen, Jia Liu, Fu Xiao, Shigang Chen, Lijun Chen
Temperature sensing plays a significant role in upholding quality assurance and meeting regulatory compliance in a wide variety of applications, such as fire safety and cold-chain monitoring. However, existing temperature measurement devices are bulky, cost-prohibitive, or battery-powered, making item-level sensing and intelligence costly. In this paper, we present a novel tag-based thermometer called Thermotag, which uses a common passive RFID tag to sense temperature, with the competitive advantages of being low-cost, battery-free, and robust to environmental conditions. The basic idea of Thermotag is that the resistance of a semiconductor diode in the tag's chip is temperature-sensitive. By measuring the discharging period through the reverse-polarized diode, we can estimate the temperature indirectly. We propose a standards-compliant scheme for measuring the discharging period using the tag's volatile memory and build a mapping model between the discharging period and temperature for accurate and reliable temperature sensing. We implement Thermotag using a commercial off-the-shelf RFID system, with no need for any firmware or hardware modifications. Extensive experiments show that the temperature measurement has a wide span, from 0 °C to 85 °C, and a mean error of 2.7 °C.
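As a rough illustration of the mapping-model step, the sketch below assumes a log-linear relationship between discharging period and temperature (reverse leakage grows roughly exponentially with temperature, so the period shrinks exponentially); the calibration numbers are invented, and the paper's actual mapping model may differ.

```python
# Illustrative calibration sketch for a Thermotag-style mapping
# (assumed log-linear model; calibration data below is invented).
import numpy as np

# Hypothetical calibration data: temperature (°C) vs. measured
# discharging period (seconds) of the tag's volatile memory.
temps_c = np.array([0.0, 20.0, 40.0, 60.0, 85.0])
periods_s = np.array([12.0, 6.5, 3.4, 1.8, 0.8])

# Fit log(period) ~ a*T + b on the calibration points.
a, b = np.polyfit(temps_c, np.log(periods_s), 1)

def temperature_from_period(period_s: float) -> float:
    """Invert the fitted model: T = (log(period) - b) / a."""
    return (np.log(period_s) - b) / a

print(round(temperature_from_period(3.4), 1))  # ~40.0 °C
```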
{"title":"Thermotag","authors":"Xingyu Chen, Jia Liu, Fu Xiao, Shigang Chen, Lijun Chen","doi":"10.1145/3458864.3467879","DOIUrl":"https://doi.org/10.1145/3458864.3467879","url":null,"abstract":"Temperature sensing plays a significant role in upholding quality assurance and meeting regulatory compliance in a wide variety of applications, such as fire safety and cold chain monitoring. However, existing temperature measurement devices are bulky, cost-prohibitive, or battery-powered, making item-level sensing and intelligence costly. In this paper, we present a novel tag-based thermometer called Thermotag, which uses a common passive RFID tag to sense the temperature with competitive advantages of being low-cost, battery-free, and robust to environmental conditions. The basic idea of Thermotag is that the resistance of a semiconductor diode in a tag's chip is temperature-sensitive. By measuring the discharging period through the reverse-polarized diode, we can estimate the temperature indirectly. We propose a standards-compliant measurement scheme of the discharging period by using a tag's volatile memory and build a mapping model between the discharging period and temperature for accurate and reliable temperature sensing. We implement Thermotag using a commercial off-the-shelf RFID system, with no need for any firmware or hardware modifications. Extensive experiments show that the temperature measurement has a large span ranging from 0 °C to 85 °C and a mean error of 2.7 °C.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130979467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timothy Woodford, Xinyu Zhang, Eugene Chai, K. Sundaresan, Amir Khojastepour
mmWave 5G networks promise to enable a new generation of networked applications requiring a combination of high throughput and ultra-low latency. However, in practice, mmWave performance scales poorly to large numbers of users due to the significant overhead required to manage the highly directional beams. We find that we can substantially reduce or eliminate this overhead by using out-of-band infrared measurements of the surrounding environment generated by a LiDAR sensor. To accomplish this, we develop a ray-tracing system that is robust to noise and other artifacts from the infrared sensor, create a method to estimate reflection strength from sensor data, and finally apply this information to the multi-user beam-selection process. We demonstrate that this approach reduces beam-selection overhead by over 95% in indoor multi-user scenarios, reducing network latency by over 80% and more than doubling throughput in mobile scenarios.
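A toy sketch of the reflection geometry behind such a ray tracer: with a wall fitted as a plane from LiDAR points, the classic image method mirrors the AP across the plane to find the beam direction whose specular bounce reaches a blocked user. This is illustrative only; SpaceBeam additionally handles LiDAR noise and estimates per-surface reflection strength.

```python
# Toy mirror-image ray tracing for mmWave beam selection (illustrative).
import numpy as np

def mirror_point(p, plane_point, plane_normal):
    """Reflect point p across the plane (point, normal)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def reflected_beam(ap, user, plane_point, plane_normal):
    """Return the AP launch direction whose specular bounce hits the user."""
    ap_image = mirror_point(ap, plane_point, plane_normal)
    # The line from the mirrored AP to the user crosses the plane at the
    # bounce point; the real AP should aim at that bounce point.
    d = user - ap_image
    n = plane_normal / np.linalg.norm(plane_normal)
    t = np.dot(plane_point - ap_image, n) / np.dot(d, n)
    bounce = ap_image + t * d
    direction = bounce - ap
    return direction / np.linalg.norm(direction)

ap = np.array([0.0, 0.0, 2.5])        # ceiling-mounted AP
user = np.array([4.0, 1.0, 1.0])      # user with blocked line-of-sight
wall_pt = np.array([0.0, 3.0, 0.0])   # a wall fitted at y = 3
wall_n = np.array([0.0, 1.0, 0.0])
print(reflected_beam(ap, user, wall_pt, wall_n))
```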
{"title":"SpaceBeam","authors":"Timothy Woodford, Xinyu Zhang, Eugene Chai, K. Sundaresan, Amir Khojastepour","doi":"10.1145/3458864.3466864","DOIUrl":"https://doi.org/10.1145/3458864.3466864","url":null,"abstract":"mmWave 5G networks promise to enable a new generation of networked applications requiring a combination of high throughput and ultra-low latency. However, in practice, mmWave performance scales poorly for large numbers of users due to the significant overhead required to manage the highly-directional beams. We find that we can substantially reduce or eliminate this overhead by using out-of-band infrared measurements of the surrounding environment generated by a LiDAR sensor. To accomplish this, we develop a ray-tracing system that is robust to noise and other artifacts from the infrared sensor, create a method to estimate the reflection strength from sensor data, and finally apply this information to the multiuser beam selection process. We demonstrate that this approach reduces beam-selection overhead by over 95% in indoor multi-user scenarios, reducing network latency by over 80% and increasing throughput by over 2× in mobile scenarios.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128375610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We advocate ThingSpire OS, a new IoT operating system based on WebAssembly for cloud-edge integration. By design, WebAssembly is treated as a first-class citizen in ThingSpire OS to achieve coherent execution across IoT devices, the edge, and the cloud. Furthermore, ThingSpire OS enables efficient execution of WebAssembly on resource-constrained devices by implementing a small-footprint WebAssembly runtime based on Ahead-of-Time (AoT) compilation, achieves seamless inter-module communication wherever the modules are located, and leverages several optimizations, such as lightweight preemptible invocation, for memory isolation and control-flow integrity. We implement a prototype of ThingSpire OS and conduct preliminary evaluations of its inter-module communication performance.
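To illustrate host-mediated inter-module communication between WebAssembly modules, here is a minimal sketch using the wasmtime Python bindings (an assumption for illustration; ThingSpire OS implements its own small-footprint AoT runtime, and this is not its API). The host relays values between a producer and a consumer module, so the modules never share memory directly.

```python
# Host-mediated message passing between two Wasm modules (illustrative;
# uses wasmtime-py, not ThingSpire OS's own runtime).
from wasmtime import Engine, Store, Module, Instance, Func, FuncType, ValType

engine = Engine()
store = Store(engine)

# "Producer" module exports a function returning a sensor reading.
producer = Instance(store, Module(engine, """
(module
  (func (export "read_sensor") (result i32)
    i32.const 42))
"""), [])

# Host-side channel: the host calls the producer and forwards the value.
def host_recv():
    return producer.exports(store)["read_sensor"](store)

recv = Func(store, FuncType([], [ValType.i32()]), host_recv)

consumer = Instance(store, Module(engine, """
(module
  (import "host" "recv" (func $recv (result i32)))
  (func (export "run") (result i32)
    call $recv
    i32.const 1
    i32.add))
"""), [recv])

print(consumer.exports(store)["run"](store))  # -> 43
```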
{"title":"ThingSpire OS: a WebAssembly-based IoT operating system for cloud-edge integration","authors":"Borui Li, Hongchang Fan, Yi Gao, Wei Dong","doi":"10.1145/3458864.3466910","DOIUrl":"https://doi.org/10.1145/3458864.3466910","url":null,"abstract":"We advocate ThingSpire OS, a new IoT operating system based on WebAssembly for cloud-edge integration. By design, WebAssembly is considered as the first-class citizen in ThingSpire OS to achieve coherent execution among IoT device, edge and cloud. Furthermore, ThingSpire OS enables efficient execution of WebAssembly on resource-constrained devices by implementing a WebAssembly runtime based on Ahead-of-Time (AoT) compilation with a small footprint, achieves seamless inter-module communication wherever the modules locate, and leverages several optimizations such as lightweight preemptible invocation for memory isolation and control-flow integrity. We implement a prototype of ThingSpire OS and conduct preliminary evaluations on its inter-module communication performance.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115644493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This demonstration showcases our efforts to develop a radio access network (RAN) slicing mechanism that is controllable via management software in an Open RAN framework. To our knowledge, our work represents the first effort that combines an open-source Open RAN framework with an open-source mobility stack, provides a top-to-bottom RAN application via the RAN Intelligent Controller (RIC) provided by that framework, and illustrates its functionality in a realistic wireless environment. Our software is publicly available, and we provide a profile on the POWDER platform to enable others to replicate and build on our work.
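As a generic illustration of the resource-partitioning idea behind RAN slicing (not the demo's RIC application or the POWDER profile), the sketch below splits a subframe's physical resource blocks (PRBs) among slices according to configured weights, the kind of policy a RIC app might push to the scheduler.

```python
# Toy weighted RAN slicing (illustrative; the demo's mechanism is driven
# by a RIC application, not this standalone scheduler).
def allocate_prbs(total_prbs: int, slice_weights: dict) -> dict:
    """Split a subframe's PRBs among slices in proportion to their weights."""
    total_w = sum(slice_weights.values())
    alloc = {s: int(total_prbs * w / total_w) for s, w in slice_weights.items()}
    # Hand any rounding remainder to the highest-weight slice.
    leftover = total_prbs - sum(alloc.values())
    alloc[max(slice_weights, key=slice_weights.get)] += leftover
    return alloc

# Example: 100 PRBs split between an eMBB slice and an IoT slice.
print(allocate_prbs(100, {"embb": 0.7, "iot": 0.3}))  # {'embb': 70, 'iot': 30}
```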
{"title":"Open source RAN slicing on POWDER: a top-to-bottom O-RAN use case","authors":"David Johnson, Dustin Maas, J. Merwe","doi":"10.1145/3458864.3466912","DOIUrl":"https://doi.org/10.1145/3458864.3466912","url":null,"abstract":"This demonstration will showcase our efforts to develop a radio access network (RAN) slicing mechanism that is controllable via management software in an Open RAN framework. To our knowledge, our work represents the first effort that combines an open source Open RAN framework with an open source mobility stack, provides a top-to-bottom RAN application via the RAN intelligent control (RIC) provided by that framework and illustrates its functionality in a realistic wireless environment. Our software is publicly available and we provide a profile in the POWDER platform to enable others to replicate and build on our work.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115022649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hongfei Xue, Yan Ju, Chenglin Miao, Yijiang Wang, Shiyang Wang, Aidong Zhang, Lu Su
In this paper, we present mmMesh, the first real-time 3D human mesh estimation system using commercial portable millimeter-wave devices. mmMesh is built upon a novel deep learning framework that can dynamically locate the moving subject and capture his/her body shape and pose by analyzing the 3D point cloud generated from the mmWave signals that bounce off the human body. The proposed deep learning framework addresses a series of challenges. First, it encodes a 3D human body model, which enables mmMesh to estimate complex and realistic-looking 3D human meshes from sparse point clouds. Second, it can accurately align the 3D points with their corresponding body segments despite the influence of ambient points as well as the error-prone nature and multi-path effects of RF signals. Third, the proposed model can infer missing body parts from the information in previous frames. Our evaluation results on a commercial mmWave sensing testbed show that mmMesh can accurately localize the vertices of the human mesh with an average error of 2.47 cm. These results demonstrate the effectiveness of our proposed human mesh construction system.
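As a rough sketch of this kind of pipeline, the model below combines a PointNet-style per-frame encoder (tolerant of sparse, unordered points), a recurrent layer so earlier frames can fill in missing body parts, and a regression head for body-model parameters. It is an illustrative stand-in, not mmMesh's actual architecture; the parameter sizes assume an SMPL-like body model.

```python
# Minimal point-cloud-to-body-model regressor in PyTorch (illustrative;
# mmMesh's actual network and body-model decoder differ).
import torch
import torch.nn as nn

class PointCloud2Mesh(nn.Module):
    def __init__(self, n_pose=72, n_shape=10):
        super().__init__()
        # Shared per-point MLP + max-pool: a PointNet-style frame encoder.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128), nn.ReLU())
        # Temporal model: lets earlier frames inform occluded parts.
        self.gru = nn.GRU(128, 128, batch_first=True)
        # Regress body-model parameters (pose + shape + root translation).
        self.head = nn.Linear(128, n_pose + n_shape + 3)

    def forward(self, pts):            # pts: (batch, frames, points, 3)
        feat = self.point_mlp(pts)     # (batch, frames, points, 128)
        feat = feat.max(dim=2).values  # per-frame global feature
        seq, _ = self.gru(feat)
        return self.head(seq)          # parameters per frame

model = PointCloud2Mesh()
out = model(torch.randn(2, 8, 64, 3))  # 2 clips, 8 frames, 64 points each
print(out.shape)                        # torch.Size([2, 8, 85])
```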
{"title":"mmMesh","authors":"Hongfei Xue, Yan Ju, Chenglin Miao, Yijiang Wang, Shiyang Wang, Aidong Zhang, Lu Su","doi":"10.1145/3458864.3467679","DOIUrl":"https://doi.org/10.1145/3458864.3467679","url":null,"abstract":"In this paper, we present mmMesh, the first real-time 3D human mesh estimation system using commercial portable millimeter-wave devices. mmMesh is built upon a novel deep learning framework that can dynamically locate the moving subject and capture his/her body shape and pose by analyzing the 3D point cloud generated from the mmWave signals that bounce off the human body. The proposed deep learning framework addresses a series of challenges. First, it encodes a 3D human body model, which enables mmMesh to estimate complex and realistic-looking 3D human meshes from sparse point clouds. Second, it can accurately align the 3D points with their corresponding body segments despite the influence of ambient points as well as the error-prone nature and the multi-path effect of the RF signals. Third, the proposed model can infer missing body parts from the information of the previous frames. Our evaluation results on a commercial mmWave sensing testbed show that our mmMesh system can accurately localize the vertices on the human mesh with an average error of 2.47 cm. The superior experimental results demonstrate the effectiveness of our proposed human mesh construction system.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122659095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we are interested in the problem of counting a crowd of stationary (i.e., seated) people using a pair of WiFi transceivers. While the people in the crowd are stationary, i.e., exhibit no major body motion except breathing, they do not stay perfectly still for long periods of time and frequently engage in small in-place body motions called fidgets (e.g., adjusting their seating position, crossing their legs, or checking their phones). In this paper, we propose that the aggregate natural fidgeting and in-place motions of a stationary crowd carry crucial information about the crowd count. We then mathematically characterize the probability distribution functions (PDFs) of the crowd's fidgeting and silent periods (which we can extract from the received WiFi signal) and show their dependency on the total number of people in the area. In developing our mathematical models, we show how our problem of interest resembles a several-decade-old M/G/∞ queuing-theory problem, which allows us to borrow mathematical tools from the literature on M/G/∞ queues. We extensively validate our proposed approach with a total of 47 experiments in four different environments (including through-wall settings), in which up to and including N = 10 people are seated. We further test our system in different scenarios and with different activities representing various engagement levels of the crowd, such as attending a lecture, watching a movie, and reading. Moreover, we test our proposed system with different numbers of people seated in several different configurations. Our evaluation results show that our proposed approach achieves a very high counting accuracy, with the estimated number of people being only 0 or 1 off from the true number 96.3% of the time in non-through-wall settings and 90% of the time in through-wall settings. Our results show the potential of our proposed framework for crowd counting in real-world scenarios.
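One way to see why silent periods reveal the count (a hedged reconstruction of the argument, not the paper's full model): if each of the N seated people independently starts fidgets as a Poisson process with rate λ, the aggregate arrivals form a Poisson process with rate Nλ, so in the M/G/∞ analogy a silent (idle) period, during which no one is fidgeting, is exponentially distributed with rate Nλ:

```latex
% Silent periods in the M/G/infinity analogy (sketch; the paper also
% characterizes the PDF of the fidgeting periods).
\[
  P(\text{silent period} > t) = e^{-N\lambda t},
  \qquad
  \mathbb{E}[\text{silent period}] = \frac{1}{N\lambda}
  \;\Rightarrow\;
  \hat{N} = \frac{1}{\lambda \, \overline{T}_{\text{silent}}},
\]
```

so the mean measured silent duration directly yields a count estimate once the per-person fidget rate λ is known.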
{"title":"Counting a stationary crowd using off-the-shelf wifi","authors":"Belal Korany, Y. Mostofi","doi":"10.1145/3458864.3468012","DOIUrl":"https://doi.org/10.1145/3458864.3468012","url":null,"abstract":"In this paper, we are interested in the problem of counting a crowd of stationary people (i.e., seated) using a pair of WiFi transceivers. While the people in the crowd are stationary, i.e. with no major body motion except breathing, people do not stay still for a long period of time and frequently engage in small in-place body motions called fidgets (e.g., adjusting their seating position, crossing their legs, checking their phones, etc). In this paper, we propose that the aggregate natural fidgeting and in-place motions of a stationary crowd carry crucial information on the crowd count. We then mathematically characterize the Probability Distribution Function (PDF) of the crowd fidgeting and silent periods (which we can extract from the received WiFi signal) and show their dependency on the total number of people in the area. In developing our mathematical models, we show how our problem of interest resembles a several-decade-old M/G/∞ queuing theory problem, which allows us to borrow mathematical tools from the literature on M/G/∞ queues. We extensively validate our proposed approach with a total of 47 experiments in four different environments (including through-wall settings), in which up to and including N = 10 people are seated. We further test our system in different scenarios, and with different activities, representing various engagement levels of the crowd, such as attending a lecture, watching a movie, and reading. Moreover, we test our proposed system with different number of people seated in several different configurations. Our evaluation results show that our proposed approach achieves a very high counting accuracy, with the estimated number of people being only 0 or 1 off from the true number 96.3% of the time in non-through-wall settings, and 90% of the time in through-wall settings. Our results show the potential of our proposed framework for crowd counting in real-world scenarios.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129564475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}