Miguel Chávez Tapia, Talia Xu, Zehang Wu, M. Z. Zamalloa
A recent development in wireless communication is the use of optical shutters and smartphone cameras to create optical links solely from ambient light. At the transmitter, a liquid crystal display (LCD) modulates ambient light by changing its level of transparency. At the receiver, a smartphone camera decodes the optical pattern. This LCD-to-camera link requires low power levels at the transmitter, and it is easy to deploy because it does not require modifying the existing lighting infrastructure. The system, however, provides a low data rate of just a few tens of bps. This occurs because the LCDs used in the state of the art are slow single-pixel transmitters. To overcome this limitation, we introduce a novel multi-pixel display. Our display is similar to a simple screen, but instead of using embedded LEDs to radiate information, it uses only the surrounding ambient light. We build a prototype, called SunBox, and evaluate it indoors and outdoors with both artificial and natural ambient light. Our results show that SunBox can achieve a throughput between 2 kbps and 10 kbps using a low-end smartphone camera with just 30 FPS. To the best of our knowledge, this is the first screen-to-camera system that works solely with ambient light.
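The abstract does not describe SunBox's decoding pipeline in detail; the sketch below only illustrates the general multi-pixel screen-to-camera idea under stated assumptions: each cell of the display on-off keys one bit per camera frame, and the receiver splits a grayscale frame into a grid and thresholds each cell's mean brightness. The grid size, threshold rule, and synthetic test frame are illustrative, not the authors' design.

import numpy as np

def decode_frame(gray_frame, rows, cols):
    """Split a grayscale frame into rows x cols cells and threshold each
    cell's mean brightness to recover one bit per cell (assumed OOK)."""
    h, w = gray_frame.shape
    cell_h, cell_w = h // rows, w // cols
    # Midpoint threshold adapts to the frame's overall illumination level.
    threshold = (gray_frame.min() + gray_frame.max()) / 2.0
    bits = []
    for r in range(rows):
        for c in range(cols):
            cell = gray_frame[r * cell_h:(r + 1) * cell_h,
                              c * cell_w:(c + 1) * cell_w]
            bits.append(int(cell.mean() > threshold))
    return bits

# Synthetic example: an 8x8-cell frame carrying 64 random bits,
# rendered as dark (50) and bright (200) blocks of 30x30 pixels.
rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, 64)
frame = np.kron(tx_bits.reshape(8, 8) * 150 + 50, np.ones((30, 30)))
assert decode_frame(frame, 8, 8) == list(tx_bits)

At 30 FPS with, say, an 8x8 grid and one bit per cell per frame, such a link would carry roughly 30 x 64 ≈ 1.9 kbps, consistent with the lower end of the throughput range reported above.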
{"title":"SunBox: Screen-to-Camera Communication with Ambient Light","authors":"Miguel Chávez Tapia, Talia Xu, Zehang Wu, M. Z. Zamalloa","doi":"10.1145/3534602","DOIUrl":"https://doi.org/10.1145/3534602","url":null,"abstract":"A recent development in wireless communication is the use of optical shutters and smartphone cameras to create optical links solely from ambient light . At the transmitter, a liquid crystal display (LCD) modulates ambient light by changing its level of transparency. At the receiver, a smartphone camera decodes the optical pattern. This LCD-to-camera link requires low-power levels at the transmitter, and it is easy to deploy because it does not require modifying the existing lighting infrastructure. The system, however, provides a low data rate, of just a few tens of bps. This occurs because the LCDs used in the state-of-the-art are slow single-pixel transmitters. To overcome this limitation, we introduce a novel multi-pixel display. Our display is similar to a simple screen, but instead of using embedded LEDs to radiate information, it uses only the surrounding ambient light. We build a prototype, called SunBox, and evaluate it indoors and outdoors with both, artificial and natural ambient light. Our results show that SunBox can achieve a throughput between 2kbps and 10kbps using a low-end smartphone camera with just 30FPS. To the best of our knowledge, this is the first screen-to-camera system that works solely with ambient light. ;","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"124 1","pages":"46:1-46:26"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77342239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ultra-WideBand (UWB) localization has shown promising prospects in both academia and industry. However, accurate UWB localization for a large number of tags (i.e., targets) is still an open problem. Existing works usually require tedious time synchronization and labor-intensive calibration. We present VULoc, an accurate UWB localization system with high scalability for an unlimited number of targets, which significantly reduces synchronization and calibration overhead. The key idea of VULoc is an accurate localization method based on passive reception without time synchronization. Specifically, we propose a novel virtual Two-Way Ranging (V-TWR) method to enable accurate localization for an unlimited number of tags. We theoretically analyze the performance of our method and show its superiority. We leverage redundant ranging packets among anchors with known positions to infer a range mapping for auto-calibration, which eliminates the ranging bias arising from hardware and multipath issues. We finally design an anchor scheduling algorithm, which estimates reception quality for adaptive anchor selection to minimize the influence of NLOS. We implement VULoc with DW1000 chips and extensively evaluate its performance in various environments. The results show that VULoc achieves accurate localization with a median error of 10.5 cm and a 90th-percentile error of 15.7 cm, reducing the error of ATLAS (an open-source TDOA-based UWB localization system) by 57.6% while supporting countless targets with no synchronization and low calibration overhead.
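The abstract does not reproduce the V-TWR derivation; as background, the sketch below shows the standard single-sided two-way-ranging arithmetic that UWB ranging builds on, followed by a linear least-squares position fix from several anchor ranges. The anchor layout and noiseless ranges are illustrative assumptions, and the paper's passive, synchronization-free variant differs from this textbook form.

import numpy as np

C = 299_702_547.0  # approximate speed of light in air, m/s

def twr_distance(t_round_s, t_reply_s):
    """Single-sided TWR: the initiator measures the round-trip time, the
    responder reports its known reply delay, and half the difference is the
    one-way time of flight."""
    tof = (t_round_s - t_reply_s) / 2.0
    return tof * C

def trilaterate(anchors, dists):
    """Linear least-squares position fix from >= 3 anchor positions (x, y)
    and measured ranges, by subtracting the first anchor's equation."""
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Example: tag at (2, 3) m, four anchors at the room corners, noiseless ranges.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tag = np.array([2.0, 3.0])
dists = np.linalg.norm(anchors - tag, axis=1)
print(trilaterate(anchors, dists))  # ~[2. 3.]

A 1 ns timing error corresponds to roughly 30 cm of range error, which is why centimeter-level results such as the 10.5 cm median above depend on careful calibration of hardware delays.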
{"title":"VULoc: Accurate UWB Localization for Countless Targets without Synchronization","authors":"Jing Yang, Baishun Dong, Jiliang Wang","doi":"10.1145/3550286","DOIUrl":"https://doi.org/10.1145/3550286","url":null,"abstract":"Ultra-WideBand (UWB) localization has shown promising prospects in both academia and industry. However, accurate UWB localization for a large number of tags (i.e., targets) is still an open problem. Existing works usually require tedious time synchronization and labor-intensive calibrations. We present VULoc, an accurate UWB localization system with high scalability for an unlimited number of targets, which significantly reduces synchronization and calibration overhead. The key idea of VULoc is an accurate localization method based on passive reception without time synchronization. Specifically, we propose a novel virtual -Two Way Ranging (V-TWR) method to enable accurate localization for an unlimited number of tags. We theoretically analyze the performance of our method and show its superiority. We leverage redundant ranging packets among anchors with known positions to infer a range mapping for auto-calibration, which eliminates the ranging bias arising from the hardware and multipath issues. We finally design an anchor scheduling algorithm, which estimates reception quality for adaptive anchor selection to minimize the influence of NLOS. We implement VULoc with DW1000 chips and extensively evaluate its performance in various environments. The results show that VULoc can achieve accurate localization with a median error of 10.5 cm and 90% error of 15.7 cm, reducing the error of ATLAS (an open-source TDOA-based UWB localization system) by 57.6% while supporting countless targets with no synchronization and low calibration overhead.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"3 1","pages":"148:1-148:25"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82541267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unmanned robots are increasingly used around humans in factories, malls, and hotels. As they navigate our space, it is important to ensure that such robots do not collide with people who suddenly appear as they turn a corner. Today, however, there is no practical solution for localizing people around corners. Optical solutions try to track hidden people through their visible shadows on the floor or a sidewall, but they can easily fail depending on the ambient light and the environment. More recent work has considered the use of radio frequency (RF) signals to track people and vehicles around street corners. However, past RF-based proposals rely on a simplistic ray-tracing model that fails in practical indoor scenarios. This paper introduces CornerRadar, an RF-based method that provides accurate around-corner indoor localization. CornerRadar addresses the limitations of the ray-tracing model used in past work. It does so through a novel encoding of how RF signals bounce off walls and occlusions. The encoding, which we call the hint map, is then fed to a neural network along with the radio signals to localize people around corners. Empirical evaluation with people moving around corners in 56 indoor environments shows that CornerRadar achieves a median error that is 3x to 12x smaller than past RF-based solutions for localizing people around corners.
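The abstract does not specify how the hint map encodes wall bounces; the sketch below shows one generic ingredient such an encoding could contain, namely the single-bounce path length from the radar to every candidate grid cell via a known wall, computed with the mirror-image method. The wall geometry, the grid, and the choice of path length as the encoded quantity are assumptions for illustration, not CornerRadar's actual design.

import numpy as np

def one_bounce_path(radar_xy, wall_x, grid_x, grid_y):
    """Path length radar -> vertical wall at x = wall_x -> each grid cell,
    using the mirror-image method (valid when the radar and the cells lie
    on the same side of the wall)."""
    mirror_x = 2 * wall_x - radar_xy[0]  # reflect the radar across the wall
    gx, gy = np.meshgrid(grid_x, grid_y)
    return np.hypot(gx - mirror_x, gy - radar_xy[1])

# Illustrative geometry: radar at the origin, a wall at x = 5 m, and a
# 5 x 5 grid of candidate positions on the radar's side of the wall.
radar = (0.0, 0.0)
hint = one_bounce_path(radar, wall_x=5.0,
                       grid_x=np.linspace(0.0, 4.0, 5),
                       grid_y=np.linspace(-2.0, 2.0, 5))
print(hint.round(2))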
{"title":"CornerRadar: RF-Based Indoor Localization Around Corners","authors":"Shichao Yue, Hao He, Peng-Xia Cao, Kaiwen Zha, Masayuki Koizumi, D. Katabi","doi":"10.1145/3517226","DOIUrl":"https://doi.org/10.1145/3517226","url":null,"abstract":"Unmanned robots are increasingly used around humans in factories, malls, and hotels. As they navigate our space, it is important to ensure that such robots do not collide with people who suddenly appear as they turn a corner. Today, however, there is no practical solution for localizing people around corners. Optical solutions try to track hidden people through their visible shadows on the floor or a sidewall, but they can easily fail depending on the ambient light and the environment. More recent work has considered the use of radio frequency (RF) signals to track people and vehicles around street corners. However, past RF-based proposals rely on a simplistic ray-tracing model that fails in practical indoor scenarios. This paper introduces CornerRadar, an RF-based method that provides accurate around-corner indoor localization. CornerRadar addresses the limitations of the ray-tracing model used in past work. It does so through a novel encoding of how RF signals bounce off walls and occlusions. The encoding, which we call the hint map , is then fed to a neural network along with the radio signals to localize people around corners. Empirical evaluation with people moving around corners in 56 indoor environments shows that CornerRadar achieves a median error that is 3x to 12x smaller than past RF-based solutions for localizing people around corners.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"123 1","pages":"34:1-34:24"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79481741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zongjian Liu, Jieling He, Jianjiang Feng, Jie Zhou
A 12-person user study was conducted to evaluate the performance of different strategies. Our user evaluation showed that participants achieved an average of 29.56, 32.38, and 34.22 WPM with uncorrected error rates of 0.79%, 0.20%, and 0.21% in the three strategies. In addition, we provided a detailed analysis of various micro metrics to further understand user performance and technical characteristics. Overall, PrinType is favored by users for its usability, efficiency, and novelty.
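For reference, the WPM and uncorrected-error-rate figures quoted above are standard text-entry metrics; the sketch below computes them in the usual Soukoreff-and-MacKenzie style (characters per minute divided by five, and the string distance left in the final transcription). The exact formulas and per-phrase bookkeeping used in the paper may differ.

def wpm(transcribed, seconds):
    """Words per minute: (characters - 1) / time * 60, with 5 chars per word."""
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def levenshtein(a, b):
    """Minimum string distance between presented and transcribed text."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def uncorrected_error_rate(presented, transcribed):
    """Errors remaining in the final transcription, per character."""
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))

print(wpm("the quick brown fox", 4.0))                        # 54.0 WPM
print(uncorrected_error_rate("hello world", "helo world"))    # ~0.09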
{"title":"PrinType: Text Entry via Fingerprint Recognition","authors":"Zongjian Liu, Jieling He, Jianjiang Feng, Jie Zhou","doi":"10.1145/3569491","DOIUrl":"https://doi.org/10.1145/3569491","url":null,"abstract":"A 12-person user study was conducted to evaluate the performance of different strategies. Our user evaluation showed that participants achieved an average of 29.56, 32.38, and 34.22 WPM with 0.79%, 0.20%, and 0.21% not corrected error rate in the three strategies. In addition, we provided a detailed analysis of various micro metrics to further understand user performance and technical characteristics. Overall, PrinType is favored by users for its usability, efficiency, and novelty.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"6 1","pages":"174:1-174:31"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86577279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kyuin Lee, Yucheng Yang, Omkar Prabhune, Aishwarya Lekshmi Chithra, Jack West, Kassem Fawaz, Neil Klingensmith, Suman Banerjee, Younghyun Kim
Wireless connectivity is becoming common in increasingly diverse personal devices, enabling various interoperation- and Internet-based applications and services. More and more interconnected devices are simultaneously operated by a single user with short-lived connections, making usable device authentication methods imperative to ensure both high security and a seamless user experience. Unfortunately, current authentication methods that require heavy human involvement, together with form factor and mobility constraints, make this balance hard to achieve, often forcing users to choose between security and convenience. In this work, we present a novel over-the-air device authentication scheme named AeroKey that achieves both high security and high usability. With virtually no hardware overhead, AeroKey leverages ubiquitously observable ambient electromagnetic radiation to autonomously generate a spatiotemporally unique secret that can be derived only by devices that are closely located to each other. Devices can use this unique secret to form the basis of a symmetric key, making the authentication procedure practical, secure, and usable with no active human involvement. We propose and implement essential techniques to overcome challenges in realizing AeroKey on low-cost microcontroller units, such as poor time synchronization, the lack of a precision analog front-end, and inconsistent sampling rates. Our real-world experiments demonstrate reliable authentication as well as robustness against various realistic adversaries, with equal error rates of 3.4% or less and authentication times as low as 24 s.
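The abstract does not spell out how the shared secret is encoded; the sketch below only illustrates the general pattern behind context-based pairing schemes such as AeroKey: two co-located devices quantize a commonly observed ambient signal into bits and hash the agreed bits into symmetric key material. The median-threshold encoder, noise levels, and simulated signal are toy assumptions; AeroKey's actual encoding and its handling of timing and sampling-rate mismatches are more involved.

import hashlib
import numpy as np

def signal_to_bits(samples):
    """Quantize a signal window into bits: 1 if a sample exceeds the window
    median, else 0 (a deliberately simple encoder)."""
    med = np.median(samples)
    return "".join("1" if s > med else "0" for s in samples)

def bits_to_key(bits):
    """Hash the agreed bit string into 256-bit symmetric key material."""
    return hashlib.sha256(bits.encode()).digest()

# Two co-located devices observe the same ambient context with small,
# device-specific measurement noise.
rng = np.random.default_rng(1)
ambient = rng.normal(size=256)
device_a = ambient + rng.normal(scale=0.01, size=256)
device_b = ambient + rng.normal(scale=0.01, size=256)

bits_a, bits_b = signal_to_bits(device_a), signal_to_bits(device_b)
agreement = sum(x == y for x, y in zip(bits_a, bits_b)) / len(bits_a)
print(f"bit agreement: {agreement:.2%}")
print("keys identical:", bits_to_key(bits_a) == bits_to_key(bits_b))

Samples near the median can flip between the two devices, so practical context-based pairing schemes reconcile those residual disagreements before deriving the final key.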
{"title":"AEROKEY: Using Ambient Electromagnetic Radiation for Secure and Usable Wireless Device Authentication","authors":"Kyuin Lee, Yucheng Yang, Omkar Prabhune, Aishwarya Lekshmi Chithra, Jack West, Kassem Fawaz, Neil Klingensmith, Suman Banerjee, Younghyun Kim","doi":"10.1145/3517254","DOIUrl":"https://doi.org/10.1145/3517254","url":null,"abstract":"Wireless connectivity is becoming common in increasingly diverse personal devices, enabling various interoperation- and Internet-based applications and services. More and more interconnected devices are simultaneously operated by a single user with short-lived connections, making usable device authentication methods imperative to ensure both high security and seamless user experience. Unfortunately, current authentication methods that heavily require human involvement, in addition to form factor and mobility constraints, make this balance hard to achieve, often forcing users to choose between security and convenience. In this work, we present a novel over-the-air device authentication scheme named AeroKey that achieves both high security and high usability. With virtually no hardware overhead, AeroKey leverages ubiquitously observable ambient electromagnetic radiation to autonomously generate spatiotemporally unique secret that can be derived only by devices that are closely located to each other. Devices can make use of this unique secret to form the basis of a symmetric key, making the authentication procedure more practical, secure and usable with no active human involvement. We propose and implement essential techniques to overcome challenges in realizing AeroKey on low-cost microcontroller units, such as poor time synchronization, lack of precision analog front-end, and inconsistent sampling rates. Our real-world experiments demonstrate reliable authentication as well as its robustness against various realistic adversaries with low equal-error rates of 3.4% or less and usable authentication time of as low as 24 s. .","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"23 1","pages":"20:1-20:29"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82197084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wei Sun, Yuwen Chen, Yanjun Chen, Xiaopeng Zhang, Simon Zhan, Yixin Li, Jie Wu, Teng Han, Haipeng Mi, Jingxian Wang, Feng Tian, Xing-Dong Yang
{"title":"MicroFluID: A Multi-Chip RFID Tag for Interaction Sensing Based on Microfluidic Switches","authors":"Wei Sun, Yuwen Chen, Yanjun Chen, Xiaopeng Zhang, Simon Zhan, Yixin Li, Jie Wu, Teng Han, Haipeng Mi, Jingxian Wang, Feng Tian, Xing-Dong Yang","doi":"10.1145/3550296","DOIUrl":"https://doi.org/10.1145/3550296","url":null,"abstract":"","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"10 1","pages":"141:1-141:23"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84139591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brice Parilusyan, M. Teyssier, Valentin Martinez-Missir, Clément Duhart, Marcos Serrano
Ubiquitous touch sensing surfaces are largely influenced by touchscreens' look and feel and fail to express the physical richness of existing surrounding materials. We introduce Sensurfaces, a plug-and-play electronic module that allows rapid experimentation with touch-sensitive surfaces while preserving the original appearance of materials. Sensurfaces modules can be connected together to expand the size and number of materials composing a sensitive surface. Combining Sensurfaces modules allows the creation of small or large multi-material sensitive surfaces that can detect not only multi-touch but also body proximity, pose, passing, or even human steps. In this paper, we present the design and implementation of Sensurfaces. We propose a design space describing the factors of Sensurfaces interfaces. Then, through a series of technical evaluations, we demonstrate the capabilities of our system. Finally, we report on two workshops validating the usability of our system.
{"title":"Sensurfaces: A Novel Approach for Embedded Touch Sensing on Everyday Surfaces","authors":"Brice Parilusyan, M. Teyssier, Valentin Martinez-Missir, Clément Duhart, Marcos Serrano","doi":"10.1145/3534616","DOIUrl":"https://doi.org/10.1145/3534616","url":null,"abstract":"Ubiquitous touch sensing surfaces are largely influenced by touchscreens’ look and feel and fail to express the physical richness of existing surrounding materials. We introduce Sensurfaces , a plug-and-play electronic module that allows to rapidly experiment with touch-sensitive surfaces while preserving the original appearance of materials. Sensurfaces is composed of plug-and-play modules that can be connected together to expand the size and number of materials composing a sensitive surface. The combination of Sensurfaces modules allows the creation of small or large multi-material sensitive surfaces that can detect multi-touch but also body proximity, pose, pass, or even human steps. In this paper, we present the design and implementation of Sensurfaces . We propose a design space describing the factors of Sensurfaces interfaces. Then, through a series of technical evaluations, we demonstrate the capabilities of our system. Finally, we report on two workshops validating the usability of our system.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"8 1","pages":"67:1-67:19"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82744526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. Ding, Yizhan Wang, Hao Li, Cui Zhao, Ge Wang, Wei Xi, Jizhong Zhao
Speech enhancement can benefit many practical voice-based interaction applications, where the goal is to generate clean speech from noisy ambient conditions. This paper presents a practical design, named UltraSpeech, that enhances speech by exploring the correlation between ultrasound (which profiles articulatory gestures) and speech. UltraSpeech uses a commodity smartphone to emit the ultrasound and collect the composed acoustic signal for analysis. We design a complex masking framework to deal with complex-valued spectrograms, incorporating the magnitude and phase rectification of speech simultaneously. We further introduce an interaction module to share information between the ultrasound and speech branches and thus enhance their discrimination capabilities. Extensive experiments demonstrate that UltraSpeech increases the scale-invariant SDR by 12 dB, effectively improves speech intelligibility and quality, and is able to generalize to unknown speakers.
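The "complex masking" mentioned above refers to predicting a complex-valued mask that corrects both the magnitude and the phase of the noisy spectrogram at once; the toy sketch below shows the masking operation itself using an oracle mask computed from known clean and noisy spectra. In UltraSpeech the mask is predicted by the network (conditioned on the ultrasound branch) rather than computed this way, and the spectrogram shapes here are arbitrary.

import numpy as np

def ideal_complex_mask(clean_stft, noisy_stft, eps=1e-8):
    """Complex ratio mask M such that M * noisy ~= clean."""
    return clean_stft / (noisy_stft + eps)

# Toy complex spectrograms (frequency bins x frames).
rng = np.random.default_rng(0)
clean = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
noise = 0.5 * (rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100)))
noisy = clean + noise

mask = ideal_complex_mask(clean, noisy)
enhanced = mask * noisy
# True: both magnitude and phase are restored, unlike magnitude-only masking.
print(np.allclose(enhanced, clean, atol=1e-4))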
{"title":"UltraSpeech: Speech Enhancement by Interaction between Ultrasound and Speech","authors":"H. Ding, Yizhan Wang, Hao Li, Cui Zhao, Ge Wang, Wei Xi, Jizhong Zhao","doi":"10.1145/3550303","DOIUrl":"https://doi.org/10.1145/3550303","url":null,"abstract":"Speech enhancement can bene � t lots of practical voice-based interaction applications, where the goal is to generate clean speech from noisy ambient conditions. This paper presents a practical design, namely UltraSpeech, to enhance speech by exploring the correlation between the ultrasound (pro � led articulatory gestures) and speech. UltraSpeech uses a commodity smartphone to emit the ultrasound and collect the composed acoustic signal for analysis. We design a complex masking framework to deal with complex-valued spectrograms, incorporating the magnitude and phase recti � cation of speech simultaneously. We further introduce an interaction module to share information between ultrasound and speech two branches and thus enhance their discrimination capabilities. Extensive experiments demonstrate that UltraSpeech increases the Scale Invariant SDR by 12dB, improves the speech intelligibility and quality e � ectively, and is capable to generalize to unknown speakers.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"12 1","pages":"111:1-111:25"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81310907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Food analytics and the estimation of food nutrients have seen increasing demand in recent years for monitoring and controlling individuals' food intake and calorie consumption. Microwave ovens have recently replaced conventional cooking methods due to their efficient and quick heating and cooking. Users estimate food nutrient composition by looking up information for each of the food's ingredients or by using applications that map a picture of the food to a pre-defined dataset. These techniques are often time-consuming, do not work in real time, and can therefore be inaccurate. In this paper, we present WiNE, a system that introduces a new technique to estimate food nutrient composition and calorie content in real time using microwave radiation. Our system monitors microwave oven leakage in the time and frequency domains and estimates the percentage of nutrients (carbohydrate, fat, protein, and water) present in the food. To evaluate the real-world performance of WiNE, we built a prototype using software-defined radios and conducted experiments on various food items using household microwave ovens. WiNE can estimate food nutrient composition with a mean absolute error of ≤ 5% and the calorie content of the food with a high correlation of ∼0.97.
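The abstract says leakage is analyzed in both the time and frequency domains but does not list the features or the estimator; the sketch below only illustrates the kind of generic time- and frequency-domain descriptors one could compute from a captured leakage trace before regressing nutrient percentages. The feature set, sampling rate, and synthetic trace are assumptions, not WiNE's actual pipeline.

import numpy as np

def leakage_features(samples, fs):
    """A few generic descriptors of one captured leakage trace."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    rms = float(np.sqrt(np.mean(samples**2)))
    return {
        "rms": rms,                                                  # time domain
        "peak_to_rms": float(np.max(np.abs(samples)) / rms),         # time domain
        "spectral_centroid_hz": float(np.sum(freqs * spectrum) / np.sum(spectrum)),
        "high_band_energy_ratio": float(np.sum(spectrum[freqs > fs / 4]) / np.sum(spectrum)),
    }

# Synthetic stand-in for a recorded leakage trace: a 50 Hz component plus noise.
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 50 * np.arange(0, 1, 1 / 2000)) + 0.1 * rng.normal(size=2000)
print(leakage_features(trace, fs=2000.0))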
{"title":"WiNE: Monitoring Microwave Oven Leakage to Estimate Food Nutrients and Calorie","authors":"A. Banerjee, K. Srinivasan","doi":"10.1145/3550313","DOIUrl":"https://doi.org/10.1145/3550313","url":null,"abstract":"Food analytic and estimation of food nutrients have an increasing demand in recent years to monitor and control food intake and calorie consumption by individuals. Microwave ovens have recently replaced conventional cooking methods due to efficient and quick heating and cooking techniques. Users estimate the food nutrient composition by using some lookup information for each of the food’s ingredients or by using applications that map the picture of the food to their pre-defined dataset. These techniques are often time-consuming and not in real-time and thus can result in low accuracy. In this paper, we present WiNE , a system that introduces a new technique to estimate food nutrient composition and calorie content in real-time using microwave radiation. Our system monitors microwave oven leakage in the time and frequency domains and estimates the percentage of nutrients (carbohydrate, fat, protein, and water) present in the food. To evaluate the real-world performance of WiNE, we build a prototype using software-defined radios and conducted experiments on various food items using household microwave ovens. WiNE can estimate the food nutrient composition with a mean absolute error of ≤ 5% and the calorie content of the food with a high correlation of ∼ 0.97. and time-frequency domains.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"8 1","pages":"99:1-99:24"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88768100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yunji Liang, Yuchen Qin, Qi Li, Xiaokai Yan, Zhiwen Yu, Bin Guo, S. Samtani, Yanyong Zhang
The built-in loudspeakers of mobile devices (e.g., smartphones, smartwatches, and tablets) play significant roles in human-machine interaction, such as playing music, making phone calls, and enabling voice-based interaction. Prior studies have pointed out that it is feasible to eavesdrop on the loudspeaker via motion sensors, but whether it is possible to synthesize speech from non-acoustic signals with sub-Nyquist sampling frequencies has not been studied. In this paper, we present an end-to-end model to reconstruct the acoustic waveforms playing on the loudspeaker from the vibrations captured by the built-in accelerometer. Specifically, we present an end-to-end speech synthesis framework dubbed AccMyrinx to eavesdrop on the loudspeaker using the built-in low-resolution accelerometer of mobile devices. AccMyrinx takes advantage of the coexistence of the accelerometer and the loudspeaker on the same motherboard and compromises the loudspeaker through the solid-borne vibrations captured by the accelerometer. Low-resolution vibration signals are fed to a wavelet-based MelGAN to generate intelligible acoustic waveforms. We conducted extensive experiments on a large-scale dataset created from audio clips downloaded from Voice of America (VOA). The experimental results show that AccMyrinx is capable of reconstructing intelligible acoustic signals playing on the loudspeaker with a smoothed word error rate (SWER) of 42.67%. The quality of the synthesized speech can be severely affected by several factors, including gender, speech rate, and volume.
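A short sketch of why a learned generator is needed at all: an accelerometer sampling far below the audio Nyquist rate folds speech-band energy down in frequency, so the waveform cannot be recovered by resampling alone. The accelerometer rate, tone frequency, and naive decimation below are illustrative assumptions, not AccMyrinx's signal chain.

import numpy as np

audio_fs = 16_000        # typical speech sampling rate, Hz
accel_fs = 500           # assumed accelerometer rate, Hz (sub-Nyquist for speech)
t_audio = np.arange(0, 0.5, 1 / audio_fs)

tone = np.sin(2 * np.pi * 900 * t_audio)     # a 900 Hz speech-band component
accel_view = tone[:: audio_fs // accel_fs]   # naive decimation to 500 Hz

# The 900 Hz tone folds to |900 - 2*500| = 100 Hz in the 500 Hz stream.
spectrum = np.abs(np.fft.rfft(accel_view))
freqs = np.fft.rfftfreq(len(accel_view), d=1 / accel_fs)
print("apparent frequency:", freqs[np.argmax(spectrum)], "Hz")   # ~100 Hz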
{"title":"AccMyrinx: Speech Synthesis with Non-Acoustic Sensor","authors":"Yunji Liang, Yuchen Qin, Qi Li, Xiaokai Yan, Zhiwen Yu, Bin Guo, S. Samtani, Yanyong Zhang","doi":"10.1145/3550338","DOIUrl":"https://doi.org/10.1145/3550338","url":null,"abstract":"The built-in loudspeakers of mobile devices (e.g., smartphones, smartwatches, and tablets) play significant roles in human-machine interaction, such as playing music, making phone calls, and enabling voice-based interaction. Prior studies have pointed out that it is feasible to eavesdrop on the speaker via motion sensors, but whether it is possible to synthesize speech from non-acoustic signals with sub-Nyquist sampling frequency has not been studied. In this paper, we present an end-to-end model to reconstruct the acoustic waveforms that are playing on the loudspeaker through the vibration captured by the built-in accelerometer. Specifically, we present an end-to-end speech synthesis framework dubbed AccMyrinx to eavesdrop on the speaker using the built-in low-resolution accelerometer of mobile devices. AccMyrinx takes advantage of the coexistence of an accelerometer with the loudspeaker on the same motherboard and compromises the loudspeaker by the solid-borne vibrations captured by the accelerometer. Low-resolution vibration signals are fed to a wavelet-based MelGAN to generate intelligible acoustic waveforms. We conducted extensive experiments on a large-scale dataset created based on audio clips downloaded from Voice of America (VOA). The experimental results show that AccMyrinx is capable of reconstructing intelligible acoustic signals that are playing on the loudspeaker with a smoothed word error rate (SWER) of 42.67%. The quality of synthesized speeches could be severely affected by several factors including gender, speech rate, and volume.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"60 1","pages":"127:1-127:24"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75016533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}