Virtual Keyboard for Wearable Wristbands
Wenqiang Chen, Y. Lian, Lu Wang, Rukhsana Ruby, Wen Hu, Kaishun Wu
Wearable devices are small and easy to carry, but they typically offer a poor interaction experience. For example, the Apple Watch does not support instant text-message input because its tiny touch screen cannot accommodate a keyboard. To address this problem, we develop a novel system, termed iKey, which lets users employ the back of one of their hands as a virtual keyboard for wearable wristbands. iKey recognizes keystrokes from body vibrations using a location-based training model. We will demonstrate a real-time functional prototype of iKey in this demo.
{"title":"Virtual Keyboard for Wearable Wristbands","authors":"Wenqiang Chen, Y. Lian, Lu Wang, Rukhsana Ruby, Wen Hu, Kaishun Wu","doi":"10.1145/3131672.3136984","DOIUrl":"https://doi.org/10.1145/3131672.3136984","url":null,"abstract":"The wearable devices are small and easy to carry but typically with poor interaction experience. For example, Apple iWatch does not support instant text message input feature because of the lack of keyboard availability on the tiny touch screen. To address this problem, we develop a novel system, termed iKey, which enables users to use the back of one of their hands as virtual keyboard for wearable wristbands. iKey recognizes keystrokes based on a location-based training model via body vibration. We will demonstrate a real time functional prototype of iKey in this demo.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132263266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AquaMote
Eun Sun Lee, J. Jeyakumar, Bharathan Balaji, R. P. Wilson, Mani Srivastava
Tagging animals with sensors is a powerful approach to acquiring critical information about the behavioural ecology of free-living animals, which can ultimately inform best practice in conservation efforts. Sensor tags for such deployments need long lifetimes and must incorporate multiple sensors, especially for location, because space use can contextualize behavior. The tag must also be small enough not to affect the animal's activities. Aquatic animals present particular challenges due to the lack of underwater wireless communication and the need for waterproofing. Taking these points into consideration, we have designed AquaMote: an ultra-low-power, tiny (20 × 29 mm) sensor tag that integrates an accelerometer, gyroscope, magnetometer, depth sensor, GPS, and BLE. Our poster will showcase the performance of AquaMote and highlight our design decisions to reduce its size and power consumption.
{"title":"AquaMote","authors":"Eun Sun Lee, J. Jeyakumar, Bharathan Balaji, R. P. Wilson, Mani Srivastava","doi":"10.1145/3131672.3136992","DOIUrl":"https://doi.org/10.1145/3131672.3136992","url":null,"abstract":"Tagging animals with sensors is a powerful approach to acquire critical information about the behavioural ecology of free-living animals, which ultimately can provide data to inform best practice in conservation efforts. Sensor tags for such deployments need long lifetimes and incorporate multiple sensors, especially location because space use can contextualize behavior. The tag size needs to be minimal so as not to affect the activities of the animal. Aquatic animals in particular present challenges due to lack of wireless communication and water-proofing. Taking these points into consideration we have designed Aquamote: an ultra-low power, tiny sensor tag (20 x 29 mm2) which integrates accelerometer, gyroscope, magnetometer, depth sensor, GPS and BLE. Our poster will showcase the performance of AquaMote and highlight our design decisions to reduce its size and power consumption.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123450715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supply Chain Object Discovery with Semantic-enhanced Blockchain
M. Ruta, F. Scioscia, S. Ieva, Giovanna Capurso, E. Sciascio
Supply chains can be seen as cyber-physical networks grounded in object identification and tracking. Conventional trust models, built on centralized information-management architectures and simplistic classification of things, are two of the most significant limitations of current solutions. Blockchain introduces novel and valuable approaches to trust, while semantic technologies permit richer descriptions of things. This paper introduces a semantic-enhanced blockchain platform that allows flexible object discovery. It is based on consensus validation of smart contracts and adopts semantic matchmaking between queries and object annotations expressed w.r.t. ontology models. Early experiments confirm the good behaviour of the proposed framework.
{"title":"Supply Chain Object Discovery with Semantic-enhanced Blockchain","authors":"M. Ruta, F. Scioscia, S. Ieva, Giovanna Capurso, E. Sciascio","doi":"10.1145/3131672.3136974","DOIUrl":"https://doi.org/10.1145/3131672.3136974","url":null,"abstract":"Supply chains can be seen as cyber-physical networks grounded on object identification and tracking. Conventional trust models featuring centralized information management architectures and simplistic things classification lend two of the most relevant limitations to current solutions. Blockchain introduces novel and a valuable trust approaches while semantic technologies better permit a things description. This paper introduces a semantic-enhanced blockchain platform allowing a flexible object discovery. It is based on validation by consensus of smart contracts and adopt a semantic matchmaking between queries and object annotations expressed w.r.t. ontology models. Early experiments assess the good behaviour of the proposed framework.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126189849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CamForensics: Understanding Visual Privacy Leaks in the Wild
Animesh Srivastava, Puneet Jain, Soteris Demetriou, Landon P. Cox, Kyu-Han Kim
Many mobile apps, including augmented-reality games, bar-code readers, and document scanners, digitize information from the physical world by applying computer-vision algorithms to live camera data. However, because camera permissions for existing mobile operating systems are coarse (i.e., an app may access a camera's entire view or none of it), users are vulnerable to visual privacy leaks. An app violates visual privacy if it extracts information from camera data in unexpected ways. For example, a user might be surprised to find that an augmented-reality makeup app extracts text from the camera's view in addition to detecting faces. This paper presents results from the first large-scale study of visual privacy leaks in the wild. We build CamForensics to identify the kind of information that apps extract from camera data. Our extensive user surveys determine what kind of information users expect an app to extract. Finally, our results show that camera apps frequently defy users' expectations based on their descriptions.
{"title":"CamForensics: Understanding Visual Privacy Leaks in the Wild","authors":"Animesh Srivastava, Puneet Jain, Soteris Demetriou, Landon P. Cox, Kyu-Han Kim","doi":"10.1145/3131672.3131683","DOIUrl":"https://doi.org/10.1145/3131672.3131683","url":null,"abstract":"Many mobile apps, including augmented-reality games, bar-code readers, and document scanners, digitize information from the physical world by applying computer-vision algorithms to live camera data. However, because camera permissions for existing mobile operating systems are coarse (i.e., an app may access a camera's entire view or none of it), users are vulnerable to visual privacy leaks. An app violates visual privacy if it extracts information from camera data in unexpected ways. For example, a user might be surprised to find that an augmented-reality makeup app extracts text from the camera's view in addition to detecting faces. This paper presents results from the first large-scale study of visual privacy leaks in the wild. We build CamForensics to identify the kind of information that apps extract from camera data. Our extensive user surveys determine what kind of information users expected an app to extract. Finally, our results show that camera apps frequently defy users' expectations based on their descriptions.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127503359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Future of Sensing is Batteryless, Intermittent, and Awesome
Josiah D. Hester, Jacob M. Sorber
Sensing has been obsessed with delivering on the "smart dust" vision outlined decades ago, where trillions of tiny invisible computers support daily life, infrastructure, and humanity in general. Batteries are the single greatest threat to this vision of a sustainable Internet of Things. They are expensive, bulky, hazardous, and wear out after a few years (even rechargeables). Replacing and disposing of billions or trillions of dead batteries per year would be expensive and irresponsible. By leaving batteries behind and surviving off energy harvested from the environment, tiny intermittently powered computers can monitor objects in hard-to-reach places, maintenance-free, for decades. The intermittent execution, constrained compute and energy resources, and unreliability of these devices create new challenges for the sensing and embedded systems community. However, the rewards and potential impact across many fields are worth it, enabling currently impractical applications in health services and patient care, commercial and consumer applications, wildlife conservation, industrial and infrastructure management, and even space exploration. This paper highlights major research questions and establishes new directions for the community to embrace and investigate.
{"title":"The Future of Sensing is Batteryless, Intermittent, and Awesome","authors":"Josiah D. Hester, Jacob M. Sorber","doi":"10.1145/3131672.3131699","DOIUrl":"https://doi.org/10.1145/3131672.3131699","url":null,"abstract":"Sensing has been obsessed with delivering on the \"smart dust\" vision outlined decades ago, where trillions of tiny invisible computers support daily life, infrastructure, and humanity in general. Batteries are the single greatest threat to this vision of a sustainable Internet of Things. They are expensive, bulky, hazardous, and wear out after a few years (even rechargeables). Replacing and disposing of billions or trillions of dead batteries per year would be expensive and irresponsible. By leaving the batteries behind and surviving off energy harvested from the environment, tiny intermittently powered computers can monitor objects in hard to reach places maintenance free for decades. The intermittent execution, constrained compute and energy resources, and unreliability of these devices creates new challenges for the sensing and embedded systems community. However, the rewards and potential impact across many fields are worth it, enabling currently impractical applications in health services and patient care, commercial and consumer applications, wildlife conservation, industrial and infrastructure management, even space exploration. This paper highlights major research questions and establishes new directions for the community to embrace and investigate.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121670795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling En-Route Filtering for End-to-End Encrypted CoAP Messages
Klara Seitz, Sebastian Serth, Konrad-Felix Krentz, C. Meinel
IoT devices are usually battery-powered and directly connected to the Internet. This makes them vulnerable to so-called path-based denial-of-service (PDoS) attacks. In a PDoS attack, for example, an adversary sends multiple Constrained Application Protocol (CoAP) messages towards an IoT device, causing every IoT device along the path to expend energy forwarding them. Current end-to-end security solutions, such as DTLS or IPsec, fail to prevent such attacks since they only filter out inauthentic CoAP messages at their destination. This demonstration shows an approach to en-route filtering in which a trusted gateway has all the information necessary to check the integrity of, decrypt, and, if necessary, drop a message before forwarding it to the constrained mote. Our approach preserves the precious resources of IoT devices in the face of path-based denial-of-service attacks by remote attackers.
{"title":"Enabling En-Route Filtering for End-to-End Encrypted CoAP Messages","authors":"Klara Seitz, Sebastian Serth, Konrad-Felix Krentz, C. Meinel","doi":"10.1145/3131672.3136960","DOIUrl":"https://doi.org/10.1145/3131672.3136960","url":null,"abstract":"IoT devices usually are battery-powered and directly connected to the Internet. This makes them vulnerable to so-called path-based denial-of-service (PDoS) attacks. For example, in a PDoS attack an adversary sends multiple Constrained Application Protocol (CoAP) messages towards an IoT device, thereby causing each IoT device along the path to expend energy for forwarding this message. Current end-to-end security solutions, such as DTLS or IPsec, fail to prevent such attacks since they only filter out inauthentic CoAP messages at their destination. This demonstration shows an approach to allow en-route filtering where a trusted gateway has all necessary information to check the integrity, decrypt and, if necessary, drop a message before forwarding it to the constrained mote. Our approach preserves precious resources of IoT devices in the face of path-based denial-of-service attacks by remote attackers.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131099452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monitoring a Person's Heart Rate and Respiratory Rate on a Shared Bed Using Geophones
Zhenhua Jia, Amelie Bonde, Sugang Li, Chenren Xu, Jingxian Wang, Yanyong Zhang, R. Howard, Pei Zhang
Using geophones to sense bed vibrations caused by ballistic force has shown great potential in monitoring a person's heart rate during sleep. It does not require a special mattress or sheets, and the user is free to move around and change position during sleep. Earlier work has studied how to process the geophone signal to detect heartbeats when a single subject occupies the entire bed. In this study, we develop a system called VitalMon, aiming to monitor a person's respiratory rate as well as heart rate, even when she is sharing a bed with another person. In such situations, the vibrations from both persons are mixed together. VitalMon first separates the two heartbeat signals, and then distinguishes the respiration signal from the heartbeat signal for each person. Our heartbeat separation algorithm relies on the spatial difference between two signal sources with respect to each vibration sensor, and our respiration extraction algorithm deciphers the breathing rate embedded in amplitude fluctuation of the heartbeat signal. We have developed a prototype bed to evaluate the proposed algorithms. A total of 86 subjects participated in our study, and we collected 5084 geophone samples, totaling 56 hours of data. We show that our technique is accurate -- its breathing rate estimation error for a single person is 0.38 breaths per minute (median error is 0.22 breaths per minute), heart rate estimation error when two persons share a bed is 1.90 beats per minute (median error is 0.72 beats per minute), and breathing rate estimation error when two persons share a bed is 2.62 breaths per minute (median error is 1.95 breaths per minute). By varying sleeping posture and mattress type, we show that our system can work in many different scenarios.
{"title":"Monitoring a Person's Heart Rate and Respiratory Rate on a Shared Bed Using Geophones","authors":"Zhenhua Jia, Amelie Bonde, Sugang Li, Chenren Xu, Jingxian Wang, Yanyong Zhang, R. Howard, Pei Zhang","doi":"10.1145/3131672.3131679","DOIUrl":"https://doi.org/10.1145/3131672.3131679","url":null,"abstract":"Using geophones to sense bed vibrations caused by ballistic force has shown great potential in monitoring a person's heart rate during sleep. It does not require a special mattress or sheets, and the user is free to move around and change position during sleep. Earlier work has studied how to process the geophone signal to detect heartbeats when a single subject occupies the entire bed. In this study, we develop a system called VitalMon, aiming to monitor a person's respiratory rate as well as heart rate, even when she is sharing a bed with another person. In such situations, the vibrations from both persons are mixed together. VitalMon first separates the two heartbeat signals, and then distinguishes the respiration signal from the heartbeat signal for each person. Our heartbeat separation algorithm relies on the spatial difference between two signal sources with respect to each vibration sensor, and our respiration extraction algorithm deciphers the breathing rate embedded in amplitude fluctuation of the heartbeat signal. We have developed a prototype bed to evaluate the proposed algorithms. A total of 86 subjects participated in our study, and we collected 5084 geophone samples, totaling 56 hours of data. We show that our technique is accurate -- its breathing rate estimation error for a single person is 0.38 breaths per minute (median error is 0.22 breaths per minute), heart rate estimation error when two persons share a bed is 1.90 beats per minute (median error is 0.72 beats per minute), and breathing rate estimation error when two persons share a bed is 2.62 breaths per minute (median error is 1.95 breaths per minute). By varying sleeping posture and mattress type, we show that our system can work in many different scenarios.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130671787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Empirical Study of WiFi-based Radio Tomographic Imaging
D. Piumwardane, C. Suduwella, I. Dharmadasa, A. Sayakkara, Prabhash Kumarasinghe, K. Zoysa, C. Keppitiyagama
Radio Tomographic Imaging (RTI) enables device-free localization of physical objects by using signal attenuation in wireless networks. In this paper, we explore how existing RTI methods can be applied to WiFi networks for tomographic imaging. Moreover, we analyze and evaluate the properties that affect the accuracy of the WiFi tomographic imaging process.
{"title":"An Empirical Study of WiFi-based Radio Tomographic Imaging","authors":"D. Piumwardane, C. Suduwella, I. Dharmadasa, A. Sayakkara, Prabhash Kumarasinghe, K. Zoysa, C. Keppitiyagama","doi":"10.1145/3131672.3136983","DOIUrl":"https://doi.org/10.1145/3131672.3136983","url":null,"abstract":"Radio Tomographic Imaging (RTI) enables device free localization of physical objects by using signal attenuation in wireless networks. In this paper, we explore how existing RTI methods can be used in WiFi networks to do tomographic imaging. Moreover we analyze and evaluate the properties that affect the accuracy of WiFi tomographic imaging process.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"150 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130907376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibrating Time-variant, Device-specific Phase Noise for COTS WiFi Devices
Jincao Zhu, Youngbin Im, Shivakant Mishra, Sangtae Ha
Existing work on wireless motion sensing with COTS WiFi extracts human movements such as keystrokes and hand motions mainly through amplitude training to classify different types of motion, because obtaining meaningful phase values is very challenging due to the time-varying phase noise that accompanies movement. However, methods based only on amplitude training are not very practical, since their accuracy is neither environment- nor location-independent. This paper proposes an effective phase-noise calibration technique that is broadly applicable to COTS WiFi based motion sensing. We leverage the fact that indoor multipath contains certain static paths, such as reflections from walls or static furniture, as well as dynamic paths due to human hand and arm movements. When a hand moves, the phase of the signal reflected from the hand rotates as the path length changes, causing a superposition of signals over static and dynamic paths in the antenna and frequency domains. To evaluate the effectiveness of the proposed technique, we experiment with a prototype system that tracks hand gestures in a non-intrusive manner, i.e., users are not equipped with any device, using COTS WiFi devices. Our evaluation shows that calibrated phase values provide rich yet robust information for motion tracking: an 80th-percentile angle-estimation error of at most 14 degrees, an 80th-percentile tracking error of at most 15 cm, and robustness to the environment and to the speed of movement.
{"title":"Calibrating Time-variant, Device-specific Phase Noise for COTS WiFi Devices","authors":"Jincao Zhu, Youngbin Im, Shivakant Mishra, Sangtae Ha","doi":"10.1145/3131672.3131695","DOIUrl":"https://doi.org/10.1145/3131672.3131695","url":null,"abstract":"Current COTS WiFi based work on wireless motion sensing extracts human movements such as keystroking and hand motion mainly from amplitude training to classify different types of motions, as obtaining meaningful phase values is very challenging due to time-varying phase noises occurred with the movement. However, the methods based only on amplitude training are not very practical since their accuracy is not environment and location independent. This paper proposes an effective phase noise calibration technique which can be broadly applicable to COTS WiFi based motion sensing. We leverage the fact that multi-path for indoor environment contains certain static paths, such as reflections from wall or static furniture, as well as dynamic paths due to human hand and arm movements. When a hand moves, the phase value of the signal from the hand rotates as the path length changes and causes the superposition of signals over static and dynamic paths in antenna and frequency domain. To evaluate the effectiveness of the proposed technique, we experiment with a prototype system that can track hand gestures in a non-intrusive manner, i.e. users are not equipped with any device, using COTS WiFi devices. Our evaluation shows that calibrated phase values provide much rich, yet robust information on motion tracking -- 80th percentile angle estimation error up to 14 degrees, 80th percentile tracking error up to 15 cm, and its robustness to the environment and the speed of movement.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131350596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ultra-Low Power Gaze Tracking for Virtual Reality
Tianxing Li, Emmanuel S. Akosah, Qiang Liu, Xia Zhou
We present LiGaze, a low-power approach to gaze tracking tailored to VR. It relies on a few low-cost photodiodes, eliminating the need for cameras and active infrared emitters. Reusing light emitted from the VR screen, LiGaze leverages photodiodes around a VR lens to measure reflected screen light in different directions. It then infers gaze direction by exploiting the pupil's light-absorption property. The core of LiGaze is dealing with screen light dynamics and extracting the changes in reflected light related to pupil movement. We design and fabricate a LiGaze prototype using off-the-shelf photodiodes. Its sensing and computation consume 791 μW in total.
{"title":"Ultra-Low Power Gaze Tracking for Virtual Reality","authors":"Tianxing Li, Emmanuel S. Akosah, Qiang Liu, Xia Zhou","doi":"10.1145/3131672.3136989","DOIUrl":"https://doi.org/10.1145/3131672.3136989","url":null,"abstract":"We present LiGaze, a low-power approach to gaze tracking tailored to VR. It relies on a few low-cost photodiodes, eliminating the need for cameras and active infrared emitters. Reusing light emitted from the VR screen, LiGaze leverages photodiodes around a VR lens to measure reflected screen light in different directions. It then infers gaze direction by exploiting pupil's light absorption property. The core of LiGaze is to deal with screen light dynamics and extract changes in reflected light related to pupil movement. We design and fabricate a LiGaze prototype using off-the-shelf photodiodes. Its sensing and computation consume 791μW in total.","PeriodicalId":424262,"journal":{"name":"Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131024869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}