We present a demonstration of the real-time capture and analysis of multi-user MIMO (MU-MIMO) channel state information from commercial Wi-Fi devices. Our system is built with an array of WARP v3 nodes running the Mango Communications 802.11 Reference Design, an open-source, real-time FPGA implementation of the 802.11a/g MAC and PHY. One WARP v3 node acts as an 802.11 access point (AP), which serves Internet access to client devices. The other nodes implement an array of multi-antenna 802.11 monitors. Every monitor node simultaneously receives packets transmitted by the Wi-Fi clients associated with the AP. The nodes extract MAC headers and channel estimates from each packet and offload these to a PC for analysis. All MAC, PHY, and channel analysis processes run in real time.
{"title":"Demo: real-time MU-MIMO channel analysis with a custom 802.11 implementation","authors":"Christopher Hunter, P. Murphy, E. Welsh","doi":"10.1145/2639108.2641746","DOIUrl":"https://doi.org/10.1145/2639108.2641746","url":null,"abstract":"We present a demonstration of the real-time capture and analysis of multi-user MIMO (MU-MIMO) channel state information from commercial Wi-Fi devices. Our system is built with an array of WARP v3 nodes running the Mango Communications 802.11 Reference Design, an open-source, real-time FPGA implementation of the 802.11a/g MAC and PHY. One WARP v3 node acts as an 802.11 access point (AP), which serves Internet access to client devices. The other nodes implement an array of multi-antenna 802.11 monitors. Every monitor node simultaneously receives packets transmitted by the Wi-Fi clients associated with the AP. The nodes extract MAC headers and channel estimates from each packet and offload these to a PC for analysis. All MAC, PHY, and channel analysis processes run in real time.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115522649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aishwarya Ganesan, S. Rallapalli, Krishna Chintalapudi, V. Padmanabhan, L. Qiu
Our demo tracks physical browsing by users in indoor spaces. Analogous to online browsing, where users navigate to certain webpages, dwell on a subset of pages that interest them, and click on some links while ignoring others, physical browsing has natural parallels: a user might walk purposefully to a section of interest, dwell there for a while, and gaze at specific items that they wish to know more about.
{"title":"Demo: tracking user browsing on a demo floor","authors":"Aishwarya Ganesan, S. Rallapalli, Krishna Chintalapudi, V. Padmanabhan, L. Qiu","doi":"10.1145/2639108.2641754","DOIUrl":"https://doi.org/10.1145/2639108.2641754","url":null,"abstract":"Our demo tracks physical browsing by users in indoor spaces. Analogous to online browsing, where users choose to go to certain webpages, dwell on a subset of pages of interest to them, and click on links of interest while ignoring others, we can draw parallels in the physical setting, where a user might walk purposefully to a section of interest, dwell there for a while, and gaze at specific items that they wish to know more about.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125608825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
N. Nikaein, R. Knopp, F. Kaltenberger, Lionel Gauthier, C. Bonnet, D. Nussbaum, R. Ghaddab
LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to dominate the cellular landscape at least for the current decade. They will also form the starting point for progress beyond the current generation of mobile cellular networks and chart a path toward fifth-generation mobile networks. The lack of an open cellular ecosystem has confined applied research in this field to vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements of such a future mobile network, including cloudification of the radio access network, radio network programmability and APIs following SDN principles, native support for machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes, from real-world experimentation to controlled and scalable evaluations, while retaining backward compatibility with current-generation systems. In this work, we present OpenAirInterface (OAI) as a flexible platform toward an open LTE ecosystem and playground [1]. We demonstrate how OAI can be used to deploy a low-cost open LTE network on commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.
{"title":"Demo: OpenAirInterface: an open LTE network in a PC","authors":"N. Nikaein, R. Knopp, F. Kaltenberger, Lionel Gauthier, C. Bonnet, D. Nussbaum, R. Ghaddab","doi":"10.1145/2639108.2641745","DOIUrl":"https://doi.org/10.1145/2639108.2641745","url":null,"abstract":"LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems. In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120968311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent years have seen the advent of new RF-localization systems that demonstrate tens of centimeters of accuracy. However, such systems require either deployment of new infrastructure or extensive fingerprinting of the environment through training or crowdsourcing, impeding their wide-scale adoption. We present Ubicarse, an accurate indoor localization system for commodity mobile devices, with no specialized infrastructure or fingerprinting. Ubicarse enables handheld devices to emulate large antenna arrays using a new formulation of Synthetic Aperture Radar (SAR). Past work on SAR requires measuring mechanically controlled device movement with millimeter precision, far beyond what commercial accelerometers can provide. In contrast, Ubicarse's core contribution is the ability to perform SAR on handheld devices twisted by their users along unknown paths. Ubicarse is not limited to localizing RF devices; it combines RF localization with stereo-vision algorithms to localize common objects with no RF source attached to them. We implement Ubicarse on an HP SplitX2 tablet and empirically demonstrate a median error of 39 cm in 3-D device localization and 17 cm in object geotagging in complex indoor settings.
{"title":"Accurate indoor localization with zero start-up cost","authors":"Swarun Kumar, Stephanie Gil, D. Katabi, D. Rus","doi":"10.1145/2639108.2639142","DOIUrl":"https://doi.org/10.1145/2639108.2639142","url":null,"abstract":"Recent years have seen the advent of new RF-localization systems that demonstrate tens of centimeters of accuracy. However, such systems require either deployment of new infrastructure, or extensive fingerprinting of the environment through training or crowdsourcing, impeding their wide-scale adoption. We present Ubicarse, an accurate indoor localization system for commodity mobile devices, with no specialized infrastructure or fingerprinting. Ubicarse enables handheld devices to emulate large antenna arrays using a new formulation of Synthetic Aperture Radar (SAR). Past work on SAR requires measuring mechanically controlled device movement with millimeter precision, far beyond what commercial accelerometers can provide. In contrast, Ubicarse's core contribution is the ability to perform SAR on handheld devices twisted by their users along unknown paths. Ubicarse is not limited to localizing RF devices; it combines RF localization with stereo-vision algorithms to localize common objects with no RF source attached to them. We implement Ubicarse on a HP SplitX2 tablet and empirically demonstrate a median error of 39 cm in 3-D device localization and 17 cm in object geotagging in complex indoor settings.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128230442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ruipeng Gao, Mingmin Zhao, Tao Ye, Fan Ye, Yizhou Wang, Kaigui Bian, Tao Wang, Xiaoming Li
The lack of floor plans is a critical reason behind the current sporadic availability of indoor localization services. Service providers must either go through effort-intensive and time-consuming business negotiations with building operators or hire dedicated personnel to gather such data. In this paper, we propose Jigsaw, a floor plan reconstruction system that leverages crowdsensed data from mobile users. It extracts the position, size, and orientation of individual landmark objects from images taken by users. It also obtains the spatial relation between adjacent landmark objects from inertial sensor data, and then computes the coordinates and orientations of these objects on an initial floor plan. By combining user mobility traces and the locations where images are taken, it produces complete floor plans with hallway connectivity, room sizes, and shapes. Our experiments on three stories of two large shopping malls show that the 90th-percentile errors of landmark positions and orientations are about 1-2 m and 5-9°, while the hallway connectivity is 100% correct.
{"title":"Jigsaw: indoor floor plan reconstruction via mobile crowdsensing","authors":"Ruipeng Gao, Mingmin Zhao, Tao Ye, Fan Ye, Yizhou Wang, Kaigui Bian, Tao Wang, Xiaoming Li","doi":"10.1145/2639108.2639134","DOIUrl":"https://doi.org/10.1145/2639108.2639134","url":null,"abstract":"The lack of floor plans is a critical reason behind the current sporadic availability of indoor localization service. Service providers have to go through effort-intensive and time-consuming business negotiations with building operators, or hire dedicated personnel to gather such data. In this paper, we propose Jigsaw, a floor plan reconstruction system that leverages crowdsensed data from mobile users. It extracts the position, size and orientation information of individual landmark objects from images taken by users. It also obtains the spatial relation between adjacent landmark objects from inertial sensor data, then computes the coordinates and orientations of these objects on an initial floor plan. By combining user mobility traces and locations where images are taken, it produces complete floor plans with hallway connectivity, room sizes and shapes. Our experiments on 3 stories of 2 large shopping malls show that the 90-percentile errors of positions and orientations of landmark objects are about 1~2m and 5~9°, while the hallway connectivity is 100% correct.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134059254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information-Centric Networking (ICN) is an alternative architecture for computer networks in which communication is focused on the data being transferred rather than on the communicating hosts. This paper describes a demo of an experience-sharing application for mobile phones built on an ICN platform designed for devices with intermittent connectivity. In particular, we detail how this application will be showcased in an indoor exhibition, where experiences are shared through media content that is geo-tagged using Bluetooth beacons and spread opportunistically to other users.
{"title":"Demo: mobile opportunistic system for experience sharing (MOSES) in indoor exhibitions","authors":"F. Abdesslem, Anders Lindgren","doi":"10.1145/2639108.2641750","DOIUrl":"https://doi.org/10.1145/2639108.2641750","url":null,"abstract":"Information-Centric Networking (ICN) is an alternative architecture for computer networks, where the communication is focused on the data being transferred instead of the communicating hosts. This paper describes a demo of an experience sharing application for mobile phones built on an ICN platform designed for devices with intermittent connectivity. In particular, we detail how this application will be showcased in an indoor exhibition where experience is shared with media content that is geo-tagged using Bluetooth beacons and spread opportunistically to other users.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114777138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Homin Park, DaeHan Ahn, M. Won, S. Son, Taejoon Park
In this work, we address the fundamental problem of distinguishing the driver from passengers using a fusion of sensors embedded in a smartphone (accelerometers, gyroscopes, microphones, and magnetic sensors). Compared with state-of-the-art solutions, a key property of our solution is non-intrusiveness, i.e., it enables accurate driver-phone detection without relying on particular situations, events, or dedicated hardware devices. Our system utilizes only naturally arising driver motions, i.e., sitting down sideways, closing the vehicle door, and starting the vehicle, to determine whether the user enters the vehicle from the left or right and whether the user is seated in a front or rear seat.
{"title":"Poster: Are you driving?: non-intrusive driver detection using built-in smartphone sensors","authors":"Homin Park, DaeHan Ahn, M. Won, S. Son, Taejoon Park","doi":"10.1145/2639108.2642896","DOIUrl":"https://doi.org/10.1145/2639108.2642896","url":null,"abstract":"In this work, we address a fundamental problem of distinguishing the driver from passengers using a fusion of embedded sensors (accelerometers, gyroscopes, microphones, and magnetic sensors) in a smart phone. Compared with the state-of-the-art solutions, a key property of our solution is non-intrusiveness, i.e., enabling accurate driver phone detection without relying on any particular situations, events, and dedicated hardware devices. Our system only utilizes naturally arising driver motions, i.e., sitting down sideways, closing the vehicle door, and starting the vehicle, to determine whether the user enters the vehicle from left or right and whether the user is seated in the front or rear seats.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114987111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We explore the indoor positioning problem with unmodified smartphones and slightly modified commercial LED luminaires. The luminaires, modified to allow rapid on-off keying, transmit their identifiers and/or locations encoded in human-imperceptible optical pulses. A camera-equipped smartphone, using just a single image frame capture, can detect the presence of the luminaires in the image, decode their transmitted identifiers and/or locations, and determine the smartphone's location and orientation relative to the luminaires. Continuous image capture and processing enables continuous position updates. The key insights underlying this work are (i) the driver circuits of emerging LED lighting systems can be easily modified to transmit data through on-off keying; (ii) the rolling shutter effect of CMOS imagers can be leveraged to receive many bits of data encoded in the optical transmissions with just a single frame capture; (iii) a camera is intrinsically an angle-of-arrival sensor, so the projection of multiple nearby light sources with known positions onto a camera's image plane can be framed as an instance of a sufficiently constrained angle-of-arrival localization problem; and (iv) this problem can be solved with optimization techniques.
{"title":"Demo: Luxapose: indoor positioning with mobile phones and visible light","authors":"Ye-Sheng Kuo, P. Pannuto, P. Dutta","doi":"10.1145/2639108.2641747","DOIUrl":"https://doi.org/10.1145/2639108.2641747","url":null,"abstract":"We explore the indoor positioning problem with unmodified smartphones and slightly-modified commercial LED luminaires. The luminaires - modified to allow rapid, on-off keying - transmit their identifiers and/or locations encoded in human-imperceptible optical pulses. A camera-equipped smartphone, using just a single image frame capture, can detect the presence of the luminaires in the image, decode their transmitted identifiers and/or locations, and determine the smartphone's location and orientation relative to the luminaires. Continuous image capture and processing enables continuous position updates. The key insights underlying this work are (i) the driver circuits of emerging LED lighting systems can be easily modified to transmit data through on-off keying; (ii) the rolling shutter effect of CMOS imagers can be leveraged to receive many bits of data encoded in the optical transmissions with just a single frame capture, (iii) a camera is intrinsically an angle-of-arrival sensor, so the projection of multiple nearby light sources with known positions onto a camera's image plane can be framed as an instance of a sufficiently-constrained angle-of-arrival localization problem, and (iv) this problem can be solved with optimization techniques.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133797218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy accounting determines how much a software principal contributes to the total system energy consumption. It is the foundation for evaluating software and for operating-system-based energy management. While various energy accounting policies have been tried, there is no known way to evaluate them directly, simply because it is hard to track all hardware usage by software in a heterogeneous multicore system such as a modern smartphone or tablet. In this work, we argue that energy accounting should be formulated as a cooperative game and that the Shapley value provides the ultimate ground truth for energy accounting policies. Based on Shapley value theory, we reveal important flaws of existing energy accounting policies and provide Shapley-value-based energy accounting, a practical approximation of the Shapley value, for battery-powered mobile systems. We evaluate this approximation against existing energy accounting policies in two ways: (i) how well they identify the top energy-consuming applications, and (ii) how effective they are in system energy management. Using a prototype based on a Texas Instruments Pandaboard and smartphone workloads, we experimentally demonstrate that existing energy accounting policies can deviate by 400% in attributing energy consumption to running applications and can be up to 25% less effective in system energy management when compared to Shapley-value-based energy accounting.
{"title":"Rethink energy accounting with cooperative game theory","authors":"Mian Dong, Tian Lan, Lin Zhong","doi":"10.1145/2639108.2639128","DOIUrl":"https://doi.org/10.1145/2639108.2639128","url":null,"abstract":"Energy accounting determines how much a software principal contributes to the total system energy consumption. It is the foundation for evaluating software and for operating system based energy management. While various energy accounting policies have been tried, there is no known way to evaluate them directly simply because it is hard to track all hardware usage by software in a heterogeneous multicore system like modern smartphones and tablets. In this work, we argue that energy accounting should be formulated as a cooperative game and that the Shapley value provides the ultimate ground truth for energy accounting policies. We reveal the important flaws of existing energy accounting policies based on the Shapley value theory and provide Shapley value-based energy accounting, a practical approximation of the Shapley value, for battery-powered mobile systems. We evaluate this approximation against existing energy accounting policies in two ways: (i) how well they identify the top energy consuming applications, and (ii) how effective they are in system energy management. Using a prototype based on Texas Instruments Pandaboard and smartphone workload, we experimentally demonstrate existing energy accounting policies can deviate by 400% in attributing energy consumption to running applications and can be up to 25% less effective in system energy management when compared to Shapley value-based energy accounting.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"187 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116137786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present HiLight, a new form of screen-camera communication for off-the-shelf smart devices that requires no coded images (e.g., barcodes). HiLight hides information within any image shown on an LED or OLED screen, so that camera-equipped smart devices can fetch the information by pointing their cameras at the screen. HiLight achieves this by leveraging the orthogonal transparency (alpha) channel, a well-known concept in computer graphics, to embed bits into pixel translucency changes without modifying pixel color values. We demonstrated HiLight's feasibility using smartphones. By offering an unobtrusive, flexible, and lightweight communication channel between screens and cameras, HiLight opens up opportunities for new HCI and context-aware applications, e.g., smart glasses communicating with screens to obtain additional personalized information for augmented reality.
{"title":"Poster: HiLight: hiding bits in pixel translucency changes","authors":"Tianxing Li, Chuankai An, A. Campbell, Xia Zhou","doi":"10.1145/2639108.2642895","DOIUrl":"https://doi.org/10.1145/2639108.2642895","url":null,"abstract":"We present HiLight, a new form of screen-camera communication without the need of any coded images (e.g. barcodes) for off-the-shelf smart devices. HiLight hides information underlying any images shown on a LED or an OLED screen, so that camera-equipped smart devices can fetch the information by turning their cameras to the screen. HiLight achieves this by leveraging the orthogonal transparency (alpha) channel, a well-known concept in computer graphics, to embed bits into pixel translucency changes without the need of modifying pixel color values. We demonstrated HiLight's feasibility using smartphones. By offering an unobtrusive, flexible, and lightweight communication channel between screens and cameras, HiLight opens up opportunities for new HCI and context-aware applications to emerge, e.g. smart glass communicates with screens for additional personalized information to realize augmented reality.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124207818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}