Demo: I am a smartphone and I can tell my user's walking direction
Nirupam Roy, He Wang, Romit Roy Choudhury
https://doi.org/10.1145/2594368.2601478

We present a demonstration of WalkCompass, a system to appear in the MobiSys 2014 main conference. WalkCompass exploits smartphone sensors to estimate the direction in which a user is walking. We find that several recent smartphone localization systems, including our own, make the simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution in past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up-and-down bounce, side-to-side sway, and the swing of arms and legs. WalkCompass analyzes human walking dynamics to estimate the dominant forces and uses this knowledge to find the heading direction of the pedestrian. In the demonstration we will show the performance of the system when the user holds the smartphone in the palm. A collection of YouTube videos of the demo is posted at http://synrg.csl.illinois.edu/projects/localization/walkcompass.
{"title":"Demo: I am a smartphone and i can tell my user's walking direction","authors":"Nirupam Roy, He Wang, Romit Roy Choudhury","doi":"10.1145/2594368.2601478","DOIUrl":"https://doi.org/10.1145/2594368.2601478","url":null,"abstract":"We present a demonstration of WalkCompass, a system to appear in the MobiSys 2014 main conference. WalkCompass exploits smartphone sensors to estimate the direction in which a user is walking. We find that several smartphone localization systems in the recent past, including our own, make a simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution from past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up and down bounce, side-to-side sway, swing of arms or legs, etc. WalkCompass analyzes the human walking dynamics to estimate the dominating forces and uses this knowledge to find the heading direction of the pedestrian. In the demonstration we will show the performance of this system when the user holds the smartphone on the palm. A collection of YouTube videos of the demo is posted at http://synrg.csl.illinois.edu/projects/ localization/walkcompass.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123161179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: A power-aware mobile app for field scientists
Bo Wang, Xinghui Zhao, David Chiu
https://doi.org/10.1145/2594368.2601463

In this poster, we design and implement a mobile application and back-end management system to help field scientists manage data collection, improve real-time communication, and optimize power consumption during a scientific field study.
{"title":"Poster: A power-aware mobile app for field scientists","authors":"Bo Wang, Xinghui Zhao, David Chiu","doi":"10.1145/2594368.2601463","DOIUrl":"https://doi.org/10.1145/2594368.2601463","url":null,"abstract":"In this poster, we design and implement a mobile application and back-end management system to help field scientists manage data collection, improve real-time communication, and optimize power consumption during a scientific field study.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124117011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demo: Yalut -- user-centric social networking overlay
Kanchana Thilakarathna, Xinlong Guan, A. Seneviratne
https://doi.org/10.1145/2594368.2601465

Yalut is a novel user-centric hybrid content sharing overlay for social networking. Yalut enables users to retain control over their own data and preserve their privacy while still using popular centralized services. In this demonstration, we show the feasibility of Yalut by integrating the service with popular social networking apps on Android devices and on Mac and Windows desktop platforms. We show that it is possible to provide the benefits of distributed content sharing on top of existing centralized services with minimal changes to the content sharing process.
{"title":"Demo: Yalut -- user-centric social networking overlay","authors":"Kanchana Thilakarathna, Xinlong Guan, A. Seneviratne","doi":"10.1145/2594368.2601465","DOIUrl":"https://doi.org/10.1145/2594368.2601465","url":null,"abstract":"Yalut is a novel user-centric hybrid content sharing overlay for social networking. Yalut enables the users to retain control over their own data and preserve their privacy, whilst still using the popular centralized services. In this demonstration, we show the feasibility of Yalut by integrating the service with the popular social networking apps on Android devices, Mac and Windows desktop platforms. We show that it is possible to provide the benefits of distributed content sharing on top of the existing centralized services with minimal changes to the content sharing process.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114557908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demo: Protecting visual secrets with PrivateEye
Animesh Srivastava, Landon P. Cox
https://doi.org/10.1145/2594368.2601467

Consider the following scenario from the not-too-distant future: the CEO of a company is presenting his vision for the next quarter to a small group of co-workers. The CEO trusts everyone in the room, but many in attendance have smartphones and camera-equipped wearable computing devices running third-party apps. The CEO is worried that this third-party software could leak the highly confidential information in his slides and on the whiteboard. This raises the question: how can the CEO prevent apps with camera access from leaking company secrets?

First, the CEO must have a way of identifying or marking visual secrets. Markings should be (1) easy to create by hand and with digital tools, and (2) easy and efficient for software to recognize. Marking visual secrets by placing QR codes or badges near them makes real-time recognition difficult due to scaling problems (particularly at far distances). Moreover, precisely encoding the two-dimensional region surrounding a visual secret in a QR code or badge would be too awkward and slow for users. Finally, general-purpose object recognition is too slow and consumes too much energy.

In this demo, we present PrivateEye, a system that prevents visual secrets from inadvertently leaking. PrivateEye consists of two pieces: (1) a specification for marking a two-dimensional space as secret, and (2) software on a recording device for recognizing markings and obscuring visual secrets in real time. Figures 1(a) and 2(a) show examples of how PrivateEye users can define a region containing visual secrets by combining solid and dotted lines. Depending on the medium, users can define secret regions by hand (e.g., on a whiteboard) or with digital tools (e.g., within a presentation). PrivateEye is based on the principle that preventing leaks requires visual information to be withheld from third-party apps until the system can be confident that it is safe to reveal. As a result, PrivateEye works in three phases. Phase 1 requires the camera view to stabilize; during this phase, PrivateEye completely blurs the camera view so that apps cannot infer secret information from an image capture. Once the camera view stabilizes, PrivateEye enters Phase 2, in which the system detects all the rectangles in the camera view. At this point, all the detected rectangles appear blocked to the user (Figures 1(b) and 2(b)). PrivateEye then moves on to Phase 3, in which the system searches each blocked rectangle for secret markings (i.e., dotted rectangles). PrivateEye can safely reveal the content of rectangles without secret markings to an app; however, the system must continue to block any rectangles containing secret markings. If PrivateEye detects that the camera view has changed, it must return to Phase 1.

During the demo, PrivateEye will be running on a Google Nexus 4 and Google Glass. We plan to invite the audience to use one of these devices and view some of the objects already marked secret. We will also let them draw the markers.
{"title":"Demo: Protecting visual secrets with privateeye","authors":"Animesh Srivastava, Landon P. Cox","doi":"10.1145/2594368.2601467","DOIUrl":"https://doi.org/10.1145/2594368.2601467","url":null,"abstract":"Consider the following scenario from the not-too-distant future: the CEO of a company is presenting his vision for the next quarter to a small group of co-workers. The CEO trusts everyone in the room, but many in attendance have smartphones and camera-equipped wearable computing devices running third-party apps. The CEO is worried that this third-party software could leak the highly confidential information in his slides and on the whiteboard. This raises the question: how can the CEO prevent apps with camera access from leaking company secrets? First, the CEO must have a way of identifying or marking visual secrets. Markings should be: (1) easy to create by hand and with digital tools, and (2) easy and efficient to recognize by software. Marking visual secrets by placing QR-codes or badges near them makes real-time recognition difficult due to scaling problems (particularly at far distances). Moreover, precisely encoding a two-dimensional region surrounding a visual secret in a QR-code or badge would be too awkward and slow for users. Finally, general-purpose object recognition is too slow and consumes too much energy. In this demo, we present PrivateEye, a system that prevents visual secrets from inadvertently leaking. PrivateEye consists of two pieces: (1) a specification for marking a two-dimensional space as secret, and (2) software on a recording device for recognizing markings and obscuring visual secrets in real-time. Figure 1(a) and 2(a) show examples of how PrivateEye users can define a region containing visual secrets by combining solid and dotted lines. Depending on the medium, users can define secret regions by hand (e.g., on a whiteboard) or use digital tools (e.g., within a presentation). PrivateEye is based on the principle that preventing leaks requires visual information to be withheld from third-party apps until the system can be confident that it is safe to reveal. As a result, PrivateEye works in three phases. Phase 1 requires the camera view to stabilize. During this phase, PrivateEye completely blurs the camera view so that apps cannot infer secret information from an image capture. Once the camera view stabilizes, PrivateEye enters Phase 2, in which the system detects all the rectangles in the camera view. At this point, all the detected rectangles appear blocked to the user (Figure 1(b) and 2(b)). PrivateEye then moves on to Phase 3. In this phase, the system searches each blocked rectangle for secret markings (i.e., dotted rectangles). PrivateEye can safely reveal the content of rectangles without secret markings to an app; however, the system must continue to block any rectangles containing secret markings. If PrivateEye detects that the camera view has changed, then it must return to Phase 1. During the demo, PrivateEye will be running on Google Nexus 4 and Google Glass. We plan to invite the audience to use one of these devices and view some of the objects already marked secret. 
We will also let them draw the markers ","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130872516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
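The three-phase loop described above maps naturally onto a small per-frame state machine. The sketch below is a rough rendering of that control flow; every vision primitive (view_changed, is_stable, blur, find_rectangles, has_secret_marking, block) is a hypothetical stub, not the authors' implementation.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Phase(Enum):
        STABILIZING = auto()   # blur everything until the camera view settles
        DETECTING = auto()     # find candidate rectangles and block them all
        REVEALING = auto()     # unblock only rectangles without secret markings

    @dataclass
    class State:
        phase: Phase = Phase.STABILIZING
        rects: list = field(default_factory=list)

    def process_frame(frame, state, cv):
        # `cv` bundles stand-in vision primitives; names are illustrative.
        if cv.view_changed(frame):
            state.phase = Phase.STABILIZING      # any view change restarts Phase 1
        if state.phase is Phase.STABILIZING:
            if not cv.is_stable(frame):
                return cv.blur(frame)            # Phase 1: apps see only a blur
            state.phase = Phase.DETECTING
        if state.phase is Phase.DETECTING:
            state.rects = cv.find_rectangles(frame)
            state.phase = Phase.REVEALING
            return cv.block(frame, state.rects)  # Phase 2: every rectangle blocked
        # Phase 3: keep blocking only rectangles that carry a secret marking.
        secret = [r for r in state.rects if cv.has_secret_marking(frame, r)]
        return cv.block(frame, secret)
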
Sensor-assisted facial recognition: an enhanced biometric authentication system for smartphones
Shaxun Chen, A. Pande, P. Mohapatra
https://doi.org/10.1145/2594368.2594373

Facial recognition is a popular biometric authentication technique, but it is rarely used in practice for device unlock or website/app login on smartphones, although most of them are equipped with a front-facing camera. Security issues (e.g., 2D media attacks and virtual camera attacks) and ease of use are two important factors that impede the prevalence of facial authentication on mobile devices. In this paper, we propose a new sensor-assisted facial authentication method to overcome these limitations. Our system uses motion and light sensors to defend against 2D media attacks and virtual camera attacks without a penalty in authentication speed. We conduct experiments to validate our method. Results show a 95-97% detection rate and a 2-3% false alarm rate over 450 trials in real settings, indicating that the scheme obtains high security while being ten times faster than existing 3D facial authentication (3 seconds compared to 30 seconds).
{"title":"Sensor-assisted facial recognition: an enhanced biometric authentication system for smartphones","authors":"Shaxun Chen, A. Pande, P. Mohapatra","doi":"10.1145/2594368.2594373","DOIUrl":"https://doi.org/10.1145/2594368.2594373","url":null,"abstract":"Facial recognition is a popular biometric authentica-tion technique, but it is rarely used in practice for de-vice unlock or website / app login in smartphones, alt-hough most of them are equipped with a front-facing camera. Security issues (e.g. 2D media attack and vir-tual camera attack) and ease of use are two important factors that impede the prevalence of facial authentica-tion in mobile devices. In this paper, we propose a new sensor-assisted facial authentication method to over-come these limitations. Our system uses motion and light sensors to defend against 2D media attacks and virtual camera attacks without the penalty of authenti-cation speed. We conduct experiments to validate our method. Results show 95-97% detection rate and 2-3% false alarm rate over 450 trials in real-settings, indicat-ing high security obtained by the scheme ten times faster than existing 3D facial authentications (3 sec-onds compared to 30 seconds).","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130940530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ubiquitous keyboard for small mobile devices: harnessing multipath fading for fine-grained keystroke localization
Junjue Wang, Kaichen Zhao, Xinyu Zhang, Chunyi Peng
https://doi.org/10.1145/2594368.2594384

A well-known bottleneck of contemporary mobile devices is the inefficient and error-prone touchscreen keyboard. In this paper, we propose UbiK, an alternative portable text-entry method that allows users to make keystrokes on conventional surfaces, e.g., a wooden desktop. UbiK enables a text-input experience similar to that on a physical keyboard, yet it requires only a keyboard outline printed on the surface or on a piece of paper placed on top. The core idea is to leverage the microphone on a mobile device to accurately localize the keystrokes. To achieve fine-grained, centimeter-scale granularity, UbiK extracts and optimizes location-dependent multipath fading features from the audio signals, and takes advantage of the dual-microphone interface to improve signal diversity. We implement UbiK as an Android application. Our experiments demonstrate that UbiK achieves above 95% localization accuracy. A field trial involving first-time users shows that UbiK can significantly improve text-entry speed over current on-screen keyboards.
{"title":"Ubiquitous keyboard for small mobile devices: harnessing multipath fading for fine-grained keystroke localization","authors":"Junjue Wang, Kaichen Zhao, Xinyu Zhang, Chunyi Peng","doi":"10.1145/2594368.2594384","DOIUrl":"https://doi.org/10.1145/2594368.2594384","url":null,"abstract":"A well-known bottleneck of contemporary mobile devices is the inefficient and error-prone touchscreen keyboard. In this paper, we propose UbiK, an alternative portable text-entry method that allows user to make keystrokes on conventional surfaces, e.g., wood desktop. UbiK enables text-input experience similar to that on a physical keyboard, but it only requires a keyboard outline printed on the surface or a piece of paper atop. The core idea is to leverage the microphone on a mobile device to accurately localize the keystrokes. To achieve fine-grained, centimeter scale granularity, UbiK extracts and optimizes the location-dependent multipath fading features from the audio signals, and takes advantage of the dual-microphone interface to improve signal diversity. We implement UbiK as an Android application. Our experiments demonstrate that UbiK is able to achieve above 95% of localization accuracy. Field trial involving first-time users shows that UbiK can significantly improve text-entry speed over current on-screen keyboards.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133363772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video: User-generated free-form gestures for authentication: security and memorability
Michael Sherman, Gradeigh Clark, Yulong Yang, Shridatt Sugrim, Arttu Modig, J. Lindqvist, Antti Oulasvirta, Teemu Roos
https://doi.org/10.1145/2594368.2602429

This is a video demonstration for a full paper available in the MobiSys'14 proceedings: http://dx.doi.org/10.1145/2594368.2594375. The video demonstrates several forms of authentication on a common tablet and compares them to our method for gesture-based authentication. Our method measures the security and memorability of user-generated free-form gestures by estimating the mutual information of repeated gestures. We show examples of such gestures with high and low mutual information content. We also show what information from each is visible to a shoulder-surfing attacker, and we describe how our system resists such an attack.
{"title":"Video: User-generated free-form gestures for authentication: security and memorability","authors":"Michael Sherman, Gradeigh Clark, Yulong Yang, Shridatt Sugrim, Arttu Modig, J. Lindqvist, Antti Oulasvirta, Teemu Roos","doi":"10.1145/2594368.2602429","DOIUrl":"https://doi.org/10.1145/2594368.2602429","url":null,"abstract":"This is a video demonstration for a full paper available in MobiSys'14 proceedings http://dx.doi.org/10.1145/2594368.2594375. The video demonstrates several forms of authentication on a common tablet, and compares them to our method for gesture-based authentication. Our method measures the security and memorability of user generated free-form gestures by estimating the mutual information of repeated gestures. We show examples of such gestures with high and low mutual information content. We also show what information from each is visible to a shoulder surfing attacker, and describe how our system is resistant to such an attack.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133426887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demo: Rio: a system solution for sharing I/O between mobile systems
A. A. Sani, Kevin Boos, Minhong Yun, Lin Zhong
https://doi.org/10.1145/2594368.2601471

A user nowadays owns a variety of mobile systems, including smartphones, tablets, smart glasses, and smart watches, each equipped with a plethora of I/O devices, such as cameras, speakers, microphones, sensors, and cellular modems. There are many interesting use cases in which an application running on one mobile system accesses I/O on another system, for three fundamental reasons. (i) Mobile systems can be in different physical locations or orientations; for example, one can control a smartphone's high-resolution camera from a tablet camera application to more easily capture a self-portrait. (ii) Mobile systems can serve different users; for example, one can play music for another user if one's smartphone can access the other device's speaker. (iii) Certain mobile systems have unique I/O devices due to their distinct form factors and targeted use cases; for example, a user can make a phone call from her tablet using the modem and SIM card in her smartphone.

Solutions exist for sharing I/O devices, e.g., for the camera [1], speaker [2], and modem (for messaging) [3]. However, these solutions have three limitations: (i) they do not support unmodified applications; (ii) they do not expose all the functionality of an I/O device for sharing; and (iii) they are I/O class-specific, requiring significant engineering effort to support new I/O devices.

We demonstrate Rio (Remote I/O), an I/O sharing solution for mobile systems that overcomes all three aforementioned limitations. Rio adopts a split-stack I/O sharing model, in which the I/O stack is split between the two mobile systems at a certain boundary. All communications that cross this boundary are intercepted on the mobile system hosting the application and forwarded to the mobile system with the I/O device, where they are served by the rest of the I/O stack. Rio uses device files as its boundary of choice. Device files are used in Unix-like OSes, such as Android and iOS, to abstract many classes of I/O devices, providing an I/O class-agnostic boundary. The device file boundary supports I/O sharing for unmodified applications, as it is transparent to the application layer. It also exposes the full functionality of each I/O device to other mobile systems by allowing processes in one system to directly communicate with the device drivers in another. Rio is not the first system to exploit the device file boundary; our previous work, Paradice [5], uses device files as the boundary for I/O virtualization inside a single system. However, Rio faces a different set of challenges in how to properly exploit this boundary, as explained in the full paper [6].

In this demo, we use a prototype implementation of Rio for Android systems. Our implementation supports four important I/O classes: camera, audio devices such as speaker and microphone, sensors such as the accelerometer, and the cellular modem (for phone calls and SMS). It consists of about 7100 lines of code, of which fewer than 500 are specific to I/O classes.
{"title":"Demo: Rio: a system solution for sharing I/O between mobile systems","authors":"A. A. Sani, Kevin Boos, Minhong Yun, Lin Zhong","doi":"10.1145/2594368.2601471","DOIUrl":"https://doi.org/10.1145/2594368.2601471","url":null,"abstract":"A user nowadays owns a variety of mobile systems, including smartphones, tablets, smart glasses, and smart watches, each equipped with a plethora of I/O devices, such as cameras, speakers, microphones, sensors, and cellular modems. There are many interesting use cases in which an application running on one mobile system accesses I/O on another system, for three fundamental reasons. (i) Mobile systems can be in different physical locations or orientations. For example, one can control a smartphone's high-resolution camera from a tablet camera application to more easily capture a self-portrait. (ii) Mobile systems can serve different users. For example, one can a play music for another user if one's smartphone can access the other device's speaker. (iii) Certain mobile systems have unique I/O devices due to their distinct form factor and targeted use cases. For example, a user can make a phone call from her tablet using the modem and SIM card in her smartphone. Solutions exist for sharing I/O devices, e.g., for camera [1], speaker [2], and modem (for messaging) [3]. However, these solutions have three limitations. (i) They do not support unmodified applications. (ii) They do not expose all the functionality of an I/O device for sharing. (iii) They are I/O class-specific, requiring significant engineering effort to support new I/O devices. We demonstrate Rio (Remote I/O), an I/O sharing solution for mobile systems that overcomes all three aforementioned limitations. Rio adopts a split-stack I/O sharing model, in which the I/O stack is split between the two mobile systems at a certain boundary. All communications that cross this boundary are intercepted on the mobile system hosting the application and forwarded to the mobile system with the I/O device, where they are served by the rest of the I/O stack. Rio uses device files as its boundary of choice. Device files are used in Unix-like OSes, such as Android and iOS, to abstract many classes of I/O devices, providing an I/O class-agnostic boundary. The device file boundary supports I/O sharing for unmodified applications, as it is transparent to the application layer. It also exposes the full functionality of each I/O device to other mobile systems by allowing processes in one system to directly communicate with the device drivers in another. Rio is not the first system to exploit the device file boundary; our previous work, Paradice [5], uses device files as the boundary for I/O virtualization inside a single system. However, Rio faces a different set of challenges regarding how to properly exploit this boundary, as explained in the full paper [6]. In this demo, we use a prototype implementation of Rio for Android systems. Our implementation supports four important I/O classes: camera, audio devices such as speaker and microphone, sensors such as accelerometer, and cellular modem (for phone calls and SMS). It consists of about 7100 lines of code, of which less than 500 are specific to I/O classes. 
Ri","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131326430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: DriveBlue: can Bluetooth enhance your driving experience?
Ahmed Salem, T. Nadeem, M. Cetin
https://doi.org/10.1145/2594368.2601452

Bluetooth's wide availability in vehicles, whether through passengers' smartphones or in-vehicle hardware, has been poorly exploited by researchers [2]. Neighbor discovery is a Bluetooth feature [3] that can be utilized to enhance transportation services while preserving vehicle privacy. In this project, named DriveBlue, we advocate using Bluetooth to develop intelligent transportation services. Traffic incidents (e.g., congestion and accidents) are likely to affect drivers' daily commutes, and solutions to such problems have so far been based on statistical analysis. Bluetooth has been used to estimate travel time by extrapolating the period taken to travel between two points [1]. In this project, we exploit Bluetooth neighbor discovery to detect traffic conditions (e.g., average road speed, or differentiating vehicles on regular lanes from those on HOV lanes) using receivers placed at a single site, as in Fig. 1a. Features are extracted and classified, revealing some of the current traffic conditions.
{"title":"Poster: DriveBlue: can bluetooth enhance your driving experience?","authors":"Ahmed Salem, T. Nadeem, M. Cetin","doi":"10.1145/2594368.2601452","DOIUrl":"https://doi.org/10.1145/2594368.2601452","url":null,"abstract":"Bluetooth wide availability in vehicles either through passengers’ smartphones or vehicles hardware have been poorly exploited by researchers[2]. Neighbor Discovery is an exclusive Bluetooth [3] feature can be utilized to enhance transportation services while maintaining vehicle’s privacy. In this project, we advocate for using Bluetooth in developing intelligent transportation services. We name our project DriveBlue. Traffic incidents (e.g. congestion, accidents) are likely to affect drivers daily commute. So far solutions to such problems are based on statistical analysis. Bluetooth was used to estimate the travel time by extrapolating the period used to travel between two points [1]. In this project, we exploit Bluetooth neighbor discovery to detect traffic conditions (e.g. average road speed, differentiate between vehicles on regular lanes versus HOV) with receivers placed in a single site as in Fig. 1a. Features are extracted, and classified revealing some of the current traffic conditions.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115225608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
COIN-GPS: indoor localization from direct GPS receiving
S. Nirjon, Jie Liu, G. DeJean, B. Priyantha, Yuzhe Jin, Ted Hart
https://doi.org/10.1145/2594368.2594378

Due to poor signal strength, multipath effects, and limited on-device computation power, common GPS receivers do not work indoors. This work addresses these challenges by using a steerable, high-gain directional antenna as the front-end of a GPS receiver, along with a robust signal processing step and a novel location estimation technique, to achieve direct GPS-based indoor localization. By leveraging the computing power of the cloud, we accommodate longer signals for acquisition and remove the requirement of decoding timestamps or ephemeris data from GPS signals. We have tested our system in 31 randomly chosen spots inside five single-story indoor environments such as stores, warehouses, and shopping centers. Our experiments show that the system is capable of obtaining location fixes from 20 of these spots with a median error of less than 10 m, where all normal GPS receivers fail.
{"title":"COIN-GPS: indoor localization from direct GPS receiving","authors":"S. Nirjon, Jie Liu, G. DeJean, B. Priyantha, Yuzhe Jin, Ted Hart","doi":"10.1145/2594368.2594378","DOIUrl":"https://doi.org/10.1145/2594368.2594378","url":null,"abstract":"Due to poor signal strength, multipath effects, and limited on-device computation power, common GPS receivers do not work indoors. This work addresses these challenges by using a steerable, high-gain directional antenna as the front-end of a GPS receiver along with a robust signal processing step and a novel location estimation technique to achieve direct GPS-based indoor localization. By leveraging the computing power of the cloud, we accommodate longer signals for acquisition, and remove the requirement of decoding timestamps or ephemeris data from GPS signals. We have tested our system in 31 randomly chosen spots inside five single-story, indoor environments such as stores, warehouses and shopping centers. Our experiments show that the system is capable of obtaining location fixes from 20 of these spots with a median error of less than 10 m, where all normal GPS receivers fail.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"11 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123451780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}