Poster -- SAfeDJ community: situation-aware in-car music delivery for safe driving
Xiping Hu, Jun-qi Deng, Wenyan Hu, G. Fotopoulos, E. Ngai, Zhengguo Sheng, Min Liang, Xitong Li, Victor C. M. Leung, S. Fels
DOI: 10.1145/2639108.2642902
Driving is an integral part of our everyday lives, but it is also a time when people are uniquely vulnerable. Poor road conditions, traffic congestion, and long driving times can induce negative emotions in drivers and increase the chance of traffic accidents. We propose SAfeDJ, a situation-aware in-car music delivery application that turns people's trips into pleasant journeys and driving into a safe and enjoyable activity. SAfeDJ aims to help drivers diminish fatigue and negative emotion. It is built on a vehicular healthcare platform that enables communication among drivers and integrates multiple types of sensors to promote safe driving. A prototype implementation and initial results demonstrate SAfeDJ's desired functionality in drivers' daily lives and its feasibility for real-world deployment.
{"title":"Poster -- SAfeDJ community: situation-aware in-car music delivery for safe driving","authors":"Xiping Hu, Jun-qi Deng, Wenyan Hu, G. Fotopoulos, E. Ngai, Zhengguo Sheng, Min Liang, Xitong Li, Victor C. M. Leung, S. Fels","doi":"10.1145/2639108.2642902","DOIUrl":"https://doi.org/10.1145/2639108.2642902","url":null,"abstract":"Driving is an integral part of our everyday lives, but it is also a time when people are uniquely vulnerable. Poor road condition, traffic congestion and long driving time may bring negative emotion to drivers and increase the chance of traffic accidents. We propose SAfeDJ, a situation-aware in-car music delivery application, which turns people's trips into pleasant journeys and driving into a safe and enjoyable activity. SAfeDJ aims at helping drivers to diminish fatigue and negative emotion. It is built on a vehicular healthcare platform that enables communications among drivers and integrates with multiple types of sensors to promote safe driving. Prototype implementation and initial results of SAfeDJ have demonstrated its desired functionality in drivers' daily lives and feasibility for real-world deployment.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"58 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126627667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: clock synchronization for distributed wireless protocols at the physical layer
Omid Salehi-Abari, Hariharan Rahul, D. Katabi
DOI: 10.1145/2639108.2642894
Implementing distributed wireless protocols at the physical layer is challenging today because different nodes have different clocks, each running at a slightly different frequency. This causes the nodes to have frequency offsets relative to each other, so transmitted signals from these nodes do not combine in a predictable manner over time. Past work tackles this challenge and builds distributed PHY-layer systems by compensating for the effects of the frequency offset in the transmitted signals. In this extended abstract, we instead address the root cause: the different clocks with different frequencies on the different nodes. We present AirClock, a new wireless coordination primitive that enables multiple nodes to act as if they are driven by a single clock that they receive wirelessly over the air. AirClock presents a synchronized abstraction to the physical layer, and hence enables direct implementation of diverse kinds of distributed PHY protocols. We illustrate AirClock's versatility by using it to build two different systems: (1) distributed MIMO and (2) distributed rate adaptation for wireless sensors, and show that they can provide significant performance benefits over today's systems.
{"title":"Poster: clock synchronization for distributed wireless protocols at the physical layer","authors":"Omid Salehi-Abari, Hariharan Rahul, D. Katabi","doi":"10.1145/2639108.2642894","DOIUrl":"https://doi.org/10.1145/2639108.2642894","url":null,"abstract":"Implementing distributed wireless protocols at the physical layer today is challenging because different nodes have different clocks, each of which has slightly different frequencies. This causes the nodes to have frequency offset relative to each other. As a result, transmitted signals from these nodes do not combine in a predictable manner over time. Past work tackles this challenge and builds distributed PHY layer systems by attempting to address the effects of the frequency offset and compensating for it in the transmitted signals. In this extended abstract, we address this challenge by addressing the root cause - the different clocks with different frequencies on the different nodes. We present AirClock, a new wireless coordination primitive that enables multiple nodes to act as if they are driven by a single clock that they receive wirelessly over the air. AirClock presents a synchronized abstraction to the physical layer, and hence enables direct implementation of diverse kinds of distributed PHY protocols. We illustrate AirClock's versatility by using it to build two different systems: (1) distributed MIMO, and (2) distributed rate adaptation for wireless sensors, and show that they can provide significant performance benefits over today's systems.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"345 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124312622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demo: high-precision RFID tracking using COTS devices
Lei Yang, Yekui Chen, Cheng Chen, Xiangyang Li, Xuan Ding, Yi Guo, Yunhao Liu
DOI: 10.1145/2639108.2641743
In many applications, we must identify an object and then locate it with high precision (centimeter- or millimeter-level). Tracking mobile RFID tags in real time is a daunting task, and achieving high precision is especially challenging. We achieve these goals by leveraging the phase value of the backscattered signal, reported by COTS RFID readers, to estimate the location of the object. To illustrate the basic idea of our system, we first focus on a simple scenario in which the tag moves along a fixed track known to the system. We propose the Differential Augmented Hologram (DAH), which facilitates instant, high-precision tracking of the mobile RFID tag. We then devise a comprehensive solution to accurately recover the tag's moving trajectory and its locations, relaxing the assumption that the tag's track function is known in advance.
{"title":"Demo: high-precision RFID tracking using COTS devies","authors":"Lei Yang, Yekui Chen, Cheng Chen, Xiangyang Li, Xuan Ding, Yi Guo, Yunhao Liu","doi":"10.1145/2639108.2641743","DOIUrl":"https://doi.org/10.1145/2639108.2641743","url":null,"abstract":"In many applications, we have to identify an object and then locate the object to within high precision (centimeter- or millimeter-level). Tracking mobile RFID tags in real time has been a daunting task, especially challenging for achieving high precision. We achieve these three goals by leveraging the phase value of the backscattered signal, provided by the COTS RFID readers, to estimate the location of the object. To illustrate the basic idea of our system, we firstly focus on a simple scenario where the tag is moving along a fixed track known to the system. We propose Differential Augmented Hologram (DAH) which will facilitate the instant tracking of the mobile RFID tag to a high precision. We then devise a comprehensive solution to accurately recover the tag's moving trajectory and its locations, relaxing the assumption of knowing tag's track function in advance.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114349713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BBN: throughput scaling in dense enterprise WLANs with Blind Beamforming and Nulling
Wenjie Zhou, T. Bansal, P. Sinha, K. Srinivasan
DOI: 10.1145/2639108.2639113
Today's enterprise wireless LANs consist of densely deployed access points. This paper proposes BBN, an interference nulling scheme that leverages this high density to enable multiple mobile devices to transmit simultaneously to multiple access points (APs), all within a single collision domain. BBN also leverages the APs' ability to communicate with each other over the wired backbone to migrate most of the decoding complexity to the APs, while keeping the design at the mobile clients simple. Finally, we leverage the static nature of the access points to make BBN more practical in networks where client mobility inhibits the use of traditional interference alignment schemes. We implement a prototype of BBN on a USRP testbed to show its feasibility. The experimental results show that BBN provides a throughput gain of 1.48X over omniscient TDMA. Results from our trace-driven simulations show that BBN obtains a throughput gain of up to 5.6X over omniscient TDMA.
{"title":"BBN: throughput scaling in dense enterprise WLANs with Bind Beamforming and Nulling","authors":"Wenjie Zhou, T. Bansal, P. Sinha, K. Srinivasan","doi":"10.1145/2639108.2639113","DOIUrl":"https://doi.org/10.1145/2639108.2639113","url":null,"abstract":"Today's Enterprise Wireless LANs are comprised of densely deployed access points. This paper proposes BBN, an interference nulling scheme that leverages the high density of access points to enable multiple mobile devices to transmit simultaneously to multiple access points (APs), all within a single collision domain. BBN also leverages the capability of the APs to communicate with each other on the wired backbone to migrate most of the decoding complexity to the APs, while keeping the design at the mobile clients simple. Finally, we leverage the static nature of the access points to make BBN more practical in networks where the mobility of clients inhibit the use of traditional interference alignment schemes. We implement a prototype of BBN on USRP testbed showing its feasibility. The experiment results show that BBN provides a throughput gain of 1.48X over omniscient TDMA. Results from our trace-driven simulations show that BBN obtains a throughput of up to 5.6X over omniscient TDMA.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124127462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It starts with iGaze: visual attention driven networking with smart glasses
Lan Zhang, Xiangyang Li, Wenchao Huang, Kebin Liu, Shuwei Zong, X. Jian, Puchun Feng, Taeho Jung, Yunhao Liu
DOI: 10.1145/2639108.2639119
In this work, we explore a new networking mechanism with smart glasses, through which users can express their interest and connect to a target simply with a gaze. In doing so, we attempt to let wearable devices understand human attention and intention, and to pair devices carried by users according to that attention and intention. To achieve this ambitious goal, we propose a proof-of-concept system, iGaze, a visual-attention-driven networking suite consisting of an iGaze glass (hardware) and a networking protocol, VAN (software). The iGaze glass is a low-cost head-mounted glass with a camera, orientation sensors, a microphone, and speakers, running our software for visual attention capture and networking. The VAN protocol is carefully designed and implemented: it combines an energy-efficient and highly accurate visual attention determination scheme that uses a single camera to capture the user's communication interest with a double-matching scheme, based on visual direction detection and the Doppler effect of acoustic signals, to lock onto the target devices. We conduct a series of trials in various application scenarios to demonstrate the effectiveness of our system.
{"title":"It starts with iGaze: visual attention driven networking with smart glasses","authors":"Lan Zhang, Xiangyang Li, Wenchao Huang, Kebin Liu, Shuwei Zong, X. Jian, Puchun Feng, Taeho Jung, Yunhao Liu","doi":"10.1145/2639108.2639119","DOIUrl":"https://doi.org/10.1145/2639108.2639119","url":null,"abstract":"In this work, we explore a new networking mechanism with smart glasses, through which users can express their interest and connect to a target simply by a gaze. Doing this, we attempt to let wearable devices understand human attention and intention, and pair devices carried by users according to such attention and intention. To achieve this ambitious goal, we propose a proof-of-concept system iGaze, a visual attention driven networking suite: an iGaze glass (hardware), and a networking protocol VAN (software). Our glass, iGaze glass, is a low-cost head-mounted glass with a camera, orientation sensors, microphone and speakers, which are embedded with our software for visual attention capture and networking. A visual attention driven networking protocol (VAN) is carefully designed and implemented. In VAN, we design an energy efficient and highly accurate visual attention determination scheme using single camera to capture user's communication interest and a double-matching scheme based on visual direction detection and Doppler effect of acoustic signal to lock the target devices. Using our system, we conduct a series of trials for various application scenarios to demonstrate the effectiveness of our system.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"15 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127659498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster - SEA-OR: spectrum and energy aware opportunistic routing for self-powered wireless sensor networks
P. Spachos, D. Hatzinakos
DOI: 10.1145/2639108.2642914
Self-powered wireless sensor networks (WSNs) are an appealing solution for unattended surveillance and monitoring applications. One of the main reasons is that energy derived from power harvesting can significantly extend the network lifetime, so the network can operate unattended for long periods. However, WSNs are characterized by multi-hop lossy links and resource-constrained nodes, and they also have to coexist with other applications. Opportunistic Routing (OR) is a routing paradigm that improves network performance in lossy wireless networks. At the same time, Cognitive Radio (CR) technology enables unlicensed operation in licensed bands. In this work, we combine these two approaches in a novel routing protocol: Spectrum and Energy Aware Opportunistic Routing (SEA-OR), designed for self-powered WSNs. Moreover, we introduce a prioritization scheme that balances packet advancement, residual energy, and link reliability. Preliminary results show an improvement in network lifetime and delivery ratio, and the performance of the protocol is also evaluated on prototypes.
{"title":"Poster - SEA-OR: spectrum and energy aware opportunistic routing for self-powered wireless sensor networks","authors":"P. Spachos, D. Hatzinakos","doi":"10.1145/2639108.2642914","DOIUrl":"https://doi.org/10.1145/2639108.2642914","url":null,"abstract":"An appealing solution for unattended surveillance and monitoring applications is Self-powered Wireless Sensor Networks (WSNs). One of the main reasons is that the energy which is derived from power harvesting can significantly extend the network lifetime. Consequently, the network can work unattended for long periods. However, WSNs are characterized by multi-hop lossy links and resource constrained nodes while they have to face the coexistence problem with other applications. Opportunistic Routing (OR) is a routing paradigm to improve network performance in lossy wireless networks. At the same time, Cognitive Radio (CR) technology enables unlicensed operation in licensed bands. In this work, a combination of these two research approaches in a novel routing protocol is presented. A Spectrum and Energy Aware Opportunistic Routing (SEA-OR) protocol is proposed and designed for Self-powered WSNs. Moreover, a prioritization scheme which balances the packet advancement, the residual energy and the link reliability is introduced. Preliminary results show an improvement in network lifetime and delivery ratio. The performance of the introduced protocol is also evaluated in prototypes.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114174204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Luxapose: indoor positioning with mobile phones and visible light
Ye-Sheng Kuo, P. Pannuto, Ko-Jen Hsiao, P. Dutta
DOI: 10.1145/2639108.2639109
We explore the indoor positioning problem with unmodified smartphones and slightly modified commercial LED luminaires. The luminaires, modified to allow rapid on-off keying, transmit their identifiers and/or locations encoded in human-imperceptible optical pulses. A camera-equipped smartphone, using just a single image frame capture, can detect the presence of the luminaires in the image, decode their transmitted identifiers and/or locations, and determine the smartphone's location and orientation relative to the luminaires. Continuous image capture and processing enables continuous position updates. The key insights underlying this work are (i) the driver circuits of emerging LED lighting systems can be easily modified to transmit data through on-off keying; (ii) the rolling shutter effect of CMOS imagers can be leveraged to receive many bits of data encoded in the optical transmissions with just a single frame capture; (iii) a camera is intrinsically an angle-of-arrival sensor, so the projection of multiple nearby light sources with known positions onto a camera's image plane can be framed as an instance of a sufficiently constrained angle-of-arrival localization problem; and (iv) this problem can be solved with optimization techniques. We explore the feasibility of the design through an analytical model, demonstrate the viability of the design through a prototype system, discuss the challenges to a practical deployment including usability and scalability, and demonstrate decimeter-level accuracy in both carefully controlled and more realistic human mobility scenarios.
{"title":"Luxapose: indoor positioning with mobile phones and visible light","authors":"Ye-Sheng Kuo, P. Pannuto, Ko-Jen Hsiao, P. Dutta","doi":"10.1145/2639108.2639109","DOIUrl":"https://doi.org/10.1145/2639108.2639109","url":null,"abstract":"We explore the indoor positioning problem with unmodified smartphones and slightly-modified commercial LED luminaires. The luminaires-modified to allow rapid, on-off keying-transmit their identifiers and/or locations encoded in human-imperceptible optical pulses. A camera-equipped smartphone, using just a single image frame capture, can detect the presence of the luminaires in the image, decode their transmitted identifiers and/or locations, and determine the smartphone's location and orientation relative to the luminaires. Continuous image capture and processing enables continuous position updates. The key insights underlying this work are (i) the driver circuits of emerging LED lighting systems can be easily modified to transmit data through on-off keying; (ii) the rolling shutter effect of CMOS imagers can be leveraged to receive many bits of data encoded in the optical transmissions with just a single frame capture, (iii) a camera is intrinsically an angle-of-arrival sensor, so the projection of multiple nearby light sources with known positions onto a camera's image plane can be framed as an instance of a sufficiently-constrained angle-of-arrival localization problem, and (iv) this problem can be solved with optimization techniques. We explore the feasibility of the design through an analytical model, demonstrate the viability of the design through a prototype system, discuss the challenges to a practical deployment including usability and scalability, and demonstrate decimeter-level accuracy in both carefully controlled and more realistic human mobility scenarios.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115402275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: hearing your breathing: fine-grained sleep monitoring using smartphones
Yanzhi Ren, Chen Wang, Yingying Chen, J. Yang
DOI: 10.1145/2639108.2642898
Sleep monitoring has drawn increasing attention, as the quality and quantity of sleep are important for maintaining a person's health and well-being. For example, inadequate and irregular sleep is usually associated with serious health problems such as fatigue, depression, and cardiovascular disease. Traditional sleep monitoring systems, such as PSG, involve wearable sensors and professional installation, and thus are limited to clinical use. Recent work on using smartphone sensors for sleep monitoring can detect several events related to sleep, such as body movement, coughing, and snoring. Such coarse-grained sleep monitoring, however, cannot detect the breathing rate, which is a vital sign and health indicator. This work presents a fine-grained sleep monitoring system capable of detecting the breathing rate with smartphones. Our system exploits the readily available smartphone earphone, placed close to the user, to capture the breathing sound reliably. Given the captured acoustic signal, our system performs noise reduction to remove environmental noise and then identifies the breathing rate based on envelope detection of the signal. Our experimental evaluation with six subjects over a six-month period demonstrates that the breathing rate monitoring is highly accurate and robust in various environments. This strongly indicates the feasibility of using the smartphone and its earphone to perform continuous, noninvasive, fine-grained sleep monitoring.
{"title":"Poster: hearing your breathing: fine-grained sleep monitoring using smartphones","authors":"Yanzhi Ren, Chen Wang, Yingying Chen, J. Yang","doi":"10.1145/2639108.2642898","DOIUrl":"https://doi.org/10.1145/2639108.2642898","url":null,"abstract":"Sleep monitoring has drawn increasingly attention as the quality and quantity of the sleep are important for maintaining a person's health and well-being. For example, inadequate and irregular sleep are usually associated with serious health problems such as fatigue, depression and cardiovascular disease. Traditional sleep monitoring systems, such as PSG, involve wearable sensors with professional installations, and thus are limited to clinical usage. Recent work in using smartphone sensors for sleep monitoring can detect several events related to sleep, such as body movement, cough and snore. Such coarse-grained sleep monitoring however is unable to detect the breathing rate which is a vital sign and health indicator. This work presents a fine-grained sleep monitoring system which is capable of detecting the breathing rate by leveraging smartphones. Our system exploits the readily available smartphone earphone that placed close to the user to capture the breath sound reliably. Given the captured acoustic signal, our system performs noise reduction to remove environmental noise and then identifies the breathing rate based on the signal envelope detection. Our experimental evaluation of six subjects over six months time period demonstrates that the breathing rate monitoring is highly accurate and robust under various environments. This strongly indicates the feasibility of using the smartphone and its earphone to perform continuous and noninvasive fine-grained sleep monitoring.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121827156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Caiipa: automated large-scale mobile app testing through contextual fuzzing
C. Liang, N. Lane, N. Brouwers, Li Zhang, Börje F. Karlsson, Hao Liu, Yan Liu, Jun Tang, Xiang Shan, Ranveer Chandra, Feng Zhao
DOI: 10.1145/2639108.2639131
Scalable and comprehensive testing of mobile apps is extremely challenging. Every test input needs to be run under a variety of contexts, such as device heterogeneity, wireless network speeds, locations, and unpredictable sensor inputs. The range of values for each context, e.g., location, can be very large. In this paper we present Caiipa, a cloud service for testing apps over an expanded mobile context space in a scalable way. It incorporates key techniques to make app testing more tractable, including a context test space prioritizer to quickly discover failure scenarios for each app. We have implemented Caiipa on a cluster of VMs and real devices that can each emulate various combinations of contexts for tablet and phone apps. We evaluate Caiipa by testing 265 commercially available mobile apps based on a comprehensive library of real-world conditions. Our results show that Caiipa leads to improvements of 11.1x and 8.4x in the number of crashes and performance bugs discovered compared to conventional UI-based automation (i.e., monkey testing).
{"title":"Caiipa: automated large-scale mobile app testing through contextual fuzzing","authors":"C. Liang, N. Lane, N. Brouwers, Li Zhang, Börje F. Karlsson, Hao Liu, Yan Liu, Jun Tang, Xiang Shan, Ranveer Chandra, Feng Zhao","doi":"10.1145/2639108.2639131","DOIUrl":"https://doi.org/10.1145/2639108.2639131","url":null,"abstract":"Scalable and comprehensive testing of mobile apps is extremely challenging. Every test input needs to be run with a variety of contexts, such as: device heterogeneity, wireless network speeds, locations, and unpredictable sensor inputs. The range of values for each context, e.g. location, can be very large. In this paper we present Caiipa, a cloud service for testing apps over an expanded mobile context space in a scalable way. It incorporates key techniques to make app testing more tractable, including a context test space prioritizer to quickly discover failure scenarios for each app. We have implemented Caiipa on a cluster of VMs and real devices that can each emulate various combinations of contexts for tablet and phone apps. We evaluate Caiipa by testing 265 commercially available mobile apps based on a comprehensive library of real-world conditions. Our results show that Caiipa leads to improvements of 11.1x and 8.4x in the number of crashes and performance bugs discovered compared to conventional UI-based automation (i.e., monkey-testing).","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124672649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Itus: an implicit authentication framework for android
Hassan Khan, Aaron Atwater, U. Hengartner
DOI: 10.1145/2639108.2639141
Security and usability issues with pass-locks on mobile devices have prompted researchers to develop implicit authentication (IA) schemes, which continuously and transparently authenticate users using behavioural biometrics. Contemporary IA schemes proposed by the research community are challenging to deploy, and there is a need for a framework that supports: different behavioural classifiers, given that different apps have different requirements; app developers using IA without becoming domain experts; and real-time classification on resource-constrained mobile devices. We present Itus, an IA framework for Android that allows the research community to improve IA schemes incrementally, while allowing app developers to adopt these improvements at their own pace. We describe the Itus framework and how it provides: ease of use: Itus allows app developers to use IA by changing as few as two lines of their existing code, while also providing an oracle capable of making advanced recommendations should developers wish to fine-tune the classifiers; flexibility: developers can deploy Itus in an application-specific manner, adapting to their unique needs; extensibility: researchers can contribute new behavioural features and classifiers without worrying about deployment particulars; low performance overhead: Itus operates with minimal performance overhead, allowing app developers to deploy it without compromising end-user experience. These goals are accomplished with an API that allows individual stakeholders to incrementally improve Itus without re-engineering new systems. We implement Itus in two demo apps and measure its performance impact. To our knowledge, Itus is the first open-source extensible IA framework for Android that can be deployed off the shelf.
{"title":"Itus: an implicit authentication framework for android","authors":"Hassan Khan, Aaron Atwater, U. Hengartner","doi":"10.1145/2639108.2639141","DOIUrl":"https://doi.org/10.1145/2639108.2639141","url":null,"abstract":"Security and usability issues with pass-locks on mobile devices have prompted researchers to develop implicit authentication (IA) schemes, which continuously and transparently authenticate users using behavioural biometrics. Contemporary IA schemes proposed by the research community are challenging to deploy, and there is a need for a framework that supports: different behavioural classifiers, given that different apps have different requirements; app developers using IA without becoming domain experts; and real-time classification on resource-constrained mobile devices. We present Itus, an IA framework for Android that allows the research community to improve IA schemes incrementally, while allowing app developers to adopt these improvements at their own pace. We describe the Itus framework and how it provides: ease of use: Itus allows app developers to use IA by changing as few as two lines of their existing code - on the other hand, Itus provides an oracle capable of making advanced recommendations should developers wish to fine-tune the classifiers; flexibility: developers can deploy Itus in an application-specific manner, adapting to their unique needs; extensibility: researchers can contribute new behavioural features and classifiers without worrying about deployment particulars; low performance overhead: Itus operates with minimal performance overhead, allowing app developers to deploy it without compromising end-user experience. These goals are accomplished with an API allowing individual stakeholders to incrementally improve Itus without re-engineering new systems. We implement Itus in two demo apps and measure its performance impact. To our knowledge, Itus is the first open-source extensible IA framework for Android that can be deployed off-the-shelf.","PeriodicalId":331897,"journal":{"name":"Proceedings of the 20th annual international conference on Mobile computing and networking","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128138708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}