PUMA: programmable UI-automation for large-scale dynamic analysis of mobile apps
Shuai Hao, B. Liu, Suman Nath, William G. J. Halfond, R. Govindan
Mobile app ecosystems have experienced tremendous growth in the last six years. This has triggered research on dynamic analysis of performance, security, and correctness properties of the mobile apps in the ecosystem. Exploration of app execution using automated UI actions has emerged as an important tool for this research. However, existing research has largely developed analysis-specific UI automation techniques, wherein the logic for exploring app execution is intertwined with the logic for analyzing app properties. PUMA is a programmable framework that separates these two concerns. It contains a generic UI automation capability (often called a Monkey) that exposes high-level events for which users can define handlers. These handlers can flexibly direct the Monkey's exploration, and also specify app instrumentation for collecting dynamic state information or for triggering changes in the environment during app execution. Targeted towards operators of app marketplaces, PUMA incorporates mechanisms for scaling dynamic analysis to thousands of apps. We demonstrate the capabilities of PUMA by analyzing seven distinct performance, security, and correctness properties for 3,600 apps downloaded from the Google Play store.
{"title":"PUMA: programmable UI-automation for large-scale dynamic analysis of mobile apps","authors":"Shuai Hao, B. Liu, Suman Nath, William G. J. Halfond, R. Govindan","doi":"10.1145/2594368.2594390","DOIUrl":"https://doi.org/10.1145/2594368.2594390","url":null,"abstract":"Mobile app ecosystems have experienced tremendous growth in the last six years. This has triggered research on dynamic analysis of performance, security, and correctness properties of the mobile apps in the ecosystem. Exploration of app execution using automated UI actions has emerged as an important tool for this research. However, existing research has largely developed analysis-specific UI automation techniques, wherein the logic for exploring app execution is intertwined with the logic for analyzing app properties. PUMA is a programmable framework that separates these two concerns. It contains a generic UI automation capability (often called a Monkey) that exposes high-level events for which users can define handlers. These handlers can flexibly direct the Monkey's exploration, and also specify app instrumentation for collecting dynamic state information or for triggering changes in the environment during app execution. Targeted towards operators of app marketplaces, PUMA incorporates mechanisms for scaling dynamic analysis to thousands of apps. We demonstrate the capabilities of PUMA by analyzing seven distinct performance, security, and correctness properties for 3,600 apps downloaded from the Google Play store.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128879565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demo: Ubiquitous interaction with smart objects
Jan Rüth, Hanno Wirtz, Klaus Wehrle
Increasingly, everyday physical objects become “smart” by making their functionality accessible, controllable, and extensible for Internet-based users and services via a connection to the digital world. Deployed in scenarios ranging from private households to offices and public spaces, smart objects enable ubiquitous “smart spaces” that build on interaction with mobile users. However, ubiquitous interaction with smart objects is currently complicated by three factors. 1) Communication with objects requires Internet or local network access, a requirement that is not met underground, abroad, or when lacking access credentials for 802.11 networks. 2) Identifying a specific object among the envisioned billions of objects requires a suitable discovery mechanism, introducing delays and forcing object owners to disclose object semantics. 3) Interacting with object functionality mandates a priori installation of a specific app, providing a human-usable interface, per object and use case, resulting in an abundance of (redundant) apps. We argue that smart object interaction is thereby restricted to pre-defined scenarios and objects, e.g., at home or in offices. In this demonstration, we strive to make smart object interaction ubiquitous. Current approaches abstract from user locations and contexts via the Internet but lack support for spontaneous discovery of, and interaction with, possibly unknown objects in the immediate vicinity of the user. To enable such interaction, we address the aforementioned factors by 1) enabling direct communication and interaction with objects over Bluetooth 4.0 Low Energy (BLE), removing the need for network access and reducing the discovery scope to the intuitive local interaction scope of the user, and 2) enable
{"title":"Demo: Ubiquitous interaction with smart objects","authors":"Jan Rüth, Hanno Wirtz, Klaus Wehrle","doi":"10.1145/2594368.2601477","DOIUrl":"https://doi.org/10.1145/2594368.2601477","url":null,"abstract":"Increasingly, everyday physical objects become “smart” by making their functionality accessible, controllable, and extensible for Internet-based users and services via a connection to the digital world. Deployed in scenarios that range from private households over offices to public spaces, smart objects enable ubiquitous “smart spaces” that build on interaction with mobile users. However, ubiquitous interaction with smart objects is currently complicated by three factors. 1) Communication with objects requires Internet or local network access, a requirement that is not met under ground, abroad, or when lacking access credentials to 802.11 networks. 2) Identifying a specific object from the envisioned billions of objects requires a suitable discovery mechanism, introducing delays and mandating object owners to disclose object semantics. 3) Interacting with object functionalities mandates an a-priori installation of a specific app, that provides a human-usable interface, per object and use case, resulting in an abundance of (redundant) apps. We argue that smart object interaction is thereby restricted to pre-defined scenarios and objects, e.g., at home or in offices. In this demonstration, we strive to make smart object interaction ubiquitous. Current approaches abstract from user locations and contexts via the Internet but lack support for spontaneous discovery and interaction with possibly unknown objects in the immediate vicinity of the user. In order to enable such interaction, we address the aforementioned factors by 1) enabling direct communication and interaction with objects over Bluetooth 4.0 Low Energy (BLE), removing the need for network access and reducing the discovery scope to the intuitive local interaction scope of the user and 2) enable","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129201524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: M-Seven: monitoring smoking event by considering time sequence information via iPhone M7 API
Bo-Jhang Ho, M. Srivastava
Smartphones are equipped with various sensors that provide rich context information. By leveraging these sensors, several interesting and practical applications have emerged. Accelerometer data has been used, for example, to detect transportation modes [3], exercise activities [2], etc. A typical approach is to classify activity directly from features extracted from raw sensing data. Cheng et al. implemented a different approach using two-stage classification: the system first detects several sub-behaviors, then uses the combination of attributes to infer higher-level behaviors. Building on this approach, we focus on exploring the time sequence of activities, which is an underexplored, yet natural and information-rich indicator. In this work, we explore this time-sequence concept through the detection of smoking events. In public areas, smoking is usually prohibited. Thus, smokers normally go to outdoor areas with fewer passersby to smoke. Instead of detecting bio-signals through wearable sensors [1], we leverage movement patterns as indicators: smokers normally start from a stationary state (either the phone is on the desk or in their pocket), walk to the smoking spot, which is usually outdoors, stand there for several minutes, then go back to their working area and resume a stationary state. Although various activities with similar patterns might cause false positives, e.g., buying lunch from an outdoor food truck, we believe there are subtleties in the sensor data that distinguish them, e.g., standing casually (smoking) versus moving periodically while waiting in line (food truck). In this work we demonstrate the detection of the smoking movement pattern using data collected from the primary phone of one smoker over two days.
{"title":"Poster: M-Seven: monitoring smoking event by considering time sequence information via iPhone M7 API","authors":"Bo-Jhang Ho, M. Srivastava","doi":"10.1145/2594368.2601451","DOIUrl":"https://doi.org/10.1145/2594368.2601451","url":null,"abstract":"Smartphones are equipped with various sensors that provide rich context information. By leveraging these sensors, several interesting and practical applications have emerged. Accelerometer data has been used, for example, to detect transportation [3], exercise activities [2], etc. A typical approach is to classify activity directly based on features extracted from raw sensing data. Cheng et. al. implemented a different approach by using two-stage classification: the system first detects several sub-behaviors, and uses the combination of attributes to infer higher-level behaviors. Built upon this approach, we foucus on exploring the time sequence of activities, which is an underexplored, yet natural and information-rich indicator. In this work, we explore this time sequence concept through detection of smoking events. In the public area, smoking is usually prohibited. Thus, smokers normally go to outdoor areas with fewer passerbys to smoke. Instead of detecting bio-signals through wearable sensors [1], we leverage movement patterns as indicators; smokers normally start from a stationary state (either the phone is on the desk or in their pocket), walk to the smoking spot which is usually outdoors, stand there for several minutes, then go back to their working area and resume stationary state. Although there are various activities with similar patterns that might cause false positives, e.g., buying lunch from an outdoor food truck, we believe there are subtleties in the sensor data to distinguish them apart, e.g. differences between standing casually (smoking), versus moving periodically when waiting in line (food truck). In this work we demonstrate the detection of the smoking movement pattern through data collected from the primary phone of one smoker for two days.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124441641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards wearable cognitive assistance
Kiryong Ha, Zhuo Chen, Wenlu Hu, Wolfgang Richter, P. Pillai, M. Satyanarayanan
We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The system architecture is multi-tiered. It offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.
{"title":"Towards wearable cognitive assistance","authors":"Kiryong Ha, Zhuo Chen, Wenlu Hu, Wolfgang Richter, P. Pillai, M. Satyanarayanan","doi":"10.1145/2594368.2594383","DOIUrl":"https://doi.org/10.1145/2594368.2594383","url":null,"abstract":"We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The system architecture is multi-tiered. It offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121244276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demo: Mapping global mobile performance trends with mobilyzer and mobiPerf
S. Rosen, Hongyi Yao, Ashkan Nikravesh, Yunhan Jia, D. Choffnes, Z. Morley Mao
Mobilyzer is an open-source network measurement library that coordinates network measurement tasks among different applications, facilitates measurement task design, and allows for more effective measurement task management than in existing standalone approaches. Unifying various network tasks into one framework greatly simplifies the problem of developing, deploying and managing measurement tasks which may otherwise interfere with one another. An intelligent scheduler, coordinated by a central server, dynamically schedules tasks to run in the background, preserving the user's battery life and respecting limits set by the user on task frequency and data consumption. We will demo MobiPerf, an open-source mobile network measurement tool built using the Mobilyzer library. MobiPerf collects a wide range of network performance data, ranging from the latency and throughput measurements common in existing client-based measurement frameworks, to HTTP loading times for specific URLs, to inferring RRC state configuration parameters and their impact on performance. We will also demo an interface for viewing a large, open dataset of performance data from around the world collected by MobiPerf.
{"title":"Demo: Mapping global mobile performance trends with mobilyzer and mobiPerf","authors":"S. Rosen, Hongyi Yao, Ashkan Nikravesh, Yunhan Jia, D. Choffnes, Z. Morley Mao","doi":"10.1145/2594368.2601469","DOIUrl":"https://doi.org/10.1145/2594368.2601469","url":null,"abstract":"Mobilyzer is an open-source network measurement library that coordinates network measurement tasks among different applications, facilitates measurement task design, and allows for more effective measurement task management than in existing standalone approaches. Unifying various network tasks into one framework greatly simplifies the problem of developing, deploying and managing measurement tasks which may otherwise interfere with one another. An intelligent scheduler, coordinated by a central server, dynamically schedules tasks to run in the background, preserving the user's battery life and respecting limits set by the user on task frequency and data consumption. We will demo MobiPerf, an open-source mobile network measurement tool built using the Mobilyzer library. MobiPerf collects a wide range of network performance data, ranging from the latency and throughput measurements common in existing client-based measurement frameworks, to HTTP loading times for specific URLs, to inferring RRC state configuration parameters and their impact on performance. We will also demo an interface for viewing a large, open dataset of performance data from around the world collected by MobiPerf.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115215069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Balancing design and technology to tackle global grand challenges
J. Landay
There are many urgent problems facing the planet: a degrading environment, a healthcare system in crisis, and educational systems that are failing to produce creative, innovative thinkers to solve tomorrow's problems. Technology influences behavior, and I believe that when we balance it with revolutionary design, we can reduce a family's energy and water use by 50%, double most people's daily physical activity, and educate any child anywhere in the world to a level of proficiency on par with the planet's best students. My research program tackles these grand challenges by using a new model of interdisciplinary research that takes a long view and encourages risk-taking and creativity. I will illustrate how we are addressing these grand challenges by building systems that balance innovative user interfaces with novel activity-inference technology. These systems have helped individuals stay fit, led families to be more sustainable in their everyday lives, and supported learners in acquiring second languages. I will also introduce the World Lab, a cross-cultural institute that embodies my balanced approach to attacking the world's biggest problems today while preparing the technology and design leaders of tomorrow.

James Landay is a Professor of Information Science at Cornell Tech, specializing in human-computer interaction. He will become a Professor of Computer Science at Stanford in August 2014. Previously, James was a Professor of Computer Science & Engineering at the University of Washington. His current research interests include Technology to Support Behavior Change, Demonstrational Interfaces, Mobile & Ubiquitous Computing, and User Interface Design Tools. He is the founder and co-director of the World Lab, a joint research and educational effort with Tsinghua University in Beijing. Landay received his BS in EECS from UC Berkeley in 1990 and his MS and PhD in Computer Science from Carnegie Mellon University in 1993 and 1996, respectively. His PhD dissertation was the first to demonstrate the use of sketching in user interface design tools. He was previously the Laboratory Director of Intel Labs Seattle, a university-affiliated research lab that explored new usage models, applications, and technology for ubiquitous computing. He was also the chief scientist and co-founder of NetRaker, which was acquired by KeyNote Systems in 2004. From 1997 through 2003 he was a tenured professor in EECS at UC Berkeley. He was named to the ACM SIGCHI Academy in 2011. He currently serves on the NSF CISE Advisory Committee.
{"title":"Balancing design and technology to tackle global grand challenges","authors":"J. Landay","doi":"10.1145/2594368.2620048","DOIUrl":"https://doi.org/10.1145/2594368.2620048","url":null,"abstract":"There are many urgent problems facing the planet: a degrading environment, a healthcare system in crisis, and educational systems that are failing to produce creative, innovative thinkers to solve tomorrow's problems. Technology influences behavior, and I believe when we balance it with revolutionary design, we can reduce a family's energy and water use by 50%, double most people's daily physical activity, and educate any child anywhere in the world to a level of proficiency on par with the planet's best students. My research program tackles these grand challenges by using a new model of interdisciplinary research that takes a long view and encourages risk-taking and creativity. I will illustrate how we are addressing these grand challenges in our research by building systems that balance innovative user interfaces with novel activity inference technology. These systems have helped individuals stay fit, led families to be more sustainable in their everyday lives, and supported learners in acquiring second languages. I will also introduce the World Lab, a cross-cultural institute that embodies my balanced approach to attack the world's biggest problems today, while preparing the technology and design leaders of tomorrow. James Landay is a Professor of Information Science at Cornell Tech, specializing in human-computer interaction. He will become a Professor of Computer Science at Stanford in August, 2014. Previously, James was a Professor of Computer Science & Engineering at the University of Washington. His current research interests include Technology to Support Behavior Change, Demonstrational Interfaces, Mobile & Ubiquitous Computing, and User Interface Design Tools. He is the founder and co-director of the World Lab, a joint research and educational effort with Tsinghua University in Beijing. Landay received his BS in EECS from UC Berkeley in 1990 and MS and PhD in Computer Science from Carnegie Mellon University in 1993 and 1996, respectively. His PhD dissertation was the first to demonstrate the use of sketching in user interface design tools. He was previously the Laboratory Director of Intel Labs Seattle, a university affiliated research lab that explored the new usage models, applications, and technology for ubiquitous computing. He was also the chief scientist and co-founder of NetRaker, which was acquired by KeyNote Systems in 2004. From 1997 through 2003 he was a tenured professor in EECS at UC Berkeley. He was named to the ACM SIGCHI Academy in 2011. He currently serves on the NSF CISE Advisory Committee.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115523036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video: Rio: a system solution for sharing i/o between mobile systems
A. A. Sani, Kevin Boos, Minhong Yun, Lin Zhong
Modern mobile systems are equipped with a diverse collection of I/O devices, including cameras, microphones, various sensors, and a cellular modem. There are many novel use cases in which an application on one mobile system could utilize the I/O devices of another. This video demonstrates Rio, an I/O sharing solution that supports unmodified applications and realizes many of these novel use cases. Rio's design is common to many classes of I/O devices, significantly reducing the engineering effort needed to support new ones, and it supports sharing the full functionality of an I/O device. Rio also supports I/O sharing between mobile systems of different form factors, including smartphones and tablets.
{"title":"Video: Rio: a system solution for sharing i/o between mobile systems","authors":"A. A. Sani, Kevin Boos, Minhong Yun, Lin Zhong","doi":"10.1145/2594368.2602434","DOIUrl":"https://doi.org/10.1145/2594368.2602434","url":null,"abstract":"Modern mobile systems are equipped with a diverse collection of I/O devices, including cameras, microphones, various sensors, and cellular modem. There exist many novel use cases for allowing an application on one mobile system to utilize I/O devices from another. This video demonstrates Rio, an I/O sharing solution that supports unmodified applications and realizes many of these novel use cases. Rio's design is common to many classes of I/O devices, significantly reducing the engineering effort to support new I/O devices. Moreover, it supports all the functionalities of an I/O device for sharing. Rio also supports I/O sharing between mobile systems of different form factors, including smartphones and tablets.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129195190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video: Procrastinator: pacing mobile apps' usage of the network
Lenin Ravindranath, S. Agarwal, J. Padhye, Christopher J. Riederer
Many popular, professionally written smartphone apps today prefetch large amounts of network data to improve performance. However, the typical user may not use all of this network data. When a user is on a limited or pay-per-byte cellular data plan, such as when roaming internationally, this prefetching behavior can cost her in overage fees on her cellular bill. This video demonstrates Procrastinator, a system that automatically decides when to fetch each network object that an app requests. This decision is based on whether the user is on Wi-Fi or cellular, how many bytes remain on her data plan, and whether the object is needed at the present time. Procrastinator requires no app developer effort, no app source code, and no OS changes -- it modifies the app binary to trap specific system calls and inject custom code. Depending on the user and the app, our system achieves anywhere from no savings to a 4X reduction in the total bytes transferred by an app. For the data-poor user, these savings come with a 300 ms median latency penalty on LTE when she navigates to a part of the app where Procrastinator did not allow data to be prefetched. This video shows how the main content on the primary page of apps is unaffected, and the delay the user typically experiences on secondary pages when she is running out of cellular data plan bytes.
{"title":"Video: Procrastinator: pacing mobile apps' usage of the network","authors":"Lenin Ravindranath, S. Agarwal, J. Padhye, Christopher J. Riederer","doi":"10.1145/2594368.2602432","DOIUrl":"https://doi.org/10.1145/2594368.2602432","url":null,"abstract":"Many popular, professionally-written smartphone apps today prefetch large amounts of network data to improve performance. However, the typical user may not use all of this network data. When a user is on a limited or pay-per-byte cellular data plan, such as when roaming internationally, this prefetching behavior can cost her in overage fees on her cellular bill. This video demonstrates Procrastinator, which is a system that automatically decides when to fetch each network object that an app requests. This decision is made based on whether the user is on Wi-Fi or cellular, how many bytes are remaining on her data plan, and whether the object is needed at the present time. Procrastinator does not require app developer effort, nor app source code, nor OS changes -- it modifies the app binary to trap specific system calls and inject custom code. Our system can achieve as little as no savings to 4X reduction in total bytes transferred by an app, depending on the user and the app. These savings for the data-poor user come with a 300ms median latency penalty on LTE if the user goes to a part of the app where Procrastinator did not allow data to be prefetched. This video shows how main content on the primary page of apps is unaffected, and the delay that the user will typically experience if she goes to secondary pages in apps when she is running out of cellular data plan bytes.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130663731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
iShadow: design of a wearable, real-time mobile gaze tracker
A. Mayberry, Pan Hu, Benjamin M Marlin, C. Salthouse, Deepak Ganesan
Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios, including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, and life logging. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device: it requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs of eye tracking, thereby achieving orders-of-magnitude reductions in power consumption and form factor. The key idea is that eye images are extremely redundant, so we can estimate gaze from a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex-M3 microcontroller, and a Bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network trained with a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70 mW of power while continuously estimating eye gaze at 30 Hz with errors of roughly 3 degrees.
{"title":"iShadow: design of a wearable, real-time mobile gaze tracker","authors":"A. Mayberry, Pan Hu, Benjamin M Marlin, C. Salthouse, Deepak Ganesan","doi":"10.1145/2594368.2594388","DOIUrl":"https://doi.org/10.1145/2594368.2594388","url":null,"abstract":"Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125630582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MAdFraud: investigating ad fraud in android applications
J. Crussell, Ryan Stevens, Hao Chen
Many Android applications are distributed for free but are supported by advertisements. Ad libraries embedded in the app fetch content from the ad provider and display it on the app's user interface. The ad provider pays the developer for ads displayed to the user and ads clicked by the user. A major threat to this ecosystem is ad fraud, where a miscreant's code fetches ads without displaying them to the user or "clicks" on ads automatically. Ad fraud has been extensively studied in the context of web advertising but has gone largely unstudied in the context of mobile advertising. We take the first step toward studying mobile ad fraud perpetrated by Android apps. We identify two fraudulent ad behaviors in apps: 1) requesting ads while the app is in the background, and 2) clicking on ads without user interaction. Based on these observations, we developed an analysis tool, MAdFraud, which automatically runs many apps simultaneously in emulators to trigger and expose ad fraud. Since the formats of ad impressions and clicks vary widely across ad providers, we develop a novel approach for automatically identifying ad impressions and clicks in three steps: building HTTP request trees, identifying ad request pages using machine learning, and detecting clicks in the HTTP request trees using heuristics. We apply our methodology and tool to two datasets: 1) 130,339 apps crawled from 19 Android markets, including Play and many third-party markets, and 2) 35,087 apps, provided by a security company, that likely contain malware. From analyzing these datasets, we find that about 30% of apps with ads make ad requests while running in the background. In addition, we find 27 apps that generate clicks without user interaction. We find that the click-fraud apps attempt to remain stealthy when fabricating ad traffic by sending clicks only periodically and by changing which ad provider is targeted between installations.
{"title":"MAdFraud: investigating ad fraud in android applications","authors":"J. Crussell, Ryan Stevens, Hao Chen","doi":"10.1145/2594368.2594391","DOIUrl":"https://doi.org/10.1145/2594368.2594391","url":null,"abstract":"Many Android applications are distributed for free but are supported by advertisements. Ad libraries embedded in the app fetch content from the ad provider and display it on the app's user interface. The ad provider pays the developer for the ads displayed to the user and ads clicked by the user. A major threat to this ecosystem is ad fraud, where a miscreant's code fetches ads without displaying them to the user or \"clicks\" on ads automatically. Ad fraud has been extensively studied in the context of web advertising but has gone largely unstudied in the context of mobile advertising. We take the first step to study mobile ad fraud perpetrated by Android apps. We identify two fraudulent ad behaviors in apps: 1) requesting ads while the app is in the background, and 2) clicking on ads without user interaction. Based on these observations, we developed an analysis tool, MAdFraud, which automatically runs many apps simultaneously in emulators to trigger and expose ad fraud. Since the formats of ad impressions and clicks vary widely between different ad providers, we develop a novel approach for automatically identifying ad impressions and clicks in three steps: building HTTP request trees, identifying ad request pages using machine learning, and detecting clicks in HTTP request trees using heuristics. We apply our methodology and tool to two datasets: 1) 130,339 apps crawled from 19 Android markets including Play and many third-party markets, and 2) 35,087 apps that likely contain malware provided by a security company. From analyzing these datasets, we find that about 30% of apps with ads make ad requests while in running in the background. In addition, we find 27 apps which generate clicks without user interaction. We find that the click fraud apps attempt to remain stealthy when fabricating ad traffic by only periodically sending clicks and changing which ad provider is being targeted between installations.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134356801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}