Open Data Kit (ODK) is an open-source, modular toolkit that enables organizations to build application-specific mobile information services in resource-constrained environments. Feedback from users and developers about limitations of the ODK 1.x tools led to a redesign of the system architecture and the creation of new tools. This demonstration presents a revised tool suite called ODK 2.0. This expanded ODK toolkit aims to increase an organization's data collection and management capabilities by supporting data synchronization, adaptable workflows, more configurable presentation screens, and a greater diversity of input types through new data input methods on mobile devices.
Waylon Brunette, S. Sudar, and G. Borriello. "Demo: Open Data Kit 2.0 tool suite." In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2014. doi:10.1145/2594368.2601466
We propose an augmented reality system for off-the-shelf smartphones that allows users to tag arbitrary physical objects. At later times, such tags can be retrieved from different locations and orientations. Our approach does not require any additional infrastructure support, localization scheme, specialized camera, or modification to the smartphone's operating system. Designed and developed for current-generation smartphones, our application shows promising initial results, with a retrieval accuracy of 82% in indoor environments and no noticeable impact on the user experience. If made commercially available, such a system could be used in city tourism and infrastructure maintenance, and could enable new kinds of social interaction.
Puneet Jain and Romit Roy Choudhury. "Demo: Real-time object tagging and retrieval." In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2014. doi:10.1145/2594368.2601472
Near Field Communication (NFC) technology is gaining popularity among mobile users. However, as a relatively new and developing technology, NFC may also introduce security threats that make mobile devices vulnerable to various malicious attacks. This work presents the first systematic study of the feasibility of, and defenses against, passive NFC eavesdropping. Our experiments show that commodity NFC-enabled mobile devices can be eavesdropped on from up to 240 cm away, at least an order of magnitude farther than the intended NFC communication distance. This finding challenges the general perception that NFC is largely immune to eavesdropping because of its short working range. We then present the design of a hardware security system called nShield. With a small form factor, nShield can be attached to the back of a mobile device to attenuate the signal strength against passive eavesdropping. At the same time, the absorbed RF energy is scavenged by nShield for its perpetual operation. nShield intelligently determines the attenuation level that is just enough to sustain reliable data communication. We implement a prototype of nShield and evaluate its performance via extensive experiments. Our results show that nShield has low power consumption (23 µW), can harvest a significant amount of power (55 mW), and adaptively attenuates the NFC signal strength in a variety of realistic settings, while introducing only insignificant delay (up to 2.2 s).
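The core idea of choosing an attenuation level "just enough to sustain reliable data communication" can be sketched as a simple threshold search. This is only an illustration of the principle, not nShield's actual control algorithm; the function name, dBm values, and discrete attenuation levels are hypothetical.

```python
def pick_attenuation(signal_dbm, sensitivity_dbm, levels_db):
    """Return the largest attenuation (in dB) that still leaves the
    legitimate reader's signal at or above its demodulation sensitivity.
    More attenuation shrinks the eavesdropping range, so we want the
    maximum level that does not break communication. Returns 0 dB
    (no attenuation) if no level qualifies."""
    best = 0
    for att in sorted(levels_db):
        if signal_dbm - att >= sensitivity_dbm:
            best = att
    return best
```

For example, with a -30 dBm signal and a -60 dBm sensitivity floor, the sketch selects 30 dB of attenuation: any more would drop the link below the receiver's threshold.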
Ruogu Zhou and G. Xing. "nShield: A noninvasive NFC security system for mobile devices." In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2014. doi:10.1145/2594368.2594376
Lenin Ravindranath, Suman Nath, J. Padhye, H. Balakrishnan
This paper describes the design, implementation, and evaluation of VanarSena, an automated fault finder for mobile applications ("apps"). The techniques in VanarSena are driven by a study of 25 million real-world crash reports of Windows Phone apps reported in 2012. Our analysis indicates that a modest number of root causes are responsible for many observed failures, but that they occur in a wide range of places in an app, requiring wide coverage of possible execution paths. VanarSena adopts a "greybox" testing method, instrumenting the app binary to achieve both coverage and speed. VanarSena runs on cloud servers: the developer uploads the app binary, and VanarSena then runs several app "monkeys" in parallel to emulate user, network, and sensor behavior, returning a detailed report of crashes and failures. We have tested VanarSena with 3000 apps from the Windows Phone store, finding that 1108 of them had failures; VanarSena uncovered 2969 distinct bugs in existing apps, including 1227 that were not previously reported. Because we anticipate VanarSena being used in regular regression tests, testing speed is important. VanarSena uses two techniques to improve speed. First, it uses a "hit testing" method to quickly explore an app by identifying which user interface controls map to the same execution handlers in the code. Second, it generates a ProcessingCompleted event to accurately determine when to start the next interaction. These features are key benefits of VanarSena's greybox approach.
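The hit-testing idea, as described, amounts to deduplicating UI controls by the handler they invoke. A minimal sketch of that grouping step (names and data shapes are hypothetical, not VanarSena's API):

```python
def hit_test(controls):
    """Given (control_id, handler_name) pairs discovered by instrumenting
    the app binary, keep one representative control per distinct handler,
    so the monkey exercises each code path once rather than tapping every
    control that leads to the same handler."""
    reps = {}
    for control_id, handler in controls:
        reps.setdefault(handler, control_id)  # first control seen wins
    return sorted(reps.values())
```

With three controls where two share an `on_buy` handler, only two interactions are needed instead of three, which is where the speedup comes from.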
Lenin Ravindranath, Suman Nath, J. Padhye, and H. Balakrishnan. "Automatic and scalable fault detection for mobile applications." In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2014. doi:10.1145/2594368.2594377
K. Dhondge, Sejun Song, Young-Wan Jang, Hyungbae Park, Sunae Shin, Baek-Young Choi
As smartphones gain popularity, vulnerable road users (VRUs) are increasingly distracted by activities on their devices, such as listening to music, watching videos, texting, or making calls while walking or bicycling on the road. Despite the development of various high-tech Car-to-Car (C2C) and Car-to-Infrastructure (C2I) communication systems for enhancing traffic safety, protecting VRUs from vehicles still relies heavily on traditional sound warnings. Furthermore, as smartphones become ubiquitous, VRUs are increasingly oblivious to safety-related warning sounds. A traffic accident study shows that the number of headphone-wearing VRUs involved in roadside accidents has increased by 300% in the last 10 years. Although a few Car2Pedestrian communication methods have recently been proposed by various car manufacturers, their practical usage is limited: they mostly require special communication devices to cope with the wide range of mobility, and they assume VRUs' active attention to the communication while walking. We propose a smartphone-based Car2X communication system, named WiFi-Honk, which can alert both VRUs and vehicles to potential collisions, particularly to protect distracted VRUs. WiFi-Honk provides a practical safety mechanism for distracted VRUs without requiring any special device, using only the smartphone's WiFi. WiFi-Honk removes the WiFi association overhead by using beacon-stuffed WiFi communication, replacing the SSID with the smartphone's geographic location, speed, and direction while operating in WiFi Direct/Hotspot mode, and provides an efficient collision estimation algorithm to issue appropriate warnings. Our experimental and simulation studies validate that WiFi-Honk can successfully alert VRUs within a sufficient reaction time, even in high-mobility environments.
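Beacon stuffing works because an 802.11 SSID is just an opaque string of up to 32 bytes, so location, speed, and heading can be packed into it and read by any nearby device without association. A toy encoding to illustrate the constraint (the `WH:` prefix and field layout are invented for this sketch, not WiFi-Honk's actual format):

```python
def stuff_ssid(lat, lon, speed_mps, heading_deg):
    """Pack position, speed, and heading into a beacon SSID.
    802.11 limits SSIDs to 32 bytes, so field precision must be chosen
    to fit: 5 decimal places of latitude/longitude is roughly 1 m."""
    ssid = f"WH:{lat:.5f},{lon:.5f},{speed_mps:.1f},{heading_deg:.0f}"
    if len(ssid.encode()) > 32:
        raise ValueError("SSID exceeds the 32-byte limit")
    return ssid

def parse_ssid(ssid):
    """Recover the broadcast fields on the receiving side."""
    lat, lon, speed, heading = ssid[len("WH:"):].split(",")
    return float(lat), float(lon), float(speed), float(heading)
```

A receiver scanning beacons can decode every nearby broadcaster's state this way and feed it to a collision estimator, with no association handshake in the loop.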
K. Dhondge, Sejun Song, Young-Wan Jang, Hyungbae Park, Sunae Shin, and Baek-Young Choi. "Video: WiFi-Honk: Smartphone-based beacon-stuffed WiFi Car2X communication system for vulnerable road user safety." In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2014. doi:10.1145/2594368.2602430
Mobile devices are integral to the workflows of many organizations working in rural or disconnected contexts. Data is often collected on mobile phones and tablets using tools like Open Data Kit (ODK). However, some applications require users to revisit and update previously collected data, necessitating easy viewing of stored data. To make it easier for organizations to create flexible information services, we present ODK Tables, an Android tool that allows users to enter and curate data on mobile devices. Tables leverages web tools to make mobile app creation simple: it provides abstractions that make the process straightforward and allows app designers to access data through a JavaScript API. App designers can create a custom app using only a small number of HTML and JavaScript files. This enables a custom user interface while leaving storage, data management, and synchronization to the framework. The result is a fully featured Android app built on established web tools with full support for disconnected operation.
S. Sudar, Waylon Brunette, and G. Borriello. "Video: Open Data Kit Tables." In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2014. doi:10.1145/2594368.2602532
High-quality, speaker-location-aware audio capture has traditionally been realized using dedicated microphone arrays, but high cost and lack of portability prevent such systems from being widely adopted. Today's smartphones are more convenient for audio recording, but audio quality is much lower in noisy environments and the speaker's location cannot be readily obtained. In this paper, we design and implement Dia, which leverages smartphone cooperation to overcome these limitations. Dia supports spontaneous setup, allowing a group of users to rapidly assemble an array of smartphones that emulates a dedicated microphone array. It employs a novel framework to accurately synchronize the audio I/O clocks of the smartphones. The synchronized smartphone array further enables autodirective audio capture, i.e., tracking the speaker's location and beamforming the audio capture towards the speaker to improve audio quality. We implement Dia on a testbed of 8 Android phones. Our experiments demonstrate that Dia can synchronize the microphones of different smartphones with sample-level accuracy. It achieves high localization accuracy and beamforming performance similar to that of a microphone array with perfect synchronization.
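Sample-level synchronization matters because the simplest beamformer, delay-and-sum, aligns each microphone's stream by a per-microphone steering delay and averages them, so sound from the target direction adds coherently while off-axis sound partially cancels. A minimal sketch of that step (not Dia's implementation; delays are assumed to be precomputed integer sample offsets from the speaker's estimated location):

```python
def delay_and_sum(streams, delays):
    """Delay-and-sum beamforming over sample-synchronized streams.
    streams: one list of samples per microphone; delays: steering delay
    in samples for each microphone. Output length is limited by the
    shortest delayed stream."""
    n = min(len(s) - d for s, d in zip(streams, delays))
    return [sum(s[d + i] for s, d in zip(streams, delays)) / len(streams)
            for i in range(n)]
```

If the clocks were off by even a few samples, the chosen delays would no longer align the wavefronts, which is why Dia's synchronization framework is the enabling piece.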
Sanjib Sur, Teng Wei, and Xinyu Zhang. "Autodirective audio capturing through a synchronized smartphone array." In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2014. doi:10.1145/2594368.2594380
Taeyeon Ki, Satyaditya Munipalle, Karthik Dantu, Steven Y. Ko, Lukasz Ziarek
Today's mobile applications operate in a diverse set of environments, where it is difficult for a developer to know beforehand what conditions his or her application will be put under. For example, once deployed on an online application store, an application can be downloaded on different types of hardware, ranging from budget smartphones to high-end tablets. In addition, network conditions can vary widely from Wi-Fi to 3G to 4G. Mobile applications also need to co-exist with other applications that compete for resources at different times. Due to this diverse set of operating conditions, it is difficult to understand what problems are occurring in the wild for mobile applications. Moreover, it is even more difficult to reproduce problems in a lab environment where developers can debug the problems. Some platforms support bug reports and stack traces, but they are inadequate in scenarios when operating conditions and inputs are not consistent. To address these issues, we propose Retro, an automated, application-layer record and replay system for Android. Unlike previous record and replay systems, Retro aims to support mobile Android applications with three features. First, Retro provides an automated instrumentation framework that transforms a regular Android application into a traceable application. This means that Retro does not require any change in the Android platform; thus, it enables developers to distribute instrumented applications via online application stores. Through the instrumentation, Retro records application-layer events such as click events, sensor readings, method calls, and return values. In order to reduce the overhead of logging, Retro also uses a selective logging mechanism that decides which event types to log at runtime. Second, Retro provides a replayer that a developer can use in a lab environment to faithfully replay a recorded run. 
To maximize ease of use, Retro seamlessly integrates this replay functionality into Android's existing development workflow by adding the replayer to the Android platform. This means a developer can replay on a regular phone as well as on an emulator. Retro also provides a VCR-like replay interface capable of fast-forwarding and rewinding executions. Third, Retro examines Android-specific issues in enabling record and replay and incorporates design choices tailored to Android. The goal is efficiency and faithfulness: by addressing Android-specific issues, Retro can provide efficient recording as well as faithful replay.
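The record/replay contract described above can be reduced to a small sketch: recording appends application-layer events (clicks, sensor readings, return values) to a log, and replay feeds the logged payloads back in order, flagging any divergence. This is an illustration of the general technique, not Retro's instrumentation; the class and method names are invented.

```python
class Recorder:
    """Log application-layer events in the order the instrumented
    app observes them during a real run."""
    def __init__(self):
        self.log = []

    def record(self, kind, payload):
        self.log.append((kind, payload))


class Replayer:
    """Feed logged payloads back in order; if the app asks for a
    different event type than was recorded, the replay has diverged."""
    def __init__(self, log):
        self._events = iter(log)

    def next_event(self, expected_kind):
        kind, payload = next(self._events)
        if kind != expected_kind:
            raise RuntimeError("replay diverged from recorded run")
        return payload
```

Selective logging, as the abstract describes, would then amount to filtering which `kind` values `record` actually persists at runtime.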
Taeyeon Ki, Satyaditya Munipalle, Karthik Dantu, Steven Y. Ko, and Lukasz Ziarek. "Poster: Retro: An automated, application-layer record and replay for Android." In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2014. doi:10.1145/2594368.2601453
Paarijaat Aditya, V. Erdélyi, Matthew Lentz, E. Shi, Bobby Bhattacharjee, P. Druschel
Mobile social apps provide sharing and networking opportunities based on a user's location, activity, and set of nearby users. A platform for these apps must meet a wide range of communication needs while ensuring users' control over their privacy. In this paper, we introduce EnCore, a mobile platform that builds on secure encounters between pairs of devices as a foundation for privacy-preserving communication. An encounter occurs whenever two devices are within Bluetooth radio range of each other, and generates a unique encounter ID and associated shared key. EnCore detects nearby users and resources, bootstraps named communication abstractions called events for groups of proximal users, and enables communication and sharing among event participants, while relying on existing network, storage and online social network services. At the same time, EnCore puts users in control of their privacy and the confidentiality of the information they share. Using an Android implementation of EnCore and an app for event-based communication and sharing, we evaluate EnCore's utility using a live testbed deployment with 35 users.
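The encounter abstraction, pairing a unique ID with a shared key that only the two devices learn, can be illustrated with a Diffie-Hellman-style exchange followed by hashing the shared secret. This is a toy sketch of the concept, not EnCore's protocol: the group parameters below are deliberately tiny and insecure, and a real deployment would use a standard group or an authenticated key exchange.

```python
import hashlib

# Toy Diffie-Hellman parameters for illustration only.
P, G = 2**61 - 1, 2

def dh_public(private_key):
    """Public value a device broadcasts over Bluetooth."""
    return pow(G, private_key, P)

def encounter(private_key, peer_public):
    """Derive (encounter_id, shared_key) from the exchanged values.
    Both devices compute the same shared secret, so hashing it yields
    the same encounter ID and key on each side."""
    secret = pow(peer_public, private_key, P)
    digest = hashlib.sha256(secret.to_bytes(8, "big")).digest()
    return digest[:8].hex(), digest[8:]
```

Because the ID is derived from a secret only the two devices share, a third party observing the radio exchange cannot link later event communication back to either device.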
{"title":"EnCore: private, context-based communication for mobile social apps","authors":"Paarijaat Aditya, V. Erdélyi, Matthew Lentz, E. Shi, Bobby Bhattacharjee, P. Druschel","doi":"10.1145/2594368.2594374","DOIUrl":"https://doi.org/10.1145/2594368.2594374","url":null,"abstract":"Mobile social apps provide sharing and networking opportunities based on a user's location, activity, and set of nearby users. A platform for these apps must meet a wide range of communication needs while ensuring users' control over their privacy. In this paper, we introduce EnCore, a mobile platform that builds on secure encounters between pairs of devices as a foundation for privacy-preserving communication. An encounter occurs whenever two devices are within Bluetooth radio range of each other, and generates a unique encounter ID and associated shared key. EnCore detects nearby users and resources, bootstraps named communication abstractions called events for groups of proximal users, and enables communication and sharing among event participants, while relying on existing network, storage and online social network services. At the same time, EnCore puts users in control of their privacy and the confidentiality of the information they share. 
Using an Android implementation of EnCore and an app for event-based communication and sharing, we evaluate EnCore's utility using a live testbed deployment with 35 users.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115608314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
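The core of EnCore's encounter abstraction is that two devices in radio range end up with the same unique encounter ID and the same shared key. The abstract does not specify how those values are derived; as a rough illustration only, a Diffie-Hellman style exchange can produce both. The group parameters, function names, and derivation below are assumptions for the sketch, not EnCore's actual protocol (a deployed system would use vetted parameters and authenticated key exchange):

```python
import hashlib
import secrets

# Illustrative group parameters only: a Mersenne prime keeps the sketch
# short and runnable, but is NOT a secure choice for real key exchange.
P = 2**127 - 1
G = 3

def keypair():
    """Generate an ephemeral private/public pair for one device."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

def encounter(priv, my_pub, peer_pub):
    """Derive the (encounter ID, shared key) both devices agree on.

    The shared key comes from the DH shared secret; the encounter ID
    is a hash of both public values in a canonical order, so either
    side computes the same ID.
    """
    shared = pow(peer_pub, priv, P)
    key = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    lo, hi = sorted((my_pub, peer_pub))
    eid = hashlib.sha256(
        lo.to_bytes(16, "big") + hi.to_bytes(16, "big")
    ).hexdigest()[:16]
    return eid, key
```

Because the ID hashes the public values in sorted order and the DH secret is symmetric, both devices independently arrive at the same (ID, key) pair without any third party.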
BodyBeat: a mobile system for sensing non-speech body sounds
Tauhidur Rahman, A. Adams, Mi Zhang, E. Cherry, Bobby Zhou, Huaishu Peng, Tanzeem Choudhury
DOI: 10.1145/2594368.2594386

In this paper, we propose BodyBeat, a novel mobile sensing system for capturing and recognizing a diverse range of non-speech body sounds in real-life scenarios. Non-speech body sounds, such as sounds of food intake, breath, laughter, and cough, contain invaluable information about our dietary behavior, respiratory physiology, and affect. The BodyBeat mobile sensing system consists of a custom-built piezoelectric microphone and a distributed computational framework that utilizes an ARM microcontroller and an Android smartphone. The custom-built microphone is designed to capture subtle body vibrations directly from the body surface without being perturbed by external sounds. The microphone is attached to a 3D-printed neckpiece with a suspension mechanism. The ARM embedded system and the Android smartphone process the acoustic signal from the microphone and identify non-speech body sounds. We have extensively evaluated the BodyBeat mobile sensing system. Our results show that BodyBeat outperforms other existing solutions in capturing and recognizing different types of important non-speech body sounds.
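BodyBeat splits acoustic processing between an ARM microcontroller and a smartphone, which implies cheap frame-level feature extraction ahead of classification. The paper defines its own feature set; purely as a generic sketch (frame size, hop, and feature choice are assumptions, not BodyBeat's pipeline), frame-level RMS energy and zero-crossing rate illustrate the kind of computation a microcontroller can run over the microphone signal in real time:

```python
import math

def frames(samples, size=256, hop=128):
    """Split a mono signal into overlapping fixed-size frames."""
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, hop)]

def rms(frame):
    """Root-mean-square energy of one frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zcr(frame):
    """Fraction of adjacent sample pairs that change sign."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)

def features(samples):
    """Per-frame (energy, zero-crossing rate) feature vectors."""
    return [(rms(f), zcr(f)) for f in frames(samples)]
```

A classifier on the phone would then consume these per-frame vectors; keeping the features this simple is what makes an on-microcontroller first stage plausible.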