Andres Molina-Markham, Ronald A. Peterson, Joseph Skinner, R. Halter, Jacob M. Sorber, D. Kotz
Many of the most compelling mHealth applications are designed to enable long-term health monitoring: for outpatients with chronic medical conditions, for individuals seeking to change behavior, for physicians seeking to quantify and detect behavioral aberrations for early diagnosis, for home-care providers needing to track the movements of elders in their care so they can respond quickly to emergencies, or for athletes monitoring their physiology to improve performance. Developing body-area health network (BAHN) applications that require consistent presence and strong security, without depending on a smartphone and without building extensive computation and communication resources into every BAHN device, presents a critical challenge for the widespread adoption of mHealth technologies. First, the smartphone is not always with its user [1]: many people set aside their phone while at home or while driving, exercising, or bathing. According to a Pew study, a third of smartphones have been lost or stolen [2]! When the smartphone is not present, the BAHN loses its foundation; valuable data could be lost, and critical events may go unrecognized. Second, smartphones have limited means to authenticate or identify the person holding them; if the phone has been lost or stolen, an app could inappropriately disclose personal health information about the phone's owner. Third, smartphones are general-purpose devices, not dedicated to health-related applications; it is thus more difficult to evaluate the safety and security of a system that shares resources with other applications.
Title: "Poster: Enabling computational jewelry for mHealth applications". DOI: 10.1145/2594368.2601454. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2, 2014.
Abhinav Parate, Meng-Chieh Chiu, Chaniel Chadowitz, Deepak Ganesan, E. Kalogerakis
Smoking-induced diseases are known to be the leading cause of death in the United States. In this work, we design RisQ, a mobile solution that leverages a wristband containing a 9-axis inertial measurement unit to capture changes in the orientation of a person's arm, and a machine learning pipeline that processes this data to accurately detect smoking gestures and sessions in real-time. Our key innovations are four-fold: a) an arm trajectory-based method that extracts candidate hand-to-mouth gestures, b) a set of trajectory-based features to distinguish smoking gestures from confounding gestures, including eating and drinking, c) a probabilistic model that analyzes sequences of hand-to-mouth gestures and infers which gestures are part of individual smoking sessions, and d) a method that leverages multiple IMUs placed on a person's body, together with 3D animation of a person's arm, to reduce the burden of self-reports for labeled data collection. Our experiments show that our gesture recognition algorithm can detect smoking gestures with high accuracy (95.7%), precision (91%) and recall (81%). We also report a user study that demonstrates that we can accurately detect the number of smoking sessions with very few false positives over the period of a day, and that we can reliably extract the beginning and end of smoking session periods.
Title: "RisQ: recognizing smoking gestures with inertial sensors on a wristband". DOI: 10.1145/2594368.2594379. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2, 2014.
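The candidate-extraction step (innovation (a) above) can be pictured as a simple segmentation over a wrist-elevation trace. The following is a hypothetical sketch, not the paper's actual pipeline; the elevation signal, threshold, and dwell rule are invented for illustration:

```python
# Hypothetical sketch of RisQ-style candidate extraction: flag windows
# where the wrist rises toward the mouth, dwells, and returns. The
# 0.5 elevation threshold and 3-sample dwell are illustrative assumptions.

def extract_candidates(elevations, dwell_min=3, rise_thresh=0.5):
    """elevations: per-sample wrist elevation (0 = at rest, 1 = at mouth).
    Returns (start, end) index pairs of candidate hand-to-mouth gestures."""
    candidates = []
    start, dwell = None, 0
    for i, e in enumerate(elevations):
        if e >= rise_thresh:
            if start is None:
                start = i
            dwell += 1
        else:
            if start is not None and dwell >= dwell_min:
                candidates.append((start, i - 1))
            start, dwell = None, 0
    if start is not None and dwell >= dwell_min:
        candidates.append((start, len(elevations) - 1))
    return candidates

trace = [0.1, 0.2, 0.7, 0.9, 0.8, 0.6, 0.2, 0.1, 0.6, 0.7, 0.8, 0.9, 0.3]
print(extract_candidates(trace))  # → [(2, 5), (8, 11)]
```

In the paper, such candidates would then be scored by trajectory features and the probabilistic session model rather than accepted directly.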
Eduardo Cuervo, A. Wolman, Landon P. Cox, Kiron Lebeck, Ali Razeen, S. Saroiu, M. Musuvathi
This paper presents Kahawai, a system that provides high-quality gaming on mobile devices, such as tablets and smartphones, by offloading a portion of the GPU computation to server-side infrastructure. In contrast with previous thin-client approaches that require a server-side GPU to render the entire content, Kahawai uses collaborative rendering to combine the output of a mobile GPU and a server-side GPU into the displayed output. Compared to a thin client, collaborative rendering requires significantly less network bandwidth between the mobile device and the server to achieve the same visual quality and, unlike a thin client, collaborative rendering supports disconnected operation, allowing a user to play offline - albeit with reduced visual quality. Kahawai implements two separate techniques for collaborative rendering: (1) a mobile device can render each frame with reduced detail while a server sends a stream of per-frame differences to transform each frame into a high-detail version, or (2) a mobile device can render a subset of the frames while a server provides the missing frames. Both techniques are compatible with the hardware-accelerated H.264 video decoders found on most modern mobile devices. We implemented a Kahawai prototype and integrated it with the idTech 4 open-source game engine, an advanced engine used by many commercial games. In our evaluation, we show that Kahawai can deliver gameplay at an acceptable frame rate, and achieve high visual quality using as little as one-sixth of the bandwidth of the conventional thin-client approach. Furthermore, a 50-person user study with our prototype shows that Kahawai can deliver the same gaming experience as a thin client under excellent network conditions.
Title: "Demo: Kahawai: high-quality mobile gaming using GPU offload". DOI: 10.1145/2594368.2601482. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2, 2014.
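The per-frame delta technique, (1) above, can be sketched as byte-wise patching: the client renders a low-detail frame, and the server supplies the difference needed to reach the high-detail frame. This toy sketch uses invented pixel data and a wraparound byte encoding; the real system encodes deltas as H.264 video, which this does not attempt:

```python
# Toy sketch of delta-based collaborative rendering. Frames are flat
# byte strings here; the mod-256 wraparound keeps each delta in one byte.

def make_delta(low, high):
    # Server side: per-pixel difference from low-detail to high-detail.
    return bytes((h - l) % 256 for l, h in zip(low, high))

def apply_delta(low, delta):
    # Client side: patch the locally rendered low-detail frame.
    return bytes((l + d) % 256 for l, d in zip(low, delta))

low_detail  = bytes([10, 50, 200, 255])   # client's reduced-detail render
high_detail = bytes([12, 48, 230, 0])     # server's full-detail render
delta = make_delta(low_detail, high_detail)
assert apply_delta(low_detail, delta) == high_detail
```

The bandwidth win comes from the delta stream compressing far better than the full frames, since most pixels differ little between detail levels.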
Messaging app developers are beginning to take the security and privacy of their users' communication more seriously. Unfortunately, a recent study has shown that the developers of many popular apps use cryptography incorrectly, making mistakes that can leave encryption schemes trivially broken. For example, the developers of Snapchat use a constant symmetric encryption key hardcoded into the app, and it takes only 12 lines of Ruby to crack the encryption. In this work, we propose ZIPR (Zero-Interaction PRivacy), a system that relieves developers from the task of using cryptography correctly. Designed for text-messaging apps, ZIPR automatically negotiates shared secret keys, and encrypts and decrypts messages as users of these apps chat with each other. No manual intervention is required by users for them to enjoy secure messaging. There are two key ideas behind ZIPR. First, most text-messaging apps follow a basic UI scheme that contains (i) a text box for users to compose their message, (ii) a "send" button which they click on to send the message, and (iii) a list view to display sent and received messages. By intercepting events on these UI elements, ZIPR can transform an outgoing message before it is sent and an incoming message before it is displayed. This allows the system to transparently encrypt and decrypt message data. The second key idea is that ZIPR can reuse the communication channel defined by an app to negotiate a shared secret key between two users. This is done by piggybacking negotiation data on the messages users send to each other. A major advantage of this approach is that ZIPR can avoid the difficult task of establishing user identities. After all, a user of a text-messaging app is likely to carry out a conversation only with someone she knows, and both of them would have signed up for the chat service using some personal data such as their email addresses or phone numbers.
Developers use ZIPR by tagging UI elements; no changes to their source code are required. This is similar to HTTPS, where web developers only need to configure their servers with SSL certificates to encrypt data transmission with their users. However, unlike HTTPS, the end-to-end encryption in ZIPR takes place between the two users carrying out a conversation and not between a server and a user. This ensures that even if the app servers are compromised, users' messages would remain secure. ZIPR is implemented in Android 4.3 and works with existing apps with very few modifications. In this demo, we show that our current prototype works with several apps including Whatsapp, Facebook Messenger, and Skype. These apps required only four, five, and three lines of modification to their UI XML definition files, respectively. In Figure 1, we show a screenshot of Whatsapp running under ZIPR. In the first two messages exchanged between the users, a new shared secret key is negotiated. Subsequently, all following messages are securely transmitted, and these encrypted
Title: "Demo: Zero interaction private messaging with ZIPR". Authors: Ali Razeen, Landon P. Cox. DOI: 10.1145/2594368.2601470. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2, 2014.
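The in-band key negotiation described above can be sketched as a Diffie-Hellman exchange carried in the first two chat messages, after which both sides encrypt with the shared secret. The tiny 32-bit prime and XOR "cipher" below are deliberately insecure stand-ins for illustration; the abstract does not specify ZIPR's actual primitives:

```python
# Toy sketch of ZIPR-style key negotiation piggybacked on the chat
# channel. INSECURE: real systems use vetted groups (2048+ bits) and an
# authenticated cipher, not a 32-bit prime and XOR.
import secrets

P = 4294967291   # small prime, illustration only
G = 5

def keygen():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)   # (private exponent, public value)

# Message 1 (Alice -> Bob) carries a_pub; message 2 (Bob -> Alice)
# carries b_pub. Two messages suffice to establish a shared secret.
a_priv, a_pub = keygen()
b_priv, b_pub = keygen()
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b

# All later messages are transformed before send / after receive.
def xor_crypt(msg: bytes, key: int) -> bytes:
    ks = key.to_bytes(4, "big")
    return bytes(b ^ ks[i % 4] for i, b in enumerate(msg))

ct = xor_crypt(b"hello", shared_a)
assert xor_crypt(ct, shared_b) == b"hello"
```

Note this sketch is unauthenticated, so it is vulnerable to an active man-in-the-middle; ZIPR's argument, per the abstract, is that the chat service's own identity binding (email, phone number) mitigates the identity-establishment problem.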
This work presents a virtual sensing framework that exploits operating-system events for energy-efficient context inference. Specifically, we present a novel set of features that can be extracted from virtual sensors and used to infer the logical status of mobile users, such as isWorking, isSocial, and isStressful. The preliminary results indicate promising inference performance and suggest a wide range of applications for the proposed framework.
Title: "Poster: A virtual sensing framework for mobile phones". Authors: J. Hammer, Tingxin Yan. DOI: 10.1145/2594368.2601457. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2, 2014.
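One speculative way to picture the framework: treat OS events (foreground app, message counts, notification rate) as "virtual sensor" readings and map them to logical-status flags. Every event name, rule, and threshold below is invented for illustration; the poster's actual features are not given in the abstract:

```python
# Hypothetical virtual-sensing rules: OS events in, logical status out.
# No physical sensor is sampled, which is the source of the energy savings.

def infer_status(events):
    fg = events.get("foreground_app", "")
    return {
        "isWorking": fg in {"email", "calendar", "docs"},
        "isSocial": fg in {"messenger", "phone"} or events.get("sms_sent", 0) > 3,
        "isStressful": events.get("notifications_per_min", 0) > 5,
    }

status = infer_status({"foreground_app": "email",
                       "sms_sent": 1,
                       "notifications_per_min": 2})
# status["isWorking"] is True; the other flags are False
```

A deployed system would presumably learn such mappings from labeled data rather than hand-code them.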
Nicholas Capurso, Eric Elsken, Donnell Payne, Liran Ma
In the event of a vehicular accident, there are many scenarios in which the occupants become incapacitated and unable to call for assistance. Systems such as OnStar [1] currently provide accident detection and roadside-assistance services; however, the cost of these proprietary systems and their limited availability across vehicle models restrict their use. We propose an inexpensive and robust system that provides accurate accident detection and emergency-responder notification as our senior capstone project at Texas Christian University. The proposed system contains three primary components: a smartphone, a single-board computer (the Raspberry Pi [2]), and Texas Instruments SensorTags [3], as shown in Figure 1.
Title: "Poster: A robust vehicular accident detection system using inexpensive portable devices". DOI: 10.1145/2594368.2601456. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2, 2014.
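A minimal sketch of the detection idea, assuming a simple threshold on accelerometer magnitude: a crash is flagged when total acceleration spikes far beyond normal driving. The 4g threshold and the sample readings are invented for illustration, not the project's calibrated values:

```python
# Hypothetical crash detector over a single accelerometer sample
# (ax, ay, az) in m/s^2, as a SensorTag or phone IMU might report it.
import math

G = 9.81  # standard gravity, m/s^2

def is_crash(sample, threshold_g=4.0):
    ax, ay, az = sample
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    return magnitude > threshold_g * G

normal_driving = (0.5, 0.3, 9.8)    # ~1g: gravity plus small jitter
hard_impact    = (35.0, 20.0, 15.0) # ~4.4g combined
assert not is_crash(normal_driving)
assert is_crash(hard_impact)
```

A real deployment would debounce over a window of samples and likely fuse GPS speed before notifying responders, to avoid false positives from dropped phones.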
Urban transit systems play a significant role in preventing daily commutes from becoming even more congested than they are now; e.g., transit systems reduced travel delay by 865 million hours and saved 450 million gallons of gas during U.S. commutes in 2013. However, previous theory and practice in urban transit research have typically focused on individual transit modes in isolation, so there is a lack of research on how to integrate real-time data feeds about different transit modes (e.g., taxicab, bus, and subway) and other urban infrastructures (e.g., cellular networks) to improve transit efficiency. To address this issue, we propose and implement a novel architecture for multi-mode transit services based on the urban infrastructures of Shenzhen, China. The key contributions of this poster are as follows.
Title: "Poster: Improving efficiency of metropolitan-scale transit systems with multi-mode data feeds". Authors: Desheng Zhang, T. He. DOI: 10.1145/2594368.2601459. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2, 2014.
K. Murase, Ryo Kanaoka, N. Thepvilojanapong, Tsubasa Ito, T. Leppänen, H. Saito, Y. Tobe
We study an instant messaging system between two smartphones that does not require visually following the smartphone screen. On the sending side, a Morse-code-style touchscreen input encodes the message; on the receiving side, vibration conveys it.
Title: "Demo: Hand-to-hand communication using smartphones". DOI: 10.1145/2594368.2601479. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14), June 2, 2014.
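The channel described above can be sketched as a mapping from tap durations to Morse symbols, replayed on the receiver as short/long vibration pulses. The 200 ms dot/dash boundary and the tiny code table are illustrative assumptions, not the demo's actual parameters:

```python
# Hypothetical Morse-style tap channel: touch durations -> dots/dashes
# on the sender; the same symbols drive short/long vibrations on the
# receiver. Only a few letters are included for brevity.

MORSE = {"A": ".-", "B": "-...", "S": "...", "O": "---"}
REVERSE = {v: k for k, v in MORSE.items()}

def taps_to_symbols(durations_ms, dash_boundary=200):
    # Taps at or above the boundary count as dashes, shorter ones as dots.
    return "".join("-" if d >= dash_boundary else "." for d in durations_ms)

def decode(symbol_groups):
    # One group of symbols per letter; unknown patterns become "?".
    return "".join(REVERSE.get(g, "?") for g in symbol_groups)

# "SOS": three short taps, three long taps, three short taps
groups = [taps_to_symbols(g) for g in ([80, 90, 70], [300, 280, 350], [60, 75, 85])]
print(decode(groups))  # → SOS
```

A real implementation also needs inter-letter gap detection to segment the tap stream into groups, which this sketch takes as given.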
D. Mascarenas, Logan Ott, Aaron Curtis, S. Brambilla, A. Larson, S. Brumby, C. Farrar
The goal of this work is to develop a new autonomous capability for remotely deploying precisely located sensor nodes without damaging them in the process. Over the course of the last decade there has been significant interest in research on deploying sensor networks, driven by the fact that the costs associated with installing sensor networks can be very high. In order to rapidly deploy sensor networks consisting of large numbers of sensor nodes, alternative techniques must be developed to place the nodes in the field. To date, much of the research on sensor-network deployment has focused on strategies that involve the random dispersion of sensor nodes [1]. Other researchers have investigated deployment strategies that use small unmanned aerial helicopters to drop sensor nodes from the air [2]. The problem with these strategies is that sensor nodes often need to be very precisely located for their measurements to be of any use, because the sensors being used may have limited range or need to be properly coupled to the environment they are sensing. Simply dropping sensor nodes also fails for the many applications in which nodes must be deployed horizontally, and many types of sensors must assume a specific pose relative to the object being measured. To address these challenges, we are developing a technology to remotely and rapidly deploy precisely located sensor nodes. The remote sensor placement device can be described as an intelligent gas gun (Figure 1). A laser rangefinder measures the distance to a specified target sensor location; this distance is then used to estimate the amount of energy required to propel the sensor node to the target, with just enough additional energy left over to ensure the node can attach itself to the target of interest. We are currently developing attachment mechanisms for steel, wood, and fiberglass (Figure 2). In this demonstration we will perform a contained, live demo of our prototype pneumatic remote sensor placement device along with some of the prototype sensor attachment mechanisms we are developing.
{"title":"Video: Remote sensor placement","authors":"D. Mascarenas, Logan Ott, Aaron Curtis, S. Brambilla, A. Larson, S. Brumby, C. Farrar","doi":"10.1145/2594368.2602433","DOIUrl":"https://doi.org/10.1145/2594368.2602433","url":null,"abstract":"The goal of this work is to develop a new autonomous capability for remotely deploying precisely located sensor nodes without damaging the sensor nodes in the process. Over the course of the last decade there has been significant interest in research to deploy sensor networks. This research is driven by the fact that the costs associated with installing sensor networks can be very high. In order to rapidly deploy sensor networks consisting of large numbers of sensor nodes, alternative techniques must be developed to place the sensor nodes in the field. To date much of the research on sensor network deployment has focused on strategies that involve the random dispersion of sensor nodes [1]. In addition other researchers have investigated deployment strategies utilizing small unmanned aerial helicopters for dropping sensor networks from the air. [2]. The problem with these strategies is that often sensor nodes need to be very precisely located for their measurements to be of any use. The reason for this could be that the sensor being used only have limited range, or need to be properly coupled to the environment which they are sensing. The problem with simply dropping sensor nodes is that for many applications it is necessary to deploy sensor nodes horizontally. In addition, to properly install many types of sensors, the sensor must assume a specific pose relative to the object being measured. In order to address these challenges we are currently developing a technology to remotely and rapidly deploy precisely located sensor nodes. The remote sensor placement device being developed can be described as an intelligent gas gun (Figure 1). A laser rangefinder is used to measure the distance to a specified target sensor location. 
This distance is then used to estimate the amount of energy required to propel the sensor node to the target location, with just enough additional energy left over to ensure the sensor node is able to attach itself to the target of interest. We are currently in the process of developing attachment mechanisms for steel, wood, and fiberglass (Figure 2). In this demonstration we will perform a contained, live demo of our prototype pneumatic remote sensor placement device along with some prototype sensor attachment mechanisms we are developing.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126867687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Mascarenas, Logan Ott, Aaron Curtis, S. Brambilla, A. Larson, S. Brumby, C. Farrar
The goal of this work is to develop a new autonomous capability for remotely deploying precisely located sensor nodes without damaging the sensor nodes in the process. Over the course of the last decade there has been significant interest in research to deploy sensor networks, driven by the fact that the costs associated with installing sensor networks can be very high. In order to rapidly deploy sensor networks consisting of large numbers of sensor nodes, alternative techniques must be developed to place the sensor nodes in the field. To date, much of the research on sensor network deployment has focused on strategies that involve the random dispersion of sensor nodes [1]. In addition, other researchers have investigated deployment strategies utilizing small unmanned aerial helicopters for dropping sensor networks from the air [2]. The problem with these strategies is that sensor nodes often need to be very precisely located for their measurements to be of any use. The reason for this could be that the sensors being used have only limited range, or need to be properly coupled to the environment that they are sensing. The problem with simply dropping sensor nodes is that for many applications it is necessary to deploy sensor nodes horizontally. In addition, to properly install many types of sensors, the sensor must assume a specific pose relative to the object being measured. 
In order to address these challenges we are currently developing a technology to remotely and rapidly deploy precisely located sensor nodes. The remote sensor placement device being developed can be described as an intelligent gas gun (Figure 1). A laser rangefinder is used to measure the distance to a specified target sensor location. This distance is then used to estimate the amount of energy required to propel the sensor node to the target location, with just enough additional energy left over to ensure the sensor node is able to attach itself to the target of interest. We are currently in the process of developing attachment mechanisms for steel, wood, and fiberglass (Figure 2). In this demonstration we will perform a contained, live demo of our prototype pneumatic remote sensor placement device along with some of the prototype sensor attachment mechanisms we are developing.
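The placement logic described above, measure the range to the target, then choose just enough launch energy to reach it plus a small reserve for the attachment mechanism, can be sketched numerically. This is a minimal illustration assuming a drag-free, level-ground ballistic model; the function name, the default 45° launch angle, and the attachment margin are illustrative assumptions, not details from the authors:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def required_launch_energy(distance_m, mass_kg, launch_angle_deg=45.0,
                           attachment_margin_j=0.5):
    """Estimate the muzzle energy needed to propel a sensor node to a target
    `distance_m` away, plus a small reserve so the node can still engage its
    attachment mechanism on impact. Ignores aerodynamic drag."""
    theta = math.radians(launch_angle_deg)
    # Level-ground ballistic range: R = v^2 * sin(2*theta) / g.
    # Solve for the squared muzzle velocity that reaches the target.
    v_squared = distance_m * G / math.sin(2.0 * theta)
    kinetic_j = 0.5 * mass_kg * v_squared
    return kinetic_j + attachment_margin_j
```

For a 100 g node and a 10 m target at a 45° launch angle, this yields roughly 4.9 J of kinetic energy plus the reserve. A real device would also need to account for drag, the gas gun's efficiency, and the non-level, often horizontal shots the abstract mentions.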
{"title":"Demo: A remote sensor placement device for scalable and precise deployment of sensor networks","authors":"D. Mascarenas, Logan Ott, Aaron Curtis, S. Brambilla, A. Larson, S. Brumby, C. Farrar","doi":"10.1145/2594368.2601481","DOIUrl":"https://doi.org/10.1145/2594368.2601481","url":null,"abstract":"The goal of this work is to develop a new autonomous capability for remotely deploying precisely located sensor nodes without damaging the sensor nodes in the process. Over the course of the last decade there has been significant interest in research to deploy sensor networks. This research is driven by the fact that the costs associated with installing sensor networks can be very high. In order to rapidly deploy sensor networks consisting of large numbers of sensor nodes, alternative techniques must be developed to place the sensor nodes in the field. The goal of this work is to develop a new autonomous capability for remotely deploying precisely located sensor nodes without damaging the sensor nodes in the process. Over the course of the last decade there has been significant interest in research to deploy sensor networks. This research is driven by the fact that the costs associated with installing sensor networks can be very high. In order to rapidly deploy sensor networks consisting of large numbers of sensor nodes, alternative techniques must be developed to place the sensor nodes in the field. To date much of the research on sensor network deployment has focused on strategies that involve the random dispersion of sensor nodes [1]. In addition other researchers have investigated deployment strategies utilizing small unmanned aerial helicopters for dropping sensor networks from the air. [2]. The problem with these strategies is that often sensor nodes need to be very precisely located for their measurements to be of any use. 
The reason for this could be that the sensors being used have only limited range, or need to be properly coupled to the environment that they are sensing. The problem with simply dropping sensor nodes is that for many applications it is necessary to deploy sensor nodes horizontally. In addition, to properly install many types of sensors, the sensor must assume a specific pose relative to the object being measured. In order to address these challenges we are currently developing a technology to remotely and rapidly deploy precisely located sensor nodes. The remote sensor placement device being developed can be described as an intelligent gas gun (Figure 1). A laser rangefinder is used to measure the distance to a specified target sensor location. This distance is then used to estimate the amount of energy required to propel the sensor node to the target location, with just enough additional energy left over to ensure the sensor node is able to attach itself to the target of interest. We are currently in the process of developing attachment mechanisms for steel, wood, and fiberglass (Figure 2). In this demonstration we will perform a contained, live demo of our prototype pneumatic remote sensor placement device along with some prototype sensor attachment mechanisms we are developing.","PeriodicalId":131209,"journal":{"name":"Proceedings of the 12th annual international conference on Mobile systems, applications, and services","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127304094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}