Demo: Real-world Deployment of Seat Occupancy Detectors
Nguyen Huy Hoang Nguyen, Gihan Hettiarachchi, Youngki Lee, Rajesh Krishna Balan
DOI: https://doi.org/10.1145/2938559.2938588

In MobiSys '16 Companion: Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services Companion, Singapore, June 26-30, 2016, p. 103. Available at: https://ink.library.smu.edu.sg/sis_research/3279

Demo: TA$Ker: Campus-Scale Mobile Crowd-Tasking Platform
Nikita Jaiman, Thivya Kandappu, Randy Tandriansyah, Archan Misra
DOI: https://doi.org/10.1145/2938559.2938587

We design and develop TA$Ker, a real-world mobile crowdsourcing platform, to empirically study worker responses to various task recommendation and selection strategies.

Demo: API Virtualization for Platform Openness in Android
Taeyeon Ki, Alexander Simeonov, Karthik Dantu, Steven Y. Ko, Lukasz Ziarek
DOI: https://doi.org/10.1145/2938559.2948646

We propose a novel technique called API virtualization to enable open innovation in Android. API virtualization inserts a shim layer between the Android platform layer and the app layer, as shown in Figure 1, which can intercept any and every platform API call made by an app. In addition, API virtualization allows third-party developers to inject custom code so that they can modify, reimplement, or customize existing Android APIs. This is achieved by (i) injecting a wrapper class for each platform API class that a third-party developer wants to replace, and (ii) rewriting the binary of an app so that the app code uses the wrapper classes instead of the platform API classes.

Our API virtualization is motivated by the lack of openness in mobile systems at the platform level. For example, Android is considered an open platform because its source code is open; third-party developers can easily access and modify the source. When it comes to deploying platform-level modifications, however, there is a stiff barrier: only Google and other mobile vendors such as Samsung and LG have the privilege of distributing platform modifications at scale. In other words, only a select few players control innovation on Android.
{"title":"Poster: Reconstruction Accuracy of Data Perturbation in Mobile Environmental Sensing","authors":"Takao Suzuki, Masaki Ito, K. Sezaki","doi":"10.1145/2938559.2948800","DOIUrl":"https://doi.org/10.1145/2938559.2948800","url":null,"abstract":"","PeriodicalId":298684,"journal":{"name":"MobiSys '16 Companion","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124319029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Demo: Multi-device Gestural Interfaces
Vu H. Tran, Youngki Lee, Archan Misra
DOI: https://doi.org/10.1145/2938559.2938574

Wearable devices such as smart watches and virtual/augmented reality (VR/AR) headsets are becoming affordable and offer interesting capabilities. In our vision, a person may use more than one device at a time, and together these devices form an ecosystem of wearables. We therefore aim to build a system in which an application spreads its input and output across different devices and adapts its input/output streams to different contexts. For example, suppose a user wears a smart watch and a pair of smart glasses and carries a smartphone in a pocket. Normally the application on the phone uses its touchscreen as the input/output modality; but when the phone is in the pocket and the glasses are worn, the application takes gestures from the smart watch as input and uses the display of the smart glasses as output. Another capability such a multi-device system supports is multi-limb gestures: users show roughly equal preference for one-handed and two-handed gestures [2], and two-handed gestures in particular have potential in VR/AR, where they provide a more natural input modality.

Three main challenges must be solved to achieve this goal. The first is latency, which is crucial for interactive applications: in a virtual drumming application, for example, what the user hears affects the timing of the next drum hit. The second is energy. Energy consumption is well known to be the bottleneck of wearable devices, and in a multi-device environment it has to be optimized across all devices. The third is adaptation: requiring the user to reconfigure the devices whenever the context changes is annoying, so automatic adaptability is far more beneficial. For example, when the user starts walking while wearing the smart glasses, the system automatically disables gesture control and shows notifications on the glasses.

In a multi-device system, the architecture is crucial for every device to work efficiently. Combining all data and processing it on a central device forces that device to remain in the system permanently; moreover, transmitting large amounts of data via Bluetooth consumes considerable energy [1]. We therefore deploy a lightweight recognizer on each wearable device to recognize primitive gestures; other devices can acquire these primitive gestures and fuse them into more complex gestures, for example by fusing motion gestures from two devices.

Poster: Discovering User Relationships Through Smartphone Wi-Fi Probes
Jiang Tiantian, Masaki Ito, K. Sezaki
DOI: https://doi.org/10.1145/2938559.2948785

People interact frequently with others in their daily lives, and users with similar mobility patterns are likely to share some degree of social relationship. To discover user relationships, we therefore focus on the similarity of users' behavior patterns. Our first observation is that users with a high interaction frequency are more likely to have a relationship. Our second observation is that users who stay together for a long time are more likely to be related, or to have a potential relationship. Third, we assume that users who always meet at the same place are likely to have some kind of relationship. It is now possible to collect data from smartphones and infer users' social relationships and activities [1]. We propose a new probabilistic model for analyzing human interaction data, represented as a set of proximity links between pairs of users annotated with interaction timestamps. We conduct our analysis with a slice-based approach, in which all links within a 10-minute window are grouped together, forming one slice of the dynamic social-link graph. As shown in Figure 1, we collected raw Wi-Fi Direct proximity links over months of real-life interaction to infer actual events in the life of a community. When Wi-Fi Direct devices sense the environment, they can also detect Wi-Fi access points (red nodes in Figure 1), which can be used to infer the location of users and their interactions. The time and location of an interaction are key to deducing the interaction type.

Poster: 3DBuilder - A Versatile Scheme to Reconstruct 3D Models on Smartphones
Hao Wang, Bin Xiang, Lei Chen, Lin Zhang
DOI: https://doi.org/10.1145/2938559.2938598

3D reconstruction on smartphones has bright prospects in areas such as 3D video, virtual reality, e-commerce, and historic preservation. Although modern smartphones are equipped with high-resolution touchscreens and powerful CPUs and GPUs, their performance is not comparable to that of desktop computers, especially in processing time and power consumption. How to reconstruct 3D models on smartphones, and thereby extend their entertainment value and functionality, is therefore an attractive challenge.

In this work, we propose 3DBuilder, a versatile scheme for reconstructing 3D models on smartphones. It consists of two parts that collaboratively render 3D models. On the client side, an app running on Android smartphones handles image collection, keyframe uploading, 3D-model downloading, rendering, and display. On the server side, cloud servers choose between two different algorithms to reconstruct 3D models in either non-real time or real time. The client and the cloud servers are connected over TCP/IP, sending images and models through cellular networks or Wi-Fi.

Compared to previous work [1][2][3], 3DBuilder makes full use of mobile cloud computing to assist smartphones in 3D reconstruction, making 3D reconstruction useful in many more scenarios.

Poster: Smart-Phones as Active Sensing Platform for Road Safety Solutions
Ashutosh Raina, D. Bansal
DOI: https://doi.org/10.1145/2938559.2948779

The multidimensional sensing capability of smartphone accelerometers and gyroscopes provides detailed information about changes in the magnitude and direction of the forces experienced in 3D space. This yields a finer-grained view of the events occurring during a collision, which can be detected using a signature of such events. Event logs can further provide deep insight for detailed forensic analysis, helping to establish the causes of collisions.

Poster: Overcoming Throughput Degradation in Multi-Radio Cognitive Radio Networks
Tanvir Ahmed Khan, A. Islam
DOI: https://doi.org/10.1145/2938559.2948809

Dynamic spectrum access through Cognitive Radio Networks (CRNs) and exploiting multiple radios on a single node are two well-accepted techniques for enhancing network performance. Using both techniques simultaneously, i.e., augmenting dynamic spectrum access with multiple radios, can improve delay but makes throughput worse. In this paper, we therefore propose a novel approach to improving network throughput in multi-radio cognitive radio networks. Through ns-3 simulations, we show that our approach can boost throughput without degrading delay.

Poster: Energy Efficient Navigation Systems
Rohit Verma
DOI: https://doi.org/10.1145/2938559.2948792

The sudden growth of the smartphone industry in recent years caught localization technology quite unprepared, with GPS emerging as the default solution. GPS, although quite effective, incurs high energy consumption. This opens the door to the several inertial sensors present in smartphones, such as the accelerometer, gyroscope, and compass. A number of works, such as UnLoc [3], use these inertial sensors for pedestrian localization; for outdoor vehicular localization, Dejavu [1] is a good solution. In our research, we plan to use inertial sensors to develop energy-efficient navigation systems and the underlying infrastructure they require. Here, we present a novel, generalized, energy-efficient outdoor navigation scheme, UrbanEye [2].