Pub Date: 2016-03-14 · DOI: 10.1109/PERCOM.2016.7456516
Suyeon Kim, Yohan Chon, Seokjun Lee, H. Cha
Mobile data offloading through WiFi is essential for reducing cellular network traffic. Although extensive attempts have been made at mobile data offloading, previous studies have rarely addressed practical issues such as diverse user contexts. In this paper, we propose a personalized data offloading scheme that provides maximum throughput within a user's cellular budget in daily life. We propose an adaptive policy that considers a user's mobility patterns, cellular budget, and per-application network usage. The proposed system employs an adaptive model to predict the throughput of WiFi APs and the network usage of smartphones. Among three types of predictor models (spatial, temporal, and spatio-temporal), the system automatically chooses the optimal model for each mobile user without user intervention. Experimental results from 10 mobile users show that the proposed system provides 29% higher throughput than previous systems while minimizing extra data charges.
Title: Prediction-based personalized offloading of cellular traffic through WiFi networks. Published in: 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom).
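The per-user model selection the abstract describes could be sketched as picking whichever predictor minimizes held-out error on that user's own history. Everything below is an illustrative assumption: the function names, the 80/20 split, and the mean-absolute-error criterion are not from the paper.

```python
def select_predictor(history, predictors, holdout=0.2):
    """Choose the predictor with the lowest held-out error for one user.

    `history`: list of (features, observed_throughput) pairs.
    `predictors`: maps a model name to a factory fit(train) -> predict.
    The split ratio and error metric are illustrative assumptions,
    not the paper's actual selection procedure.
    """
    split = int(len(history) * (1 - holdout))
    train, test = history[:split], history[split:]
    errors = {
        name: sum(abs(make(train)(x) - y) for x, y in test) / len(test)
        for name, make in predictors.items()
    }
    # Return the name of the model with the smallest mean absolute error.
    return min(errors, key=errors.get)
```

In this framing, "without user intervention" simply means the selection reruns automatically as each user's history grows.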
Pub Date: 2016-03-14 · DOI: 10.1109/PERCOM.2016.7456513
Claudio Martella, Armando Miraglia, M. Cattani, M. Steen
Face-to-face proximity has been successfully leveraged to study relationships between individuals in various contexts: a workplace, a conference, a museum, a fair, a date. We spend time facing the individuals with whom we chat, discuss, work, and play. However, face-to-face proximity is not solely the realm of person-to-person relationships; it can also serve as a proxy for studying person-to-object relationships. We face the objects with which we interact daily, such as a television, kitchen appliances, or a book, as well as more complex objects like a stage where a concert is taking place. In this paper, we focus on the relationship between the visitors of an art exhibition and its exhibits. We design, implement, and deploy a sensing infrastructure based on inexpensive mobile proximity sensors, together with a filtering pipeline, to measure face-to-face proximity between individuals and exhibits. Our pipeline improves measurement accuracy by up to 64% relative to raw data. We use this data to mine visitor behavior and show that group behavior can be recognized by means of data clustering and visualization.
Title: Leveraging proximity sensing to mine the behavior of museum visitors. Published in: 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom).
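One plausible denoising step in a filtering pipeline like the one described above is a sliding-window majority vote over raw facing/not-facing readings, which suppresses spurious single-sample flips. This is a minimal sketch of that idea, not the paper's actual pipeline; the window size is an assumption.

```python
from collections import deque

def smooth_detections(raw, window=5):
    """Majority-vote filter over a sliding window of raw proximity
    readings (True = sensor says the visitor faces the exhibit).
    A toy denoiser illustrating one kind of filtering such a pipeline
    might apply; the real pipeline is not specified here.
    """
    out = []
    buf = deque(maxlen=window)  # keeps only the last `window` samples
    for sample in raw:
        buf.append(sample)
        # Emit True when a strict majority of recent samples are True.
        out.append(sum(buf) > len(buf) // 2)
    return out
```

With `window=3`, an isolated False inside a run of True readings is voted away, which is the behavior you want when a sensor glitches for one sample.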
Pub Date: 2016-03-14 · DOI: 10.1109/PERCOM.2016.7456506
Junghyo Lee, Ayan Banerjee, S. Gupta
In this paper, we propose MT-Diet, a smartphone-based automated diet monitoring system that interfaces a thermal camera with a smartphone and identifies types of food consumed at the click of a button. The system uses thermal maps of a food plate to increase accuracy of segmentation and extraction of food parts, and combines thermal and visual images to improve accuracy in the detection of cooked food. Test results on 80 different types of cooked food show that MT-Diet can isolate food parts with an accuracy of 97.5% and determine the type of food with an accuracy of 88.93%, which is a significant improvement (nearly 25%) over the state-of-the-art.
Title: MT-Diet: Automated smartphone based diet assessment with infrared images. Published in: 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom).
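The intuition behind using thermal maps for segmentation is that freshly cooked food is hotter than the plate and table around it, so a temperature threshold already yields a rough food mask. The sketch below is only that intuition in code; the ambient and margin values are invented for illustration and MT-Diet's real segmentation is certainly more sophisticated.

```python
def segment_food(thermal, ambient=25.0, margin=5.0):
    """Mask pixels hotter than ambient by a margin (degrees Celsius).

    `thermal` is a 2-D list of per-pixel temperatures. A toy stand-in
    for thermal-based plate segmentation; thresholds are assumptions.
    """
    return [[t >= ambient + margin for t in row] for row in thermal]
```

A real system would combine such a mask with the visual image (as the abstract notes) to separate individual food parts within the hot region.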
Pub Date: 2016-03-14 · DOI: 10.1109/PERCOM.2016.7456517
S. Tan, Xiaoliang Wang, G. Maier, Wenzhong Li
Public transport plays an important role in our daily life, and information about passenger satisfaction is valuable for optimizing transportation services. This paper investigates an application of mobile crowd sensing to detect and analyze the riding quality of public transport vehicles. The lightweight system leverages sensors on participants' smartphones to collect information about the surrounding environment. By analyzing the uploaded data at a server, we are able to estimate both aggressive driving behaviors and environmental contexts. A series of data processing methods is employed to mitigate the effects of body movement and road conditions, and crowdsourcing is applied to improve the robustness of the results. We have tested this system on three different modes of public transportation in three cities. The results indicate that the system provides sufficient accuracy (up to 91% with 7 phones) to identify dozens of riding-comfort metrics.
Title: Riding quality evaluation through mobile crowd sensing. Published in: 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom).
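A common baseline for the aggressive-driving estimation mentioned above is to flag accelerometer samples whose gravity-removed magnitude exceeds a threshold (harsh braking, sharp turns). This is a hedged sketch of that baseline, not the paper's method; the 3 m/s² threshold is an assumption.

```python
import math

def harsh_events(accel, threshold=3.0):
    """Count samples whose linear-acceleration magnitude exceeds a
    threshold in m/s^2. `accel` is a list of (ax, ay, az) samples with
    gravity already removed. The threshold is illustrative only.
    """
    events = 0
    for ax, ay, az in accel:
        if math.sqrt(ax * ax + ay * ay + az * az) > threshold:
            events += 1
    return events
```

Crowd aggregation then helps: the same pothole or hard stop should register on several phones at once, while a single rider shifting in their seat should not.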
Pub Date: 2016-03-14 · DOI: 10.1109/PERCOM.2016.7456523
He Du, Zhiwen Yu, Fei Yi, Zhu Wang, Qi Han, Bin Guo
Monitoring group mobility and structure is crucial for public safety management and emergency evacuation. In this paper, we propose a fine-grained mobility classification and structure recognition approach for social groups based on hybrid sensing using mobile devices. First, we present a method that classifies group mobility into four levels: stationary, strolling, walking, and running. Second, by combining mobile sensing and Wi-Fi signals, a novel relative-position estimation algorithm is developed to recognize moving group structures of different shapes. We conducted real-life experiments in which eight volunteers formed two to three small groups moving through a teaching building at different speeds and in different structures. Experimental results show that our approach achieves an accuracy of 99.5% in mobility classification and about 80% in group structure recognition.
Title: Group mobility classification and structure recognition using mobile devices. Published in: 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom).
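The four-level classification named in the abstract can be pictured as mapping an estimated group speed onto ranges. The cut-off speeds below are illustrative assumptions chosen to roughly match everyday walking speeds, not the paper's learned boundaries.

```python
def classify_mobility(speed_mps):
    """Map an estimated group speed (m/s) to one of the four mobility
    levels named in the abstract. The boundaries are assumptions for
    illustration, not the paper's trained thresholds.
    """
    if speed_mps < 0.2:
        return "stationary"
    if speed_mps < 0.9:
        return "strolling"
    if speed_mps < 2.0:
        return "walking"
    return "running"
```

The paper's hybrid-sensing approach presumably does better than fixed thresholds precisely because sensor-derived speed is noisy indoors, where GPS is unavailable.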