Siripen Pongpaichet, V. Singh, R. Jain, A. Pentland
Geo-fencing has recently been applied to multiple applications, including media recommendation, advertisements, wildlife monitoring, and recreational activities. However, current geo-fencing systems work with static geographical boundaries. Situation Fencing allows these boundaries to vary automatically based on situations derived from a combination of global and personal data streams. We present a generic approach for situation fencing and demonstrate how it can be operationalized in practice. The results obtained in a personalized allergy alert application are encouraging and open the door to building thousands of similar applications on the same framework in the near future.
{"title":"Situation fencing: making geo-fencing personal and dynamic","authors":"Siripen Pongpaichet, V. Singh, R. Jain, A. Pentland","doi":"10.1145/2509352.2509401","DOIUrl":"https://doi.org/10.1145/2509352.2509401","url":null,"abstract":"Geo-fencing has recently been applied to multiple applications including media recommendation, advertisements, wildlife monitoring, and recreational activities. However current geo-fencing systems work with static geographical boundaries. Situation Fencing allows for these boundaries to vary automatically based on situations derived by a combination of global and personal data streams. We present a generic approach for situation fencing, and demonstrate how it can be operationalized in practice. The results obtained in a personalized allergy alert application are encouraging and open door for building thousands of similar applications using the same framework in near future.","PeriodicalId":173211,"journal":{"name":"PDM '13","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126247494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We describe a personal informatics system for Android smartphones that provides personal data on mobility and social interactions through interactive visualization interfaces. The mobile app has been made available to N=136 first-year university students as part of a study of social network interactions in a university campus setting. We describe the design of the interactive visualization interfaces that enable participants to gain insights into their own behaviors. We report initial findings based on device logging of participant interactions with the visualization app on the smartphone and on a usage survey answered by 45 (33%) of the participants, indicating that the system allowed new insights into behavioral patterns.
{"title":"A mobile personal informatics system with interactive visualizations of mobility and social interactions","authors":"Andrea Cuttone, S. Lehmann, J. E. Larsen","doi":"10.1145/2509352.2509397","DOIUrl":"https://doi.org/10.1145/2509352.2509397","url":null,"abstract":"We describe a personal informatics system for Android smartphones that provides personal data on mobility and social interactions through interactive visualization interfaces. The mobile app has been made available to N=136 first year university students as part of a study of social network interactions in a university campus setting. The design of the interactive visualization interfaces enabling the participants to gain insights into own behaviors is described. We report initial findings based on device logging of participant interactions with the interactive visualization app on the smartphone and from a survey on usage with response from 45 (33%) of the participants indicating that the system allowed new insights into behavioral patterns.","PeriodicalId":173211,"journal":{"name":"PDM '13","volume":"236 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133794364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper examines an increasingly relevant topic in the multimedia community: wearable devices that record the physical activity of a user throughout the day. While activity and other accelerometry-based data have been shown to be effective in various multimedia applications -- from context-aware music retrieval to approximating carbon footprint -- the most promising applications of these devices target healthcare and personal fitness. Recently, several low-cost devices have become available to consumers. In this paper, we evaluate the most popular devices on the market (in particular, Fitbit and Nike+) and report our findings in terms of accuracy, type of data provided, available APIs, and user experience. This information is useful for researchers considering incorporating these activity-based data streams into their research, and it gives a better idea of the devices' reliability and accuracy for use in life-logging and other multimedia applications.
{"title":"An evaluation of wearable activity monitoring devices","authors":"Fangfang Guo, Yu Li, M. Kankanhalli, M. S. Brown","doi":"10.1145/2509352.2512882","DOIUrl":"https://doi.org/10.1145/2509352.2512882","url":null,"abstract":"This paper examines an increasingly relevant topic in the multimedia community of wearable devices that record the physical activity of a user throughout a day. While activity and other accelerometry-based data has been shown effective in various multimedia applications -- from context-aware music retrieval to approximating carbon footprint -- the most promising role of these target application for healthcare and personal fitness. Recently, several low-cost devices have become available to consumers. In this paper, we perform an evaluation on the most popular devices available on the market (in particular Fitbit and Nike+) and report our findings in terms of accuracy, type of data provided, available APIs, and user experience. This information is useful for researchers considering incorporating these activity-based data streams into their research and for getting a better idea of the reliability and accuracy for use in life-logging and other multimedia applications.","PeriodicalId":173211,"journal":{"name":"PDM '13","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132006867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Long-Van Nguyen-Dinh, M. Rossi, Ulf Blanke, G. Tröster
The growing ubiquity of sensors in mobile phones has opened many opportunities for sensing personal daily activity. Most context recognition systems require cumbersome preparation: collecting and manually annotating training examples. Recently, mining online crowd-generated repositories for free annotated training data has been proposed as a way to build context models. A crowd-generated dataset can capture a large variety both in the number of classes and in intra-class diversity, but it may not cover all user-specific contexts. Thus, performance is often significantly worse than that of user-centric training. In this work, we exploit for the first time the combination of a crowd-generated audio dataset available on the web and unlabeled audio data obtained from users' mobile phones. We use a semi-supervised Gaussian mixture model to combine labeled data from the crowd-generated database with unlabeled personal recordings, refining generic knowledge with data from the user to train a personalized model. The technique has been tested with 7 users on mobile phones, with a total of 14 days of data and up to 9 context classes. Preliminary results show that the semi-supervised model can improve recognition accuracy by up to 21%.
{"title":"Combining crowd-generated media and personal data: semi-supervised learning for context recognition","authors":"Long-Van Nguyen-Dinh, M. Rossi, Ulf Blanke, G. Tröster","doi":"10.1145/2509352.2509396","DOIUrl":"https://doi.org/10.1145/2509352.2509396","url":null,"abstract":"The growing ubiquity of sensors in mobile phones has opened many opportunities for personal daily activity sensing. Most context recognition systems require a cumbersome preparation by collecting and manually annotating training examples. Recently, mining online crowd-generated repositories for free annotated training data has been proposed to build context models. A crowd-generated dataset can capture a large variety both in terms of class number and in intra-class diversity, but may not cover all user-specific contexts. Thus, performance is often significantly worse than that of user-centric training. In this work, we exploit for the first time the combination of both crowd-generated audio dataset available in the web and unlabeled audio data obtained from users' mobile phones. We use a semi-supervised Gaussian mixture model to combine labeled data from the crowd-generated database and unlabeled personal recording data. Hereby we refine generic knowledge with data from the user to train a personalized model. This technique has been tested on 7 users on mobile phones with a total data of 14 days and up to 9 context classes. Preliminary results show that a semi-supervised model can improve the recognition accuracy up to 21%.","PeriodicalId":173211,"journal":{"name":"PDM '13","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115013095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We are seeing a phenomenal increase in the amount of multimodal big data produced every day. However, the data by themselves are useless unless they serve a practical purpose for human beings. This panel brings together some of the leading experts on multimodal big data and personal data analysis to discuss the utility and relevance of big multimedia data for personal applications. In particular, the panel will discuss open opportunities for leveraging distributed multimedia in close synergy with the personal data being produced by various Quantified Self technologies.
{"title":"\"what's in it for me?\": how can big multimedia aid quantified-self applications","authors":"R. Jain","doi":"10.1145/2509352.2509403","DOIUrl":"https://doi.org/10.1145/2509352.2509403","url":null,"abstract":"1. PANEL OVERVIEW We are seeing a phenomenal increase in the amount of multimodal big data being produced every day. However, the data by themselves are useless unless they serve a practical purpose for human beings. This panel brings together some of the leading experts on multimodal big data and personal data analysis to discuss the questions of utility and relevance of big multimedia data for personal applications. In particular the panel will discuss the open opportunities for leveraging the distributed multimedia in close synergy with personal data being produced by various Quantified-Self technologies.","PeriodicalId":173211,"journal":{"name":"PDM '13","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122232502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abdullah Almaatouq, Fahad Alhasoun, Riccardo Campari, A. Alfaris
Extensive theoretical work attempts to address the role of social norms in describing, explaining, and predicting human behavior. However, traditional methods of assessing this effect can be expensive and time consuming. In this work, we utilize data generated from call detail records (CDRs) and geo-tagged tweets (GTTs) as proxies for understanding human activity patterns. We present preliminary results on the effect of social norms on communication patterns during different times of the day, including prayer times. Specifically, we investigate variations in population behavioral patterns with respect to social norms between asynchronous (i.e., Twitter) and synchronous (i.e., phone call) communication mediums in the city of Riyadh, the capital of Saudi Arabia.
{"title":"The influence of social norms on synchronous versus asynchronous communication technologies","authors":"Abdullah Almaatouq, Fahad Alhasoun, Riccardo Campari, A. Alfaris","doi":"10.1145/2509352.2509398","DOIUrl":"https://doi.org/10.1145/2509352.2509398","url":null,"abstract":"Extensive theoretic work attempts to address the role of social norms in describing, explaining and predicting human behaviors. However, traditional methods of assessing the effect can be expensive and time consuming. In this work, we utilize data generated by the call detail records (CDRs) and geo-tagged Tweets (GTTs) as enabling proxies for understanding human activity patterns. We present preliminary results on the effect of social norms on communication patterns during different times of the day, including prayer times. Specifically, we investigate the variations in population behavioral patterns with respect to social norms between asynchronous (i.e., Twitter) and synchronous (i.e., phone calls) communication mediums in the city of Riyadh, the capital of Saudi Arabia.","PeriodicalId":173211,"journal":{"name":"PDM '13","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130410444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We live in a data-rich world. Not only are most (if not all) of our interactions in the digital world permanently stored, but the vast majority of our interactions with the physical world also leave a digital trace. The opportunities around mining these huge amounts of data are immense. In fact, I would claim that the solution to many of the challenges humanity faces today will involve analyzing these data. In my talk, I will present recent work at Telefonica Research that involves analyzing both personal and big data to enable a range of applications, including smart cities, personal multimedia storytelling, and personalized context-aware mobile recommendations.
{"title":"The power of the data: opportunities and challenges in big and personal data mining","authors":"Nuria Oliver","doi":"10.1145/2509352.2509402","DOIUrl":"https://doi.org/10.1145/2509352.2509402","url":null,"abstract":"We live in a data rich world. Not only most (it not all) of our interactions in the digital world are permanently stored, but the vast majority of our interactions with the physical world also leave a digital trace. The opportunities around mining these huge amounts of data are immense. In fact, I would claim that the solution of many of the challenges that humanity faces today will involve analyzing this data.\u0000 In my talk, I will present recent work at Telefonica Research that involves analyzing both personal and big data to enable a range of applications, including smart cities, personal multimedia story-telling and personalized context-aware mobile recommendations.","PeriodicalId":173211,"journal":{"name":"PDM '13","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114166698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Most people already use phones with myriad sensors that continuously generate data streams related to most aspects of their lives. By detecting events in basic data streams and correlating and reasoning among them, it is possible to create a chronicle of personal life. We call this chronicle a Personicle and use it to build an individual Health Persona. Such a Health Persona may then be used for understanding societal health as well as for making decisions in emerging Social Life Networks. In this paper, we present a framework that collects, manages, and correlates personal data from heterogeneous data sources and detects events happening at the personal level to build a health persona. We use several data streams, such as motion tracking, location tracking, activity level, and personal calendar data. We illustrate how two recognition algorithms, based on Formal Concept Analysis and decision trees, can be applied to the life-event detection problem. We also demonstrate the applicability of the framework on simulated data from the Moves app, GPS, the Nike FuelBand, and Google Calendar. We expect to soon have results for several individuals using real data streams from disparate wearable and smartphone sensors.
{"title":"Building health persona from personal data streams","authors":"Laleh Jalali, R. Jain","doi":"10.1145/2509352.2509400","DOIUrl":"https://doi.org/10.1145/2509352.2509400","url":null,"abstract":"Most people already use phones with myriad sensors that continuously generate data streams related to most aspects of their life. By detecting events in basic data streams and correlating and reasoning among them, it is possible to create a chronicle of personal life. We call it Personicle and use this to build individual Health Persona. Such Health Persona may then be used for understanding societal health as well as making decisions in emerging Social Life Networks. In this paper, we present a framework that collects, manages, and correlates personal data from heterogeneous data sources and detects events happening at personal level to build health persona. We use several data streams such as motion tracking, location tracking, activity level, and personal calendar data. We illustrate how two recognition algorithms based on Formal Concept Analysis and Decision Trees can be applied to Life Event detection problem. Also, we demonstrate the applicability of this framework on simulated data from Moves app, GPS, Nike fuel band, and Google calendar. We expect to soon have results for several individuals using real data streams from disparate wearable and smart phone sensors.","PeriodicalId":173211,"journal":{"name":"PDM '13","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130827854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}