{"title":"An RF doormat for tracking people's room locations","authors":"Juhi Ranjan, Yu Yao, K. Whitehouse","doi":"10.1145/2493432.2493514","DOIUrl":"https://doi.org/10.1145/2493432.2493514","url":null,"abstract":"Many occupant-oriented smarthome applications, such as automated lighting, heating and cooling, and activity recognition, need room location information for residents within a building. Surveillance-based tracking systems used to track people in commercial buildings are privacy-invasive in homes. In this paper, we present the RF Doormat, an RF threshold system that can accurately track people's room locations by monitoring their movement through the doorways in the home. We also present a set of guidelines and a visualization to easily and rapidly set up the RF Doormat system on any doorway. To evaluate our system, we perform 580 doorway crossings across 11 different doorways in a home. Results indicate that our system can detect doorway crossings made by people with an average accuracy of 98%. To our knowledge, the RF Doormat is the first highly accurate room location tracking system that can be used for long time periods without the need for privacy-invasive cameras.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125834401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
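The abstract does not spell out how an "RF threshold system" turns signal readings into doorway crossings. A minimal sketch of one plausible scheme, assuming a sensor on each side of the doorway whose link RSSI dips when a body passes; the layout, threshold value, and function names are hypothetical, not the paper's actual algorithm:

```python
def first_dip(trace, threshold):
    # Index of the first sample where RSSI drops below the threshold, or None.
    for i, value in enumerate(trace):
        if value < threshold:
            return i
    return None

def crossing_direction(rssi_a, rssi_b, threshold=-60):
    """Infer a doorway crossing and its direction from two RSSI traces.

    rssi_a / rssi_b are time-aligned RSSI samples (dBm) from links mounted
    on the two sides of a doorway; a passing body attenuates each link in
    turn, so whichever side dips below the threshold first indicates the
    side the person entered from.  Returns 'A->B', 'B->A', or None.
    """
    ta = first_dip(rssi_a, threshold)
    tb = first_dip(rssi_b, threshold)
    if ta is None and tb is None:
        return None  # no crossing detected
    if tb is None or (ta is not None and ta <= tb):
        return 'A->B'
    return 'B->A'
```

Per-room tracking would then follow by replaying the sequence of crossing events against the home's room-adjacency graph.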
{"title":"Session details: Location privacy","authors":"Frank Dürr","doi":"10.1145/3254797","DOIUrl":"https://doi.org/10.1145/3254797","url":null,"abstract":"","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134332620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Headio: zero-configured heading acquisition for indoor mobile devices through multimodal context sensing","authors":"Zheng Sun, Shijia Pan, Yu-Chi Su, Pei Zhang","doi":"10.1145/2493432.2493434","DOIUrl":"https://doi.org/10.1145/2493432.2493434","url":null,"abstract":"Heading information is becoming widely used in ubiquitous computing applications for mobile devices. Digital magnetometers, also known as geomagnetic field sensors, provide absolute device headings relative to the earth's magnetic north. However, magnetometer readings are prone to significant errors in indoor environments due to magnetic interference from sources such as printers, walls, or metallic shelves. These errors adversely affect the performance and user experience of applications that require device headings. In this paper, we propose Headio, a novel approach to providing reliable device headings in indoor environments. Headio achieves this by aggregating ceiling images of an indoor environment and by using computer vision-based pattern detection techniques to provide directional references. To achieve zero-configured and energy-efficient heading sensing, Headio also utilizes multimodal sensing techniques to dynamically schedule sensing tasks. To fully evaluate the system, we implemented Headio on both Android and iOS mobile platforms, and performed comprehensive experiments in both small-scale controlled and large-scale public indoor environments. Evaluation results show that Headio consistently provides accurate heading detection performance in diverse situations, achieving better than 1 degree average heading accuracy, up to a 33X improvement over existing techniques.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132420586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
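The "ceiling pattern as directional reference" idea can be sketched as a dominant-orientation estimate. Assuming line angles have already been extracted from a ceiling image (e.g. by a Hough transform, not shown here), a circular mean on 4θ handles the 90-degree symmetry of typical ceiling grids; this is a generic illustration, not Headio's actual pipeline:

```python
import math

def dominant_orientation(angles_deg):
    """Estimate the dominant ceiling-pattern orientation, in degrees mod 90.

    Ceiling grids (tiles, light fixtures) look identical every 90 degrees,
    so we average on the 4*theta circle: noisy line angles near, say, 45
    and 135 degrees then reinforce each other instead of cancelling out.
    """
    s = sum(math.sin(math.radians(4 * a)) for a in angles_deg)
    c = sum(math.cos(math.radians(4 * a)) for a in angles_deg)
    return (math.degrees(math.atan2(s, c)) / 4) % 90
```

A device's heading offset against this reference, anchored once to true north, would then be drift-free indoors.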
{"title":"Your reactions suggest you liked the movie: automatic content rating via reaction sensing","authors":"Xuan Bao, Songchun Fan, A. Varshavsky, Kevin A. Li, Romit Roy Choudhury","doi":"10.1145/2493432.2493440","DOIUrl":"https://doi.org/10.1145/2493432.2493440","url":null,"abstract":"This paper describes a system for automatically rating content - mainly movies and videos - at multiple granularities. Our key observation is that the rich set of sensors available on today's smartphones and tablets could be used to capture a wide spectrum of user reactions while users are watching movies on these devices. Examples range from acoustic signatures of laughter to detect which scenes were funny, to the stillness of the tablet indicating intense drama. Moreover, unlike in most conventional systems, these ratings need not result in just one numeric score, but could be expanded to capture the user's experience. We combine these ideas into an Android-based prototype called Pulse, and test it with 11 users, each of whom watched 4 to 6 movies on Samsung tablets. Encouraging results show consistent correlation between the users' actual ratings and those generated by the system. With more rigorous testing and optimization, Pulse could be a candidate for real-world adoption.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115633452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: At work","authors":"A. Dey","doi":"10.1145/3254779","DOIUrl":"https://doi.org/10.1145/3254779","url":null,"abstract":"","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124792809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards more natural digital content manipulation via user freehand gestural interaction in a living room","authors":"Sang-Su Lee, Jeonghun Chae, Hyunjeong Kim, Youn-kyung Lim, Kun-Pyo Lee","doi":"10.1145/2493432.2493480","DOIUrl":"https://doi.org/10.1145/2493432.2493480","url":null,"abstract":"Advances in dynamic gesture recognition technologies now make it possible to investigate freehand input techniques. This study sought to understand how users manipulate digital content on a distant screen through hand gesture interaction in a living room environment. While many existing studies investigate freehand input techniques, we developed and applied a novel study methodology that combines an existing user elicitation study with a conventional Wizard-of-Oz study involving another, non-technical user who provides feedback. The study generated many useful issues and implications, not covered in previous work, for making freehand gesture interaction design more natural in a living room environment. Furthermore, we could observe how the initial user-defined gestures change over time.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122107097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Context sensing","authors":"T. Ploetz","doi":"10.1145/3254778","DOIUrl":"https://doi.org/10.1145/3254778","url":null,"abstract":"","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114489053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NLify: lightweight spoken natural language interfaces via exhaustive paraphrasing","authors":"Seungyeop Han, Matthai Philipose, Y. Ju","doi":"10.1145/2493432.2493458","DOIUrl":"https://doi.org/10.1145/2493432.2493458","url":null,"abstract":"This paper presents the design and implementation of a programming system that enables third-party developers to add spoken natural language (SNL) interfaces to standalone mobile applications. The central challenge is to create statistical recognition models that are accurate and resource-efficient in the face of the variety of natural language, while requiring little specialized knowledge from developers. We show that given a few examples from the developer, it is possible to elicit comprehensive sets of paraphrases of the examples using internet crowds. The exhaustive nature of these paraphrases allows us to use relatively simple, automatically derived statistical models for speech and language understanding that perform well without per-application tuning. We have realized our design fully as an extension to the Visual Studio IDE. Based on a new benchmark dataset with 3500 spoken instances of 27 commands from 20 subjects and a small developer study, we establish the promise of our approach and the impact of various design choices.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114727139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
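As a rough illustration of the "relatively simple, automatically derived statistical models" trained on exhaustive crowd paraphrases, here is a sketch of a unigram naive-Bayes intent classifier. The model, function names, and sample commands are illustrative assumptions, not NLify's implementation:

```python
import math
from collections import Counter

def train(paraphrases):
    """Fit a unigram naive-Bayes intent model from crowd paraphrases.

    paraphrases maps command -> list of example utterances.  This is a
    stand-in for the paper's language-understanding model: exhaustive
    paraphrase sets let even this simple model cover most phrasings.
    """
    models = {}
    for cmd, utterances in paraphrases.items():
        counts = Counter(w for u in utterances for w in u.lower().split())
        total = sum(counts.values())
        vocab = len(counts) + 1  # add-one smoothing denominator
        models[cmd] = (counts, total, vocab)
    return models

def classify(models, utterance):
    # Pick the command whose model gives the utterance the highest likelihood.
    def log_lik(model):
        counts, total, vocab = model
        return sum(math.log((counts[w] + 1) / (total + vocab))
                   for w in utterance.lower().split())
    return max(models, key=lambda cmd: log_lik(models[cmd]))
```

The key design point echoed here is that accuracy comes from the breadth of the crowd-collected training set rather than from model sophistication.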
{"title":"Ambient recommendations in the pop-up shop","authors":"Gonzalo Garcia-Perate, N. Dalton, R. Dalton, Duncan Wilson","doi":"10.1145/2493432.2494525","DOIUrl":"https://doi.org/10.1145/2493432.2494525","url":null,"abstract":"In this paper we present the design and first-stage analysis of a purpose-built smart pop-up wine shop. Our shop learns from visitors' choices and recommends wine using collaborative filtering and ambient feedback displays integrated into its furniture. Our ambient recommender system was tested in a controlled laboratory environment. We report on qualitative feedback and a between-subjects study testing the influence the system had on wine choice behavior. Participants reported that the system was helpful, and results from our empirical analysis suggest it influenced buying behavior.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116032880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
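A minimal sketch of the item-based collaborative filtering idea over visitors' choices; the implicit one-per-visitor data layout and function names are hypothetical, not the shop's actual recommender:

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two sparse implicit-feedback vectors (dicts).
    num = sum(u[k] * v[k] for k in set(u) & set(v))
    den = (sqrt(sum(x * x for x in u.values())) *
           sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(choices, visitor_wines, k=1):
    """Item-based collaborative filtering over past visitor choices.

    choices maps wine -> {visitor_id: 1}, built from purchases; wines the
    visitor has not picked are ranked by their summed cosine similarity
    to the wines this visitor already chose.
    """
    scores = {w: sum(cosine(choices[w], choices[s]) for s in visitor_wines)
              for w in choices if w not in visitor_wines}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In the shop, the top-ranked item would drive the ambient displays in the furniture rather than an explicit on-screen list.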
{"title":"If you see something, swipe towards it: crowdsourced event localization using smartphones","authors":"W. Ouyang, Animesh Srivastava, P. Prabahar, Romit Roy Choudhury, Merideth A. Addicott, F. J. McClernon","doi":"10.1145/2493432.2493455","DOIUrl":"https://doi.org/10.1145/2493432.2493455","url":null,"abstract":"This paper presents iSee, a crowdsourced approach to detecting and localizing events in outdoor environments. Upon spotting an event, an iSee user only needs to swipe on her smartphone's touchscreen in the direction of the event. These swiping directions are often inaccurate, and so are the compass measurements. Moreover, the swipes do not encode any notion of how far the event is located from the user, nor is the user's GPS location accurate. Furthermore, multiple events may occur simultaneously, and users do not explicitly indicate which events they are swiping towards. Nonetheless, as more users start contributing data, we show that our proposed system is able to quickly detect and estimate the locations of the events. We have implemented iSee on Android phones and have experimented in real-world settings by planting virtual \"events\" on our campus and asking volunteers to swipe on seeing one. Results show that iSee performs appreciably better than established triangulation and clustering-based approaches in terms of localization accuracy, detection coverage, and robustness to sensor noise.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122477935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
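The swipe-aggregation idea can be illustrated as bearing-only localization: each swipe contributes a ray from the user's (noisy) position, and a least-squares intersection of the rays estimates the event location. This is the generic textbook formulation that iSee is compared against, not necessarily iSee's own estimator:

```python
import math

def localize(observations):
    """Least-squares intersection of bearing rays.

    observations: list of (x, y, bearing_deg) tuples, one per swipe --
    the user's position and the swipe direction as a math angle from the
    x-axis.  Each ray with unit normal n_i through point x_i contributes
    the constraint n_i . (p - x_i) = 0; we solve the resulting 2x2
    normal equations sum_i n_i n_i^T p = sum_i n_i (n_i . x_i).
    """
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for x, y, bearing in observations:
        t = math.radians(bearing)
        nx, ny = -math.sin(t), math.cos(t)   # unit normal to the ray
        A[0][0] += nx * nx; A[0][1] += nx * ny
        A[1][0] += nx * ny; A[1][1] += ny * ny
        proj = nx * x + ny * y
        b[0] += nx * proj
        b[1] += ny * proj
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    px = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    py = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return px, py
```

With many noisy swipes, the least-squares solution averages out bearing and GPS error; handling multiple simultaneous events would additionally require clustering the rays before solving.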