
Latest publications: Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing

An RF doormat for tracking people's room locations
Juhi Ranjan, Yu Yao, K. Whitehouse
Many occupant-oriented smart-home applications, such as automated lighting, heating and cooling, and activity recognition, need the room locations of residents within a building. Surveillance-based tracking systems used to track people in commercial buildings are privacy-invasive in homes. In this paper, we present the RF Doormat, an RF threshold system that can accurately track people's room locations by monitoring their movement through the doorways of the home. We also present a set of guidelines and a visualization to easily and rapidly set up the RF Doormat system on any doorway. To evaluate our system, we perform 580 doorway crossings across 11 different doorways in a home. Results indicate that our system can detect doorway crossings with an average accuracy of 98%. To our knowledge, the RF Doormat is the first highly accurate room-location tracking system that can be used for long periods without privacy-invasive cameras.
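To make the idea concrete, below is a minimal, hypothetical sketch of threshold-style doorway-crossing tracking: a per-doorway signal threshold flags a crossing, and a map of which two rooms each doorway connects keeps a running room estimate. The threshold value, doorway topology, and RSSI trace are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: threshold-based doorway-crossing detection feeding a
# running room-location estimate. Not the paper's actual signal processing.

DOORWAYS = {
    "kitchen-hall": ("kitchen", "hall"),
    "hall-bedroom": ("hall", "bedroom"),
}

def detect_crossing(rssi_trace, threshold=-60):
    """Treat a dip of the RSSI trace below the threshold (illustrative value)
    as a body passing over the doormat."""
    return min(rssi_trace) < threshold

def update_room(current_room, doorway_id):
    """Move the tracked person to the room on the other side of the doorway."""
    a, b = DOORWAYS[doorway_id]
    if current_room == a:
        return b
    if current_room == b:
        return a
    return current_room  # crossing at a doorway not adjacent to the estimate

room = "kitchen"
if detect_crossing([-52, -58, -71, -64, -55]):
    room = update_room(room, "kitchen-hall")
print(room)  # -> "hall"
```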
{"title":"An RF doormat for tracking people's room locations","authors":"Juhi Ranjan, Yu Yao, K. Whitehouse","doi":"10.1145/2493432.2493514","DOIUrl":"https://doi.org/10.1145/2493432.2493514","url":null,"abstract":"Many occupant-oriented smarthome applications such as automated lighting, heating and cooling, and activity recognition need room location information of residents within a building. Surveillance based tracking systems used to track people in commercial buildings, are privacy invasive in homes. In this paper, we present the RF Doormat - a RF threshold system that can accurately track people's room locations by monitoring their movement through the doorways in the home. We also present a set of guidelines and a visualization to easily and rapidly setup the RF-Doormat system on any doorway. To evaluate our system, we perform 580 doorway crossings across 11 different doorways in a home. Results indicate that our system can detect doorway crossings made by people with an average accuracy of 98%. To our knowledge, the RF Doormat is the first highly accurate room location tracking system that can be used for long time periods without the need for privacy invasive cameras.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125834401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
Session details: Location privacy
Frank Dürr
{"title":"Session details: Location privacy","authors":"Frank Dürr","doi":"10.1145/3254797","DOIUrl":"https://doi.org/10.1145/3254797","url":null,"abstract":"","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134332620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Headio: zero-configured heading acquisition for indoor mobile devices through multimodal context sensing
Zheng Sun, Shijia Pan, Yu-Chi Su, Pei Zhang
Heading information is widely used in ubiquitous computing applications for mobile devices. Digital magnetometers, also known as geomagnetic field sensors, provide absolute device headings relative to the earth's magnetic north. However, magnetometer readings are prone to significant errors in indoor environments due to magnetic interference from sources such as printers, walls, or metallic shelves. These errors degrade the performance and user experience of applications that require device headings. In this paper, we propose Headio, a novel approach to providing reliable device headings in indoor environments. Headio achieves this by aggregating ceiling images of an indoor environment and using computer-vision-based pattern detection to provide directional references. To achieve zero-configured and energy-efficient heading sensing, Headio also uses multimodal sensing techniques to dynamically schedule sensing tasks. To fully evaluate the system, we implemented Headio on both Android and iOS platforms and performed comprehensive experiments in both small-scale controlled and large-scale public indoor environments. Evaluation results show that Headio consistently provides accurate heading detection in diverse situations, achieving better than 1 degree average heading accuracy, up to a 33X improvement over existing techniques.
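As a rough illustration of the fusion idea (not the authors' implementation), the sketch below treats a heading recovered from ceiling-pattern detection as a trusted fix, estimates the local magnetic disturbance as the gap between that fix and the magnetometer, and subtracts that disturbance from later magnetometer readings until the next fix. All names and values are assumptions made for the example.

```python
# Illustrative fusion sketch: a vision-derived heading fix corrects the
# magnetometer's indoor bias. How the ceiling reference is actually extracted
# is assumed away here.

def wrap_deg(a):
    """Wrap an angle to the range [-180, 180) degrees."""
    return (a + 180.0) % 360.0 - 180.0

class HeadingFilter:
    def __init__(self):
        self.disturbance = 0.0  # estimated indoor magnetic offset, degrees

    def vision_fix(self, mag_heading, vision_heading):
        """Called whenever a ceiling-based heading reference is available."""
        self.disturbance = wrap_deg(mag_heading - vision_heading)

    def heading(self, mag_heading):
        """Magnetometer heading corrected by the last estimated disturbance."""
        return wrap_deg(mag_heading - self.disturbance) % 360.0

f = HeadingFilter()
f.vision_fix(mag_heading=97.0, vision_heading=90.0)  # 7-degree disturbance
print(f.heading(123.0))  # -> 116.0
```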
{"title":"Headio: zero-configured heading acquisition for indoor mobile devices through multimodal context sensing","authors":"Zheng Sun, Shijia Pan, Yu-Chi Su, Pei Zhang","doi":"10.1145/2493432.2493434","DOIUrl":"https://doi.org/10.1145/2493432.2493434","url":null,"abstract":"Heading information becomes widely used in ubiquitous computing applications for mobile devices. Digital magnetometers, also known as geomagnetic field sensors, provide absolute device headings relative to the earth's magnetic north. However, magnetometer readings are prone to significant errors in indoor environments due to the existence of magnetic interferences, such as from printers, walls, or metallic shelves. These errors adversely affect the performance and quality of user experience of the applications requiring device headings. In this paper, we propose Headio, a novel approach to provide reliable device headings in indoor environments. Headio achieves this by aggregating ceiling images of an indoor environment, and by using computer vision-based pattern detection techniques to provide directional references. To achieve zero-configured and energy-efficient heading sensing, Headio also utilizes multimodal sensing techniques to dynamically schedule sensing tasks. To fully evaluate the system, we implemented Headio on both Android and iOS mobile platforms, and performed comprehensive experiments in both small-scale controlled and large-scale public indoor environments. Evaluation results show that Headio constantly provides accurate heading detection performance in diverse situations, achieving better than 1 degree average heading accuracy, up to 33X improvement over existing techniques.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132420586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
Your reactions suggest you liked the movie: automatic content rating via reaction sensing
Xuan Bao, Songchun Fan, A. Varshavsky, Kevin A. Li, Romit Roy Choudhury
This paper describes a system for automatically rating content - mainly movies and videos - at multiple granularities. Our key observation is that the rich set of sensors available on today's smartphones and tablets could be used to capture a wide spectrum of user reactions while users watch movies on these devices. Examples range from acoustic signatures of laughter, which reveal which scenes were funny, to the stillness of the tablet, which indicates intense drama. Moreover, unlike in most conventional systems, these ratings need not result in just one numeric score but could be expanded to capture the user's experience. We combine these ideas into an Android-based prototype called Pulse and test it with 11 users, each of whom watched 4 to 6 movies on Samsung tablets. Encouraging results show consistent correlation between the users' actual ratings and those generated by the system. With more rigorous testing and optimization, Pulse could be a candidate for real-world adoption.
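A toy illustration of the aggregation step is sketched below: hypothetical per-scene reaction signals (a laughter-event count and a tablet-stillness ratio) are combined into a per-scene score and an overall rating. The detectors, weights, and values are invented for the sketch and are not the Pulse system's actual model.

```python
# Toy reaction-to-rating aggregation: per-scene scores from assumed laughter
# and stillness detectors, plus an overall average. Weights are illustrative.

def scene_score(laughter_events, stillness_ratio):
    """Combine a laughter count and the fraction of time the tablet was still
    into a 1-to-5 scene score (capped at 5)."""
    return min(5.0, 1.0 + 0.5 * laughter_events + 3.0 * stillness_ratio)

scenes = [
    {"laughter_events": 4, "stillness_ratio": 0.6},
    {"laughter_events": 0, "stillness_ratio": 0.9},
]
per_scene = [scene_score(**s) for s in scenes]
overall = sum(per_scene) / len(per_scene)
print(per_scene, round(overall, 2))  # -> [4.8, 3.7] 4.25
```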
{"title":"Your reactions suggest you liked the movie: automatic content rating via reaction sensing","authors":"Xuan Bao, Songchun Fan, A. Varshavsky, Kevin A. Li, Romit Roy Choudhury","doi":"10.1145/2493432.2493440","DOIUrl":"https://doi.org/10.1145/2493432.2493440","url":null,"abstract":"This paper describes a system for automatically rating content - mainly movies and videos - at multiple granularities. Our key observation is that the rich set of sensors available on today's smartphones and tablets could be used to capture a wide spectrum of user reactions while users are watching movies on these devices. Examples range from acoustic signatures of laughter to detect which scenes were funny, to the stillness of the tablet indicating intense drama. Moreover, unlike in most conventional systems, these ratings need not result in just one numeric score, but could be expanded to capture the user's experience. We combine these ideas into an Android based prototype called Pulse, and test it with 11 users each of whom watched 4 to 6 movies on Samsung tablets. Encouraging results show consistent correlation between the user's actual ratings and those generated by the system. With more rigorous testing and optimization, Pulse could be a candidate for real-world adoption.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115633452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
Session details: At work
A. Dey
{"title":"Session details: At work","authors":"A. Dey","doi":"10.1145/3254779","DOIUrl":"https://doi.org/10.1145/3254779","url":null,"abstract":"","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124792809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards more natural digital content manipulation via user freehand gestural interaction in a living room
Sang-Su Lee, Jeonghun Chae, Hyunjeong Kim, Youn-kyung Lim, Kun-Pyo Lee
Advances in dynamic gesture recognition technologies now make it possible to investigate freehand input techniques. This study sought to understand how users manipulate digital content on a distant screen through hand-gesture interaction in a living-room environment. While many existing studies investigate freehand input techniques, we developed and applied a novel study methodology that combines an existing user-elicitation study with a conventional Wizard-of-Oz study involving another, non-technical user who provided feedback. The study generated many useful issues and implications, not covered in previous work, for making freehand gesture interaction design more natural in a living-room environment. Furthermore, we could observe how the initial user-defined gestures changed over time.
{"title":"Towards more natural digital content manipulation via user freehand gestural interaction in a living room","authors":"Sang-Su Lee, Jeonghun Chae, Hyunjeong Kim, Youn-kyung Lim, Kun-Pyo Lee","doi":"10.1145/2493432.2493480","DOIUrl":"https://doi.org/10.1145/2493432.2493480","url":null,"abstract":"Advances in dynamic gesture recognition technologies now make it possible to investigate freehand input techniques. This study tried to understand how users manipulate digital content on a distant screen by hand gesture interaction in a living room environment. While there have been many existing studies that investigate freehand input techniques, we developed and applied a novel study methodology based on a combination of both an existing user elicitation study and conventional Wizard-of-Oz study that involved another non-technical user for providing feedback. Through the study, many useful issues and implications for making freehand gesture interaction design more natural in a living room environment were generated which have not been covered in previous works. Furthermore, we could observe how the initial user-defined gestures are changed over time.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122107097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
Session details: Context sensing
T. Ploetz
{"title":"Session details: Context sensing","authors":"T. Ploetz","doi":"10.1145/3254778","DOIUrl":"https://doi.org/10.1145/3254778","url":null,"abstract":"","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114489053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NLify: lightweight spoken natural language interfaces via exhaustive paraphrasing
Seungyeop Han, Matthai Philipose, Y. Ju
This paper presents the design and implementation of a programming system that enables third-party developers to add spoken natural language (SNL) interfaces to standalone mobile applications. The central challenge is to create statistical recognition models that are accurate and resource-efficient in the face of the variety of natural language, while requiring little specialized knowledge from developers. We show that given a few examples from the developer, it is possible to elicit comprehensive sets of paraphrases of the examples using internet crowds. The exhaustive nature of these paraphrases allows us to use relatively simple, automatically derived statistical models for speech and language understanding that perform well without per-application tuning. We have realized our design fully as an extension to the Visual Studio IDE. Based on a new benchmark dataset with 3500 spoken instances of 27 commands from 20 subjects and a small developer study, we establish the promise of our approach and the impact of various design choices.
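The sketch below illustrates the underlying intuition rather than NLify's statistical models: with an exhaustive, crowd-collected paraphrase set per command, even a very simple matcher (here, Jaccard overlap of word sets) can map an utterance to a command. The commands and paraphrases are invented for illustration.

```python
# Toy paraphrase-set matcher. Not NLify's speech/language models; just a
# word-overlap baseline over an assumed, crowd-collected paraphrase table.

PARAPHRASES = {
    "set_alarm": ["wake me up at seven", "set an alarm for seven", "alarm at 7 am"],
    "send_text": ["text my mom", "send a message to mom", "message mom"],
}

def jaccard(a, b):
    """Jaccard similarity of the word sets of two strings."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def recognize(utterance):
    """Return the command whose paraphrase set best overlaps the utterance."""
    best_cmd, best_score = None, 0.0
    for cmd, examples in PARAPHRASES.items():
        score = max(jaccard(utterance.lower(), ex) for ex in examples)
        if score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd

print(recognize("please set an alarm for seven"))  # -> "set_alarm"
```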
{"title":"NLify: lightweight spoken natural language interfaces via exhaustive paraphrasing","authors":"Seungyeop Han, Matthai Philipose, Y. Ju","doi":"10.1145/2493432.2493458","DOIUrl":"https://doi.org/10.1145/2493432.2493458","url":null,"abstract":"This paper presents the design and implementation of a programming system that enables third-party developers to add spoken natural language (SNL) interfaces to standalone mobile applications. The central challenge is to create statistical recognition models that are accurate and resource-efficient in the face of the variety of natural language, while requiring little specialized knowledge from developers. We show that given a few examples from the developer, it is possible to elicit comprehensive sets of paraphrases of the examples using internet crowds. The exhaustive nature of these paraphrases allows us to use relatively simple, automatically derived statistical models for speech and language understanding that perform well without per-application tuning. We have realized our design fully as an extension to the Visual Studio IDE. Based on a new benchmark dataset with 3500 spoken instances of 27 commands from 20 subjects and a small developer study, we establish the promise of our approach and the impact of various design choices.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114727139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Ambient recommendations in the pop-up shop
Gonzalo Garcia-Perate, N. Dalton, R. Dalton, Duncan Wilson
In this paper we present the design and first-stage analysis of a purpose-built, smart, pop-up wine shop. Our shop learns from visitors' choices and recommends wine using collaborative filtering and ambient feedback displays integrated into its furniture. Our ambient recommender system was tested in a controlled laboratory environment. We report on qualitative feedback and a between-subjects study testing the influence the system had on wine-choice behavior. Participants reported that the system was helpful, and results from our empirical analysis suggest it influenced buying behavior.
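As a sketch of the kind of collaborative filtering involved (not the shop's actual recommender), the example below counts how often two wines are chosen by the same visitor and recommends the wines most frequently co-chosen with the one just picked. The visitor histories are invented for illustration.

```python
# Toy item-to-item collaborative filter based on co-occurrence counts of
# wines in assumed visitor choice histories.

from collections import defaultdict
from itertools import combinations

histories = [
    {"malbec", "shiraz"},
    {"malbec", "rioja", "shiraz"},
    {"pinot", "rioja"},
]

co_counts = defaultdict(int)
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(picked, k=2):
    """Wines most often co-chosen with the picked wine."""
    scores = {b: n for (a, b), n in co_counts.items() if a == picked}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("malbec"))  # -> ['shiraz', 'rioja']
```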
{"title":"Ambient recommendations in the pop-up shop","authors":"Gonzalo Garcia-Perate, N. Dalton, R. Dalton, Duncan Wilson","doi":"10.1145/2493432.2494525","DOIUrl":"https://doi.org/10.1145/2493432.2494525","url":null,"abstract":"In this paper we present the design and first-stage analysis of a purposely built, smart, pop-up wine shop. Our shop learns from visitors' choices and recommends wine using collaborative filtering and ambient feedback displays integrated into its furniture. Our ambient recommender system was tested in a controlled laboratory environment. We report on the qualitative feedback and between subjects study, testing the influence the system had in wine choice behavior. Participants reported the system helpful, and results from our empirical analysis suggest it influenced buying behavior.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116032880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
If you see something, swipe towards it: crowdsourced event localization using smartphones
W. Ouyang, Animesh Srivastava, P. Prabahar, Romit Roy Choudhury, Merideth A. Addicott, F. J. McClernon
This paper presents iSee, a crowdsourced approach to detecting and localizing events in outdoor environments. Upon spotting an event, an iSee user only needs to swipe on her smartphone's touchscreen in the direction of the event. These swiping directions are often inaccurate, and so are the compass measurements. Moreover, the swipes do not encode any notion of how far the event is from the user, nor is the user's GPS location accurate. Furthermore, multiple events may occur simultaneously, and users do not explicitly indicate which event they are swiping towards. Nonetheless, as more users contribute data, we show that our proposed system is able to quickly detect and estimate the locations of the events. We have implemented iSee on Android phones and have experimented in real-world settings by planting virtual "events" on our campus and asking volunteers to swipe on seeing one. Results show that iSee performs appreciably better than established triangulation- and clustering-based approaches in terms of localization accuracy, detection coverage, and robustness to sensor noise.
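The geometric core of this kind of bearing-based localization can be sketched as follows; this is an illustrative assumption, not iSee's full pipeline, which also clusters simultaneous events and handles outliers. Each swipe gives a noisy bearing from a roughly known user position, i.e. a ray, and the event location is estimated as the least-squares intersection of those rays.

```python
import numpy as np

# Least-squares intersection of bearing rays (a sketch of the geometry only).

def locate(positions, bearings_deg):
    """positions: list of (x, y) observer locations; bearings_deg: swipe
    directions in degrees (0 = +x axis). Returns the estimated event (x, y)."""
    A, b = [], []
    for (px, py), theta in zip(positions, np.radians(bearings_deg)):
        # Normal to the bearing direction; the event lies on n . x = n . p.
        n = np.array([-np.sin(theta), np.cos(theta)])
        A.append(n)
        b.append(n @ np.array([px, py]))
    est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return est

# Three users around the origin all swipe roughly towards (10, 10).
print(locate([(0, 0), (20, 0), (0, 20)], [45.0, 135.0, -45.0]))
```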
{"title":"If you see something, swipe towards it: crowdsourced event localization using smartphones","authors":"W. Ouyang, Animesh Srivastava, P. Prabahar, Romit Roy Choudhury, Merideth A. Addicott, F. J. McClernon","doi":"10.1145/2493432.2493455","DOIUrl":"https://doi.org/10.1145/2493432.2493455","url":null,"abstract":"This paper presents iSee, a crowdsourced approach to detecting and localizing events in outdoor environments. Upon spotting an event, an iSee user only needs to swipe on her smartphone's touchscreen in the direction of the event. These swiping directions are often inaccurate and so are the compass measurements. Moreover, the swipes do not encode any notion of how far the event is located from the user, neither is the GPS location of the user accurate. Furthermore, multiple events may occur simultaneously and users do not explicitly indicate which events they are swiping towards. Nonetheless, as more users start contributing data, we show that our proposed system is able to quickly detect and estimate the locations of the events. We have implemented iSee on Android phones and have experimented in real-world settings by planting virtual \"events\" in our campus and asking volunteers to swipe on seeing one. Results show that iSee performs appreciably better than established triangulation and clustering-based approaches, in terms of localization accuracy, detection coverage, and robustness to sensor noise.","PeriodicalId":262104,"journal":{"name":"Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122477935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49