
2016 IEEE International Conference on Pervasive Computing and Communications (PerCom): latest publications

Chitchat: Navigating tradeoffs in device-to-device context sharing
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456512
Samuel Sungmin Cho, C. Julien
Acquiring local context information and sharing it among co-located devices is critical for emerging pervasive computing applications. The devices belonging to a group of co-located people may need to detect a shared activity (e.g., a meeting) to adapt their devices to support the activity. Today's devices are almost universally equipped with device-to-device communication that easily enables direct context sharing. While existing context sharing models tend not to consider devices' resource limitations or users' constraints, enabling devices to directly share context has significant benefits for efficiency, cost, and privacy. However, as we demonstrate quantitatively, when devices share context via device-to-device communication, it needs to be represented in a size-efficient way that does not sacrifice its expressiveness or accuracy. We present CHITCHAT, a suite of context representations that allows application developers to tune tradeoffs between the size of the representation, the flexibility of the application to update context information, the energy required to create and share context, and the quality of the information shared. We can substantially reduce the size of context representation (thereby reducing applications' overheads when they share their contexts with one another) with only a minimal reduction in the quality of shared contexts.
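To make the size/quality tradeoff in the abstract concrete, the sketch below contrasts a verbose, self-describing context payload with a fixed-schema, quantized one. It is only an illustration of the tradeoff, not CHITCHAT's actual representations: the context fields, activity codes, and quantization step are hypothetical.

```python
# Illustrative sketch only -- not the CHITCHAT encodings from the paper.
# It contrasts a verbose context representation with a quantized, packed one
# to show the size/fidelity tradeoff described in the abstract.
import json
import struct

context = {
    "device_id": 4221,
    "activity": "meeting",          # hypothetical label set
    "latitude": 30.2861234,
    "longitude": -97.7393987,
    "timestamp": 1457942400,
}

# Verbose form: self-describing JSON, easy to update, larger on the wire.
verbose = json.dumps(context).encode("utf-8")

# Compact form: fixed schema, quantized coordinates (~1 m precision lost),
# activity mapped to a small integer code.
ACTIVITY_CODES = {"meeting": 0, "commuting": 1, "idle": 2}
compact = struct.pack(
    "<IHiiI",                        # little-endian: uint32, uint16, int32, int32, uint32
    context["device_id"],
    ACTIVITY_CODES[context["activity"]],
    int(round(context["latitude"] * 1e5)),
    int(round(context["longitude"] * 1e5)),
    context["timestamp"],
)

print(f"verbose JSON: {len(verbose)} bytes, packed: {len(compact)} bytes")
# Decoding the compact form recovers coordinates only to ~1e-5 degrees,
# i.e. a small loss of quality in exchange for a much smaller payload.
```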
Citations: 21
Whose move is it anyway? Authenticating smart wearable devices using unique head movement patterns
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456514
Sugang Li, A. Ashok, Yanyong Zhang, Chenren Xu, J. Lindqvist, M. Gruteser
In this paper, we present the design, implementation and evaluation of a user authentication system, Headbanger, for smart head-worn devices, based on monitoring the user's unique head-movement patterns in response to an external audio stimulus. Compared to today's solutions, which primarily rely on indirect authentication mechanisms via the user's smartphone and are thus cumbersome and susceptible to adversary intrusions, the proposed head-movement-based authentication provides an accurate, robust, lightweight and convenient solution. Through extensive experimental evaluation with 95 participants, we show that our mechanism can accurately authenticate users with an average true acceptance rate of 95.57% while keeping the average false acceptance rate at 4.43%. We also show that even simple head-movement patterns are robust against imitation attacks. Finally, we demonstrate that our authentication algorithm is lightweight: the overall processing latency on Google Glass is around 1.9 seconds.
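As a rough illustration of template-based matching of head-movement traces (and not the Headbanger pipeline itself), the sketch below compares a noisy repetition and an impostor trace against an enrolled template using dynamic time warping; the signals and the acceptance threshold are synthetic and hypothetical.

```python
# Minimal sketch (not the Headbanger pipeline): compare a head-movement
# trace against an enrolled template with dynamic time warping (DTW) and
# accept the wearer if the distance falls below a tuned threshold.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW over 1-D signals."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 6 * np.pi, 120))            # enrolled head-bob pattern
genuine = template + 0.1 * rng.standard_normal(120)          # same user, noisy repeat
impostor = np.sin(np.linspace(0, 4 * np.pi, 120)) + 0.1 * rng.standard_normal(120)

THRESHOLD = 15.0                                             # hypothetical, tuned on training data
for name, trace in [("genuine", genuine), ("impostor", impostor)]:
    dist = dtw_distance(trace, template)
    print(f"{name}: DTW={dist:.1f} -> {'accept' if dist < THRESHOLD else 'reject'}")
```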
Citations: 102
PanoVC: Pervasive telepresence using mobile phones
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456508
Jörg Müller, T. Langlotz, H. Regenbrecht
We present PanoVC, a mobile telepresence system based on continuously updated panoramic images. We show that the experience of telepresence, i.e. the sense of "being there together" at a distant location, can be achieved with standard state-of-the-art mobile phones. Because mobile phones are always on hand, users can share their environments with others in a pervasive way. Our approach opens up a pathway for applications in a variety of domains, such as the exploration of remote environments or novel forms of videoconferencing. As proof of concept, we present implementation details, technical evaluation results, and the findings of a user study of an indoor-outdoor environment-sharing task.
Citations: 27
Video recognition using ambient light sensors
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456511
Lorenz Schwittmann, V. Matkovic, Matthäus Wander, Torben Weis
We present a method for recognizing a video that is playing on a TV screen by sampling the ambient light sensor of a user's smartphone. This improves situation awareness in pervasive systems because the phone can determine what the user is currently watching on TV. Our method works even if the phone has no direct line of sight to the TV screen, since ambient light reflected from walls is sufficient. Our evaluation shows that a 100% recognition ratio for the current TV channel is possible by sampling a sequence of 15 to 120 seconds in length, depending on how favorable the measuring conditions are. In addition, we evaluated the recognition ratio when the user is watching video-on-demand, which involves a large set of possible videos. Recognizing professional YouTube videos resulted in a 92% recognition ratio; amateur videos were recognized correctly in 60% of cases because these videos have fewer cuts. Our method focuses on detecting the time difference between video cuts, because the light emitted by the screen changes instantly with most cuts and this is easily measurable with the ambient light sensor. Using the ambient light sensor instead of the camera greatly reduces energy consumption and bandwidth usage and raises fewer privacy concerns. Hence, it is feasible to run the measurement in the background for a longer time without draining the battery and without sending camera shots to a remote server for analysis.
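The cut-timing idea lends itself to a compact sketch: treat abrupt jumps in the ambient-light series as cuts and compare the resulting inter-cut intervals against a stored fingerprint. This is an assumption-laden illustration rather than the authors' implementation; the sampling rate, jump threshold, and cut schedule are invented.

```python
# Illustrative sketch, not the authors' implementation: detect "cuts" in an
# ambient-light time series as abrupt brightness jumps, then match the
# sequence of inter-cut intervals against a stored channel fingerprint.
import numpy as np

def detect_cuts(lux: np.ndarray, fs: float, jump: float = 30.0) -> np.ndarray:
    """Return timestamps (s) where consecutive samples differ by more than `jump` lux."""
    idx = np.where(np.abs(np.diff(lux)) > jump)[0] + 1
    return idx / fs

def interval_signature(cut_times: np.ndarray) -> np.ndarray:
    return np.diff(cut_times)

def signature_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Mean absolute difference over the overlapping prefix of two interval sequences."""
    n = min(len(sig_a), len(sig_b))
    if n == 0:
        return float("inf")
    return float(np.mean(np.abs(sig_a[:n] - sig_b[:n])))

# Synthetic example: 60 s of light sampled at 10 Hz with a cut every few seconds.
fs = 10.0
t = np.arange(0, 60, 1 / fs)
cut_schedule = [4.0, 9.5, 13.0, 21.0, 28.5, 40.0, 47.5]      # hypothetical cut times
lux = 100 + np.zeros_like(t)
for i, c in enumerate(cut_schedule):
    lux[t >= c] = 100 + 60 * ((i % 2) * 2 - 1)               # alternate dark/bright scenes

observed = interval_signature(detect_cuts(lux, fs))
reference = interval_signature(np.array(cut_schedule))        # fingerprint from the broadcast
print("distance to reference fingerprint:", signature_distance(observed, reference))
```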
Citations: 11
Adaptive activity learning with dynamically available context
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456502
Jiahui Wen, J. Indulska, Mingyang Zhong
Numerous methods have been proposed to address different aspects of human activity recognition. However, most previous approaches are static in terms of the data sources used for the recognition task. As sensors can be added, or can fail and be replaced by different types of sensors, creating an activity recognition model that is able to leverage dynamically available sensors becomes important. In this paper, we propose methods for activity learning and activity recognition adaptation in environments with dynamic sensor deployments. Specifically, we propose sensor and activity context models to address the problem of sensor heterogeneity, so that sensor readings can be pre-processed and fed into the recognition system properly. Based on those context models, we propose a learning-to-rank method for activity learning and its adaptation. To model the temporal characteristics of human behaviour, we add temporal regularization to the learning and prediction phases. We use comprehensive datasets to demonstrate the effectiveness of the proposed method, and show its advantage over conventional machine learning algorithms in terms of recognition accuracy. Our method also outperforms hybrid models that combine typical machine learning methods with graphical models (i.e. HMM, CRF) for temporal smoothing.
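The sketch below illustrates two of the ingredients mentioned above in heavily simplified form: scoring candidate activities from whichever sensors are currently reporting, and temporally regularizing the resulting label sequence. It uses a plain linear scorer and a Viterbi-style switch penalty rather than the paper's learning-to-rank formulation, and the sensor names, weights, and penalty value are hypothetical.

```python
# Hedged sketch, not the paper's model: (1) score activities from whatever
# sensors are currently available (missing sensors contribute nothing), and
# (2) smooth the per-step scores with a simple switch-penalty dynamic program.
import numpy as np

ACTIVITIES = ["sleeping", "cooking", "watching_tv"]           # hypothetical label set
SENSORS = ["bed_pressure", "kitchen_motion", "tv_power"]      # hypothetical sensor set

# Hypothetical per-activity weights over sensor features (would be learned).
W = np.array([[0.9, 0.0, 0.0],
              [0.0, 0.8, 0.1],
              [0.0, 0.1, 0.9]])

def score(readings: dict) -> np.ndarray:
    """Linear activity scores; sensors absent from `readings` are treated as 0."""
    x = np.array([readings.get(s, 0.0) for s in SENSORS])
    return W @ x

def smooth(score_seq: np.ndarray, switch_penalty: float = 0.5) -> list:
    """Viterbi-style smoothing: maximize summed scores minus label-switch penalties."""
    T, K = score_seq.shape
    best = score_seq[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        trans = best[:, None] - switch_penalty * (1 - np.eye(K))
        back[t] = trans.argmax(axis=0)
        best = trans.max(axis=0) + score_seq[t]
    path = [int(best.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [ACTIVITIES[k] for k in reversed(path)]

stream = [
    {"bed_pressure": 1.0},
    {"bed_pressure": 0.9, "tv_power": 0.2},
    {"kitchen_motion": 1.0},                                  # a sensor appears dynamically
    {"kitchen_motion": 0.8, "tv_power": 0.3},
    {"tv_power": 1.0},
]
scores = np.vstack([score(r) for r in stream])
print(smooth(scores))
```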
Citations: 23
Smart cities: Intelligent environments and dumb people? Panel summary
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456522
F. Zambonelli, W. Meuter, S. Kanhere, S. Loke, Flora D. Salim
Pervasive and mobile computing technologies can make our everyday living environments and our cities "smart", i.e., capable of becoming aware of physical and social processes and of dynamically affecting them in a purposeful way. In general, living in a smart environment and being made part of its activities somehow make us - as individuals - smarter as well, by increasing our perceptual and social capabilities. However, a potential risk is that we start delegating too much to the environment itself, losing critical attention, abandoning individual decision making in favor of collective computational governance of our activities, and in the end also losing awareness of environmental and social processes. The panel intends to discuss these issues with the help of relevant researchers in the areas of pervasive computing, smart environments, and collective intelligence.
Citations: 0
Task phase recognition for highly mobile workers in large building complexes
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456504
Allan Stisen, Andreas Mathisen, S. Sørensen, H. Blunck, M. Kjærgaard, Thor S. Prentow
Being aware of the activities of co-workers is a basic and vital mechanism for efficient work in highly distributed work settings. Thus, automatic recognition of the task phases the mobile workers are currently (or have been) in has many applications, e.g., efficient coordination of tasks by visualizing co-workers' task progress, automatic notifications based on context awareness, and record filing of task statuses and completions. This paper presents methods to sense and detect highly mobile workers' task phases in large building complexes. Large building complexes restrict the technologies available for sensing and recognizing the activities and task phases the workers currently perform, as such technologies have to be easily deployable and maintainable at a large scale. The methods presented in this paper consist of features that utilize data from sensing systems which are common in large-scale indoor work environments, namely from a WiFi infrastructure providing coarse-grained indoor positioning, from inertial sensors in the workers' mobile phones, and from a task management system yielding information about the scheduled tasks' start and end locations. The methods presented have low requirements on the accuracy of the indoor positioning, and thus come with low deployment and maintenance effort in real-world settings. We evaluated the proposed methods in a large hospital complex, where the highly mobile workers were recruited among the non-clinical workforce. The evaluation is based on manually labelled real-world data collected over 4 days of regular work life of the mobile workforce. The collected data yields 83 tasks in total, involving 8 different orderlies from a major university hospital with a building area of 160,000 m2. The results show that the proposed methods can distinguish accurately between the four most common task phases present in the orderlies' work routines, achieving F1-scores of 89.2%.
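As a toy illustration of the feature idea (and not the paper's classifier), the sketch below combines coarse position, expressed as distances to a scheduled task's start and end locations, with phone motion intensity, and assigns a task phase by nearest centroid; the phase names and centroid values are hypothetical.

```python
# Not the paper's classifier -- a minimal sketch of combining coarse WiFi
# position (distance to the scheduled task's start/end points) with phone
# motion intensity. Phase names and centroid values are hypothetical.
PHASES = {
    # (dist_to_start_m, dist_to_end_m, motion_intensity) centroids
    "heading_to_start": (60.0, 220.0, 1.2),
    "at_start":         (5.0, 200.0, 0.3),
    "transporting":     (120.0, 120.0, 1.0),
    "at_destination":   (210.0, 8.0, 0.4),
}

def classify(dist_start: float, dist_end: float, motion: float) -> str:
    """Nearest-centroid phase assignment; motion is rescaled to comparable units."""
    def d2(c):
        return (dist_start - c[0]) ** 2 + (dist_end - c[1]) ** 2 + (50 * (motion - c[2])) ** 2
    return min(PHASES, key=lambda p: d2(PHASES[p]))

# Example window: the worker is ~10 m from the task's start location and barely moving.
print(classify(dist_start=10.0, dist_end=190.0, motion=0.25))   # -> "at_start"
```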
Citations: 10
The dawn of the age of responsive media (Keynote abstract)
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456510
J. Begole
Summary form only given. What will be the next computing paradigm as the ubiquitous computing wave crests: autonomobiles, robots, virtual reality, internet of things, intelligent agents? While pundits search for the next big thing among a dizzying array of shiny ideas, the truth is that pervasive technologies have reached such critical mass that we no longer need to ask "what if" and we can shift our attention to "what when". One picture of that future is a recasting of how we think about designing digital experiences - rather than systems that react to user direction, we can design systems that respond dynamically to the users' attention, engagement and context: Responsive Media. The same machine learning technologies that have made speech and image recognition surprisingly accurate are also enhancing our devices' abilities to sense user activities, emotions and intentions and to deliver services and information proactively. Media experiences will be dramatically changed by the next generation of these technologies embedded into smartphones and VR goggles and robots and smart homes and autonomobiles so that they not only sense the audience's engagement in real time, but they can also predict disengagement and prevent it by dynamically shifting the content to appeal to an individual's preferences, emotional state and situation. More than just media experiences, imagine robots that can sense a child's frustration and actively assist in the homework, digital assistants that do not interrupt inappropriately, semi-autonomobiles that ensure the media is not disrupting the driver's attention demands, and more. Responsive media will be more like an engaging conversation among humans, rather than just passive consumption. What are the requirements for a conversational interaction? This talk will paint a picture and challenge the audience to identify the remaining technology barriers, architectures, business ecosystems, threats, and yes, the killer applications. I seek your input as we create the future beyond ubiquitous computing.
Citations: 0
Leveraging proximity sensing to mine the behavior of museum visitors
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456513
Claudio Martella, Armando Miraglia, M. Cattani, M. Steen
Face-to-face proximity has been successfully leveraged to study the relationships between individuals in various contexts, from a workplace to a conference, a museum, a fair, and a date. We spend time facing the individuals with whom we chat, discuss, work, and play. However, face-to-face proximity is not limited to person-to-person relationships; it can also be used as a proxy to study person-to-object relationships. We face the objects with which we interact on a daily basis, such as a television, kitchen appliances, or a book, as well as more complex objects like a stage where a concert is taking place. In this paper, we focus on the relationship between the visitors of an art exhibition and its exhibits. We design, implement, and deploy a sensing infrastructure based on inexpensive mobile proximity sensors, together with a filtering pipeline that we use to measure face-to-face proximity between individuals and exhibits. Our pipeline improves measurement accuracy by up to 64% relative to raw data. We use these data to mine the behavior of the visitors and show that group behavior can be recognized by means of data clustering and visualization.
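A minimal sketch of the kind of filtering such a pipeline might perform (not the paper's actual pipeline): clean a noisy per-exhibit proximity stream with a sliding majority filter before turning it into a dwell-time profile that clustering could operate on. The window size and sample data are made up.

```python
# Illustrative only (not the paper's pipeline): smooth a noisy per-exhibit
# proximity stream with a sliding majority filter, then turn the cleaned
# stream into a dwell-time figure that group-behaviour clustering could use.
import numpy as np

def majority_filter(detections: np.ndarray, window: int = 5) -> np.ndarray:
    """Keep a 'near exhibit' sample only if most samples in its window agree."""
    padded = np.pad(detections.astype(int), window // 2, mode="edge")
    return np.array([padded[i:i + window].sum() > window // 2
                     for i in range(len(detections))])

# One visitor, one exhibit, sampled once per second: a spurious detection at
# 3 s and a one-sample dropout at 10 s.
raw = np.array([0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0])
clean = majority_filter(raw)
print("raw dwell:", int(raw.sum()), "s, filtered dwell:", int(clean.sum()), "s")
```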
Citations: 37
A novel multivariate spectral regression model for learning relationships between communication activity and urban ecology
Pub Date : 2016-03-14 DOI: 10.1109/PERCOM.2016.7456525
Xuhong Zhang, C. Butts
In this paper we demonstrate a novel approach to the use of spatio-temporally aggregated cell phone data to learn features of urban ecology (i.e., spatial distributions of distinct social and economic entities and their associated activities). Specifically, our technique involves four stages: (i) decomposing the aggregated cell phone activity within local areal units using spectral methods; (ii) learning spectral characteristics associated with ecological features using a training set; (iii) predicting local ecology composition for out-of-sample areas; and (iv) predicting activity time series for out-of-sample areas. The core of our approach is the projection of spectral features in cell phone activity series into an ecology-associated basis, allowing both identification of communication patterns arising from particular types of local activities and/or institutions and leveraging of those patterns for classification and activity prediction. We apply our methodology to aggregated communication and Internet traffic data from the cities of Milan and Trento to show the effectiveness of our method.
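The sketch below illustrates the general pipeline of stages (i)-(iii) on synthetic data: spectral features of an area's activity series are mapped to a land-use composition by linear regression. It is a hedged stand-in, not the paper's multivariate spectral regression model; the land-use classes, harmonic count, and data generator are assumptions.

```python
# Hedged sketch of "spectral features -> regression", not the paper's model:
# describe each area's hourly activity by its low-frequency Fourier
# coefficients, then fit a linear map to a land-use composition vector.
import numpy as np

rng = np.random.default_rng(1)
HOURS_PER_WEEK, N_AREAS, N_HARMONICS = 168, 40, 8
CLASSES = ["office", "residential", "nightlife"]              # hypothetical land-use classes

def spectral_features(series: np.ndarray, k: int = N_HARMONICS) -> np.ndarray:
    """Real and imaginary parts of the first k non-DC Fourier coefficients."""
    spec = np.fft.rfft(series)[1:k + 1]
    return np.concatenate([spec.real, spec.imag]) / len(series)

# Synthetic training data: each area's activity is a mix of class-specific
# daily profiles (cosines peaking at different hours) plus noise.
t = np.arange(HOURS_PER_WEEK)
profiles = np.stack([np.cos(2 * np.pi * (t - peak) / 24) for peak in (14, 20, 2)])
mixes = rng.dirichlet(np.ones(len(CLASSES)), size=N_AREAS)    # ground-truth ecology per area
activity = mixes @ profiles + 0.05 * rng.standard_normal((N_AREAS, HOURS_PER_WEEK))

# Fit a linear map from spectral features (plus a bias term) to the mix vector.
X = np.vstack([spectral_features(a) for a in activity])
A = np.hstack([X, np.ones((N_AREAS, 1))])
coef, *_ = np.linalg.lstsq(A, mixes, rcond=None)

# Predict the land-use mix of a held-out area from its activity series alone.
true_mix = np.array([0.7, 0.2, 0.1])
new_series = true_mix @ profiles + 0.05 * rng.standard_normal(HOURS_PER_WEEK)
pred = np.hstack([spectral_features(new_series), [1.0]]) @ coef
print("true mix:", true_mix, "predicted:", np.round(pred, 2))
```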
Citations: 0