
Latest publications in the Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers

Accurate trajectory prediction in a smart building using recurrent neural networks
Anooshmita Das, Emil Stubbe Kolvig Raun, M. Kjærgaard
Occupant behavioral patterns, once extracted, could reveal cues about activities and space usage that could effectively be used by building systems to achieve energy savings. The ability to accurately predict the trajectories of occupants inside a room divided into different zones has many notable and compelling applications, for example efficient space utilization and floor plans, intelligent building operations, crowd management, a comfortable indoor environment, security, and evacuation or personnel management. This paper proposes future occupant trajectory prediction using state-of-the-art time-series prediction methods, i.e., Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models. These models are implemented and compared to forecast occupant trajectories at a given time and location in a non-intrusive and reliable manner. The test space considered for the collection of the dataset is a multi-utility area in an instrumented public building. The deployed 3D stereo-vision cameras capture spatial location coordinates (x and y) from a bird's-eye view without eliciting any other information that could reveal confidential data or uniquely identify a person. Our results showed that the GRU model forecasts were considerably more accurate than the LSTM model for trajectory prediction: for multiple occupant trajectories within the monitored area, the GRU prediction model achieved a Mean Squared Error (MSE) of 30.72 cm between actual and predicted location coordinates, while the LSTM achieved an MSE of 47.13 cm. On a second evaluation metric, Mean Absolute Error (MAE), the GRU model achieved 3.14 cm and the LSTM model 4.07 cm. The GRU model thus delivers high-fidelity occupant trajectory prediction with higher accuracy than the baseline LSTM model.
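The abstract names GRU and LSTM but gives no architectural details. As a purely illustrative sketch (random placeholder weights and toy coordinates, not the authors' trained model), the GRU update that such a trajectory predictor applies at each time step can be written in NumPy as:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: new hidden state from input x and previous state h."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_cand

rng = np.random.default_rng(0)
dim_x, dim_h = 2, 8                            # (x, y) input, 8 hidden units
Wz, Wr, Wh = (rng.standard_normal((dim_h, dim_x)) * 0.1 for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((dim_h, dim_h)) * 0.1 for _ in range(3))

h = np.zeros(dim_h)
trajectory = [(1.0, 2.0), (1.1, 2.2), (1.3, 2.5)]  # toy occupant positions
for xy in trajectory:
    h = gru_step(np.array(xy), h, Wz, Uz, Wr, Ur, Wh, Uh)
# h now encodes the trajectory so far; a linear readout layer would map it
# to the predicted next (x, y) position
```

In a full model the hidden state would feed a readout trained to minimize the MSE/MAE metrics reported above.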
{"title":"Accurate trajectory prediction in a smart building using recurrent neural networks","authors":"Anooshmita Das, Emil Stubbe Kolvig Raun, M. Kjærgaard","doi":"10.1145/3410530.3414319","DOIUrl":"https://doi.org/10.1145/3410530.3414319","url":null,"abstract":"Occupant behavioral patterns, once extracted, could reveal cues about activities and space usage that could effectively get used for building systems to achieve energy savings. The ability to accurately predict the trajectories of occupants inside a room branched into different zones has many notable and compelling applications. For example - efficient space utilization and floor plans, intelligent building operations, crowd management, comfortable indoor environment, security, and evacuation or managing personnel. This paper proposes future occupant trajectory prediction using state-of-the-art time series prediction methods, i.e., Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models. These models are being implemented and compared to forecast occupant trajectories at a given time and location in a non-intrusive and reliable manner. The considered test-space for the collection of the dataset is a multi-utility area in an instrumented public building. The deployed 3D Stereo Vision Cameras capture the spatial location coordinates (x- and y- coordinates) from a bird's view angle without eliciting any other information that could reveal confidential data or uniquely identify a person. Our results showed that the GRU model forecasts were considerably more accurate than the LSTM model for the trajectory prediction. GRU prediction model achieved a Mean Squared Error (MSE) of 30.72 cm between actual and predicted location coordinates, and LSTM achieved an MSE of 47.13 cm, respectively, for multiple occupant trajectories within the monitored area. 
Another evaluation metric Mean Absolute Error (MAE) is used, and the GRU prediction model achieved an MAE of 3.14 cm, and the LSTM model achieved an MAE of 4.07 cm. The GRU model guarantees a high-fidelity occupant trajectory prediction for any given case with higher accuracy when compared to the baseline LSTM model.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91517829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Nurse care activity recognition based on convolution neural network for accelerometer data
Md. Golam Rasul, Mashrur Hossain Khan, Lutfun Nahar Lota
Human activity recognition from sensor data plays a vital role in health monitoring and elderly-care service monitoring. Although tremendous progress has been made in the use of sensor technology to collect activity-recognition data, recognition remains challenging due to the pervasive nature of the activities. In this paper, we present the Convolution Neural Network (CNN) model submitted by our team DataDrivers_BD to "The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data", a task that is quite challenging because of the similarity among the tasks on the one hand and, on the other, the dissimilarity among users' patterns of performing a particular task. Since a CNN can retrieve informative features automatically, it has become one of the most prominent methods in activity recognition. Our extensive experiments on the nurse care activity recognition challenge dataset achieved a significant accuracy of 91.59%, outperforming existing state-of-the-art algorithms.
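The paper does not specify its CNN architecture; a minimal sketch of the core operation, a 1-D convolution sliding over a window of tri-axial accelerometer samples, might look like the following (the filter count and kernel length here are arbitrary assumptions, not the authors' configuration):

```python
import numpy as np

def conv1d_valid(window, kernels):
    """Valid 1-D convolution over time: window (T, C) -> features (T-K+1, F)."""
    T, C = window.shape
    F, K, _ = kernels.shape                  # F filters, each of shape (K, C)
    out = np.empty((T - K + 1, F))
    for t in range(T - K + 1):
        patch = window[t:t + K]              # (K, C) slice of the signal
        # contract each filter against the patch over time and channel axes
        out[t] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)              # ReLU activation

rng = np.random.default_rng(1)
window = rng.standard_normal((50, 3))        # 50 samples x 3 axes (x, y, z)
kernels = rng.standard_normal((8, 5, 3))     # 8 filters of length 5
features = conv1d_valid(window, kernels)
print(features.shape)                        # (46, 8)
```

A real model would stack such layers with pooling and end in a softmax over the nurse-activity classes.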
{"title":"Nurse care activity recognition based on convolution neural network for accelerometer data","authors":"Md. Golam Rasul, Mashrur Hossain Khan, Lutfun Nahar Lota","doi":"10.1145/3410530.3414335","DOIUrl":"https://doi.org/10.1145/3410530.3414335","url":null,"abstract":"Human activity recognition on sensor data plays a vital role in health monitoring and elderly care service monitoring. Although tremendous progress has been noticed to the use of sensor technology to collect activity recognition data, recognition still remains challenging due to the pervasive nature of the activities. In this paper, we present a Convolution Neural Network (CNN) model by our team DataDrivers_BD in \"The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data\" which is quite challenging because of the similarity among the tasks. On the other hand, the dissimilarity among the users patterns of working for a particular task. Since CNN can retrieve informative features automatically, it has become one of the most prominent methods in activity recognition. Our extensive experiment on nurse care activity recognition challenge dataset also achieved significant accuracy of 91.59% outperforming the existing state of the art algorithms.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"30 4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89160905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Spinal curve assessment of idiopathic scoliosis with a small dataset via a multi-scale keypoint estimation approach
Tianyun Liu, Yukang Yang, Yu Wang, Ming Sun, Wenhui Fan, Cheng Wu, C. Bunger
Idiopathic scoliosis (IS) is the most common type of spinal deformity and leads to severe pain and potential heart and lung damage. Clinical diagnosis and treatment strategies for IS depend heavily on radiographic assessment of the spinal curve. With improvements in image recognition via deep learning, learning-based methods can be applied to facilitate clinical decision-making. However, these methods usually require sufficiently large training datasets with precise annotations, which are laborious and time-consuming to produce, especially for medical images. Moreover, medical images of severe IS often contain blurry and occluded parts, which makes strict annotation of the spinal curve even more difficult. To address these challenges, we use a dot-annotation approach to annotate the medical images simply rather than precisely. We then design a multi-scale keypoint estimation approach that incorporates Squeeze-and-Excitation (SE) blocks to improve the representational capacity of the model, achieving assessment of the spinal curve without a large dataset. The proposed approach is the first to use a pose-estimation framework to detect keypoints of the spine with simple annotations and a small dataset. Finally, we conduct experiments on a collected clinical dataset, and the results show that our approach outperforms mainstream approaches.
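The Squeeze-and-Excitation block mentioned in the abstract is a standard channel-attention unit: globally pool each channel, pass the result through a small bottleneck MLP, and rescale the channels by the resulting sigmoid gates. A NumPy sketch with placeholder weights (nothing here comes from the paper):

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: reweight channels of an (H, W, C) feature map."""
    s = feature_map.mean(axis=(0, 1))           # squeeze: global average pool -> (C,)
    e = np.maximum(w1 @ s, 0.0)                 # excitation MLP, ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ e)))     # sigmoid gates in (0, 1), shape (C,)
    return feature_map * gates                  # channel-wise rescaling (broadcast)

rng = np.random.default_rng(2)
C, r = 16, 4                                    # channels and reduction ratio
fmap = rng.standard_normal((8, 8, C))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = se_block(fmap, w1, w2)
print(out.shape)                                # (8, 8, 16)
```

Because the gates lie in (0, 1), the block can only attenuate channels, letting the network emphasize informative ones.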
{"title":"Spinal curve assessment of idiopathic scoliosis with a small dataset via a multi-scale keypoint estimation approach","authors":"Tianyun Liu, Yukang Yang, Yu Wang, Ming Sun, Wenhui Fan, Cheng Wu, C. Bunger","doi":"10.1145/3410530.3414317","DOIUrl":"https://doi.org/10.1145/3410530.3414317","url":null,"abstract":"Idiopathic scoliosis (IS) is the most common type of spinal deformity, which leads to severe pain and potential heart and lung damage. The clinical diagnosis and treatment strategies for IS highly depend on the radiographic assessment of spinal curve. With improvements in image recognition via deep learning, learning-based methods can be applied to facilitate clinical decision-making. However, these methods usually require sufficiently large training datasets with precise annotation, which are very laborious and time-consuming especially for medical images. Moreover, the medical images of serious IS always contain the blurry and occlusive parts, which would make the strict annotation of the spinal curve more difficult. To address these challenges, we utilize the dot annotations approach to simply annotate the medical images instead of precise annotation. Then, we design a multi-scale keypoint estimation approach that incorporates Squeeze-and-Excitation(SE) blocks to improve the representational capacity of the model, achieving the assessment of spinal curve without large-size dataset. The proposed approach uses pose estimation framework to detect keypoints of spine with simple annotation and small-size dataset for the first time. 
Finally, we conduct experiments on a collected clinical dataset, and results illustrate that our approach outperforms the mainstream approaches.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83408918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
OfficeBP: noninvasive continuous blood pressure monitoring based on PTT in an office environment
M. Guo, Hongbo Ni, Alex Q. Chen
Blood pressure (BP), a crucial vital sign, reflects the physical state of the cardiovascular system. Currently, blood pressure is mainly measured by capturing pressure changes in the vessel with cuff sensors; this is a manual operation and cannot achieve continuous BP monitoring. In this work, we developed OfficeBP, a novel non-intrusive BP monitoring system for a typical office environment. OfficeBP relies on measuring the pulse transit time (PTT) of the pulse wave as it propagates from a proximal arterial site to a distal site within one heartbeat. For calculating the PTT, the user's face and thumb fingertip are taken as the start and end points, respectively. A twin-channel PPG sensing system is presented: the fingertip photoplethysmography (PPG) signal is obtained by a low-cost photoelectric sensor integrated into a mouse, and the face pulse is acquired via remote PPG (rPPG) by applying image processing to facial video frames collected with a commercial off-the-shelf camera. OfficeBP was evaluated on 11 participants under different working conditions, including external illumination and personal internal factors, and achieved RMSEs of 4.81 mmHg for diastolic and 5.35 mmHg for systolic blood pressure, demonstrating the feasibility of the system in an office environment.
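The paper's core quantity, PTT, is the delay between the proximal (face rPPG) and distal (fingertip PPG) pulse waveforms. A toy sketch of estimating that delay as the lag maximizing the cross-correlation, on synthetic sinusoidal "pulses" (the authors' actual signal processing and their PTT-to-BP calibration are not given here):

```python
import numpy as np

fs = 100.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
true_delay = 0.25                            # seconds: fingertip pulse lags the face pulse
face = np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm toy waveform for face rPPG
finger = np.sin(2 * np.pi * 1.2 * (t - true_delay))

def estimate_ptt(proximal, distal, fs, max_lag_s=0.5):
    """PTT = lag (in seconds) that best aligns the distal signal with the proximal one."""
    max_lag = int(max_lag_s * fs)
    corr = [np.dot(proximal[:len(proximal) - k], distal[k:])
            for k in range(max_lag + 1)]
    return np.argmax(corr) / fs

ptt = estimate_ptt(face, finger, fs)
print(ptt)                                    # recovers ~0.25 s
```

In practice PTT is then mapped to BP via a per-user calibration model; the regression the authors used is not specified in this abstract.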
{"title":"OfficeBP: noninvasive continuous blood pressure monitoring based on PPT in office environment","authors":"M. Guo, Hongbo Ni, Alex Q. Chen","doi":"10.1145/3410530.3414398","DOIUrl":"https://doi.org/10.1145/3410530.3414398","url":null,"abstract":"Blood pressure (BP), as a crucial vital sign of human beings, reflects the physical state of the cardiovascular system. Currently, blood pressure is mainly measured by collecting the changes in pressure in the vessel using cuff-sensors. It is a manual operation and cannot achieve continuous BP monitoring. In this work, we developed OfficeBP, a novel non-intrusive BP monitoring system for a typical office environment. OfficeBP relies on measuring the pulse transit time (PTT) between the pulse propagate from arterial proximal to the distal site on once heartbeat. For calculating the PTT, the user's face and thumb fingertip are regarded as the start and end points respectively. A twin-channel PPG sensing system is presented, that is, the fingertip pulse recording photoplethysmography (PPG) is obtained by a low-cost photoelectric sensor integrated with a mouse. Using image processing the face pulse is acquired by remote-PPG (rPPG) that based on a commercial off-the-shelf camera collecting facial video frames. 
OfficeBP was evaluated on 11 participants in different working conditions including the external illumination factor and personal internal factors, and achieved RMSE result of diastolic blood pressure 4.81 mmHg, systolic blood pressure 5.35 mmHg, demonstrate the feasibility of the system in an office environment.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90574005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Lifelog visualization based on social and physical activities
Akane Okuno, Y. Sumi
This paper presents a lifelog visualization based on the amount of social and physical activity, aimed at well-being. The motivation is to make users aware of their social, physical, and moderate activities, supporting behavioral change toward a comfortable way of spending one's life. We conducted three experiments to examine the feasibility of measuring and visualizing daily activities. We classified one student's various daily activities to observe trends in activity levels and classes, examined individual differences among three people in the same spatiotemporal space, and finally examined how one student's activity changes over half a day can be visualized.
{"title":"Lifelog visualization based on social and physical activities","authors":"Akane Okuno, Y. Sumi","doi":"10.1145/3410530.3414377","DOIUrl":"https://doi.org/10.1145/3410530.3414377","url":null,"abstract":"This paper presents the visualization of lifelog based on the amount of social and physical activities for well-being. The motivation is that enables users to aware their social, physical, and moderate activities for behavioral change aiming a comfortable how to spend life for individuals. In this paper, three experiments were conducted to examine the feasibility of measuring and visualizing daily activities. We classified the one student's various daily activities to see the tendency of activity levels and classes. Also, we examined individual differences of three people in the same spatiotemporal space. Finally, we examined how the one student's activity changes of half-day can be visualized.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"107 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76846097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
CML-IOT 2020: the second workshop on continual and multimodal learning for internet of things
Susu Xu, Shijia Pan, Tong Yu
With the deployment of the Internet of Things (IoT), large numbers of sensors are connected to the Internet, providing large-scale, streaming, multimodal data. These data have distinct statistical characteristics across time and sensing modalities that are hard to capture with traditional learning methods. Continual and multimodal learning allows the integration, adaptation, and generalization of knowledge learned from heterogeneous experiential data to new situations, and is therefore an important step toward efficient ubiquitous computing on IoT devices. The major challenges in combining continual and multimodal learning with real-world data include: 1) how to fuse and transfer knowledge between multimodal data under constrained computational resources; 2) how to learn continually despite missing, imbalanced, or noisy data under constrained computational resources; 3) how to effectively preserve privacy and retain security when learning from streaming and multimodal data collected by multiple stakeholders; and 4) how to develop large-scale distributed learning systems that learn efficiently from continual and multimodal data. We organized this workshop to bring together people working in different disciplines to tackle these challenges. The workshop aims to explore the intersection and combination of continual machine learning and multimodal modeling with applications in the Internet of Things. It welcomes work addressing these issues in different applications and domains, as well as algorithmic and systematic approaches to leveraging continual learning on multimodal data. We further seek to develop a community that systematically handles the streaming multimodal data widely available in real-world ubiquitous computing systems.
In 2019, we held the First Workshop on Continual and Multimodal Learning for Internet of Things (https://cmliot2019.github.io/) with Ubicomp 2019 in London, UK. The first workshop accepted 12 papers from 17 submissions. The one-day agenda included 3 sessions and attracted around 20 attendees from academia and industry to discuss and share visions.
{"title":"CML-IOT 2020: the second workshop on continual and multimodal learning for internet of things","authors":"Susu Xu, Shijia Pan, Tong Yu","doi":"10.1145/3410530.3414613","DOIUrl":"https://doi.org/10.1145/3410530.3414613","url":null,"abstract":"With the deployment of Internet of Things (IoT), large amount of sensors are connected into the Internet, providing large-amount, streaming, and multimodal data. These data have distinct statistical characteristics over time and sensing modalities, which are hardly captured by traditional learning methods. Continual and multimodal learning allows integration, adaptation, and generalization of the knowledge learned from experiential data collected with heterogeneity to new situations. Therefore, continual and multimodal learning is an important step to enable efficient ubiquitous computing on IoT devices. The major challenges to combine continual learning and multimodal learning with real-world data include 1) how to fuse and transfer knowledge between the multimodal data under constrained computational resources, 2) how to learn continually despite the missing, imbalanced or noisy data under constrained computational resources, 3) how to effectively reserve privacy and retain security when learning knowledge from streaming and multimodal data collected by multiple stakeholders, and 4) how to develop large-scale distributed learning systems to efficiently learn from continual and multimodal data. We organize this workshop to bring people working on different disciplines together to tackle these challenges in this topic. This workshop aims to explore the intersection and combination of continual machine learning and multimodal modeling with applications in the Internet of Things. The workshop welcomes works addressing these issues in different applications/domains as well as algorithmic and systematic approaches to leverage continual learning on multimodal data. 
We further seek to develop a community that systematically handles the streaming multimodal data widely available in real-world ubiquitous computing systems. In 2019, we held the First Workshop on Continual and Multimodal Learning for Internet of Things (https://cmliot2019.github.io/) with Ubicomp 2019, London, UK. The First workshop accepted 12 papers from 17 submissions. The one-day agenda included 3 sessions and attracted around 20 attendees from academia and industries to discuss and share visions.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78586611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Personal laughter archives: reflection through visualization and interaction
Kimiko Ryokai, Julia Park, Wesley Hanwen Deng
We present our ongoing effort to capture, represent, and interact with the sounds of our loved ones' laughter in order to offer unique opportunities for us to celebrate the positive affect in our shared lived experiences. We present our informal evaluation of laughter visualizations and argue for applications in ubiquitous computing scenarios including Mobile Augmented Reality (MAR).
{"title":"Personal laughter archives: reflection through visualization and interaction","authors":"Kimiko Ryokai, Julia Park, Wesley Hanwen Deng","doi":"10.1145/3410530.3414419","DOIUrl":"https://doi.org/10.1145/3410530.3414419","url":null,"abstract":"We present our ongoing effort to capture, represent, and interact with the sounds of our loved ones' laughter in order to offer unique opportunities for us to celebrate the positive affect in our shared lived experiences. We present our informal evaluation of laughter visualizations and argue for applications in ubiquitous computing scenarios including Mobile Augmented Reality (MAR).","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"159 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75410916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Handwriting behavior as a self-confidence discriminator
Takanori Maruichi, Taichi Uragami, Andrew W. Vargo, K. Kise
Receiving feedback based on the combination of self-confidence and the correctness of an answer can help learners improve learning efficiency. In this study, we propose a self-confidence estimation method using simple touch up/move/down events that can be measured in a classroom environment. We recorded handwriting behavior while participants answered vocabulary questions with a tablet and a stylus pen, and estimated self-reported confidence. We successfully built a method that predicts the user's self-confidence with up to 73% accuracy.
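The abstract does not say which features are extracted from the touch up/move/down stream; as a hypothetical illustration (these features and the event format are guesses, not the authors' pipeline), one could derive per-answer stroke durations and inter-stroke pauses as hesitation cues:

```python
import numpy as np

# Toy event log: (timestamp_s, event) with event in {"down", "move", "up"}
events = [(0.00, "down"), (0.05, "move"), (0.30, "up"),
          (1.10, "down"), (1.15, "move"), (1.20, "move"), (1.45, "up")]

def stroke_features(events):
    """Hypothetical per-answer features: stroke durations and inter-stroke pauses."""
    strokes, pauses, down_t, last_up = [], [], None, None
    for ts, ev in events:
        if ev == "down":
            if last_up is not None:
                pauses.append(ts - last_up)   # hesitation between strokes
            down_t = ts
        elif ev == "up" and down_t is not None:
            strokes.append(ts - down_t)       # pen-down duration of one stroke
            last_up = ts
    return {"mean_stroke_s": float(np.mean(strokes)),
            "mean_pause_s": float(np.mean(pauses)) if pauses else 0.0}

feats = stroke_features(events)
print(feats)  # mean stroke duration and mean pause for this toy log
```

Such features would then feed a classifier trained against the self-reported confidence labels.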
{"title":"Handwriting behavior as a self-confidence discriminator","authors":"Takanori Maruichi, Taichi Uragami, Andrew W. Vargo, K. Kise","doi":"10.1145/3410530.3414383","DOIUrl":"https://doi.org/10.1145/3410530.3414383","url":null,"abstract":"Receiving feedback based on the combination of self-confidence and correctness of an answer can help learners to improve learning efficiency. In this study, we propose a self-confidence estimation method using a simple touch up/move/down events that can be measured in a classroom environment. We recorded handwriting behavior during the answering vocabulary questions with a tablet and a stylus pen, estimating self-reported confidence. We successfully built a method that can predict the user's self-confidence with a maximum of 73% accuracy.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"79 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77888444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Pose evaluation for dance learning application using joint position and angular similarity
Jae-Jun Lee, Jong-Hyeok Choi, Tserenpurev Chuluunsaikhan, A. Nasridinov
In this paper, we propose a dance pose evaluation method for a smartphone dance-learning application. Methods for classifying and comparing dance gestures using 3-D joint information obtained from a 3-D camera have been proposed in the past, but they are problematic for accurate dance pose evaluation: they simply compare the similarity between dance gestures without evaluating the exact dance pose. To solve this problem, we propose a new method that runs on a smartphone and performs exact dance pose evaluation by simultaneously applying an affine transformation and an evaluation that compares joint-position and joint-angle information. In addition, we show through comparative experiments on a smartphone with real-world datasets that the proposed method is suitable for dance-learning applications.
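The paper's scoring combines joint-position and joint-angle similarity; the sketch below illustrates the two ingredients on a toy 2-D skeleton. The equal weighting and the linear combination are assumptions for illustration, and the affine alignment step mentioned in the abstract is omitted:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def pose_score(ref, usr, joints, w_pos=0.5, w_ang=0.5):
    """Combine mean position error and mean joint-angle error (weights are guesses)."""
    pos_err = np.linalg.norm(ref - usr, axis=1).mean()
    ang_err = np.mean([abs(joint_angle(ref[a], ref[b], ref[c]) -
                           joint_angle(usr[a], usr[b], usr[c]))
                       for a, b, c in joints])
    return w_pos * pos_err + w_ang * ang_err   # lower is better

ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # toy 3-joint limb
usr = ref.copy()
print(pose_score(ref, usr, joints=[(0, 1, 2)]))  # 0.0 for an identical pose
```

Using angles alongside positions makes the score tolerant of global translation and scale once poses are aligned, which is why the authors pair the comparison with an affine transformation.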
{"title":"Pose evaluation for dance learning application using joint position and angular similarity","authors":"Jae-Jun Lee, Jong-Hyeok Choi, Tserenpurev Chuluunsaikhan, A. Nasridinov","doi":"10.1145/3410530.3414402","DOIUrl":"https://doi.org/10.1145/3410530.3414402","url":null,"abstract":"In this paper, we propose a dance pose evaluation method for a dance learning application using a smartphone. In the past, methods for classifying and comparing dance gestures through 3-D joint information obtained through a 3-D camera have been proposed, but there is a problem in using them for accurate dance pose evaluation. That is, these methods simply compare the similarity between the dance gestures without evaluation of the exact dance pose. To solve this problem, we propose a new method that can be operated on a smartphone for exact dance pose evaluation that simultaneously performs an affine transformation and an evaluation method to compare the joint position and joint angle information. In addition, we prove that the proposed method is suitable for dance learning applications through comparative experiments on a smartphone with real-world datasets.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"83 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72897903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
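The abstract's combination of alignment plus joint-position and joint-angle comparison can be sketched roughly as follows. This is an illustrative simplification only: the normalization here is a translation-and-scale stand-in for a full affine transform, and the joint layout, weights, and function names are assumptions, not the paper's method.

```python
import math

def normalize(pose):
    """Center a 2-D pose at its centroid and scale to unit RMS size
    (a simplified stand-in for the paper's affine alignment)."""
    n = len(pose)
    cx = sum(x for x, _ in pose) / n
    cy = sum(y for _, y in pose) / n
    centered = [(x - cx, y - cy) for x, y in pose]
    scale = math.sqrt(sum(x * x + y * y for x, y in centered) / n) or 1.0
    return [(x / scale, y / scale) for x, y in centered]

def joint_angle(a, b, c):
    """Angle at joint b formed by segments b->a and b->c, in radians."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def pose_score(ref, test, w_pos=0.5, w_ang=0.5):
    """Combine mean joint-position distance (after normalization) with
    the difference of one joint angle; weights are illustrative.
    Assumes joints [shoulder, elbow, wrist] at indices 0..2."""
    r, t = normalize(ref), normalize(test)
    pos_err = sum(math.dist(p, q) for p, q in zip(r, t)) / len(r)
    ang_err = abs(joint_angle(r[0], r[1], r[2]) - joint_angle(t[0], t[1], t[2]))
    return w_pos * pos_err + w_ang * ang_err  # lower is better
```

A translated-and-scaled copy of a reference pose scores near zero, which is the invariance the alignment step is meant to provide.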
Blink rate variability: a marker of sustained attention during a visual task
R. Gavas, M. B. Sheshachala, D. Chatterjee, R. K. Ramakrishnan, V. Viraraghavan, Achanna Anil Kumar, M. Chandra
Eye blinks are vital components of human gaze and are widely used for assessing human behaviour. We analyze the variability of inter-blink durations, termed blink rate variability (BRV), to study sustained attention during visual tasks. A uniformly sampled BRV series is reconstructed from gaze data recorded with an eye tracker, and a number of features are extracted from this series, including a new feature we propose based on the Pareto principle. Results show that skewness, kurtosis, mean frequency, and Pareto frequency are good indicators of sustained attention. We observed that as the attention level increases, the power of the BRV series tends toward a normal distribution, whereas the mean and Pareto frequencies decrease. Results were generated on a small dataset as a proof of concept of our hypothesis that BRV is a potential bio-marker of sustained attention in a visual task.
{"title":"Blink rate variability: a marker of sustained attention during a visual task","authors":"R. Gavas, M. B. Sheshachala, D. Chatterjee, R. K. Ramakrishnan, V. Viraraghavan, Achanna Anil Kumar, M. Chandra","doi":"10.1145/3410530.3414431","DOIUrl":"https://doi.org/10.1145/3410530.3414431","url":null,"abstract":"Eye blinks are vital components of human gaze which are used for assessing human behaviour. We have analyzed the variability of the inter-blink durations, termed as blink rate variability (BRV), for analysing sustained attention for visual tasks. Uniformly sampled BRV series is reconstructed from the gaze data recorded using an eye tracker. A number of features are extracted from this series. We proposed a new feature based on pareto principle. Results show that skewness, kurtosis, mean frequency and pareto frequency are good indicators of sustained attention. We observed that with increase in attention level, the power of BRV series tends to have a normal distribution whereas the mean and pareto frequency decreases. Results were generated on a small dataset as a proof of concept of our hypothesis that BRV is a potential bio-marker of sustained attention in a visual task.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73805769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
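The first two steps the abstract describes — forming the inter-blink-duration series and extracting shape features such as skewness and kurtosis — can be sketched as below. The abstract's Pareto-based feature is not defined in enough detail to reconstruct, so it is omitted; the function names and the exact moment formulas are assumptions for illustration.

```python
from statistics import mean

def inter_blink_intervals(blink_times):
    """Inter-blink durations (the raw BRV series) from a sorted list
    of blink timestamps in seconds."""
    return [b - a for a, b in zip(blink_times, blink_times[1:])]

def shape_features(series):
    """Sample skewness and excess kurtosis of the BRV series —
    two of the features the abstract reports as informative."""
    m = mean(series)
    n = len(series)
    var = sum((v - m) ** 2 for v in series) / n
    sd = var ** 0.5
    skew = sum(((v - m) / sd) ** 3 for v in series) / n
    kurt = sum(((v - m) / sd) ** 4 for v in series) / n - 3.0
    return skew, kurt
```

In practice the intervals would be resampled onto a uniform grid before spectral features (mean frequency, power distribution) are computed, as the abstract notes.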