
Latest publications: Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers

DenseNetX and GRU for the Sussex-Huawei locomotion-transportation recognition challenge
Yida Zhu, Haiyong Luo, Runze Chen, Fang Zhao, Li Su
The Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge, organized at the HASCA workshop of UbiComp 2020, presents a large and realistic dataset covering different activities and transportation modes. The goal of this human activity recognition challenge is to recognize eight modes of locomotion and transportation from 5-second frames of sensor data from a smartphone carried at an unknown position. In this paper, we (team "We can fly") summarize our submission to the competition. We propose a one-dimensional (1D) DenseNetX model, a deep learning method for transportation mode classification. We first convert sensor readings from the phone coordinate system to the navigation coordinate system. Then, we normalize each sensor using its own maximum and minimum and construct a multi-channel sensor input. Finally, the 1D DenseNetX with a Gated Recurrent Unit (GRU) model outputs the predictions. In our experiments, we used four internal datasets to train our model and achieved an average F1 score of 0.7848 on four validation datasets.
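The per-sensor normalization and multi-channel construction step might look like the following minimal NumPy sketch; the sensor set, value ranges, and 100 Hz / 5 s frame shape are assumptions for illustration, since the abstract does not specify them.

```python
import numpy as np

# Hypothetical per-sensor value ranges; the paper normalizes each sensor
# with its own maximum and minimum, but the exact values are not given.
SENSOR_RANGES = {
    "acc": (-20.0, 20.0),    # m/s^2, assumed
    "gyr": (-10.0, 10.0),    # rad/s, assumed
    "mag": (-100.0, 100.0),  # uT, assumed
}

def normalize(frame, lo, hi):
    """Min-max normalize one sensor's (samples, axes) frame to [0, 1]."""
    return np.clip((frame - lo) / (hi - lo), 0.0, 1.0)

def build_input(frames):
    """Stack normalized per-sensor frames into a multi-channel input.

    frames maps sensor name -> array of shape (500, 3): 5 s at 100 Hz,
    three axes, already rotated into the navigation coordinate system.
    Returns an array of shape (9, 500), one channel per sensor axis,
    ready for a 1D convolutional model.
    """
    channels = [normalize(frames[name], lo, hi).T  # (3, 500)
                for name, (lo, hi) in SENSOR_RANGES.items()]
    return np.concatenate(channels, axis=0)

frames = {name: np.random.randn(500, 3) for name in SENSOR_RANGES}
print(build_input(frames).shape)  # (9, 500)
```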
Citations: 19
A multi-view architecture for the SHL challenge
Massinissa Hamidi, A. Osmani, Pegah Alizadeh
To recognize locomotion and transportation modes in a user-independent manner with an unknown target phone position, we (team Eagles) propose an approach based on two main steps: reducing the impact of the recurring effects that stem from each phone position, followed by recognition of the appropriate activity. The overall architecture is composed of three groups of neural networks organized in the following order: the first group recognizes the source (phone position), the second group normalizes the data to neutralize the impact of the source on the activity learning process, and the last group recognizes the activity itself. We perform extensive experiments, and the preliminary results encourage us to pursue this direction of learning the source separately to reduce position-specific biases before recognizing the activity. A sketch of this three-stage idea follows.
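A minimal PyTorch sketch of the three-stage ordering might look as follows; the layer sizes, the softmax-conditioning mechanism, and the 9-channel, 500-sample input are illustrative assumptions, as the abstract fixes only the order of the three groups.

```python
import torch
import torch.nn as nn

class MultiViewSHL(nn.Module):
    """Illustrative three-stage pipeline: source recognition ->
    source-conditioned normalization -> activity recognition."""

    def __init__(self, n_positions=4, n_activities=8):
        super().__init__()
        # Stage 1: recognize the source (phone position).
        self.source_net = nn.Sequential(
            nn.Conv1d(9, 32, 5), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(32, n_positions))
        # Stage 2: normalize the signal, conditioned on the predicted source.
        self.normalizer = nn.Conv1d(9 + n_positions, 9, 1)
        # Stage 3: recognize the activity from the normalized signal.
        self.activity_net = nn.Sequential(
            nn.Conv1d(9, 64, 5), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(64, n_activities))

    def forward(self, x):                      # x: (batch, 9, 500)
        src = self.source_net(x)               # (batch, n_positions)
        cond = src.softmax(1).unsqueeze(-1).expand(-1, -1, x.size(-1))
        x_norm = self.normalizer(torch.cat([x, cond], dim=1))
        return self.activity_net(x_norm), src

model = MultiViewSHL()
activity_logits, source_logits = model(torch.randn(2, 9, 500))
print(activity_logits.shape, source_logits.shape)  # (2, 8) (2, 4)
```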
Citations: 6
Respiratory events screening using consumer smartwatches
Illia Fedorin, Kostyantyn Slyusarenko, Margaryta Nastenko
Respiratory-related events (RE) during nocturnal sleep disturb the natural physiological pattern of sleep. These events may include all types of apnea and hypopnea, respiratory-event-related arousals, and snoring. Breath analysis has gained particular importance with the COVID-19 pandemic. The proposed algorithm is a deep learning model with long short-term memory cells that detects RE for each 1-minute epoch during nocturnal sleep. Our approach provides the basis for smartwatch-based analysis of respiratory-related sleep patterns (epoch-by-epoch classification accuracy is greater than 80%) and can be applied to screening for a potential risk of respiratory-related diseases (the mean absolute error of AHI estimation is about 6.5 events/h on the test set, which includes participants with all levels of apnea severity; two-class screening accuracy with an AHI threshold of 15 events/h is greater than 90%).
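As a toy illustration of how the per-epoch outputs roll up into the screening figures quoted above, the sketch below derives an apnea-hypopnea index (AHI) estimate from 1-minute epoch predictions; equating one positive epoch with one respiratory event is a simplification, and the authors' actual post-processing is not described in the abstract.

```python
import numpy as np

def estimate_ahi(epoch_preds):
    """epoch_preds: binary array, one entry per 1-minute sleep epoch
    (1 = respiratory event detected). AHI is events per hour of sleep."""
    hours_of_sleep = len(epoch_preds) / 60.0
    return float(np.sum(epoch_preds)) / hours_of_sleep

preds = np.random.binomial(1, 0.1, size=8 * 60)  # simulated 8-hour night
ahi = estimate_ahi(preds)
print(f"Estimated AHI: {ahi:.1f} events/h")
print("Screen positive" if ahi >= 15 else "Screen negative")  # 15 events/h threshold
```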
Citations: 4
Human activity recognition using multi-input CNN model with FFT spectrograms
Keiichi Yaguchi, Kazukiyo Ikarigawa, R. Kawasaki, Wataru Miyazaki, Yuki Morikawa, Chihiro Ito, M. Shuzo, Eisaku Maeda
We describe the activity recognition method developed by team DSML-TDU for the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge. Since the 2018 challenge, our team has been developing human activity recognition models based on a convolutional neural network (CNN) using Fast Fourier Transform (FFT) spectrograms from mobile sensors. In the 2020 challenge, we developed our model to fit various users carrying sensors at specific positions. Nine modalities of FFT spectrograms, generated from the three axes each of the linear accelerometer, gyroscope, and magnetic sensor data, were used as input to our model. First, we created a CNN model to estimate the four retention positions (Bag, Hand, Hips, and Torso) from the training and validation data; the provided test data was expected to be from the Hips position. Next, we created another (pre-trained) CNN model to estimate the eight activities from the large amount of training data for user 1 (Hips). Then, this model was fine-tuned for the other users by using the small amount of validation data for users 2 and 3 (Hips). Finally, an F-measure of 96.7% was obtained in 5-fold cross-validation.
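Building the nine spectrogram inputs might look like this minimal SciPy sketch; the window length and overlap are assumptions, since the abstract does not give them.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 100  # SHL sensor sampling rate, Hz

def sensor_spectrograms(frame):
    """frame: (500, 9) array - 5 s of linear accelerometer, gyroscope,
    and magnetic sensor data, three axes each.

    Returns an array of shape (9, freq_bins, time_bins): one log-power
    FFT spectrogram per sensor axis, usable as multi-input CNN channels.
    """
    specs = []
    for axis in range(frame.shape[1]):
        _, _, sxx = spectrogram(frame[:, axis], fs=FS,
                                nperseg=64, noverlap=48)  # assumed window
        specs.append(np.log1p(sxx))
    return np.stack(specs)

frame = np.random.randn(500, 9)          # simulated sensor frame
print(sensor_spectrograms(frame).shape)  # (9, 33, 28)
```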
Citations: 11
Action recognition using spatially distributed radar setup through microdoppler signature
Smriti Rani, A. Chowdhury, Andrew Gigie, T. Chakravarty, A. Pal
Small form-factor, off-the-shelf radar sensor nodes are being investigated for various privacy-preserving, non-contact sensing applications. This paper presents a novel method for real-time action recognition based on a spatially distributed radar setup (panel radar). The proposed method uses two spatially distributed single-channel Continuous Wave (CW) radars to classify actions. For classification, a two-layered classifier is employed on novel features: Layer I performs coarse limb-level classification, followed by finer action detection in Layer II. To validate the proposed system, data for 7 target actions was collected from 20 people. An accuracy of 88.6% was obtained, with precision and recall of 0.90 and 0.89 respectively, demonstrating the efficacy of this novel approach.
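The two-layer scheme can be pictured with the toy sketch below: a coarse Layer I model routes each sample to a per-limb-group Layer II model for fine action detection. Feature extraction from the micro-Doppler signatures is omitted, and the classifier family, limb groups, and action names are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical mapping of fine actions to coarse limb groups.
LIMB_OF = {"wave": "arm", "punch": "arm", "kick": "leg", "step": "leg"}

X = np.random.randn(200, 16)                       # placeholder features
y = np.random.choice(list(LIMB_OF), size=200)      # fine action labels
y_limb = np.array([LIMB_OF[a] for a in y])         # coarse limb labels

layer1 = RandomForestClassifier().fit(X, y_limb)   # Layer I: limb level
layer2 = {g: RandomForestClassifier().fit(X[y_limb == g], y[y_limb == g])
          for g in set(y_limb)}                    # Layer II: per limb group

def predict(x):
    """Route a sample through Layer I, then the matching Layer II model."""
    g = layer1.predict(x.reshape(1, -1))[0]
    return layer2[g].predict(x.reshape(1, -1))[0]

print(predict(X[0]))
```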
Citations: 2
Identifying label noise in time-series datasets
G. Atkinson, V. Metsis
Reliably labeled datasets are crucial to the performance of supervised learning methods. Time-series data pose additional challenges: data points lying on borders between classes can be mislabeled due to perception limitations of human labelers, and sensor measurements may not be directly interpretable by humans, so label noise cannot be removed manually. As a result, time-series datasets often contain a significant amount of label noise that can degrade the performance of machine learning models. This work focuses on label noise identification and removal by extending previous methods developed for static instances to the domain of time-series data. We use a combination of deep learning and visualization algorithms to facilitate automatic noise removal. We show that our approach can identify mislabeled instances, which results in improved classification accuracy on four synthetic and two real, publicly available human activity datasets.
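As a generic illustration of model-assisted label-noise identification (the paper's actual approach combines deep learning with visualization, which this sketch does not reproduce), the code below flags instances whose given label receives low out-of-fold predicted probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] > 0).astype(int)        # true labels
flip = rng.random(300) < 0.05        # inject 5% label noise
y_noisy = np.where(flip, 1 - y, y)

# Out-of-fold probabilities: each instance is scored by a model that
# never saw it during training.
proba = cross_val_predict(LogisticRegression(), X, y_noisy,
                          cv=5, method="predict_proba")
# Flag instances whose given label gets very low out-of-fold probability.
suspect = proba[np.arange(len(y_noisy)), y_noisy] < 0.2
print(f"Flagged {suspect.sum()} instances; {(suspect & flip).sum()} truly mislabeled")
```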
Citations: 8
Exploring chatbot user interfaces for mood measurement: a study of validity and user experience
Helma Torkamaan, J. Ziegler
With the growth of interactive text- or voice-enabled systems such as intelligent personal assistants and chatbots, it is now possible to measure a user's mood through conversation-based interaction instead of traditional questionnaires. However, it is still unclear whether such mood measurements are valid, akin to traditional measures, and engaging for users. In this paper, we compare, on smartphones, two of the most popular traditional measures of mood: the International PANAS Short Form (I-PANAS-SF) and the Affect Grid. For each of these measures, we then investigate the validity of mood measurement with a modified, chatbot-based user interface design. Our preliminary results suggest that some mood measures may not be resilient to modification and that altering them could lead to invalid, if not meaningless, results. This exploratory paper then presents and discusses four voice-based mood tracker designs and summarizes user perception of and satisfaction with these tools.
Citations: 3
Towards a wearable system for assessing couples' dyadic interactions in daily life
George Boateng
Researchers are interested in understanding the dyadic interactions of couples as they relate to relationship quality and chronic disease management. Currently, ambulatory assessment of couples' interactions entails collecting data at random times during the day. There is no ubiquitous system that leverages the dyadic nature of couples' interactions (e.g., collecting data when partners are interacting) and also performs real-time inference relevant to relationship quality and chronic disease management. In this work, we seek to develop a smartwatch system that can collect data about couples' dyadic interactions and infer and track indicators of relationship quality and chronic disease management. We plan to collect data from couples in the field and use the data to develop methods to detect these indicators. We then plan to implement these methods as a smartwatch system and evaluate its performance in real time and in everyday life through another field study. Such a system could be used by social psychology researchers to understand the social dynamics of couples in everyday life and their impact on relationship quality, and by health psychology researchers to develop and deliver behavioral interventions for couples who are managing chronic diseases.
Citations: 1
WellComp 2020: third international workshop on computing for well-being
T. Okoshi, J. Nakazawa, JeongGil Ko, F. Kawsar, S. Pirttikangas
With the advancements in ubiquitous computing, ubicomp technology has spread deeply into our daily lives, including office work, home and housekeeping, health management, transportation, and even urban living environments. Furthermore, beyond initial computing metrics such as "efficiency" and "productivity", the well-being benefits that people (users) derive from such ubiquitous technology have received great attention in recent years. In our third "WellComp" (Computing for Well-being) workshop, we discuss in depth the contribution of ubiquitous computing to users' well-being, covering physical, mental, and social wellness (and their combinations) from the viewpoints of the different layers of computing. After the big success of the two previous workshops, WellComp 2018 and 2019, with strong international organizing members from various ubicomp research domains, WellComp 2020 will bring together researchers and practitioners from academia and industry to explore versatile topics related to well-being and ubiquitous computing.
Citations: 0
Tackling the SHL recognition challenge with phone position detection and nearest neighbour smoothing
P. Widhalm, Philipp Merz, L. Coconu, Norbert Brändle
We present the solution of team MDCA to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2020. The task is to recognize the mode of transportation from 5-second frames of smartphone sensor data from two users who wore the phone at a constant but unknown position. The training data were collected by a different user with four phones worn simultaneously at four different positions; only a small labelled dataset from the two "target" users was provided. Our solution consists of three steps: 1) detecting the phone-wearing position, 2) selecting training data to create a user- and position-specific classification model, and 3) "smoothing" the predictions by identifying groups of similar data frames in the test set that probably belong to the same class. We demonstrate the effectiveness of the processing pipeline by comparison with baseline models. Using 4-fold cross-validation, our approach achieves an average F1 score of 75.3%.
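Step 3 might be sketched as follows with nearest-neighbour majority voting; the feature representation, the value of k, and the voting rule are assumptions, since the exact grouping procedure is not given in the abstract.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smooth_predictions(features, preds, k=10):
    """Replace each frame's predicted class by the majority vote over its
    k nearest neighbours in feature space, assuming nearby frames likely
    belong to the same trip segment and hence the same class."""
    nn = NearestNeighbors(n_neighbors=k).fit(features)
    _, idx = nn.kneighbors(features)           # (n_frames, k) neighbour ids
    return np.array([np.bincount(row).argmax() for row in preds[idx]])

features = np.random.randn(100, 16)            # placeholder frame features
preds = np.random.randint(0, 8, size=100)      # raw per-frame class predictions
print(smooth_predictions(features, preds)[:10])
```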
Citations: 4