
Proceedings of the 4th International Workshop on Sensor-based Activity Recognition and Interaction: Latest Publications

Co-Creating Emotionally Aligned Smart Homes Using Social Psychological Modeling
Julie M. Robillard, Aaron W. Li, Shilpa Jacob, Dan Wang, Xin Zou, J. Hoey
Smart homes have long been proposed as a viable mechanism to promote independent living for older adults in the home environment. Despite tremendous progress on the technology front, there has been limited uptake by end-users. A critical barrier to the adoption of smart home technology by older adults is the lack of engagement of end-users in the development process and the resulting one-size-fits-all solutions that fail to recognize the specific needs of the older adult demographic. In this paper, we propose a novel online platform aimed at closing the gap between older adults and technology developers: ASPIRE (Alignment of Social Personas in Inclusive Research Engagement). ASPIRE is an online collaborative network (OCN) that allows older adults, care partners, and developers to engage in the design and development of a joint shared product: the smart-home solution. To promote the adoption of the OCN and the alignment of this collaborative network with the values and emotional needs of its end-users, ASPIRE harnesses a social-psychological theory of identity. This paper presents ASPIRE as a conceptual model, with a preliminary implementation.
Citations: 4
The SPHERE Experience
I. Craddock
The talk will describe the experience for researchers and the public alike in co-producing and deploying at scale a bespoke wearable, video and environmental sensor system for activity monitoring at home. It will consider the health requirements that drove the development, the design constraints imposed by users, technology and budgets, and how the initial design, production and installation has progressed. Data from a number of local family homes will be presented, along with an early view of what can be seen in the analysed data.
Citations: 0
Preliminary Evaluation of a Framework for Overhead Skeleton Tracking in Factory Environments using Kinect
M. M. Marinho, Yuki Yatsushima, T. Maekawa, Y. Namioka
This paper presents a preliminary evaluation of a framework that allows an overhead RGBD camera to segment and track workers' skeletons in an unstructured factory environment. The default Kinect skeleton-tracking algorithm was developed using front-view artificial depth images generated from a 3D model of a person in an empty room. The proposed framework is inspired by this concept, and works by capturing motion data of workers performing a real factory task. That motion data is matched to a 3D model of the worker. In a novel approach, the largest elements in the workspace (e.g. desks, racks) are modeled with simple shapes, and the artificial depth images are generated in a "simplified workspace" rather than an "empty workspace". Preliminary experiments show that adding the simplified models during training can increase, ceteris paribus, the segmentation accuracy by over 3 times and the recall by about one and a half times when the workspace is highly cluttered. Evaluation is performed using real depth images obtained in a factory environment, with manually segmented images serving as ground truth.
Citations: 2
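To make the "simplified workspace" idea concrete, here is a minimal sketch (not the authors' code) of how overhead artificial depth images could be composed from simple shapes; the camera height, image size, and furniture dimensions below are invented for illustration.

```python
# A minimal sketch of rendering overhead artificial depth images for a
# "simplified workspace": large furniture is modeled as axis-aligned boxes
# and composited into a top-down depth buffer. All sizes are assumptions.
import numpy as np

def overhead_depth(width=320, height=240, cam_height_m=3.0):
    """Depth buffer seen by an overhead camera; empty floor at cam_height_m."""
    return np.full((height, width), cam_height_m, dtype=np.float32)

def add_box(depth, x0, y0, x1, y1, top_height_m, cam_height_m=3.0):
    """Insert an axis-aligned box (e.g. a desk or rack) of the given height.
    Pixels above the box see the box top instead of the floor."""
    d = cam_height_m - top_height_m          # distance camera -> box top
    depth[y0:y1, x0:x1] = np.minimum(depth[y0:y1, x0:x1], d)

# Hypothetical workspace: one desk and one rack (extents in pixels, heights in m).
scene = overhead_depth()
add_box(scene, 40, 60, 160, 120, top_height_m=0.75)   # desk
add_box(scene, 220, 30, 300, 200, top_height_m=1.80)  # rack
# A training image would additionally composite the rendered worker model
# (driven by the captured motion data) into this buffer the same way.
print(scene.min(), scene.max())  # 1.2 m (rack top) .. 3.0 m (floor)
```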
Bottom-up Investigation: Human Activity Recognition Based on Feet Movement and Posture Information
Rafael de Pinho André, Pedro Diniz, H. Fuks
Human Activity Recognition (HAR) research on feet posture and movement information has seen intense growth during the last five years, drawing the attention of fields such as healthcare systems and context inference. In this work, we tested our six-class machine learning HAR classifier using a foot-based wearable device in an experiment involving 11 volunteers. The classifier uses a Random Forest algorithm with leave-one-out cross-validation, achieving an average accuracy of 93.34%. Aiming at replicable research, we provide full hardware information, the system source code, and a public-domain dataset consisting of 800,000 samples.
Citations: 16
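The evaluation pipeline the abstract names (a Random Forest scored with leave-one-out cross-validation over the 11 volunteers) can be sketched as follows; the feature dimensionality and synthetic data are placeholders, and leave-one-out is interpreted here as leave-one-subject-out.

```python
# A minimal sketch (not the authors' released code) of the evaluation the
# abstract describes: a Random Forest classifier scored with cross-validation
# that holds out one volunteer at a time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n, n_features, n_subjects = 1100, 12, 11     # stand-in sizes, not the real dataset
X = rng.normal(size=(n, n_features))         # e.g. per-window foot sensor features
y = rng.integers(0, 6, size=n)               # the six activity classes
groups = np.repeat(np.arange(n_subjects), n // n_subjects)  # volunteer per window

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print(f"mean accuracy over held-out subjects: {scores.mean():.2%}")
```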
Knowledge Extraction from Task Narratives
Kristina Yordanova, Carlos Monserrat Aranda, David Nieves, J. Hernández-Orallo
One of the major difficulties in activity recognition stems from the lack of a model of the world in which activities and events are to be recognised. When the domain is fixed and repetitive, we can manually include this information using some kind of ontology or set of constraints. On many occasions, however, there are new situations for which only some knowledge is common and many other domain-specific relations have to be inferred. Humans are able to do this from short descriptions in natural language describing the scene or the particular task to be performed. In this paper we apply a tool that extracts situation models and rules from natural-language descriptions to a series of exercises in a surgical domain, in which we want to identify the sequences of events that are not possible, those that are possible (but incorrect according to the exercise), and those that correspond to the exercise or plan expressed by the natural-language description. The preliminary results show that a large amount of valuable knowledge can be extracted automatically. This knowledge could be used to express domain knowledge and exercise descriptions in languages such as the event calculus, which could help bridge these high-level descriptions with the low-level events that are recognised from videos.
Citations: 2
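As a toy illustration of the three-way distinction the paper draws (impossible, possible but incorrect, matching the exercise), the following sketch checks event sequences against precondition rules of the kind such a tool might extract; the surgical events, rules, and plan below are invented, not taken from the paper.

```python
# A minimal sketch of classifying event sequences against extracted rules.
# The rules and events are hypothetical examples for illustration only.
PRECONDITIONS = {            # event -> events that must have happened first
    "cut": {"take_scalpel"},
    "suture": {"cut", "take_needle"},
}
PLAN = ["take_scalpel", "cut", "take_needle", "suture"]  # hypothetical exercise

def classify(sequence):
    done = set()
    for ev in sequence:
        if not PRECONDITIONS.get(ev, set()) <= done:
            return "impossible"          # violates an extracted constraint
        done.add(ev)
    return "matches plan" if sequence == PLAN else "possible but off-plan"

print(classify(["take_scalpel", "cut", "take_needle", "suture"]))  # matches plan
print(classify(["take_scalpel", "take_needle", "cut", "suture"]))  # possible but off-plan
print(classify(["cut", "take_scalpel"]))                           # impossible
```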
Exercise Monitoring On Consumer Smart Phones Using Ultrasonic Sensing
Biying Fu, Dinesh Vaithyalingam Gangatharan, Arjan Kuijper, Florian Kirchbuchner, Andreas Braun
Quantified self has been a trend over the last several years. An increasing number of people use devices such as smartwatches or smartphones to log activities of daily life, including step counts or vital information. However, most of these devices have to be worn by the user during the activities, as they rely on integrated motion sensors. Our goal is to create a technology that achieves similar precision with remote sensing, based on common sensors installed in every smartphone, in order to enable ubiquitous application. We have created a system that uses the Doppler effect at ultrasound frequencies to detect motion around the smartphone. We propose a novel use case for tracking exercises, based on several feature-extraction methods and machine-learning classification. We conducted a study with 14 users, achieving an accuracy between 73% and 92% for the different exercises.
Citations: 17
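The underlying sensing principle (motion near the device Doppler-shifts an inaudible pilot tone emitted by the phone) can be sketched as follows; the carrier frequency, FFT parameters, and synthetic signals are illustrative assumptions, not the paper's actual processing chain.

```python
# A minimal sketch of Doppler-based motion sensing: energy appearing in FFT
# bins next to the pilot tone indicates movement. All parameters are assumed.
import numpy as np

FS, F0, N = 48_000, 20_000, 4096          # sample rate, pilot tone, FFT window

def doppler_band_energy(frame, shift_hz=(20, 200)):
    """Ratio of energy in the Doppler sidebands to energy at the carrier."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / FS)
    carrier = spec[np.argmin(np.abs(freqs - F0))]
    band = (np.abs(freqs - F0) >= shift_hz[0]) & (np.abs(freqs - F0) <= shift_hz[1])
    return spec[band].sum() / (carrier + 1e-9)

t = np.arange(N) / FS
still  = np.sin(2 * np.pi * F0 * t)                       # echo from a static scene
moving = still + 0.1 * np.sin(2 * np.pi * (F0 + 60) * t)  # 60 Hz Doppler component
print(doppler_band_energy(still) < doppler_band_energy(moving))  # True
```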
Smartwatch based Respiratory Rate and Breathing Pattern Recognition in an End-consumer Environment
John Trimpop, Hannes Schenk, G. Bieber, Friedrich Lämmel, Paul Burggraf
Smartwatches as wearables have become part of social life, and practically and technically they offer the possibility of collecting medical body parameters alongside the usual fitness data. In this paper, we present an evaluation of the respiratory rate detection of the &gesund system. &gesund is a health assistance system that automatically records detailed long-term health data with end-consumer smartwatches. The &gesund core is based on technology exclusively licensed from the Fraunhofer Institute of applied research. In our study, we compare the &gesund algorithms for respiration-parameter detection during low-amplitude activities against data recorded from actual sleep-laboratory patients. The results show accuracies of up to 89%. We are confident that wearable technologies will be used for medical health assistance in the near future.
Citations: 11
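The abstract does not disclose the licensed &gesund algorithm, but a generic band-pass-and-peak-count estimator illustrates the kind of low-amplitude respiration analysis involved; the sampling rate, filter band, and synthetic signal below are assumptions.

```python
# A generic sketch of one common respiratory-rate estimator: band-pass the
# accelerometer signal to the breathing band (~0.1-0.5 Hz) and count peaks.
# This is NOT the licensed &gesund algorithm, whose details are not public.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 50  # Hz, typical smartwatch accelerometer rate (assumption)

def respiratory_rate(acc_axis, fs=FS):
    b, a = butter(2, [0.1, 0.5], btype="bandpass", fs=fs)
    breathing = filtfilt(b, a, acc_axis)
    peaks, _ = find_peaks(breathing, distance=fs * 2)   # >= 2 s between breaths
    return len(peaks) * 60 / (len(acc_axis) / fs)       # breaths per minute

# Synthetic 60 s recording: 0.25 Hz breathing movement (15 breaths/min) + noise.
t = np.arange(0, 60, 1 / FS)
acc = 0.02 * np.sin(2 * np.pi * 0.25 * t) \
    + 0.005 * np.random.default_rng(1).normal(size=t.size)
print(round(respiratory_rate(acc)))  # ~15
```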
Detecting Process Transitions from Wearable Sensors: An Unsupervised Labeling Approach
S. Böttcher, P. Scholl, Kristof Van Laerhoven
Authoring protocols for manual tasks such as following recipes, manufacturing processes, or laboratory experiments requires significant effort. This paper presents a system that estimates individual procedure transitions from the user's physical movement and gestures recorded with inertial motion sensors. Combined with egocentric or external video recordings, this facilitates efficient review and annotation of video databases. We investigate different clustering algorithms on wearable inertial sensor data recorded alongside video data, to automatically create transition marks between task steps. The goal is to match these marks to the transitions given in a description of the workflow, thus creating navigation cues for browsing video repositories of manual work. To evaluate the performance of the unsupervised clustering algorithms, the automatically generated marks are compared to labels created by human experts on publicly available datasets. Additionally, we tested the approach on a novel dataset from a manufacturing-lab environment, describing an existing sequential manufacturing process.
Citations: 3
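A minimal version of the unsupervised labeling idea (cluster sliding-window features of inertial data, then mark wherever the cluster assignment changes) might look like the sketch below; the window length, feature set, and k-means are stand-ins for the several algorithms the paper compares.

```python
# A minimal sketch of unsupervised transition marking: cluster windowed
# features of an inertial channel and mark cluster-assignment changes.
import numpy as np
from sklearn.cluster import KMeans

def window_features(signal, win=100, hop=50):
    """Mean/std features over sliding windows of a 1-D inertial channel."""
    idx = range(0, len(signal) - win, hop)
    return np.array([[signal[i:i + win].mean(), signal[i:i + win].std()] for i in idx])

rng = np.random.default_rng(0)
# Synthetic recording: three task steps with different motion statistics.
signal = np.concatenate([rng.normal(0, 0.2, 1000),
                         rng.normal(1, 0.5, 1000),
                         rng.normal(0, 1.0, 1000)])
feats = window_features(signal)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
marks = np.flatnonzero(np.diff(labels)) * 50 + 100   # window index -> sample index
print(marks)  # candidate transition points, to be matched to the workflow description
```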
Smarter Smart Homes with Social and Emotional Intelligence
J. Hoey
Pervasive intelligent assistive technologies promise to alleviate some of the increasing burden of care for persons with age-related cognitive disabilities, such as Alzheimer's disease. However, despite tremendous progress, many attempts to develop and implement real-world applications have failed to become widely adopted. In this talk, I will argue that a key barrier to the adoption of these technologies is a lack of alignment, on a social and emotional level, between the technology and its users. I argue that products which do not deeply embed social and emotional intelligence will fail to align with the needs and values of target end-users, and will thereby have only limited utility. I will then introduce a socio-cultural reasoning engine called "BayesACT" that can be used to provide this level of affective reasoning. BayesACT arises from the symbolic-interactionist tradition in sociological social psychology, in which culturally shared affective and cognitive meanings provide powerful predictive insights into human action. BayesACT can learn these shared meanings during an interaction, and can tailor interventions to specific individuals in a way that ensures smoother and more effective uptake and response. I will give an introduction to this reasoning engine, and will discuss how affective reasoning could be used to create truly adaptive assistive technologies.
Citations: 0
Real-time Embedded Recognition of Sign Language Alphabet Fingerspelling in an IMU-Based Glove
Chaithanya Kumar Mummadi, Frederic Philips Peter Leo, Keshav Deep Verma, Shivaji Kasireddy, P. Scholl, Kristof Van Laerhoven
Data gloves have numerous applications, including enabling novel human-computer interaction and automated recognition of large sets of gestures, such as those used for sign language. For most of these applications, it is important to build mobile, self-contained applications that run without the need for frequent communication with additional services on a back-end server. We present in this paper a data-glove prototype, based on multiple small Inertial Measurement Units (IMUs), with a glove-embedded classifier for French Sign Language. In an extensive set of experiments with 57 participants, our system was tested by repeatedly fingerspelling the French Sign Language (LSF) alphabet. Results show that our system is capable of detecting the LSF alphabet with a mean accuracy of 92% and an F1 score of 91%, with all detections performed on the glove within 63 milliseconds.
Citations: 22
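As a rough illustration of glove-embedded recognition, the sketch below reduces per-finger IMU readings to a static hand-shape feature vector and classifies it; the five-IMU layout, roll/pitch features, and nearest-neighbour classifier are assumptions, not the authors' exact pipeline.

```python
# A minimal sketch of the recognition step: per-finger IMU orientations are
# reduced to a feature vector and fed to a lightweight classifier. The layout,
# features, and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

N_IMUS = 5  # one IMU per finger (assumption)

def features(accel):
    """Roll/pitch per finger estimated from gravity in the accelerometer.
    accel: (N_IMUS, 3) array of ax, ay, az in g."""
    ax, ay, az = accel[:, 0], accel[:, 1], accel[:, 2]
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return np.concatenate([roll, pitch])   # 10-dim static hand-shape descriptor

# Train on (invented) labeled hand shapes, then classify a new frame.
rng = np.random.default_rng(0)
X = np.array([features(rng.normal(size=(N_IMUS, 3))) for _ in range(260)])
y = np.repeat(np.arange(26), 10)           # 26 letters x 10 samples each
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
frame = rng.normal(size=(N_IMUS, 3))
print(chr(ord('A') + clf.predict([features(frame)])[0]))  # predicted letter
```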