Explainable Activity Recognition for Smart Home Systems

Pub Date: 2023-05-05 | DOI: https://dl.acm.org/doi/10.1145/3561533
Devleena Das, Yasutaka Nishimura, Rajan P. Vivek, Naoto Takeda, Sean T. Fish, Thomas Plötz, Sonia Chernova
{"title":"智能家居系统的可解释活动识别","authors":"Devleena Das, Yasutaka Nishimura, Rajan P. Vivek, Naoto Takeda, Sean T. Fish, Thomas Plötz, Sonia Chernova","doi":"https://dl.acm.org/doi/10.1145/3561533","DOIUrl":null,"url":null,"abstract":"<p>Smart home environments are designed to provide services that help improve the quality of life for the occupant via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. However, activity recognition systems may not be perfectly accurate, and therefore inconsistencies in smart home operations can lead users reliant on smart home predictions to wonder “Why did the smart home do that?” In this work, we build on insights from Explainable Artificial Intelligence (XAI) techniques and introduce an explainable activity recognition framework in which we leverage leading XAI methods (Local Interpretable Model-agnostic Explanations, SHapley Additive exPlanations (SHAP), Anchors) to generate natural language explanations that explain what about an activity led to the given classification. We evaluate our framework in the context of a commonly targeted smart home scenario: autonomous remote caregiver monitoring for individuals who are living alone or need assistance. Within the context of remote caregiver monitoring, we perform a two-step evaluation: (a) utilize Machine Learning experts to assess the sensibility of explanations and (b) recruit non-experts in two user remote caregiver monitoring scenarios, synchronous and asynchronous, to assess the effectiveness of explanations generated via our framework. Our results show that the XAI approach, SHAP, has a 92% success rate in generating sensible explanations. Moreover, in 83% of sampled scenarios users preferred natural language explanations over a simple activity label, underscoring the need for explainable activity recognition systems. Finally, we show that explanations generated by some XAI methods can lead users to lose confidence in the accuracy of the underlying activity recognition model, while others lead users to gain confidence. Taking all studied factors into consideration, we make a recommendation regarding which existing XAI method leads to the best performance in the domain of smart home automation and discuss a range of topics for future work to further improve explainable activity recognition.</p>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable Activity Recognition for Smart Home Systems\",\"authors\":\"Devleena Das, Yasutaka Nishimura, Rajan P. Vivek, Naoto Takeda, Sean T. Fish, Thomas Plötz, Sonia Chernova\",\"doi\":\"https://dl.acm.org/doi/10.1145/3561533\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Smart home environments are designed to provide services that help improve the quality of life for the occupant via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. 
However, activity recognition systems may not be perfectly accurate, and therefore inconsistencies in smart home operations can lead users reliant on smart home predictions to wonder “Why did the smart home do that?” In this work, we build on insights from Explainable Artificial Intelligence (XAI) techniques and introduce an explainable activity recognition framework in which we leverage leading XAI methods (Local Interpretable Model-agnostic Explanations, SHapley Additive exPlanations (SHAP), Anchors) to generate natural language explanations that explain what about an activity led to the given classification. We evaluate our framework in the context of a commonly targeted smart home scenario: autonomous remote caregiver monitoring for individuals who are living alone or need assistance. Within the context of remote caregiver monitoring, we perform a two-step evaluation: (a) utilize Machine Learning experts to assess the sensibility of explanations and (b) recruit non-experts in two user remote caregiver monitoring scenarios, synchronous and asynchronous, to assess the effectiveness of explanations generated via our framework. Our results show that the XAI approach, SHAP, has a 92% success rate in generating sensible explanations. Moreover, in 83% of sampled scenarios users preferred natural language explanations over a simple activity label, underscoring the need for explainable activity recognition systems. Finally, we show that explanations generated by some XAI methods can lead users to lose confidence in the accuracy of the underlying activity recognition model, while others lead users to gain confidence. Taking all studied factors into consideration, we make a recommendation regarding which existing XAI method leads to the best performance in the domain of smart home automation and discuss a range of topics for future work to further improve explainable activity recognition.</p>\",\"PeriodicalId\":3,\"journal\":{\"name\":\"ACS Applied Electronic Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2023-05-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Electronic Materials\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/https://dl.acm.org/doi/10.1145/3561533\",\"RegionNum\":3,\"RegionCategory\":\"材料科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://doi.org/https://dl.acm.org/doi/10.1145/3561533","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Smart home environments are designed to provide services that help improve the quality of life for the occupant via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. However, activity recognition systems may not be perfectly accurate, and therefore inconsistencies in smart home operations can lead users reliant on smart home predictions to wonder “Why did the smart home do that?” In this work, we build on insights from Explainable Artificial Intelligence (XAI) techniques and introduce an explainable activity recognition framework in which we leverage leading XAI methods (Local Interpretable Model-agnostic Explanations, SHapley Additive exPlanations (SHAP), Anchors) to generate natural language explanations that explain what about an activity led to the given classification. We evaluate our framework in the context of a commonly targeted smart home scenario: autonomous remote caregiver monitoring for individuals who are living alone or need assistance. Within the context of remote caregiver monitoring, we perform a two-step evaluation: (a) utilize Machine Learning experts to assess the sensibility of explanations and (b) recruit non-experts in two user remote caregiver monitoring scenarios, synchronous and asynchronous, to assess the effectiveness of explanations generated via our framework. Our results show that the XAI approach, SHAP, has a 92% success rate in generating sensible explanations. Moreover, in 83% of sampled scenarios users preferred natural language explanations over a simple activity label, underscoring the need for explainable activity recognition systems. Finally, we show that explanations generated by some XAI methods can lead users to lose confidence in the accuracy of the underlying activity recognition model, while others lead users to gain confidence. Taking all studied factors into consideration, we make a recommendation regarding which existing XAI method leads to the best performance in the domain of smart home automation and discuss a range of topics for future work to further improve explainable activity recognition.
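
As a rough illustration of the kind of pipeline the abstract describes, the sketch below computes exact Shapley values (the quantity underlying SHAP) for a toy "cooking" classifier over a few hypothetical smart-home sensor features, then verbalizes the top contributors as a natural-language explanation. The feature names, baseline, and scoring function are invented for illustration; this is not the authors' implementation, and a real system would run a SHAP explainer over a trained activity recognition model rather than a hand-rolled linear score.

```python
# Minimal sketch: exact Shapley values for a toy activity classifier,
# rendered as a natural-language explanation. All sensor names and the
# scoring function are hypothetical placeholders.
from itertools import combinations
from math import factorial

FEATURES = ["kitchen_motion", "stove_on", "fridge_door", "bedroom_motion"]

def score_cooking(x):
    """Toy model: confidence that the current activity is 'cooking'."""
    return (0.6 * x["stove_on"] + 0.25 * x["kitchen_motion"]
            + 0.1 * x["fridge_door"] - 0.2 * x["bedroom_motion"])

def shapley_values(x, baseline, model, features):
    """Exact Shapley values: weighted average of each feature's marginal
    contribution over all subsets of the other features (tractable only
    for small feature sets like this one)."""
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(rest, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Features in the subset (and f, for the first term) take
                # their observed values; all others fall back to baseline.
                with_f = {g: x[g] if (g in subset or g == f) else baseline[g]
                          for g in features}
                without_f = {g: x[g] if g in subset else baseline[g]
                             for g in features}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

observation = {"kitchen_motion": 1, "stove_on": 1,
               "fridge_door": 1, "bedroom_motion": 0}
baseline = {f: 0 for f in FEATURES}

phi = shapley_values(observation, baseline, score_cooking, FEATURES)
top = sorted(phi.items(), key=lambda kv: abs(kv[1]), reverse=True)[:2]
reasons = " and ".join(f"{name} (contribution {val:+.2f})" for name, val in top)
print(f"Predicted activity: cooking, because of {reasons}.")
```

For this linear toy model the Shapley value of each feature reduces to its coefficient times (observed value minus baseline), so the output is easy to verify by hand: the explanation names stove_on (+0.60) and kitchen_motion (+0.25) as the main reasons for the "cooking" prediction.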
