{"title":"联网家庭中语音的局限性:人类活动识别自我报告工具的实验比较","authors":"Guillaume Levasseur , Kejia Tang , Hugues Bersini","doi":"10.1016/j.ijhcs.2024.103404","DOIUrl":null,"url":null,"abstract":"<div><div>Data annotation for human activity recognition is a well-known challenge for researchers. In particular, annotation in daily life settings relies on self-reporting tools with unknown accuracy. Speech is a promising interface for activity labeling. In this work, we compare the accuracy of two commercially available tools for annotation: voice diaries and connected buttons. We retrofit the water meters of thirty homes in the USA for infrastructure-mediated sensing. Participants are split into equal groups and receive one of the self-reporting tools. The balanced accuracy metric is transferred from the field of machine learning to the evaluation of the annotation performance. Our results show that connected buttons perform significantly better than the voice diary, with 92% median accuracy and 65% median reporting rate. Using questionnaire answers, we highlight that annotation performance is impacted by habit formation and sentiments toward the annotation tool. The use case for data annotation is to disaggregate water meter data into human activities beyond the point of use. We show that it is feasible with a machine-learning model and the corrected annotations. Finally, we formulate recommendations for the design of studies and intelligent environments around the key ideas of proportionality and immediacy.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"195 ","pages":"Article 103404"},"PeriodicalIF":5.3000,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Limits of speech in connected homes: Experimental comparison of self-reporting tools for human activity recognition\",\"authors\":\"Guillaume Levasseur , Kejia Tang , Hugues Bersini\",\"doi\":\"10.1016/j.ijhcs.2024.103404\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Data annotation for human activity recognition is a well-known challenge for researchers. In particular, annotation in daily life settings relies on self-reporting tools with unknown accuracy. Speech is a promising interface for activity labeling. In this work, we compare the accuracy of two commercially available tools for annotation: voice diaries and connected buttons. We retrofit the water meters of thirty homes in the USA for infrastructure-mediated sensing. Participants are split into equal groups and receive one of the self-reporting tools. The balanced accuracy metric is transferred from the field of machine learning to the evaluation of the annotation performance. Our results show that connected buttons perform significantly better than the voice diary, with 92% median accuracy and 65% median reporting rate. Using questionnaire answers, we highlight that annotation performance is impacted by habit formation and sentiments toward the annotation tool. The use case for data annotation is to disaggregate water meter data into human activities beyond the point of use. We show that it is feasible with a machine-learning model and the corrected annotations. 
Finally, we formulate recommendations for the design of studies and intelligent environments around the key ideas of proportionality and immediacy.</div></div>\",\"PeriodicalId\":54955,\"journal\":{\"name\":\"International Journal of Human-Computer Studies\",\"volume\":\"195 \",\"pages\":\"Article 103404\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-11-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Human-Computer Studies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1071581924001873\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Human-Computer Studies","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1071581924001873","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Limits of speech in connected homes: Experimental comparison of self-reporting tools for human activity recognition
Data annotation for human activity recognition is a well-known challenge for researchers. In particular, annotation in daily life settings relies on self-reporting tools with unknown accuracy. Speech is a promising interface for activity labeling. In this work, we compare the accuracy of two commercially available tools for annotation: voice diaries and connected buttons. We retrofit the water meters of thirty homes in the USA for infrastructure-mediated sensing. Participants are split into equal groups and receive one of the self-reporting tools. The balanced accuracy metric is transferred from the field of machine learning to the evaluation of annotation performance. Our results show that connected buttons perform significantly better than the voice diary, with 92% median accuracy and 65% median reporting rate. Using questionnaire answers, we highlight that annotation performance is impacted by habit formation and sentiments toward the annotation tool. The use case for data annotation is to disaggregate water meter data into human activities beyond the point of use. We show that it is feasible with a machine-learning model and the corrected annotations. Finally, we formulate recommendations for the design of studies and intelligent environments around the key ideas of proportionality and immediacy.
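The balanced accuracy mentioned in the abstract is the mean of per-class recall, which keeps rare activity windows from being drowned out by the majority class. As an illustration only (not the authors' code, and with made-up label arrays), a minimal Python sketch of the computation could look as follows:

# Minimal sketch, not the authors' implementation: scoring self-reported
# annotations against reference activity labels with balanced accuracy.
# The two label arrays below are hypothetical examples.
from sklearn.metrics import balanced_accuracy_score

reference_labels = [1, 1, 0, 0, 1, 0, 1, 0]  # ground truth: 1 = activity in window, 0 = none
self_reports     = [1, 0, 0, 0, 1, 0, 1, 1]  # participant annotations for the same windows

# Balanced accuracy averages the recall of each class, so missed and spurious
# reports are weighted equally even when activity windows are rare.
score = balanced_accuracy_score(reference_labels, self_reports)
print(f"Balanced accuracy: {score:.2f}")  # 0.75 for these example arrays

In this reading, a score of 1.0 would mean every activity window and every idle window was reported correctly, regardless of how imbalanced the two classes are.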
About the journal:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...